Image-sharing platform Instagram has announced a new feature to help tackle fake news, which will be trialed first in the U.S.
Instagram, which was acquired by Facebook in April 2012, has not been immune to the ripple effects of the privacy scandals and fake-news allegations that have rocked its parent firm. To address public concerns, the picture-sharing platform began blocking hashtags that promoted misinformation about vaccine safety in May. Now it has stepped up its content-monitoring measures.
As part of a pilot study, users of the social network in the U.S. can now flag posts they believe spread misinformation. Flagging will cause a post to be downplayed on the ‘Explore’ and hashtag pages, but not removed. Depending on user feedback and the tool’s effectiveness, the feature may later be rolled out to other regions.
Tapping the three-dot option on the right-hand side of a post in the mobile app opens a drop-down menu that includes the flagging tool. Users who come across content they believe is false can tap ‘Report’, followed by ‘It’s Inappropriate’ and then ‘False Information’, to report it to the platform. This interface lets Instagram gather user input, which the firm will use to train artificial intelligence to detect similarly unreliable posts going forward.
Instagram chief, Adam Mosseri, said in a tweet: "I'm proud that, starting today, people can let us know if they see posts on Instagram they believe may be false. There's still more to do to stop the spread of misinformation, more to come."
Nevertheless, the final judgment does not rest with the public alone. Repeatedly flagged items will be reviewed by the company’s U.S.-based third-party fact-checkers, who will decide whether a post should be removed from the network altogether.