After acknowledging that it has a problem with fake news, Facebook recently introduced a feature that flags certain posts as “disputed.” In some cases, however, the feature appears to be having the opposite effect to the one Facebook intended.
According to a report by The Guardian, the tagging of fake news is not consistent, and some stories that have been flagged continue to circulate without a warning. In other cases, traffic to fake news posts actually increased after Facebook applied the warning.
Facebook started rolling out the new feature last month, as part of a partnership with a group of external fact-checking organizations, including Snopes.com, ABC News and PolitiFact.
When a user tries to share a link that has been marked as questionable, an alert pops up saying the story in question has been disputed. The alert links to more information about the fact-checking feature and notes that “sometimes people share fake news without knowing it.”
If the user shares the link anyway, it is supposed to appear in the news feeds of other users with a large note that says “disputed” and lists the organizations that flagged it as fake or questionable.
The idea behind the effort was to decrease the visibility of hoaxes and fake news, which many Facebook critics believe are spread rapidly by the site’s news-feed algorithm.
In a number of cases, however, the Guardian said it appears that the fake-news warning is either being applied too late — after a story has already “gone viral” and been shared by large numbers of people — or is having the opposite effect to the one Facebook wants.
A site called Newport Buzz, for example, published a story claiming that thousands of Irish people were brought to the U.S. as slaves, and the story was flagged as untrue by Snopes.com and the Associated Press. But the editor of the site says traffic to the story actually increased significantly after Facebook applied the warning.
“A bunch of conservative groups grabbed this and said, ‘Hey, they are trying to silence this blog – share, share share,'” Christian Winthrop told the Guardian. “With Facebook trying to throttle it and say, ‘Don’t share it,’ it actually had the opposite effect.”
Facebook hasn’t provided any data on the number of articles that have been flagged as disputed, or what effect that has on traffic, but a spokesman did tell the Guardian that a disputed tag “does lead to a decrease in traffic and shares.”
Another website owner whose articles have been flagged by the system as disputed said he hadn’t seen any sign of a traffic decline as a result of the warning. Robert Shooltz, who runs a site called RealNewsRightNow — which he argues is satire rather than fake news — said a flag on one of his stories “had absolutely no effect.”
One of the problems with the kind of fact-checking process Facebook has implemented, sociologists and psychologists say, is that it only works if users trust both the social network and the third-party fact-checkers that it has partnered with.
If a person doesn’t trust a specific information source, then arguments made by that source about the inaccuracy of a story can actually convince the person of the opposite, even if the source has facts and evidence to support its argument. This is sometimes called “the boomerang effect.”
In other words, for at least some news consumers, the fact that Facebook and Snopes have flagged something as untrue makes them more likely to believe it, not less.
The actor James Woods, who is known for making right-wing comments on Twitter and elsewhere, expressed exactly this sentiment recently, saying “The fact that @facebook and @snopes ‘dispute’ a story is the best endorsement a story could have.”
In a recent essay, sociologist danah boyd (who chooses to spell her name without using capital letters) argued that Facebook and Google can’t solve the fake news problem because it is being driven by human nature and a clash of cultures, and that can’t be changed through argument or the presentation of facts.