If you spend any time on Facebook then you’ve probably seen them, either in your main news feed or in the “trending topics” section — clearly fake news stories designed to get you to click, many of them playing on the latest conspiracy theory surrounding the 2016 election.
These are the kind of stories that Facebook’s trending editors used to weed out. But the site fired those editors after a controversy over allegations of political bias, and it has relied on algorithms ever since.
According to a recent experiment by the Washington Post, however, getting rid of the human beings isn’t really working that well — at least not at keeping out the fakes.
In order to see what Facebook considered a trending topic and why, the newspaper’s Intersect team started tracking what stories and links were trending every hour, and sent those links out as an email news digest, as well as keeping a record of them in a database.
Note: This was originally published at Fortune, where I was a senior writer from 2015 to 2017
Not long after Facebook switched from using human editors to mostly algorithm-driven curation, the site suffered a black eye when a fake story about Fox News firing host Megyn Kelly started trending.
As if that wasn’t bad enough, the social network then highlighted a story from a 9/11 hoax website in the trending-topics section, one that claimed the collapse of the World Trade Center buildings was the result of “controlled explosions” rather than a terrorist attack.
After the Megyn Kelly incident, Facebook apologized, saying the story met the criteria for a trending topic but should have been flagged as fake.
“We’re working to make our detection of hoax and satirical stories quicker and more accurate,” a spokesman told CBS News at the time.
According to the Post’s survey of topics, however, there is still a lot of work that needs to be done. Between August 31 and September 22, the paper says it found five trending stories that were “indisputably fake” and three that were “profoundly inaccurate.”
Facebook has made a point of denying that it is a media company, and its response to the Trending Topics controversy — getting rid of its human editors — can be seen in part as a desire to reject the media-entity status that others believe it deserves.
Regardless of what it calls itself, however, the reality is that Facebook is doing its best to host and distribute an increasing amount of news from companies (including the Washington Post) through its Instant Articles feature and other aspects of the site. And a growing number of users say they get their news from the social network, particularly millennials.
In that kind of news-consumption environment, Facebook arguably bears a responsibility to ensure that the news it is providing is accurate. But so far it seems to be failing.
The company claims that it cares about the fake news problem, but if it’s not a media entity then why should it? If a story is being shared a lot and clicked on a lot or is generating a lot of comments, what difference does it make to Facebook whether it’s fake or not?
The risk of that kind of approach, especially during an election like the current one in the United States, is that fake news designed to drive a specific agenda — pro-Donald Trump, anti-Hillary Clinton, etc. — can reach and influence a far greater number of news consumers if it hits the trending topics than it might have otherwise.
Many of these stories come from little-known websites that generate content solely for the click traffic they know Facebook will send their way, even when a story is a hoax.
In a campaign where the Republican candidate already has a somewhat fragile relationship with the truth, that could have significant repercussions, and Facebook should arguably be owning up to its responsibilities instead of trying to downplay them.
And maybe rather than pretending that the algorithm can do everything, the social network should try hiring some human editors again to help it spot a fake more quickly.