Here’s why Facebook and Google can’t fix the problem of fake news

A lot of attention has been focused over the past year on the rise of so-called “fake news,” a term that has even made its way into tweets by President Donald Trump. But the problem has proven fiendishly difficult to define, let alone solve.

What exactly qualifies as “fake news”? A story about secret child sexual-abuse rings operating underneath a pizza parlor? A Breitbart News item that suggests billionaire George Soros pays anti-Trump protesters? Or a New York Times report that says something the president doesn’t want people to believe? All of these have been described as fake news.

After initially pooh-poohing the suggestion that it plays a role in the spread of hoaxes and inaccurate information, Facebook has implemented a number of features designed to address the issue, including flagging stories as unverified or questionable. But will this actually solve the overall problem? Social-media researcher danah boyd (who chooses to spell her name using only lower-case letters) argues in a recent essay that it won’t. And the reasons for that have a lot less to do with Facebook and a lot more to do with human nature.

Note: This was originally published at Fortune, where I was a senior writer from 2015 to 2017.

For one thing, boyd says, no one — not even experts in the area — can agree on a definition of what “fake news” is. The term is used to refer to every conceivable kind of problematic content, “including both blatantly and accidentally inaccurate information, salacious and fear-mongering headlines, hateful and incendiary rhetoric produced in blogs, and propaganda of all stripes.”

The worst kinds of false news aren’t even the most obvious ones, such as clear fakes or ridiculous assertions, says the Microsoft researcher and former fellow at Harvard’s Berkman Center:

“Much of the most insidious content out there isn’t in your face. It’s not spread widely, and certainly not by people who are forwarding it to object. It’s subtle content that is factually accurate, biased in presentation and framing, and encouraging folks to make dangerous conclusions that are not explicitly spelled out in the content itself.”

There are definitely some “low-hanging fruit mechanisms” that platforms like Facebook and Google can use, boyd says, including cutting off the economic incentive for fake news by blocking certain sites from ad networks. But at the end of the day, “these are rounding errors in the ecosystem.”

“I don’t want to let companies off the hook, because they do have a responsibility in this ecosystem,” boyd says. “But they’re not going to produce the silver bullet that they’re being asked to produce. And I think that most critics of these companies are really naive if they think that this is an easy problem for them to fix.”

In boyd’s view, the problem we think we are describing when we use the term “fake news” can’t be solved by blaming social-media platforms, or digital journalism and the rise of clickbait, or Macedonian teens. All of these things play a role, but they are just symptoms of a deeper issue. “No amount of ‘fixing’ Facebook or Google will address the underlying factors shaping the culture and information wars in which America is currently enmeshed,” she says.

In effect, the current obsession with fake news is just the latest version of a fight that has been waged on the Internet for years against problematic content, whether it’s email spam or online bullying or black-hat SEO (search-engine optimization) techniques, says boyd.

“The short version of it all is that we have a cultural problem, one that is shaped by disconnects in values, relationships, and social fabric. Our media, our tools, and our politics are being leveraged to help breed polarization by countless actors who can leverage these systems for personal, economic, and ideological gain.”

If Facebook and Google crack down on “fake news” sites, the Microsoft researcher argues, those who have an interest in generating that kind of content will find ways around the restrictions — as many already have by using visual “memes” instead of links to news stories.

In the end, boyd says, the problems that lie beneath what we call fake news are social, political, and cultural ones, and they can’t be solved with technological features or fixes. “Too much technology and media was architected with the idea that just making information available would do that cultural work,” she says. “We now know that this is not what happened.”

The issues that we need to deal with “are socially and culturally hard,” says boyd. They force us to confront “how people construct knowledge and ideas, communicate with others and construct a society. They are also deeply messy, revealing divisions and fractures in beliefs and attitudes. And that means that they are not technically easy to build or implement.”

In other words, Facebook and Google can make it harder to share (or make money from) false news reports or propaganda. But they can’t solve the underlying problems because those problems are fundamentally human ones. We are going to have to figure out how to fix those ourselves. And that is a much harder assignment than just changing some features on a social platform.
