There’s plenty of blame to go around when it comes to the problem of “fake news,” and some of it quite rightly falls on social networks such as Facebook and Twitter. It’s not that these platforms don’t care about the truth, however. The problem goes deeper than that. It’s more of a structural problem, and it doesn’t come with an easy solution.
Social networks like Facebook and Twitter started out primarily as ways to connect with friends and other people with similar interests, and they did so by making it easy for you to share bits of text, along with hyperlinks to content worth reading on the web.
Eventually, however, text gave way to photos, with both Twitter and Facebook restructuring their news feeds or streams to allow for larger pictures. Then came video — both in the form of video clips (many of which auto-play because advertisers like it that way) and animated GIFs. Hyperlinks, which social networks noticed were rarely clicked on anyway, started to become less important.
Note: This was originally published at Fortune, where I was a senior writer from 2015 to 2017.
Facebook has said that in the next few years, the vast majority of a user’s news feed will be made up of video. It has spent tens of millions of dollars getting media outlets like the New York Times to produce more and more video clips that it can publish. And Twitter has also worked hard to become a live-streaming hub for sports and TV-style content.
Thus, over time, both Facebook and Twitter have come to look less like social networks and more like television. And not necessarily just the good parts of television, but also the bad parts — like the part that values easy-to-consume entertainment over facts.
Iranian blogger Hossein Derakhshan, who was imprisoned for six years by the Iranian government, has written eloquently about the impact of this transformation, in part because he didn’t see the gradual changes happen. He stepped out of prison in 2014 and was struck by how different things seemed: blogging was dead and Facebook was a giant.
One of those changes was the death of the hyperlink and the open web, which Derakhshan wrote about on Medium. But another was the growing primacy of television-style content and the decline of text. And the impact of that, he says, is potentially dangerous in a number of ways.
The main reason to be concerned, Derakhshan says, is that television — as sociologist Neil Postman warned in his 1985 book Amusing Ourselves to Death — tends to favor emotion over thought, and that makes it a great tool for demagogues like Donald Trump. What matters isn’t the facts; it’s how the fake facts or misinformation makes you feel.
Like TV, it now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside.
Postman argued that a TV-centered environment is ripe for the spread of disinformation. But disinformation doesn’t necessarily mean fake news, he said. Instead, it means “misleading information — misplaced, irrelevant, fragmented or superficial information — information that creates the illusion of knowing something.”
The social web exacerbates this problem, both because it favors short bursts of content, often without links, and because it is an emotional medium. And algorithm-driven personalization on networks like Facebook increases the filter-bubble effect. Says Derakhshan:
Social media uses algorithms to encourage comfort and complaisance, since its entire business model is built upon maximizing the time users spend inside of it. The outcome is a proliferation of emotions, a radicalization of those emotions, and a fragmented society.
BuzzFeed founder Jonah Peretti, who has spent years studying how content spreads online and why people share some things and not others, says that posts or updates get shared because they trigger a strong emotion in the reader — fear, hate, love, etc. Very few people (apart from journalists, perhaps) share things because they are accurate.
Sociologists note that networks like Facebook work in part by making people feel part of a tribe or group — not just family or friend groups, but ideological groups as well. And that means content gets shared not because it’s true, but because it confirms a person’s membership in a group.
Since they are desperate both for readers/viewers and for advertising revenue, many media outlets have played into this emotion-first environment by relying on clickbait and trumped-up scandals, essentially adopting all the worst aspects of cable television. And that in turn has reduced the amount of trust that would-be readers have in those outlets.
The combination of these factors has arguably helped create the “post-truth” media landscape we currently find ourselves in. And Donald Trump was able to take advantage of that environment more effectively than anyone in recent memory.
The more the media fact-checked Trump’s every claim, the more his fans and supporters dismissed these outlets as biased, and instead focused on the news sources that catered to their existing beliefs. Facebook’s emphasis on friends and family, combined with its preference for emotional content, makes it a powerful engine for confirmation bias.
Is there any way out of this situation? Derakhshan recommends that users try to confuse the Facebook algorithm by liking things they don’t actually like, in an attempt to broaden the range of content they get. And he also wants Facebook — or even third parties with plug-ins — to add new buttons to the social network, such as “agree/disagree” or “trust/suspect.”
In the long run, if we don’t want everything to turn into short-form cable television, perhaps all we can do is try to be a bit more discriminating in our own choices, and hope that others will follow. If enough do so, maybe social networks will have to adapt to those demands and change the way they operate.