I know that it’s tempting to blame what happened on Tuesday night — the re-election of a former game-show host and inveterate liar with 34 felony convictions and two impeachments as president of the United States — on social media in one form or another. Maybe you think that Musk used Twitter to platform white supremacists and to swing voters toward Trump, or that Facebook promoted Russian troll accounts posting AI-generated deepfakes of Kamala Harris eating cats and dogs, or that TikTok polarized voters using a combination of soft-core porn and Chinese-style indoctrination videos to change minds — and so on.
In the end, that is too simple an explanation, just as it’s too simple to blame the New York Times’ coverage of the race, or to accuse more than half of the American electorate of being too stupid to see Trump for what he really is. They saw it, and they voted for him anyway. That’s the reality.
It’s become accepted wisdom that platforms like Twitter and Facebook and TikTok spread misinformation far and wide, which convinces people that the world is flat or that birds aren’t real or that people are selling babies and shipping them inside pieces of Wayfair furniture. And it’s taken as fact that these tools increase the polarization of society, turning people against each other in a number of ways, including by reinforcing social-media “filter bubbles.” We all know this. And particularly when there is an event like a federal election, concern about both of these factors tends to increase. That’s why we see articles like this one from Wired, which talks about how social platforms have “given up” on things like fact-checking misinformation on their networks.
But is there any proof that social media either convinces people to believe things that aren’t true or increases the levels of polarization around political or social issues? I don’t want to give away the ending of this newsletter, but the short answer to both of those questions is no. While social media may make it easier to spread misinformation farther and faster, it hasn’t really changed human nature itself all that much. In other words, social media is more of a symptom than it is a cause.
Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.
In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus (you can find out more about me and this newsletter — and why I chose to call it that — in this post).
The Russians are coming!
Anyone who followed the 2016 election in the US probably remembers how the topic of misinformation — and its cousin disinformation, which is misinformation that has been created deliberately to mislead — became a kind of frenzy, with everyone looking over their shoulders to see whether Russian disinfo was distorting reality for the purpose of electing Trump as president. The poster child for this phenomenon was the Internet Research Agency, an innocuously named entity that was created and run by a close friend of Russian dictator Vladimir Putin and employed dozens of agents whose sole job was to create disinformation aimed at American social-media users.
One of the first things I did after I joined the Columbia Journalism Review as its chief digital writer in 2017 was to fly to Washington to interview senators and sit in on Congressional hearings into Russian disinformation. As with so many of these government hearings, however, very little of any consequence actually happened; most of the time was taken up by members of Congress showing mockups of Facebook disinformation on giant pieces of posterboard so that they could grandstand for the TV cameras.
Despite a number of articles drawing a direct link between social-media disinformation and the 2016 election, and suggestions from US intelligence sources that the IRA helped get Trump elected, this was a lot of sound and fury, signifying very little (Russian hacking and release of documents and emails is a somewhat different story). A study published in Nature last year looked at data from 1,400 respondents and found that only one percent of Twitter users accounted for 70 percent of the cases of exposure to Russian disinformation. In the end, the study said that it found “no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.”
A study in 2017 tested people’s recall of “fake news” on Facebook, but in addition to showing participants fake stories that had actually circulated, the researchers also created fakes of their own and asked people whether they remembered seeing them. The result? Just as many said they had seen the invented fakes as the “real” fake news stories. As the New York Times put it, this suggests that it’s not so much that misinformation is re-shaping people’s views of the world, but rather that some proportion of social-media users “are willing to believe anything that sounds plausible and fits their preconceptions about the heroes and villains in politics.”
Misinformation doesn’t work the way you think it does
Jennifer Allen, a post-doctoral researcher at the University of Pennsylvania and an expert in digital persuasion and misinformation, told the Reuters Institute for the Study of Journalism recently that we often believe others are far more susceptible to false content than we are (a phenomenon known as the third-person effect), despite evidence showing this isn’t the case. In reality, content designed to influence or persuade has very small effects on people’s political attitudes, voting choices, or behavior. Alex Stamos, former director of the Stanford Internet Observatory, says there has been a “massive overestimation of the capability of mis- and disinformation to change people’s minds.”
Carl Miller, research director at the Centre for the Analysis of Social Media at Demos in the UK, told me earlier this year that when it comes to misinformation, most people have “fairly naive ideas” about how it works. People imagine that bad actors spread convincing yet untrue images of the world in order to get people to change their minds on important political or social topics, Miller said, but in reality, most such influence operations are designed not to spread misinformation but rather to “agree with people’s worldviews, flatter them, confirm them, and then try to harness that.” In other words, misinformation only works if there is an existing belief or tendency to play off, which means that it doesn’t create beliefs so much as confirm them.
Last year, four scientific studies looked at how — or whether — Facebook’s news feed influences the political beliefs or behavior of users, and found little evidence of any impact. One study involved more than twenty thousand users of Facebook and Instagram, and replaced the services’ normal recommendation algorithms with a reverse-chronological feed, in which the most recent posts appear first (this was one of the reforms suggested by Frances Haugen, the whistleblower who leaked hundreds of internal documents that she said showed Facebook was hiding how unhealthy its apps were for users’ mental health). Other papers tested what happened when certain types of content were prevented from going viral, and one looked at the news stories that made it into a user’s feed and correlated that with how liberal or conservative the user was.
Not surprisingly, Meta, Facebook’s parent company, crowed about these results, although some critics — including Haugen — pointed out that all four studies were conducted after the social network had implemented a number of news feed changes aimed at quelling disinformation in the run-up to the 2020 election. David Garcia, a professor at the University of Konstanz in Germany, wrote in Nature that, as significant and broad-reaching as the studies may have been, they didn’t eliminate the possibility that Facebook’s algorithms contribute to political polarization, because the research was done at the individual level and polarization is “a collective phenomenon.”
This is the part I think a lot of people miss. As Casey Newton noted in his Platformer newsletter, the studies were consistent with the idea that Facebook is “only one facet of the broader media ecosystem,” one that includes networks like Fox News and Newsmax and dozens of other outlets. Yochai Benkler — co-director of the Berkman Klein Center for Internet and Society — has argued that the distribution of misinformation and partisan arguments by Fox News and other networks, part of an evolution of conservative media that began with Rush Limbaugh in the early 1990s, played a far bigger role in what happened in 2016 than anything that Twitter or Facebook did. In effect, they are echo chambers that reflect something that has emerged elsewhere.
So why does this myth of fake news on social media swinging elections persist, despite an almost complete lack of evidence to support it? Political scientist Brendan Nyhan’s theory is that it’s a little like the myth that Orson Welles’s radio program “War of the Worlds” caused widespread panic in 1938. The program was likely heard by only a small number of people, and there’s no actual evidence that it caused any kind of panic, yet the myth persists. If you are blaming social media or “disinformation” for what happened in the election, I think you are barking up the wrong tree. At best, social media reflected or amplified what was going on in the “real world.” It didn’t create it.
Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.