
Until a couple of years ago, you couldn’t turn around without running into an academic article, university department, or even an entire nonprofit organization devoted to the evils of misinformation and its more sinister cousin, disinformation. Over the past decade or so, Facebook and YouTube have both been accused of distributing massive quantities of both misinformation (accidentally false facts) and disinformation (deliberately false facts), which some say have played a key role in a host of different problems, from the Rohingya genocide in Myanmar and the election of Donald Trump to anti-vaccine hysteria. YouTube in particular has been accused of “radicalizing” the vulnerable by sending them down disinfo “rabbit holes” about everything from 9/11 to the pyramids. And Facebook was widely criticized for saying in 2020 that it didn’t want to become an “arbiter of truth” by taking down posts with misinformation.
The amount of research in this area has declined of late, in part because of the chilling effect from lawsuits and threats issued by members of the Trump administration, who see the topic of disinformation as a cover for censorship of conservative views. Academic entities — including the Stanford Internet Observatory — have either been shut down or dramatically downsized as a result. This is obviously bad, but I think in some ways it represents the chickens coming home to roost, after years of focusing on the sources of disinformation rather than trying to think about the root cause — in other words, seeing it primarily as a supply issue rather than a demand issue. That framing has everyone spending most of their time beating up on suppliers like X and Facebook and YouTube, which does nothing to solve the underlying problems.
As Trump and his right-wing acolytes gained traction, they reversed the polarity on the disinformation debate: instead of a well-intentioned attempt to convince Facebook and X and other platforms that they should care about the spread of false information about important topics like COVID, the debate became about a “woke” mob — including the Biden administration — that was forcing Facebook and the other platforms to censor free speech (I’ve written about the lawsuits launched by the Trump government and Republicans in Congress over the First Amendment implications of what is known as “jawboning,” or attempts by officials to influence the decisions made by platforms). And once Trump was elected, companies like Meta and Google showed that they were more than happy to jump on that bandwagon and apologize for their prior behavior.
Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.
I wrote recently for The Torment Nexus about the rise of dedicated fact-checking entities, which paralleled the growing concern about misinformation and disinformation leading up to and following the 2016 election. The question I asked was “Does fact-checking even work?” and by work I mean: Can it actually change people’s minds? Scientific research on this question shows that for the most part fact-checking does not do this. People believe and say and share things for a host of reasons, but a critical appraisal of all the relevant facts is pretty low on the list. This is not to say that checking facts has no impact, or that we shouldn’t do it — just that we shouldn’t get our hopes up too high about its ability to counter people’s desire to believe certain kinds of misinformation.
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.
Why do people share false information?

This is just one of the problems with mis- or disinformation: Since believing it is rarely based on the facts, correcting those facts doesn’t really do much to counteract it. Why do people believe and say and share things that aren’t true? In some cases, they may not know the actual facts, and these are the easiest kinds to counteract. But in plenty of cases — including some of the most powerful kinds of mis- and disinformation — people believe or say or share this kind of content because they want to believe it, either because they see some personal benefit, or (more likely) because it conforms to some pre-existing belief, and that belief makes them a) angry; b) sad; c) happy; or d) feel smart — in other words, it makes them feel that they know something that other people don’t.
In a piece I wrote for the Columbia Journalism Review — where I was the chief digital writer — I talked about how disinformation works, and also how politically-motivated campaigns work. It’s not as simple as “Russian troll farm creates fake posts on Facebook and then people vote for Trump.” I wish it were that simple (and I’m sure the Russians do too) because it would make the solutions easier. In some cases, the disinformation has no specific political purpose at all — it is simply designed to cause information chaos, or to get people to mistrust the mainstream news they are getting from traditional sources. But as Carl Miller of the Centre for the Analysis of Social Media in the UK explained, it is rarely just a stream of fake facts designed to persuade:
Many of us have “a fairly naive idea about how influence operations actually work,” Miller said. People may imagine that bad actors will spread convincing yet untrue images about the world to get them to change their minds, but in reality, influence operations are primarily designed to “agree with people’s worldviews, flatter them, confirm them, and then try to harness that.” That’s why, according to Renée DiResta, the most common type of fake account on X is what is known as a “reply guy” — a persona with no thoughts or opinions that simply shows up to echo a post. Such accounts can create a “majority illusion,” DiResta explained, giving the impression that a certain view is more common than it really is.
In a piece for Harper’s about what he called Big Disinfo, Joe Bernstein wrote that the most comprehensive survey of the field to date — a 2018 scientific literature review titled Social Media, Political Polarization, and Political Disinformation — pointed out that disinformation research has “mostly failed to explain why opinions change, lacks solid data on the prevalence and reach of disinformation, and declines to establish common definitions for the most important terms in the field, including disinformation, misinformation, online propaganda, hyperpartisan news, fake news, clickbait, rumors, and conspiracy theories.” There is a sense, Bernstein goes on to say, that “no two people who research disinformation are talking about quite the same thing.”
How do you correct a belief?

In some cases, incorrect beliefs don’t even start with false facts, but instead an incorrect interpretation of accurate ones. Adam Kucharski wrote in The Guardian recently that a study last year found that only 0.3% of the vaccine-related links that were viewed on Facebook in 2021 were flagged as false, and “the posts that had the biggest overall impact on vaccine confidence were factually accurate, but open to misinterpretation.” In talking to conspiracy theorists, he noted, one of the surprising things was “how much of the evidence they have to hand is technically true. In other words, it’s not always the underlying facts that are false, but the beliefs that have been derived from them.” And checking facts is child’s play compared to correcting someone’s beliefs.
Speaking of beliefs, many people seem to believe that misinformation or disinformation can have a significant impact on elections, but as Cambridge researcher Magda Osman pointed out in a piece for The Conversation, most academic studies don’t support this conclusion. A study published in 2023 looked at the role of misinformation in the Italian general elections in 2013 and 2018, and found that it had only a small effect on voting patterns, and mostly confirmed voters’ intentions. Similar studies of the effect of Twitter misinformation on the 2016 election in the US also showed only a small effect. Osman notes that other studies have looked at the potential influence of disinformation on elections in the Czech Republic in 2021, Kenya in 2017, South Korea in 2017, Indonesia in 2019, Malaysia in 2018, and the Philippines in 2022, and found that “it is hard to establish a reliable causal influence of fake news on voting.”
To get back to the supply vs. demand problem, a report from the Aspen Institute’s Commission on Information Disorder made what I think is a strong point, namely that mis- and disinformation are not the root cause of society’s ills but rather “expose society’s failures to overcome systemic problems, such as income inequality, racism, and corruption.” Tom Rosenstiel of the University of Maryland has said that misinformation is “not like plumbing, a problem you fix. It is a social condition, like crime.” Deutsche Welle calls disinformation a “wicked problem,” meaning a problem that can’t be clearly defined or definitively solved due to its interrelated or interdependent factors. And in a 2022 piece for the Carnegie Endowment, Gavin Wilde said disinformation is “deeply entangled with other systemic issues, where causal relationships are poorly understood, and where interventions to correct one harmful aspect create unwanted ripple effects.”
As Wilde notes, there are many tools that can be applied to the supply side of false information — tools to quantify it, to measure the speed with which it spreads, and so on. But as they “encounter the demand-side — the behavioral economics, political psychology, cognitive science, and other drivers of human susceptibility to untruth — prevailing notions about countering disinformation tend to lose steam.” Policymakers often have reductive solutions, he says, and these often wind up “colliding with systems that are complex by nature.” Bellingcat founder Eliot Higgins says the biggest failure in countering disinformation is the idea that “it’s the result of outside actors influencing communities, when it’s really about the communities that form organically.” It’s easier to blame Russia, he said, than to “address the fundamental social issues that lead to this, especially when a lot of it is caused by real betrayals of the public trust.”
As I tried to point out in my CJR piece, the most difficult part about fighting disinformation is that it’s not really about facts, or the truth, or anything like that. As Wilde points out, researchers at Cambridge University recently concluded that in many cases, people find that ignorance and self-deception have what psychologists and sociologists like to call “greater subjective utility than an accurate understanding of the world.” In other words, for a variety of reasons, people often prefer things that aren’t true — even if they are presented with the actual facts. This is fairly depressing, but it suggests that focusing solely on the supply of inaccurate information is a mistake, and that what people describe as a marketplace of ideas is often just a “market for rationalizations, a social structure in which agents compete to produce justifications of widely desired beliefs in exchange for money and social rewards such as attention and status.”
Got any thoughts or comments? Feel free to leave them here, post them on Substack or on my website, or reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.