On Russian bots and Twitter’s influence

In the wake of Donald Trump’s victory in the 2016 presidential election, a virtual cottage industry—or possibly even a real, full-sized industry—emerged, bent on laying the blame for that victory somewhere, and social media was one of the primary targets. The argument, both in Congressional hearings and in academic treatises, was that misinformation and “fake news” spread by Russian trolls helped get Trump elected. More recently, however, research has poked some significant holes in this argument. The latest is a study published in Nature Communications, entitled: “Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior.”

Six researchers from universities in New York, Ireland, Denmark, and Germany co-authored the study. It correlated survey data from about 1,400 respondents with Twitter data and found a number of things, including: 1) exposure to Russian disinformation accounts was heavily concentrated, with only one percent of users accounting for 70 percent of exposures; 2) exposure was concentrated among users who strongly identified as Republicans; and 3) exposure to the Russian influence campaign was vastly eclipsed by content from domestic news media and politicians. In sum, it said: “We find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.”

To some, the study was a vindication of their belief that the anguish over foreign disinformation was a fraud from the beginning, an excuse to force social media to censor information. Glenn Greenwald, a noted Twitter gadfly, said: “Russiagate was – and is – one of the most deranged and unhinged conspiracy theories in modern times. It wasn’t spread by QAnon or 4Chan users but the vast majority of media corporations, ‘scholars,’ think tank frauds, and NYT/NBC’s ‘disinformation units.’” (To which Elon Musk, Twitter’s owner and CEO, responded: “True.”) Others noted that looking to Twitter for foreign influence didn’t make any sense, since Facebook was the primary engine for such things.

According to the study, we should be skeptical about more than just the Russians and Twitter when it comes to influencing behavior. The authors note that election campaigns in general have a poor record of doing so. “The large body of political science research that examines the effects of traditional election campaigns on voting behavior finds little evidence of anything but minimal effects,” they say. However, they note that the Russian bot activity could have created second-order effects. Debate about the 2016 election and whether it was rigged has “engendered mistrust in the electoral system,” they argue. “In a word, Russia’s foreign influence campaign on social media may have had its largest effects by convincing Americans that its campaign was successful.”

Others argue that social media’s influence, even if it exists, is just a small part of a much broader political ecosystem problem. In 2018, several academics including Yochai Benkler of Harvard’s Berkman Klein Center published a book titled: Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. They argued that it’s misleading to try to pinpoint a specific actor or vector such as Twitter or Facebook, or Russian agents, as the cause of the election outcome. As Mike Masnick of Techdirt summarized it: “It’s not that the social media platforms are wholly innocent. But the seeds of the unexpected outcomes in the 2016 U.S. elections were planted decades earlier, with the rise of a right-wing media ecosystem that valued loyalty and confirmation of conservative values and narratives over truth.”

In 2019, I wrote about a study by Brendan Nyhan from the University of Michigan that looked at the effect of misinformation. Nyhan said his data showed that so-called “fake news” reached only a tiny proportion of the population before and during the 2016 election. Nyhan also said at the time that “no credible evidence exists that exposure to fake news changed the outcome of the 2016 election.” In November of 2020, my CJR colleague Jon Allsop noted that Joshua Yaffa, the Moscow correspondent at the New Yorker, also raised the question of whether the threat of Russian disinformation was as dangerous as many seemed to think. Yaffa argued that the online trolling tactics of the Internet Research Agency seemed to be aimed primarily at “scoring points with bosses and paymasters in Russia as much as influencing actual votes.”

Whatever we might think of its conclusions, the Nature Communications study is just a small part of a much broader discussion about how (or whether) social media of any kind, including disinformation, affects our behavior—and what (if anything) social media platforms should do about it. In 2021, former BuzzFeed writer Joe Bernstein wrote a piece for Harper’s magazine about how much of the discourse around misinformation tries to paint users of social-media platforms as gullible rubes who are being manipulated by sophisticated algorithms, and how terms like disinformation “are used to refer to an enormous range of content, ranging from well-worn scams to viral news aggregation.”

Alex Stamos, the former chief security officer at Facebook and now director of the Stanford Internet Observatory, said in a recent interview with Peter Kafka of Vox that he thinks there has been a “massive overestimation of the capability of mis- and disinformation to change people’s minds.” That doesn’t mean disinformation isn’t a problem, Stamos argues, but it does suggest that we need to reframe how we look at the issue — to see it less as something that is done to us and more as a supply-and-demand problem. “We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions,” he said.

Stamos said he believes there is a legitimate complaint behind the so-called Twitter Files—internal documents that Elon Musk has released through journalists like Matt Taibbi and Bari Weiss. Rather than removing content or banning accounts, Stamos said, what Twitter and Facebook and YouTube and other companies should focus on is whether their algorithms are actively making things worse. “The focus should be on the active decisions they make, not on the passive carrying of other people’s speech,” Stamos argues. “So if somebody is into QAnon, you do not recommend to them, ‘Oh, you might want to also storm the Capitol.’ That is very different than going and hunting down every closed group where people are talking about ivermectin.”

If nothing else, the Nature Communications study and experts like Stamos are a useful corrective to what often seems like an unhealthy obsession with Russian bots and “fake news” as the source of every evil. Paying attention to foreign agencies that are meddling in US politics is important, and so is tracking and debunking dangerous online misinformation. But blaming Twitter and Facebook for all of our political and social ills is reductive in the extreme. As the Wall Street Journal put it in an editorial on the study: “Maybe the truth is that Mr. Putin’s trolls are shouting into the hurricane like everybody else.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.
