By now, most of us are probably familiar with the idea that large numbers of fake and automated Twitter and Facebook accounts, many of them run by trolls linked to the Russian government, created and amplified misinformation in an attempt to interfere with the 2016 election. But this wasn’t just a one-off incident—trolls of all kinds continue to use bots to try to influence public opinion in a variety of ways.
To take one of the most recent examples, there is some evidence that automated Twitter accounts have been distributing and promoting controversial race-related content during the gubernatorial race currently underway in Virginia. According to a study by Discourse Intelligence, whose work was financed by the National Education Association, more than a dozen partially or fully automated accounts were involved.
The activity relates to a video advertisement produced by the Latino Victory Fund, which shows a child having a nightmare in which a supporter of Republican candidate Ed Gillespie chases immigrant children in a pickup truck that is decorated with a Confederate flag. The study said the accounts had the potential to reach over 650,000 people.
One of the biggest problems with this kind of misinformation, from a media point of view, is that because of the way the media industry functions now—and particularly the focus on traffic-generating clickbait and other revenue-driven behavior—if the message being promoted by fake and automated accounts becomes loud or persistent enough, it is often picked up by traditional media outlets, which can exacerbate the problem by giving it legitimacy.
In one prominent case, a fake and largely automated Twitter account posing as Jenna Abrams, a Trump-loving young woman who did not actually exist, was widely quoted not just on right-wing news sites such as Breitbart or on conservative-leaning networks like Fox News, but in plenty of other places as well, including USA Today and even the Washington Post. The account was created by a Russian “troll factory.”
In each of these cases, the life cycle of such misinformation reinforces just how fragmented and chaotic the media landscape has become: Misinformation from notorious troll playgrounds like 4chan or Reddit makes its way to Twitter and/or Facebook, gets promoted there by both automated accounts and unwitting accomplices, and then gets highlighted on news channels and websites.
Mainstream media outlets like Fox News, for example, helped promote the idea that “anti-fa” or anti-fascist groups were planning a weekend uprising in an attempt to overthrow the US government, an idea that got traction initially on Reddit and 4chan and appears to have been created by alt-right and fake news sites such as InfoWars.
After the Texas church shooting over the weekend, tweets from alt-right personality Mike Cernovich—who was also instrumental in promoting the so-called “Pizzagate” conspiracy theory that went viral during the 2016 election—were highlighted in the Twitter “carousel” that Google displays at the top of its search results. The tweets contained misinformation about the alleged shooter’s background, including claims that he was a member of an “anti-fa” group and that he had recently converted to Islam.
Google has come under fire—and deservedly so—for a number of such cases, including one in which a misleading report from 4chan appeared at the top of search results for information on the mass shooting in Las Vegas. The company apologized, and senior executives have said privately that they are trying hard to avoid a repeat, but the misinformation that surfaced in tweets about the Texas shooting shows there is still much work to be done.
The search giant got off relatively easily at the recent hearings before both the Senate and House intelligence committees, with most of the criticism and attention focused on the behavior of social networks like Twitter and Facebook. And while Google might argue that it’s Twitter’s fault if misinformation is promoted by trolls during an election, if those tweets show up prominently in its search results, then it is also Google’s problem.
The giant tech platforms all say that they are doing their best to make headway against misinformation and the fake and automated accounts that spread it, but critics of the companies note that until recently they denied that much of this activity was even occurring at all. Facebook, for example, initially denied that Russian-backed accounts were involved in targeting fake news and divisive ads at US voters.
At the Congressional hearings, representatives for Google, Facebook and Twitter all maintained that fake and automated activity is a relatively small part of what appears on their networks, but some senators were skeptical.
Twitter, for example, reiterated to Congress the same statistic it has used for years, which is that bots and fake accounts represent less than 5% of the total number of users, or about 15 million accounts. But researchers have calculated that as much as 15% of the company’s user base is made up of fake and automated accounts, which would put the total closer to 50 million. And a significant part of their activity appears to be orchestrated.
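A rough back-of-the-envelope sketch shows how those two figures relate. The ~300 million user base below is not a number from either study; it is simply inferred from Twitter’s own “under 5 percent, or about 15 million accounts” claim, and the result lands near the researchers’ figure.

```python
# Back-of-the-envelope arithmetic (assumption: the user base implied by
# Twitter's own "under 5% = ~15 million accounts" statistic).
twitter_claimed_share = 0.05        # Twitter's long-standing estimate
researcher_share = 0.15             # upper bound cited by researchers
claimed_fake_accounts = 15_000_000  # Twitter's figure for bots/fakes

implied_user_base = claimed_fake_accounts / twitter_claimed_share  # ~300M
researcher_estimate = researcher_share * implied_user_base          # ~45M

print(f"Implied user base: {implied_user_base / 1e6:.0f} million")
print(f"Researchers' estimate of fake/automated accounts: {researcher_estimate / 1e6:.0f} million")
```

Running the numbers that way yields roughly 45 million fake and automated accounts, which is consistent with the “closer to 50 million” figure above.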
Whether any of this activity is actually influencing voters in one direction or another is harder to say. Some Russian-influenced activity during the 2016 election appeared to be designed to push voters towards one candidate or another, but much of it—as described in Facebook’s internal security report, released in April—seemed to be designed to just cause general chaos and uncertainty, or to inflame political divisions on issues like race.
As with most things involving this kind of behavior, it’s also difficult (if not impossible) to say exactly how much of this was organized by malicious agents intent on disrupting the election in favor of one candidate or another, and how much of it was simply random bad actors trying to cause trouble.
The Internet Research Agency, a Kremlin-linked entity that employed a “troll army” to promote misleading stories during the election, is the most well-known of the organized actors employing these methods. But there are undoubtedly more, both within and outside Russia, and all three of the tech giants admitted at the Congressional hearings that they have only scratched the surface when it comes to finding or cracking down on this kind of behavior.