Note: This is something I originally published in the daily newsletter sent out by the Columbia Journalism Review, where I’m the chief digital writer.
Most people are probably familiar by now with the idea that there are “trolls” on the Internet—thanks in part to events like GamerGate, but also to the rise of Donald Trump, the Troll-in-Chief who occupies the White House. Many trolls have an agenda of some kind, as the infamous Russian Internet Research Agency did, while others just seem to get a kick out of creating chaos. As Alfred said to Bruce Wayne in The Dark Knight, “some men just want to watch the world burn.” Regardless of motive, there are similarities in how trolls work, and in how they capture the attention of regular Internet users—and, in some cases, professional journalists—in order to spread their disinformation far and wide. Creating a taxonomy of those trolling tactics is the aim of a new report published by the digital think tank Data & Society, written by Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Kennedy School, and senior researcher Brian Friedberg.
The report focuses on a subset of online manipulation that Donovan calls “source hacking”: a set of techniques for hiding the sources of problematic information so that it can circulate in mainstream media—an indirect method of targeting journalists by planting false information in places they are likely to encounter it. The report breaks the tactics trolls use into four categories: 1) Viral Sloganeering, repackaging reactionary talking points for social media and press amplification; 2) Leak Forgery, prompting a media spectacle by sharing forged documents; 3) Evidence Collages, compiling information and misinformation from multiple sources into a single, easily shareable document, usually an image; and 4) Keyword Squatting, the strategic domination of keywords and “sock-puppet” accounts in order to misrepresent the behavior of specific groups or individuals.
Donovan and Friedberg use recent case studies to illustrate each of these categories. For example, one of the most successful examples of viral sloganeering was the “Jobs Not Mobs” hashtag from October 2018. The slogan emerged first on Reddit threads, where users came up with visual memes to help the hashtag spread, including video clips of decontextualized riots and migrant caravans. “Easily shareable audiovisual material, alongside the deployment of a hashtag, created opportunities for a swarm of participation, and the slogan quickly grew past its point of origin in far-right online hubs,” the report says. The slogan moved to Twitter and Facebook, where automated or bot-like accounts helped it spread even further, until finally the hashtag was used by the president of the United States in a tweet—the Mount Everest of trolling.
The fact that Donald Trump would willingly repeat a viral slogan engineered by right-wing trolls reinforces just how quickly such campaigns can go mainstream, in ways that can actually affect the national dialogue about important issues. Donovan and her co-author point out that journalists can make this problem worse by reporting on these kinds of campaigns, which in turn spreads the virus to new potential carriers. In many cases, this is done for purely journalistic reasons—because something is seen as legitimate news if the president says it, even if it’s on Twitter—but in other cases, media outlets are happy to use such campaigns as revenue-generating clickbait, or something to fill the 6 o’clock news hole for the local TV broadcast. One of the central points of the Data & Society report is that many professional trolls understand these kinds of motivations, and know exactly which levers to pull to get attention.
In the report’s recommendations, Donovan and Friedberg point out that it is incumbent upon both journalists and social platforms to understand how these viral slogans grow and evolve, in order to determine if the content’s spread is “organic or operational”—in other words, whether it is a natural outgrowth of social activity or a planned campaign. “Journalists must also understand their role in an amplification network and look out for instances where they may unwittingly call attention to a slogan that is popular only within a particular, already highly polarized community online,” the report says. In general, Donovan recommends that media outlets do their best to find corroborating evidence before reporting on social media campaigns, and that newsrooms invest more resources in information security, including creating a specific job that involves verifying chains of evidence connected to social-media campaigns.
Here’s more on disinformation and trolling:
Platform power: Joan Donovan, co-author of the trolling report, spoke with me for an extended interview that ran on CJR’s Galley platform recently. We talked about disinformation tactics, and how journalists ought to respond, but also about the collision between the First Amendment and our traditional notions of free speech, and the power of platforms like Facebook and Google to censor content.
Oxygen for trolls: Data & Society, a think tank run by sociologist and social-media expert danah boyd (who spells her name without upper case letters), also published a report last year called The Oxygen of Amplification that looks at one aspect of the trolling phenomenon, namely the spreading of troll messaging by journalists and regular social-network users. I talked with the report’s author, Whitney Phillips, for a Galley interview earlier this year.
The tipping point: In 2017, Claire Wardle, who runs First Draft News, published one of the first comprehensive reports on how misinformation originates and spreads on social networks. One of its main points was that journalists need to think carefully about when to amplify information and when not to. Wardle says the media should wait until a story hits a “tipping point,” the moment it moves from Reddit threads or 4chan boards into the mainstream.
Other notable stories:
Reporters working for the subscription-only business newsletter The Information got hold of an internal memo that they say contains instructions for the editors Facebook is hiring to curate news headlines for its forthcoming News tab. Editors have been instructed to allow articles that are critical of Facebook to appear, and have also been told not to post a story based on anonymous sources unless they can find two other reputable outlets that have reported the same information.
Emily Tamkin, who acts as public editor for CNN on behalf of CJR, writes about reporter Daniel Dale, the former Toronto Star journalist who has been on a fact-checking mission since Donald Trump was elected. Some people are skeptical of the value of such repeated fact-checking, but Dale tells Tamkin: “My attitude is that it matters when the president is dishonest. And it should be pointed out every time, whether it’s a big lie or a relatively trivial exaggeration.”
The Bureau of Investigative Journalism, a non-profit media outlet in the UK, has hired an “impact editor,” whose job it is to ensure that the organization’s reporting has an impact on the topics and communities it writes about, and to track and record that impact. The Bureau routinely collaborates on investigative reporting projects with major outlets like The New York Times and The Guardian.
Express, a free commuter-focused newspaper that The Washington Post has published for 16 years, is shutting down, the paper itself reported. According to reporter Paul Farhi, “managers of the paper cited its deteriorating financial condition for the decision to cease publishing. Although they declined to cite specific figures, they said the printed paper had recently begun to lose money.”
According to a report from Michael Calderone in Politico, Democratic strategists are upset that fact-checkers like The Washington Post’s Fact-Checker column are taking what they believe to be a partisan approach to their jobs. Among other things, these Democratic critics say fact-checkers are implying that small errors committed by candidates such as Bernie Sanders are somehow equivalent to the massive and repeated falsehoods that come from the president.
The Athletic, a subscription-based sports news service that has been growing rapidly of late, is launching an ad-supported podcast later this month called The Lead, according to Bloomberg. The company is working with Wondery, a podcast production company known for true-crime programs like “Dirty John.” The podcast marks the first major foray into ad-supported content for The Athletic, which otherwise relies on reader subscriptions, and the company says it hopes to replicate the success of the popular New York Times podcast “The Daily.”
Journalism needs to move beyond endless discussions about business models and come up with some radical solutions for how journalism works in a digital age, writes journalist and entrepreneur Christopher Wink in an essay on Medium. “Mostly we have spent the last 10–15 years managing our decline by slightly tweaking centuries-old advertising and subscription revenue models,” he writes, “and then hand-wringing their inevitable failure.”
Four Republican senators sent a letter to Facebook CEO Mark Zuckerberg on Wednesday, criticizing the social-media platform’s recent “fact check” of pro-life organization Live Action, according to a report in The National Review, which obtained a copy of the letter. Senators Josh Hawley, Ted Cruz, Kevin Cramer, and Mike Braun condemned what they called Facebook’s “pattern of censorship.” The social network’s external fact-checkers rated two videos shared by Live Action president Lila Rose as “false,” prevented her from promoting or advertising them, and alerted users who had shared the two videos that they had spread “false news.”
A new study reports that mass shooters seeking notoriety tend to receive more media coverage than their non-fame-seeking counterparts, according to a psychology news site. The study, which has been published in the journal Aggression and Violent Behavior, found that 96 percent of mass killers who were looking for notoriety got at least one mention in The New York Times, while only 74 percent of the non-fame-seeking shooters got one.
A top YouTube toy reviewer is facing accusations of misleading preschoolers into watching ads by not disclosing sponsorship deals, NBC reports. Ryan Kaji is 7 years old, but his Ryan ToysReview channel is one of the most popular YouTube programs, with billions of views and more than 21 million subscribers. It was the top-earning channel in 2018, with ad revenue of $22 million. But the watchdog group Truth in Advertising claims the channel has made much of its profits using “deceptive native advertising,” or product placements that youngsters don’t realize are sales pitches.