In October of last year, in the middle of the run-up to the presidential election, the New York Post dropped what looked like a bombshell story. It alleged that a laptop belonging to Joe Biden’s son Hunter had been found in a repair shop, and that emails taken from the laptop implicated the Bidens in an influence scheme in Ukraine. The story started to weaken under close scrutiny, however: the owner of the repair shop contradicted himself and referenced conspiracy theories in an interview, the emails made their way to the Post via some questionable sources — former Trump advisor Steve Bannon and Trump lawyer Rudy Giuliani — and the story was co-written by a former producer on Sean Hannity’s Fox News program, in her first published article.
In his Sunday media column for the New York Times, Ben Smith noted that this story was used in a session about misinformation that Harvard’s Shorenstein Center held recently for media executives. Although Twitter and Facebook blocked or restricted the spread of the Biden laptop story out of concern that it might be misinformation, Smith argued the Post report was just “an old-fashioned, politically motivated dirty tricks campaign,” and that describing it as misinformation doesn’t add much to our understanding of it. Misinformation is “a technocratic solution to a problem that’s as much about politics as technology,” he said, and a reporter’s job isn’t to “put neat labels on the news. It’s to report out what’s actually happening, as messy and unsatisfying as that can be.”
In questioning the desire to label things as misinformation, Smith is in sync with some other critics, including BuzzFeed writer Joe Bernstein, who wrote a recent piece for Harper’s magazine (which Smith linked to in his column) about the movement he calls “Big Disinfo.” Its believers, he argues, want users of social-media platforms to think of themselves as gullible rubes who are being manipulated by social targeting and advertising algorithms. The terms misinformation and disinformation, he says, “are used casually and interchangeably to refer to an enormous range of content, ranging from well-worn scams to viral news aggregation.” Bernstein argues that these terms are often just jargon for “things I disagree with.” (I spoke with Bernstein about his piece and some of the conclusions he reached in a discussion on CJR’s Galley platform.)
There’s no question that the terms misinformation and disinformation get thrown around without much attention to whether they fit a given scenario. And Smith is right that stories like the one about Hunter Biden’s alleged laptop fit quite well into the “old-fashioned political dirty tricks” category, which has been with us as long as politics itself. Benjamin Franklin, someone who is as close to a journalistic hero as you can find in the US, invented stories about his political opponents and printed them in a fake newspaper. But as widely shared as those stories might have been, they didn’t get instantaneously transmitted to billions of people via an algorithm. Until the internet came along, disinformation was artisanal, hand-crafted by people like Franklin — now it is mass-produced and distributed by Russian troll farms like the Internet Research Agency.
The phrase “political dirty tricks” also doesn’t begin to describe the impact that disinformation about the Rohingya people in Myanmar — amplified by Facebook’s algorithms — had on the genocide there, or the impact it has had in countries like Brazil, where president Jair Bolsonaro and his supporters mobilized armies of disinfo spreaders. In India, disinformation spread via Facebook-owned WhatsApp has reportedly led to dozens of deaths. Smith seems to be arguing that all we really need to fight misinformation is good old-fashioned reporting, and he may be right. But there aren’t enough reporters in the world to check all the misinformation that flies through social media every day. Even Facebook can’t keep up, and it has 15,000 moderators.
It’s true that there are deep-seated social and political reasons why people invent and share disinformation, and therefore technological solutions will never be enough to solve that problem. But it seems naive to suggest that disinformation doesn’t have a technological aspect to it, or that technology isn’t going to have to play a role in whatever solution we as a society come up with. Tracing the interconnections between, and the behavior of, people who understand and use these technologies for nefarious purposes is a complicated process, which is why it’s useful to have help from experts like Joan Donovan, who runs the Technology and Social Change project at Harvard’s Shorenstein Center, and led the misinformation program Smith mentions in his column.
Here’s more on misinformation:
Incentives: I spoke with Donovan as part of a discussion series on CJR’s Galley platform in 2019, and she talked about why she does the work she does. “I wouldn’t do this research if I did not believe deeply in the right to free and open communication, which includes the right to communicate free from hate speech, harassment, and violence,” Donovan said. Commitments from the social platforms to help stop these problems, she added, don’t really address “the fundamental incentives behind how hate groups are financed and resourced online by having access to payment processing and broadcast technologies.”
Squatting: In a piece published in CJR earlier this year, Donovan and Brandi Collins-Dexter, a colleague at the Shorenstein Center, wrote about some of the tactics that right-wing groups used to spread disinformation about the 1619 Project and critical race theory. “Our research reveals that the popularity of ‘1776’ owes in part to keyword squatting—a tactic by which right-wing media have dominated the keywords ‘1619’ and ‘critical race theory’ and enabled a racialized disinformation campaign, waged by Trump and his acolytes, against Black civil rights gains,” Donovan and Collins-Dexter wrote.
Definition: In July, I wrote about some of the challenges Facebook faces when it comes to defining what qualifies as disinformation, including a flip-flop on whether posting rumors about COVID-19 escaping from a lab would fit that definition. “Not that long ago, this was defined by almost everyone — including Facebook — as disinformation, and sharing theories to that effect would get your account blocked. In recent months, however, experts have started to entertain those theories more than they did in the past” and therefore discussing such a possibility “is no longer seen as a direct ticket to Facebook or Twitter oblivion.”
Other notable stories:
Scott Hechinger writes for The Nation about what he calls “a massive fail on crime reporting” by the New York Times and NPR, related to stories both outlets published about data showing a significant rise in homicides in 2020. “I write this not to attack the Times or NPR or the reporters of these stories, nor to take away or distract from the very real and disturbing tragedy of every single one of these murders,” Hechinger wrote. His intention, he said, was to “call attention to an insidious and historically rooted contributor to the system of policing and prison in our country: a pro-police worldview deeply ingrained in journalism.”
Julie K. Brown, the Miami Herald investigative reporter who helped break the story about Jeffrey Epstein, the former billionaire, and his abuse of under-age girls, argued on Twitter that the trial of Ghislaine Maxwell — Epstein’s alleged associate, whom prosecutors say procured under-age girls for him — shows why cameras should be permitted in federal courtrooms. “Perhaps… it’s time to let cameras into federal courtrooms,” Brown wrote. “There is no better example of why this is important than in the Epstein/Maxwell case, where important information has never seen the light of day.”
Jack Dorsey, Twitter’s co-founder and CEO, announced Monday that he is stepping down and will be replaced by Parag Agrawal, the company’s chief technology officer. Dorsey’s exit “marks a significant shift at the company,” the New York Times reported, saying the company “has navigated years of pressure from investors who thought it did not make enough money and criticism from Washington, particularly from Republican lawmakers who have complained Twitter has helped stifle conservative voices in social media.”
Chris Cuomo, the CNN host, “used his sources in the media world to seek information on women who accused his brother Andrew Cuomo, then the governor of New York, of sexual harassment,” according to a report by CNBC that was based on documents released Monday by the New York Attorney General’s office. “While Chris Cuomo has previously acknowledged advising his brother and his team on the response to the scandals,” CNBC reports, “the records show that his role in helping the then-governor was much larger and more intimate than previously known.” According to the documents, Chris Cuomo dictated statements for his brother, the then-governor, to use.
Behind the external success at Politico “lies a series of burgeoning newsroom conflicts,” the Daily Beast reports. “From personnel issues, including complaints about internal ‘woke police,’ to a divisive unionization drive, to increasing competition in the profitable D.C. newsletter space, tensions appear to be growing within Politico,” the report states. The Daily Beast says it spoke with 22 current and former staffers for its story, and that most of the conflicts have to do with Playbook, the company’s high-profile newsletter. Politico was recently acquired by Axel Springer, a German media giant, for more than $1 billion.
Greg Sargent, a Washington Post columnist, argues that journalists have to change the way they cover politics if Donald Trump decides to run again in 2024, and says he agrees with a Twitter thread posted by Jay Rosen, a journalism professor at New York University, that argues for a different way of covering a fundamentally anti-democratic candidate. Sargent suggests a number of principles, including: “When bad actors manufacture an issue, it isn’t necessarily news. Sometimes news organizations amplify political attacks by treating them as inherently newsworthy.” CJR editor and publisher Kyle Pope also wrote about how the media needs to change the way it reports on Trump.
Ariana Pekary, CJR’s public editor for CNN, writes about how the network’s “exaggerated tone and graphic content increasingly pushes it into the realm of tabloid-like material” as it tries to boost its viewership numbers, including using paparazzi video and photos for lurid stories like the accidental shooting of a film crew member by actor Alec Baldwin. “That is not ethical journalism,” Pekary writes. “TV news producers didn’t stop to think about the lives that are at stake in their race to get their video on the air. What’s more, by airing that material, reputable outlets like CNN encourage and enable similar tactics in the future.”
An episode of “The Simpsons” that ridicules Chinese government censorship appears to have been censored on Disney’s newly launched Disney+ streaming service in Hong Kong, according to a report from the New York Times. The episode was critical of former Chinese leader Mao Zedong, as well as the government’s efforts to suppress any memory or evidence of the 1989 Tiananmen Square massacre. Episodes of the show are available on Disney+, which made its debut in Hong Kong this month, the Times reports, “but in season 16, the archive skips directly from episode 11 to episode 13, omitting episode 12, ‘Goo Goo Gai Pan,’ in which the Simpson family travels to Beijing.”