Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer
After Donald Trump was elected in 2016, misinformation—and its more toxic cousin, disinformation—stopped being just an academic concept, and started feeling like a social and political emergency. Concerns about Russian trolls meddling in American elections were soon compounded by hoaxes and conspiracy theories involving COVID-19. Even those who could agree on the definition of misinformation, however, debated what to do about it: should Facebook and Twitter be forced to remove “fake news” and disinformation, especially about something as critical as a pandemic? Should they be forced to “deplatform” repeated disinfo spreaders such as Trump and his ilk, so as not to infect others with their dangerous delusions?
After coming under pressure to do so, both from the general public and from President Biden and members of Congress, Facebook and Twitter—and to a lesser extent, YouTube—started actively removing this kind of content. They began by banning the accounts of people such as Trump and Alex Jones, who runs a disinformation operation known as InfoWars, and later started blocking or “down-ranking” COVID-related misinformation that appeared to be deliberately harmful. But was this the right way of handling the problem? Some argue that it is, and that “deplatforming” people like Trump—or even blocking entire platforms, such as the right-wing Twitter clone Parler—works, in the sense that it removes the problem. But not everyone agrees.
The Royal Society, a scientific organization founded in the UK in 1660, recently released a report on the online information environment, which states that “censoring or removing inaccurate, misleading and false content, whether it’s shared unwittingly or deliberately, is not a silver bullet and may undermine the scientific process and public trust.” Frank Kelly, a professor at the University of Cambridge who chaired the report, wrote that the nature of science includes uncertainty, especially when it is trying to deal with an unprecedented medical crisis like the pandemic. “In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself,” he wrote, “but that prodding and testing of received wisdom is integral to the advancement of science, and society.”
Early in 2021, Facebook and other social platforms said they would remove any posts or other content that suggested the virus that causes COVID-19 came from a laboratory, since this was judged to be harmful misinformation. Later that year, however, a number of reputable scientists raised exactly that possibility, and said it couldn’t be ruled out. Facebook and other platforms were forced to reverse their initial policies. Blocking or removing content that is outside the scientific consensus may seem like a wise strategy, but it can “hamper the scientific process and force genuinely malicious content underground,” Kelly wrote, in a blog post published in conjunction with the report.
Among the key findings of the report is that while misinformation is commonplace, “the extent of its impact is questionable.” For example, the Society surveyed the British public and said it found that the vast majority of respondents “believe the COVID-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful.” In addition, the report states that the existence of echo chambers “is less widespread than may be commonly assumed, and there is little evidence to support the filter bubble hypothesis (where algorithms cause people to only encounter information that reinforces their own beliefs).”
What should platforms like Facebook do instead of removing misinformation? The report suggests that a more effective approach is to allow it to remain on social platforms with “mitigations to manage its impact,” including demonetizing the content—by disabling ads, for example—or reducing its distribution by preventing it from being recommended by algorithms. The report also suggests that adding fact-checking labels could be helpful, something that both Facebook and Twitter have implemented, although there is still some debate in research circles about whether fact-checks can actually stop people from believing misinformation they find on social media.
Here’s more on misinformation:
Section 230: In November, the Aspen Institute’s Commission on Information Disorder—which included Katie Couric, a former news network anchor, and Prince Harry, the Duke of Sussex—released its final report, which contained 15 recommendations to stamp out misinformation. They include financial support for local journalism and for other sources of accurate information, such as libraries. The commission also recommended changes to Section 230, a clause in the Communications Decency Act that protects platforms such as Facebook and YouTube from legal liability for the content they host. The report recommended that the clause be amended to “withdraw platform immunity for content that is promoted through paid advertising and post promotion.”
Types: The Royal Society report defines several groups that tend to share misinformation, with different motives: “Good Samaritans” often unknowingly produce and share misinformation because they believe it to be true; “Profiteers” distribute it because they generate revenue from the content somehow; “Coordinated Influence Operators” are trying to sway public opinion for a political purpose; and “Attention Hackers” are engaged in what is often called “trolling” — they share misinformation because they enjoy creating chaos and/or gaining attention, whether positive or negative.
Research: One of the report’s recommendations is that social media platforms should “establish ways to allow independent researchers access to data in a privacy compliant and secure manner.” As the Royal Society notes, platforms such as Facebook and Twitter have promised to provide data to researchers, but have dragged their feet in doing so, and in Facebook’s case have disabled the accounts of scientists doing research on the platform. I recently spoke with Nate Persily, a social scientist and former co-chair of Social Science One, a research partnership with Facebook, which he quit after it failed to produce much usable data. He has since helped draft legislation that would force the platforms to provide data to researchers and allow them to access their services.
Other notable stories:
Natalie Mayflower Sours Edwards, a government official who blew the whistle on money laundering at a number of Western banks, was released from federal prison on Monday night, after spending about five months there for disclosing government documents to a journalist, according to BuzzFeed News. Edwards was released about a month earlier than expected, the site reported, and will be on probation for three years. Edwards was a key source for the FinCEN Files, an investigative series produced by more than 100 news organizations working in partnership to uncover financial wrongdoing.
During a Monday briefing with President Joe Biden, Peter Doocy, a White House correspondent for Fox News, asked a question about inflation that Biden responded to sarcastically, before muttering “stupid son-of-a-bitch” into the microphone. Doocy said the president later called him to apologize for his language, the New York Times reported. “He said, ‘It’s nothing personal, pal,’” Doocy told host Sean Hannity. “We were talking about just, kind of, moving forward. And I made sure to tell him that I’m always going to try to ask something different than what everybody else is asking, and he said, ‘You got to.’”
Politico reports that Grid, a news startup that just launched earlier this month, “has a unique origin story, one that involves early ties to a global consulting firm best known for its crisis communications management and lobbying work on behalf of foreign governments, most notably the United Arab Emirates.” The report says that a member of Grid’s board, former CNN journalist John Defterios, is a senior adviser at APCO, a communications firm that has done work for the UAE, and also represents International Media Investments, a UAE-based investment fund that invested in Grid’s initial funding round.
Republican leaders on the House Energy and Commerce Committee have sent a letter to NBC Universal expressing their concern about “the extent of influence the Chinese Communist Party may have over NBCUniversal’s coverage of the games,” according to a report from Axios. “The letter, addressed to NBCUniversal CEO Jeff Shell and NBC Olympics President Gary Zenkel, asks NBC how it plans to use its ‘investment in the Games to shed light on China’s history of human rights abuses.'” Axios said the committee members also want to know whether, as part of NBC’s rights to broadcast the Games, the network is “in any way precluded by the IOC or CCP from coverage that would be critical.”
Substack plans to launch a built-in video player in an effort to draw new creators to its newsletter-publishing platform, a spokesperson confirmed to Axios. “The new native video embed will allow Substack creators to upload or record a video onto a Substack post directly,” Axios reported. “In the past, creators had to embed videos from other sites like YouTube in their newsletters or blog posts. Writers can choose to make videos available to everyone or only to paid subscribers. The videos will be playable within Substack posts online. If a creator wants to include a video in an email, they can embed it as a clickable image.”
The New York Times profiles Fabrizio Romano, an Italian journalist who has become the go-to source for news and information about player transfers in the international soccer market. “A transfer has not happened until it bears Romano’s imprimatur. (‘Here We Go’ is, in some cases, now used as a noun: Correspondents now regularly ask Romano if he is in a position to ‘give the here we go.’)” The Times writes that his power is now so great that he has “made the leap from being merely a reporter covering soccer’s transfer market to something closer to a force within it. And in doing so, he has blurred the line between journalist and influencer, observer and participant.”
Vice reports that Fight Club, the popular 1990s film starring Brad Pitt and Edward Norton, has a very different ending in the version currently streaming on a video service in China. The original movie, based on a book by Chuck Palahniuk, ends with the protagonist watching as a number of buildings explode, signifying that his attack on consumer society has begun. In the Chinese streaming version, however, the movie ends before the buildings explode, and a message on screen says that “the police rapidly figured out the whole plan and arrested all criminals,” and “after the trial, Tyler was sent to lunatic asylum.”