The tech platforms have surrendered in the fight over election-related misinformation

Last week YouTube announced that it will no longer remove videos claiming that the 2020 presidential election was fraudulent, stolen, or otherwise illegitimate. The Google-owned video platform wrote in a blog post that it keeps two goals in mind when developing content policies: protecting users, and providing “a home for open discussion and debate.” Finding a balance between the two is difficult when political speech is involved, YouTube added, and in the end the company decided that “the ability to openly debate political ideas, even those that are controversial or based on disproven assumptions, is core to a functioning democratic society.” While removing election-denying content might curb some misinformation, the company said, it could also “curtail political speech without meaningfully reducing the risk of real-world harm.”

YouTube didn’t say in its blog post, or in any of its other public comments about the change, why it chose to make such a policy decision now, especially when the US is heading into another presidential election in which Donald Trump, the man who almost single-handedly made such policies necessary, is a candidate. All the company would say is that it “carefully deliberated” about the change. It’s not the only platform to decide that the misinformation guardrails it erected after the Capitol riots in 2021 are no longer required. Twitter and Meta, Facebook’s parent company, dismantled most of their restrictions related to election denial some time ago.

Twitter announced in January of 2022 that it would no longer take action against false claims about the legitimacy of the election. At the time, a spokesperson told CNN that Twitter had not been enforcing its “civic integrity misleading information” policy, under which users could be suspended or even banned for such claims, since March of 2021. The spokesperson said the policy was no longer being applied to election denial because it was intended to be used during an election or campaign, and Joe Biden had already been president for over a year at that point. Twitter added that it was still enforcing its rules related to misleading information about “when, where, or how to participate in a civic process.”

Note: This was originally published as the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer.

Since Elon Musk took control of Twitter in October of 2022, he has made a number of statements about election-related disinformation, in some cases sharing links containing dubious claims (prominent election deniers have also had their accounts restored). Last month, however, he assured a CNBC reporter that tweets containing false claims about the 2020 election being stolen “would be corrected.” According to some reports, that doesn’t appear to be happening. The same week Musk made his promise, the Associated Press noted that “Twitter posts that amplified those false claims have thousands of shares with no visible enforcement.” The most widely shared included false claims from Republican congresswoman Marjorie Taylor Greene, whose account was suspended after she shared misinformation about the COVID-19 pandemic; Musk reinstated her account last November.

In January, Meta announced that it would reinstate Trump’s Facebook and Instagram accounts, arguing that “the risk to [public safety] has sufficiently receded.” Trump’s team has been posting to Facebook regularly since then, including claims that an investigation into his possession of classified documents is “a continuation of the greatest witch hunt of all time.” Twitter reinstated Trump’s account last November, something Musk had promised to do even before he acquired the company, but Trump has not posted anything since. That could be a result of an agreement to post primarily on Truth Social, the Twitter alternative he cofounded. That deal expires this month, and Trump has suggested that he may move back to Twitter, which was a crucial part of his 2016 campaign.

Other politicians have also been given a get-out-of-suspension-free card by the platforms. Earlier this month, Instagram—which is owned by Meta—reinstated an account belonging to Robert F. Kennedy Jr., an anti-vaccine lobbyist. The account was suspended in February of 2021 for sharing misinformation about COVID-19, including claims about the alleged harms of the vaccines against it. In 2022, the Instagram and Facebook accounts belonging to Kennedy’s nonprofit, Children’s Health Defense, were removed for spreading medical misinformation (both remain suspended as of the publication of this article). Andy Stone, a spokesman for Meta, said in a statement to the Washington Post that Kennedy’s account was restored because “he is now an active candidate for president.”

When Meta announced its decision to reinstate Trump’s account, the move was widely criticized for ignoring the potential risks to democracy. Democratic congressman Adam Schiff said in a tweet that restoring Trump’s ability “to spread his lies and demagoguery” was dangerous, since he had shown “no remorse” for his actions related to the January 6 riots. David Graham wrote in The Atlantic that Meta’s statement about the danger having receded was disingenuous. The 2020 election might be in the past, he said, but “one reason it can’t be relegated to history is that Trump continues to surface it—and the direct harms continue.” The arrest of Solomon Peña, a failed Republican candidate and Trump supporter, for shootings in New Mexico “shows how Trump’s election denial reverberates,” Graham wrote.

In the case of YouTube’s policy change, Imran Ahmed, chief executive officer of the nonprofit Center for Countering Digital Hate, told The Guardian this week that the move is “fundamentally dangerous.” American democracy “cannot survive wave after wave of disinformation that seeks to undermine democracy, consensus and further polarizes the public,” he said. Critics argue that election disinformation can be especially dangerous on YouTube because the service’s recommendation algorithms tend to suggest related videos, which can compound the problem. One study found that users who were already skeptical of election results were shown three times as many election-denial videos as those who were not.

Others have defended the moves by the platforms. Anthony Romero, executive director of the American Civil Liberties Union, said in a statement that social media companies “are central actors when it comes to our collective ability to speak—and hear the speech of others—online” and therefore they should “err on the side of allowing a wide range of political speech, even when it offends.” Similar defenses have been put forward in the case of YouTube’s policy reversal on election denial. Kathleen Hall Jamieson, director of the Annenberg Public Policy Center and cofounder of FactCheck.org, argues that fact-checking is better than blocking speech, and a number of researchers say that the influence of disinformation or “fake news” on the 2016 election and on politics in general has been overstated.

But Casey Newton argued in his Platformer newsletter that while other media outlets such as Fox News have also given credence to or platformed election denial claims, social media can accelerate those claims in ways that traditional media cannot. “It’s one thing to host a single ill-considered town hall, and another to volunteer to serve in perpetuity as a digital library for all the election lies that candidates and their surrogates see fit to upload,” Newton wrote. After January 6, the platforms all took steps to fight election misinformation and promote fact-based news, he said, and then “one by one, platforms got tired of fighting it” and simply gave up. Whether Trump and other candidates take advantage of that defensive gap—and how they do so—remains to be seen.
