Researchers under attack as platforms cut back and AI-powered disinfo grows

Misinformation and disinformation have arguably never been as prominent or widely distributed as they are now, thanks to smartphones, the social web, and apps such as Facebook, X (formerly Twitter), TikTok, and YouTube. Unfortunately, as the US draws closer to a pivotal election in which trustworthy information is likely to be more important than ever, various researchers and academic institutions are scaling back or even canceling their misinformation programs, due to legal threats and government pressure. At the same time, a number of large digital platforms have laid off hundreds or even thousands of the employees who specialized in finding and removing hoaxes and fakes, in some cases leaving only a skeleton staff to handle the problem. And all of this is happening as the quantity of fakes and conspiracy theories is expanding rapidly, thanks to cheap tools powered by artificial intelligence that can generate misinformation at the click of a button. In other words, a perfect storm could be brewing.

Over the weekend, Naomi Nix, Cat Zakrzewski, and Joseph Menn described, in the Washington Post, how academics, universities, and government agencies are paring back or even shutting down research programs designed to help counter the spread of online misinformation, because of what the Post calls a “legal campaign from conservative politicians and activists, who accuse them of colluding with tech companies to censor right-wing views.” This campaign—which the paper says is being led by Jim Jordan, the Republican congressman from Ohio who chairs the House Judiciary Committee, and his co-partisans—has “cast a pall over” programs that study misinformation online, the Post says. Jordan and his colleagues have issued subpoenas demanding that researchers turn over their communications with the government and social-media platforms as part of a congressional probe into alleged collusion between the White House and the platforms.

The potential casualties of this campaign include a project called the Election Integrity Partnership, a consortium of universities and other agencies, led by Stanford and the University of Washington, that has focused on tracking conspiracy theories and hoaxes about voting irregularities. According to the Post, Stanford is questioning whether it can continue participating because of ongoing litigation. (“Since this investigation has cost the university now approaching seven figures [in] legal fees, it’s been pretty successful, I think, in discouraging us from making it worthwhile for us to do a study in 2024,” Alex Stamos, a former Facebook official who founded the Stanford Internet Observatory, said.) Meanwhile, the National Institutes of Health shelved a hundred-and-fifty-million-dollar program aimed at correcting medical misinformation because of legal threats. In July, NIH officials reportedly sent a memo to employees warning them not to flag misleading social-media posts to tech companies.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

As I wrote for CJR last week, contacts between government agencies and the platforms are also at the heart of a lawsuit that is currently working its way through the court system. The case began last year, when the attorneys general of Louisiana and Missouri sued the Biden administration, alleging that its discussions with Meta, X, and YouTube violated the First Amendment by coercing those platforms into removing speech. In July, a federal district court judge in Louisiana ruled in favor of the states and ordered the administration to stop talking with the platforms; he also ordered that government agencies stop working with academics who specialize in disinformation. That order was narrowed by the Fifth Circuit Court of Appeals, and the Biden administration has asked the Supreme Court to hear the case, but the brouhaha appears to have contributed to an atmosphere of fear about the repercussions of misinformation research.

In addition to this case and the House investigation, Stephen Miller, a former Trump adviser who runs a conservative organization called the America First Legal Foundation, is representing the founder of Gateway Pundit, a right-wing website, in a lawsuit alleging that researchers at Stanford and other institutions conspired with the government to restrict speech. And Elon Musk, the owner of X, is suing the Center for Countering Digital Hate, a non-profit advocacy group that Musk alleges has scraped large amounts of data from the platform without proper permission, as part of what Musk calls a conspiracy to convince advertisers not to spend money there. A researcher who asked not to be named told the Post that as a result of such attacks, the whole area of misinformation research “has become radioactive.”

While lawsuits and investigations are chilling research into misinformation, the platforms are simultaneously devoting fewer resources to finding or removing fakes and hoaxes. Earlier this month, Nix and Sarah Ellison wrote in the Post that tech companies including Meta and YouTube are “receding from their role as watchdogs” that protect users from conspiracy theories ahead of the 2024 presidential election, in part because layoffs have “gutted the teams dedicated to promoting accurate information” on such platforms. Peer pressure may have played a role, too: according to the Post, Meta considered banning all political advertising on Facebook last year, but the idea was killed after Musk said he wanted X, a key Meta rival, to become a bastion of free speech. As Casey Newton wrote in his Platformer newsletter in June, one function that Musk seems to have served in the tech ecosystem is to “give cover to other companies seeking to make unpalatable decisions.”

Emily Bell, the director of the Tow Center for Digital Journalism at Columbia University, told the Post that Musk “has taken the bar and put it on the floor” when it comes to trust and safety. Not to be outdone, Meta has reportedly started offering users the ability to opt out of Facebook’s fact-checking program, meaning that false content they encounter would no longer carry a warning label. And YouTube announced in June that it would no longer remove videos claiming that the 2020 presidential election was stolen. The Google-owned video platform wrote in a blog post that while it wants to protect users, it also has a mandate to provide “a home for open discussion and debate.” While removing election-denying content might curb the spread of misinformation, the company said, it could also “curtail political speech without meaningfully reducing the risk of real-world harm.” Citing similar reasons, Meta and other platforms have reinstated Trump’s accounts, which were banned following the January 6 insurrection.

In a report released last week, the Center for Democracy and Technology, a DC-based nonprofit, wrote that the platforms have become less communicative since the 2020 election, especially after the widespread layoffs, and in some cases have loosened safeguards against election misinformation to such an extent that they have “essentially capitulate[d] on the issue.” At Meta, for example, the Center said interviews with researchers indicated that Mark Zuckerberg, the CEO, at some point “stopped considering election integrity a top priority and stopped meeting with the elections team.” The New York Times reported in early 2023 that cuts of more than twelve thousand staff at Alphabet, Google’s parent company, meant that only a single person at YouTube was in charge of misinformation policy worldwide.

While all this has been going on, researchers who specialize in artificial intelligence say that the ubiquity of such tools threatens to increase the supply of misinformation dramatically. At least half a dozen online services using variations on software from OpenAI or open-source equivalents can produce convincing fake text, audio, and even video in a matter of minutes, including so-called “deepfakes” that mimic well-known public figures. And this kind of content is cheap to produce: last month, Wired talked to an engineer who built an AI-powered disinformation engine for four hundred dollars. 

Earlier this month, the BBC wrote about how YouTube channels that use AI to make videos containing fake content are being recommended to children as “educational content.” The broadcaster found more than fifty channels spreading disinformation, including claims that pyramids can generate electricity and that aliens exist. Sara Morrison, of Vox, has written about how “unbelievably realistic fake images could take over the internet” because “AI image generators like DALL-E and Midjourney are getting better and better at fooling us.” When Trump was charged in New York earlier this year, fake photos showing his arrest went viral; Trump himself shared an AI-generated image. (Ironically, some of the fake pictures were created by Eliot Higgins, the founder of the investigative journalism outfit Bellingcat, as a warning that such images are easy to create.) Bell wrote for the Guardian that “ChatGPT could be disastrous for truth in journalism” and create a “fake news frenzy.” Sam Gregory, program director of Witness, a human rights group with expertise in deepfakes, told Fast Company of an emerging combined risk of “deepfakes, virtual avatars, and automated speech generation,” which could produce large quantities of fake information quickly. The list goes on.

It should be noted that not everyone is as concerned about misinformation (or AI, for that matter) as these comments might suggest; in January, researchers from Sciences Po, a university in Paris, published a study arguing that the problem is often overstated. (“Falsehoods do not spread faster than the truth,” they wrote, adding that “sheer volume of engagement should not be conflated with belief.”) And content moderation—and the government’s role in it, in particular—raises some legitimately thorny issues around freedom of speech. But misinformation is a real problem, even if its extent is debatable, and hard questions are no excuse for political intimidation. We don’t want a world in which the people best equipped to fight misinformation, and to answer those questions, have either lost their jobs or are too scared to speak out for fear of a lawsuit.
