Last week, fake pornographic images of singer Taylor Swift started spreading across X (formerly known as Twitter). Swift fans quickly swarmed the platform, calling out the images as fakes generated by AI software, and demanding that X remove them and block the accounts sharing them. According to a number of reports, the platform removed some of the images and the accounts that posted them, but not before certain photos had been viewed millions of times, and images continued to circulate across the service even after the bans were implemented. X then blocked the term “Taylor Swift” from its search engine, so that trying to search for the singer produced an error telling users that “something went wrong.” Despite this attempt to block people from seeing the content, reporters for The Verge found that it was relatively easy to get around the search block and find the fake images anyway.
Some observers noted that X’s inability to stop the proliferation of Swift porn was likely caused in part by Elon Musk’s dismantling of the company’s trust and safety team, most of whom were fired after he acquired Twitter in 2022. In the wake of the Taylor Swift controversy, Joe Benarroch, head of business operations at X, told Bloomberg that the company is planning a new “trust and safety center of excellence” in Texas to help enforce its content moderation rules, and that X intends to hire a hundred full-time moderators. Bloomberg also noted that the announcement came just days before executives from X and the other major social platforms and services are set to appear before the Senate Judiciary Committee for a hearing on child safety online.
On Monday, X restored the ability to search for Taylor Swift, but said in a statement that it would “continue to be vigilant” in removing similar AI-generated nonconsensual images. (According to a report from The Verge, some of the original Swift images were seen forty-five million times before they were removed.) The White House even weighed in on the controversy: Karine Jean-Pierre, the White House press secretary, told ABC News that the Biden administration was “alarmed by the reports,” and that while social media companies are entitled to make their own content decisions, the White House believes it has a role in preventing “the spread of misinformation, and non-consensual, intimate imagery of real people.”
Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.
According to 404 Media, the Taylor Swift images were generated by Designer, an AI text-to-image tool that is owned by Microsoft, and then traded on 4chan, a somewhat lawless online community, as well as through a private channel on Telegram, an encrypted chat app based in Dubai. On Monday, Microsoft announced that it had “introduced more protections” in its software to make generating such images more difficult. However, 404 Media noted that the Telegram channel where the images appeared is still sharing AI-generated images of real people produced with other tools, and that it is quite easy to download an AI software model from a site called Civitai and run it on a home PC to generate pornographic imagery of celebrities.
In order to get Designer and other tools to generate such photos, all users have to do is describe sexual acts without using sexual terms, instead referring to positions, objects, and composition. Other AI-powered engines, many of which are available online for free, offer to take publicly available photos of celebrities (or anyone, for that matter) and generate nudes by removing their clothing. 404 Media also noted that since it first started writing about deepfakes in 2017, when a fake sex video of actress Gal Gadot circulated on social media, Taylor Swift has been a prime target for people using the technology to generate non-consensual pornography; she was one of the first celebrities targeted by DeepNude, an app that generated nude images of women and was taken down after 404 Media published an investigative report on it.
In her briefing about the Swift images, Jean-Pierre said that the White House believes that Congress “should take legislative action” to prevent nonconsensual pornography created by AI. Joe Morelle, a Democratic New York congressman, is trying to do just that: he used the Swift controversy to promote a bill called the Preventing Deepfakes of Intimate Images Act, which would criminalize the non-consensual sharing of digitally altered images. Under the bill, anyone sharing deepfake pornography without an individual’s consent would face damages of up to one hundred and fifty thousand dollars and up to ten years in prison. Morelle first introduced the bill in December 2022, but it failed to pass that year or in 2023; he reintroduced it after gaining some support during a House Oversight subcommittee hearing on deepfakes last November.
Nora Benavidez, senior counsel at Free Press, noted on X that some of these laws would likely fail a First Amendment challenge because they could penalize “a wide array of legitimate speech,” including political commentary and satire, and in some cases would breach the First Amendment rights of the platforms to moderate content. A report from the Center for News Technology and Innovation found that laws targeting the broad category of “fake news” have increased significantly over the past few years, particularly since COVID-19, and that while most are nominally aimed at curbing disinformation, the majority would have the effect of weakening the protection of an independent press and reducing public access to information.
Pornographic deepfakes of celebrities may be the most prominent category of AI-generated fakery, but political content is not far behind. Some voters in New Hampshire recently received an AI-generated robocall imitating President Joe Biden, telling them not to vote in the state’s primary election. According to Wired, it’s not clear who created the robo-fake, but two separate teams of audio experts told the magazine that it was likely made using technology from ElevenLabs, a startup whose software allows almost anyone’s voice to be duplicated. The company markets its tools to video game and audiobook creators, and Wired reports that it is valued at more than a billion dollars. The company’s safety policy says cloning someone’s voice without permission is acceptable when it’s for “political speech contributing to public debates.”
In some cases, political campaigns are using artificial intelligence to generate their own content: OpenAI, which created ChatGPT, the popular AI text engine, recently banned a developer who created a “bot” that mimicked the conversational style of Dean Phillips, a Democratic presidential candidate. The bot was created by an AI startup called Delphi, in response to a request from a couple of Silicon Valley entrepreneurs who supported Phillips’ run for president. Although the bot came with a disclaimer saying it was powered by AI, and users had to agree before using it, OpenAI’s terms of service ban the use of ChatGPT in connection with a political campaign.
Brandy Zadrozny, a reporter for NBC News, wrote recently that disinformation poses an unprecedented threat in 2024, and that the US is “less ready than ever.” Claire Wardle, co-director of Brown University’s Information Futures Lab, which studies misinformation and elections, said that despite the similarities to the 2020 election, with largely the same candidates and parties involved, the current situation feels very different because of a combination of the COVID pandemic, the attack on Congress on January 6, and what Wardle called “a hardening of belief” that the election was stolen. Zadrozny argues that while research shows disinformation has little immediate effect on voting choices, it can shape how people make up their minds about issues and “provide false evidence for claims with conclusions that threaten democracy.” A World Economic Forum survey named misinformation and disinformation from AI as the top global risk over the next two years, ahead of climate change and war.
Beyond the AI technology itself, researchers and other experts are concerned about a lack of transparency and cooperation among academics working on these issues, the result of a sustained campaign by certain members of Congress accusing the government, tech platforms, and researchers of colluding to censor right-wing content under the guise of fighting disinformation (something I wrote about for CJR last year). According to Zadrozny, some researchers say these partisan campaigns, which have included threats of lawsuits and other actions, have had a “chilling effect” on new research going into 2024. And that’s on top of the cutbacks that many platforms have made to their disinformation teams. So there may be more AI-powered disinformation on the horizon, but those fighting it may be even less prepared.