AI-generated deepfakes are not the problem

In December, the Financial Times reported on a video posted to X in September by BD Politico, a pro-government news site in Bangladesh, in which a news anchor for something called “World News” accused US diplomats of interfering in Bangladesh’s elections; the video was later shown to have been fabricated. According to the FT, it was made using HeyGen, a video generator that can create news-style clips featuring AI-generated avatars for as little as twenty-four dollars a month.

It’s unclear whether this deepfake or any other misinformation—AI-generated or otherwise—had an impact on the Bangladesh election. Prime Minister Sheikh Hasina and her party were re-elected and won an overwhelming majority of the seats in parliament, although voter turnout was reported to be lower than in previous elections.

Whether it’s fabricated news clips like the one in Bangladesh or fake audio clips like the one in January in which a fake Joe Biden told Democrats not to vote, deepfakes and hoaxes continue to draw a lot of attention, as does the use of AI in creating them. But there are good reasons to be skeptical—not just about the volume of AI-generated deepfakes, but about the impact they are having on people’s beliefs, voting behavior, and so on—and some experts say that focusing on the role of AI is a mistake.

In much of the media coverage of these deepfakes, there’s an undercurrent of fear, sometimes expressed outright and sometimes implied. The fear seems to be that AI-generated deepfakes and hoaxes are so realistic and convincing (or soon will be) that they will distort the way people think about elections—or just about anything else. But fake photos and videos have been around since long before AI came along, and it’s not clear that any of them have had much of an impact (although they have had an effect on the individuals involved in some cases, such as revenge porn).

Note: this post was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

Experts like Adam Thierer, a senior research fellow at George Mason University, argue that some of our fears about this kind of technology making things dramatically worse amount to a “techno-panic,” in the same way that parents used to fear that video games would turn their children into degenerates, or that some now believe smartphones are making teenagers depressed. Fears about where AI research could lead—including the possibility that it might become smarter than us and decide to destroy the human race—could also be feeding the concern, along with broader skepticism about technology companies and whether they have our wellbeing in mind.

Renee DiResta of the Stanford Internet Observatory told me there is no question that ubiquitous and cheap AI tools have made it easier than ever to create misinformation, whether it’s photos or video or audio, although whether this has actually increased the overall supply of misinformation is the subject of some debate even among experts. In that sense, DiResta says, AI definitely “increases the scale of the problem, and decreases the cost, and therefore increases the number of actors” who will try to create this kind of disinformation for a variety of reasons. But she and other disinformation experts don’t feel that scale or volume is the main problem we should be focused on.

Carl Miller, research director at the Centre for the Analysis of Social Media at Demos, a UK political think tank, told me that for the most part, there hasn’t been an explosion of AI-generated fakes trying to change people’s political views. And that’s because most people have “a fairly naive idea about how influence operations actually work,” he said. Many people imagine that bad actors will spread “convincing yet untrue images about the world to get them to change their minds,” Miller said, but in reality, influence operations don’t lie about the world to get people to change their minds; they “agree with people’s worldviews, flatter them, confirm them, and then try to harness that.”

That’s why, DiResta says, the most common type of AI-generated “chatbot” or fake Twitter account is what is known as a “reply guy”—someone who has no real thoughts or opinions of their own, but merely shows up to agree with someone else’s post. DiResta says that this allows the AI chatbots to create a “majority illusion” in some cases, giving the impression that a certain view is more common than it really is. Modern social media, she said, is a blend of the old broadcast model and the personal gossip networks that people have always belonged to, combining the reach of broadcast and the social power of gossip via sources that are seen as “influencers.”

Because of what DiResta calls the “mechanics of influence,” how realistic a deepfake might be isn’t the most critical part of what makes it convincing. The more important aspects of a given piece of disinformation are who it comes from, and how it makes people feel, and whether this plays into their pre-existing beliefs. Influence of this kind, Miller says, is not about truth or facts, but is far more likely to talk to people “at the level of kinship and belonging and friendship. It’s going to talk to them about meaning in their lives, where they fit in the world. It’s going to confirm the grievances they have [and] it’s all to do with identity and emotion and social links.”

Other experts agree that even with the help of AI, creating high-quality video deepfakes is going to be the exception rather than the norm, for the simple reason that even low-quality imagery and other types of content will often work on the right audience. Henry Ajder, an expert on synthetic media and AI, told The Atlantic that it’s “far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors, than to release an expensive, hard-to-create deepfake, which actually isn’t going to be as good a quality as you had hoped.”

Meta and other platforms have taken action against large networks of fake accounts, some of which appear to come from China and seem to be aimed at influencing people’s feelings about issues. But Miller says this is not what he is worried about with AI. While AI could make it faster and easier to create these kinds of fake networks, a more powerful kind of influence will be largely unseen (and therefore difficult to moderate) because it will take place in private groups and one-on-one chats. Says Miller: “Maybe you’d recruit friendships on Facebook groups, but you could easily move that to direct chat, whether on WhatsApp or Instagram or Signal.”

Some AI experts argue that AI may actually help in the fight against disinformation rather than contribute to it. Yann LeCun, a leading thinker in modern AI and chief AI scientist at Meta, made that argument in Wired recently: five years ago, he says, about a quarter of the hate speech and other content—including disinformation—that Facebook removed was identified by AI, and last year it was closer to 95 percent. But Miller is not as confident that AI will help fix the problems it is creating: given what he has seen in the field so far, he said he has virtually no confidence that “any kind of automated model we can deploy would reliably spot either generated imagery or text.”

In terms of tangible steps that might reduce the volume or impact of AI-generated misinformation, even some of those who are skeptical about the dangers believe that transparency rules around the use of the technology—which Meta and some other platforms have recently instituted—are a good idea: if a political candidate or a supporter uses AI to create a video or audio message, that content must be labeled as such. Given the sheer quantity of content that gets uploaded to Facebook, however—billions of videos and photos every day—this could be difficult to enforce. And of course, it wouldn’t affect the kind of one-to-one influence operations Miller describes.

A related problem is what some call “the liar’s dividend,” in which politicians and other bad actors benefit by claiming that something is a deepfake even when they know it isn’t, gambling on the public’s general mistrust of online content. Miller told me that, in contrast to what seems to be a growing concern that “everyone’s going to spend the next year having their worldview warped and destroyed by this avalanche of fake imagery,” what is more likely is that people’s trust in almost any source of information—apart from a few trusted sources, friends, or conspiracy theorists—will disappear.

The real risk, in other words, is not that Americans will view a growing tsunami of convincing AI-created deepfakes without knowing that they’re fake and therefore be more likely to believe and share them. It’s that they may know (or suspect) the content is fake but won’t care, and will share it anyway because the message is just too funny, or too rage-inducing, not to; and others in their social networks will feel the same way and do likewise. That is fundamentally a human problem rather than a technological one.
