Deepfakes aren’t the real problem

Note: This is something I originally wrote for the New Gatekeepers blog at the Columbia Journalism Review, where I'm the chief digital writer.

When it comes to disinformation, the latest buzzword on everyone's lips is "deepfake," a term for video that has been manipulated using machine-learning software (the word is a combination of "deep learning" and "fake"). Using relatively inexpensive tools, almost anyone can create a video in which a person appears to say or do something they never said or did. In one of the most recent examples, a Slovakian video artist known as Ctrl Shift Face altered a clip of comedian Bill Hader imitating Robert De Niro, so that Hader's face morphs into De Niro's as he does the impression. Another pair of artists created a deepfake of Facebook co-founder and CEO Mark Zuckerberg making sinister comments about his plans for the social network.

Technologists have been warning about the potential dangers of deepfakes for some time now. Nick Diakopoulos, an assistant professor at Northwestern University, wrote a report on the phenomenon last year called Reporting in a Machine Reality, and as the US inches closer to the 2020 election campaign, concerns have continued to grow. The recent release of a doctored video of House Speaker Nancy Pelosi, slowed down to make her appear drunk, also fueled those concerns, although the Pelosi video was what some people have called a "cheapfake" or "shallowfake," since it was made with simple editing tricks rather than machine learning. At a conference in Aspen this week, Mark Zuckerberg defended Facebook's decision not to remove the Pelosi video, although he admitted the company should not have taken so long to add a disclaimer and "down-rank" the video.

Riding a wave of concern about this phenomenon, US legislators say they want to stop deepfakes at the source, so they have introduced the DEEPFAKES Accountability Act (in a classic Congressional move, the word "deepfakes" is capitalized because it is an acronym: the full name of the act is the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act). The bill would make it a crime to create and distribute media that makes it look as though someone said or did something they didn't say or do, unless it includes a digital watermark and a text description stating that it has been modified. The act would also give victims of "synthetic media" the right to sue the creators and "vindicate their reputations."

Mutale Nkonde, a fellow with the Berkman Klein Center at Harvard and an expert in artificial intelligence policy, advised Congress on the Deepfakes Accountability Act and wrote in a post on Medium that the technology "could usher in a time where the most private parts of our lives could be outed through the release of manipulated online content — or even worse, as was the case with Speaker Pelosi, could be invented [out of] whole cloth." In describing how the law came to be, Nkonde says that since repealing Section 230 of the Communications Decency Act (which protects the platforms from liability for third-party content) would be difficult, legislators chose instead to amend the law related to preventing identity theft, "putting the distribution of deepfake content alongside misappropriation of information such as names, addresses, or social security numbers."

Not everyone is enamored of this idea. While the artists who created the Zuckerberg and Hader videos might be willing to add digital watermarks and textual descriptions identifying their creations as fakes, the really bad actors, the ones trying to manipulate public opinion and swing elections, aren't likely to volunteer to do so. And it's not clear how the new law would force them to comply, or make it easier to find them so they could be prosecuted. The Zuckerberg and Hader videos were also clearly created for entertainment purposes. Should every form of entertainment that takes liberties with the truth (in other words, all of them) also carry a watermark and expose its creators to potential criminal penalties? According to the Electronic Frontier Foundation, the bill has some potential First Amendment problems.

Some believe this type of law attacks a symptom rather than a cause, because the real problem is the overall disinformation environment on Facebook and other platforms. "While I understand everyone's desire to protect themselves and one another from deepfakes, it seems to me that writing legislation on these videos without touching the larger issues of disinformation, propaganda, and the social media algorithms that spread them misses the forest for the trees," said Brooke Binkowski, former managing editor of fact-checking site Snopes.com, who now works for a similar site called Truth or Fiction. What's needed, she says, is legislation aimed at all elements of the disinformation ecosystem. "Without that, the tech will continue to grow and evolve and it will be a never-ending game of legislative catch-up."

A number of experts, including disinformation researcher Joan Donovan of Harvard's Shorenstein Center (who recently did an interview on CJR's Galley discussion platform), have pointed out that you don't need sophisticated technology to fool large numbers of people into believing things that aren't true. The conspiracy theorists who peddle the rampant idiocy known as QAnon on Reddit and 4chan, or who create hoaxes such as the Pizzagate conspiracy theory, haven't needed any kind of specialized technology whatsoever. Neither did those who promoted the idea that Barack Obama was born in Kenya. Even the Russian troll armies who spread disinformation to hundreds of millions of Facebook users during the 2016 election needed only a few fake images and some plausible-sounding names.

There are those, including Nieman Lab director Joshua Benton, who don't believe deepfakes are even that big a problem. "Media is wildly overreacting to deepfakes, which will have almost no impact on the 2020 election," Benton said on Twitter after the Pelosi video sparked concern about deepfakes swamping voters with disinformation. Others, including the EFF, argue that existing laws are more than enough to handle deepfakes. In any case, rushing forward with legislation before the scope of the problem is even clear, especially when that legislation has obvious First Amendment issues, doesn't seem wise.
