AI-generated deepfakes are not the problem

In December, the Financial Times described a video, posted on X in September by BD Politico, a pro-government news site in Bangladesh, in which a news anchor for an outlet called “World News” accused US diplomats of interfering in Bangladesh’s elections; the video was later shown to have been fabricated. According to the FT, it was made with HeyGen, a video generator that can create news-style clips featuring AI-generated avatars for as little as twenty-four dollars a month.

It’s unclear whether this deepfake or any other misinformation—AI-generated or otherwise—had an impact on the Bangladesh election. Prime Minister Sheikh Hasina and her party were re-elected with an overwhelming majority of the seats in parliament, although voter turnout was reported to be lower than in previous elections.

Whether it’s fabricated news clips like the one in Bangladesh, or fake audio clips like the one in January in which a fake Joe Biden told Democrats not to vote, deepfakes and hoaxes continue to draw a lot of attention, as does the use of AI in creating them. But there are good reasons to be skeptical—not just about the number of AI-generated deepfakes, but about the impact they are having on people’s beliefs and voting behavior—and some experts say that focusing on the role of AI is a mistake.

In much of the media coverage of these deepfakes, there’s an undercurrent of fear—in some cases expressed outright, in others implied. The fear seems to be that AI-generated deepfakes and hoaxes are so realistic and convincing (or soon will be) that they will distort the way people think about elections—or just about anything else. But fake photos and videos have been around for a long time, well before AI came along, and it’s not clear that any of them have had much of an impact on public opinion (although they have certainly had an effect on the individuals involved in some cases, such as revenge porn).

Note: this post was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.


Jerry Seinfeld on why standup is popular again

“Stand-up is like you’re a cabinetmaker, and everybody needs a guy who’s good with wood. There’s trees everywhere, but to make a nice table, it’s not so easy. So, the metaphor is that if you have good craft and craftsmanship, you’re kind of impervious to the whims of the industry. Audiences are now flocking to stand-up because it’s something you can’t fake. It’s like platform diving. You could say you’re a platform diver, but in two seconds we can see if you are or you aren’t. That’s what people like about stand-up. They can trust it.”

(via this interview in GQ)

Apple’s censorship of apps in China is just the tip of the iceberg

Last week, the Chinese government ordered Apple to remove several widely used messaging apps—WhatsApp, Threads, Signal, and Telegram—from its app store. According to the Wall Street Journal, these apps have about three billion users globally, and have been downloaded more than a hundred and seventy million times in China since 2017. In a statement, Apple said that it was told to remove the apps because of “national security concerns,” adding that it is “obligated to follow the laws in the countries where we operate, even when we disagree.” Although new downloads are now blocked, some reports said that Chinese users who had already installed the apps were still able to use them, though doing so required a virtual private network, or VPN, to get around the country’s “Great Firewall.”

Beyond Apple’s allusion to “national security,” why exactly the apps were removed is unclear. An anonymous source told the Journal that the Cyberspace Administration of China asked Apple to remove WhatsApp and Threads because both are home to content that includes “problematic mentions” of Xi Jinping, China’s president. The New York Times also quoted a source as saying that the apps were removed because they platformed “inflammatory” content about Xi and violated China’s cybersecurity laws. An Apple spokesperson, however, told the Journal that the apps were not removed because of content about Xi. A spokesperson for the Chinese embassy in the US didn’t say why the apps were targeted, but told the Washington Post that foreign companies must obey Chinese laws aimed at maintaining an “orderly” internet.

Some China experts have their own theories as to why the apps were ordered removed. As the Post noted, the move came just a few days after the US Congress resurrected a bill aimed at forcing ByteDance, the Chinese owner of TikTok, to either sell the app or be banned from the US (the Senate passed the bill on Tuesday, and President Biden signed it into law yesterday)—timing that suggests possible retaliation on China’s part. Dan Wang, a visiting China scholar at Yale Law School, told the Post that the removal of WhatsApp is largely symbolic, since the platform is already banned in China—but that the Chinese government’s playbook is to reply in kind to “every American provocation,” a dynamic that might only accelerate should the US successfully impose its TikTok ban. (I wrote last week about the prospects for this, which depend on more than simply passing legislation.)

Note: this post was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.
