Facebook, free speech, and political ads: An interview series

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

A number of Facebook’s recent decisions have fueled a storm of criticism that continues to follow the company, including the decision not to fact-check political advertising and the inclusion of Breitbart News in the company’s new “trusted sources” News tab. These controversies were stoked even further by some of the things CEO Mark Zuckerberg said in his speech at Georgetown University last week, where he tried — mostly unsuccessfully — to portray Facebook as a defender of free speech. CJR thought all of these topics and more were worth discussing with free-speech experts and researchers who focus on the power of platforms like Facebook, so we convened an interview series this week on our Galley discussion platform, featuring guests such as Alex Stamos, former chief security officer of Facebook; veteran tech journalist Kara Swisher; Jillian York of the Electronic Frontier Foundation; Harvard Law professor Jonathan Zittrain; and Stanford researcher Kate Klonick.

Stamos, one of the first to raise the issue of potential Russian government involvement on Facebook’s platform while he was the head of security there, said he had a number of issues with Zuckerberg’s speech, including the fact that he “compressed all of the different products into this one blob he called Facebook. That’s not a useful frame for pretty much any discussion of how to handle speech issues.” Stamos said the News tab is arguably a completely new category of product, a curated and in some cases paid-for selection of media, and that this means the company has much more responsibility when it comes to what appears there. Stamos also said that there are “dozens of Cambridge Analyticas operating today collecting sensitive data on individuals and using it to target ads for political campaigns. They just aren’t dumb enough to get their data through breaking an API agreement with Facebook.”

Ellen Goodman, co-founder of the Rutgers Institute for Information Policy & Law, said that Mark Zuckerberg isn’t the first to struggle with the tension between free speech and democratic discourse: “it’s just that he’s confronting these questions without any connection to press traditions, with only recent acknowledgment that he runs a media company, in the absence of any regulation, and with his hands on personal data and technical affordances that enable microtargeting.” Kate Klonick of Stanford said Zuckerberg spoke glowingly about early First Amendment cases, but got one of the most famous — NYT v. Sullivan — wrong. “The case really stands for the idea of tolerating even untrue speech in order to empower citizens to criticize political figures,” Klonick said. “It is not about privileging political figures’ speech, which of course is exactly what the new Facebook policies do.”

YouTube takedowns are making it hard to document war crimes

Like every other large social platform, YouTube has come under fire for not doing enough to remove videos that contain hate speech and disinformation, and the Google-owned company has said repeatedly that it is trying to get better at doing so. But in some cases, removing videos because they contain graphic imagery of violence can be a bad thing, at least when it comes to documenting war crimes in a country like Syria. That’s the case that Syrian human-rights activist and video archivist Hadi Al Khatib makes in a video that the New York Times published on Wednesday in its Opinion section. Khatib co-produced the clip with Dia Kayyali, who works for Witness, an organization that helps people use digital tools to document human rights violations. In the video, Khatib notes that videos of bombings the Syrian government has carried out on its own people—including attacks with barrel bombs, which Human Rights Watch and other groups consider to be a war crime—are important evidence, but that YouTube has removed more than 200,000 such videos.

“I’m pleading for YouTube and other companies to stop this censorship,” Khatib says in the piece. “All these takedowns amount to erasing history.” Facebook and Twitter have similar policies, and both have removed videos that were flagged as violent or as propaganda, even when those videos contained evidence of government attacks in Syria and elsewhere. The problem, Kayyali says, is that most of the large social platforms use artificial intelligence to detect and remove content, but an automated filter can’t tell the difference between ISIS propaganda and a video documenting government atrocities. Many of the platforms have been placing even more emphasis on automated filters because they are under increasing pressure from governments in the US and elsewhere to remove content more quickly. Facebook CEO Mark Zuckerberg bragged to Congress last year that the company’s automated systems take down more than 90 percent of the terrorism-related content posted to the service before it is ever flagged by a human being.

Khatib runs a project called The Syrian Archive, which has been tracking and preserving as many videos of war crimes in that country as it can. But YouTube’s policies are not making it easy, he says. And user-generated content is a crucial part of the documentation of what is happening in Syria, Khatib notes, because getting access to parts of the country where such attacks are taking place is extremely dangerous, even for experienced aid agencies, journalists, and human-rights organizations. YouTube hasn’t just been removing videos either: Since 2017, it has taken down a number of accounts that were trying to document the Syrian conflict, including pages run by groups such as the Syrian Observatory for Human Rights, the Violation Documentation Center, and the Aleppo Media Center. Khatib says YouTube reinstated some of the videos it took down after he complained earlier this year, but that hundreds of thousands still remain unavailable.

Zuckerberg wants to eat his free-speech cake and have it too

Facebook’s relationship to speech is complicated. The giant social network routinely takes down hate speech provided it meets certain criteria (although critics say it misses a lot more), along with gratuitous nudity and other content that breaches its “community standards.” And it hides or “down-ranks” misinformation, although only in certain categories, such as anti-vaccination campaigns. But it refuses to do anything about obvious disinformation in political content, including political ads, saying it doesn’t want to be an arbiter of truth. One of the most interesting things about Mark Zuckerberg’s speech Thursday at Georgetown University was listening to the Facebook CEO try to justify these conflicting decisions. The speech, which was livestreamed on Facebook and YouTube and published in the Wall Street Journal, was at times a passionate defense of unfettered free speech and the crucial role it played in social movements, from protests against the Vietnam War to the civil-rights era.

If nothing else, Zuckerberg’s emotional investment in this idea came through, despite some awkward phrasing (he wrote the speech himself, and wouldn’t let anyone see or edit it because he wanted to “maximize for sincerity,” according to a Facebook source). Zuckerberg warned about a number of countries that are moving to restrict speech, and even trying to censor speech that occurs elsewhere on the internet, and his voice became almost strident as he talked about the repressive regime in China (a market Facebook has repeatedly tried to enter) and the fact that most of the top internet services used to be American, but now six of the top 10 are Chinese. “While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok mentions of these protests are censored, even in the US,” Zuckerberg said. “Is that the internet we want?”

But the Facebook CEO also defended the network’s decision not to fact-check political ads, despite the fact that the Trump campaign has already used its ad campaigns to circulate lies about Joe Biden and his alleged involvement in corruption in Ukraine. “We don’t fact-check political ads, because we think people should be able to see for themselves what politicians are saying,” Zuckerberg said. “I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy.” The Facebook founder noted that similar ads appear on other services and run on traditional TV networks as well. “I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100 percent true,” Zuckerberg said, despite having just described how the social network routinely takes down or “down-ranks” misinformation of various kinds.

On Facebook, disinformation, and existential threats

There has been a steady stream of Facebook-related news over the past couple of weeks: First, The Verge published transcripts of two hours of leaked audio from a town hall with CEO Mark Zuckerberg. His comments included a reference to Elizabeth Warren and her plans to break up the company, which Zuckerberg called “an existential threat.” For some, these remarks brought up the specter of potential political interference. Would Facebook try to put its thumb on the scale by using the all-powerful news feed algorithm? And while that question was still swirling, the company continued to get blowback on its recent decision to no longer fact-check political ads, triggered in part by a Trump advertising campaign running on Facebook that repeats unsubstantiated claims about Joe Biden.

In an attempt to grapple with these and other issues, CJR convened a series of interviews on our Galley discussion platform with journalists and others who follow the company. First was Casey Newton of The Verge, who got the town-hall audio scoop. Although Zuckerberg’s comments about Warren got a lot of attention, Newton said one of the most interesting things about the town hall was what the questions said about the company’s employees—that they are concerned about a breakup, but also about how they and Zuckerberg are perceived. One of our next interviewees, veteran Recode media writer Peter Kafka, said that for him, one of the most interesting things about the leak is that it happened at all—Facebook has been doing town halls for over a decade, and this is the first time an insider has leaked one. Does that mean employees are growing restless? Perhaps!

I also spoke with Dina Srinivasan, a former advertising industry executive and antitrust expert who wrote an academic paper entitled “The Antitrust Case Against Facebook,” which has been cited by several members of Congress who want to break the company up. Her argument is that antitrust law doesn’t have to focus solely on the effect a monopoly has on consumer prices (a difficult case to make for Facebook, since the service is free). Facebook could also be accused of using its monopoly to degrade the quality of its service, she says, by removing privacy protections it promised would never be weakened, and by using customer data without permission.

Some lessons from the MIT Media Lab controversy

Note: This is something I originally published on the New Gatekeepers blog at the Columbia Journalism Review, where I’m the chief digital writer

When the news first broke that the MIT Media Lab had a close relationship with deceased billionaire and convicted sex offender Jeffrey Epstein, some saw it as a momentary lapse in judgment, and there was widespread support for Media Lab director Joi Ito. But then New Yorker writer Ronan Farrow reported that the Epstein relationship was much deeper than it first appeared — including the fact that Ito got a significant amount of money from Epstein for his own personal investments. Much of the earlier support evaporated, and Ito agreed to resign. And there were other spinoff effects as well: Richard Stallman, a free-software pioneer and veteran MIT professor, also resigned, after being criticized for comments he made on an internal email list that downplayed the impact of Epstein’s sexual abuse.

To explore these and other issues, CJR had a series of one-on-one and roundtable interviews — using its Galley discussion platform — with a number of journalists and other interested observers, including WBUR reporter Max Larkin, Slate writer Justin Peters, Gizmodo editor Adam Clark Estes, and Stanford researcher Becca Lewis. We talked about why places like the Media Lab often get a free pass from reporters, and why so much technology writing focuses on the “hero/genius” trope, in which the all-knowing founder gets credit for inventing something amazing, even if the invention doesn’t work (Theranos) or the founder turns out to be a terrible person in a variety of ways (Steve Jobs, Elon Musk, etc.).

Larkin said some inside MIT were frustrated that the Epstein donations got so much attention, when the institution also recently accepted money and a visit from Saudi Arabian leader Mohammad bin Salman, who has been implicated in the vicious killing of Washington Post journalist Jamal Khashoggi. “One Media Lab alum told me she was, on balance, more appalled by MIT’s ties to the late David Koch than by the ties to Epstein,” said Larkin, since the Kochs had done so much to undermine the Institute’s core values with their support of climate change-denying groups. And Larkin also noted that some defenders of the Epstein donations — including Media Lab founder and chairman Nicholas Negroponte — believed in what might be called the “transmutation” argument, namely that taking money from bad people and turning it into funding for creative academic pursuits was a positive thing.

What happens when Facebook confronts an existential threat?

Facebook CEO Mark Zuckerberg doesn’t do a lot of off-the-cuff speaking. His public appearances — whether before Congress or at a launch event — tend to be carefully scripted and rehearsed to the point where a cardboard cutout would seem animated by comparison. All of which helps explain some of the excitement surrounding a Verge report this week, consisting of two hours’ worth of unedited audio and transcripts of Zuckerberg addressing a town hall at Facebook, including questions from the staff. Although the scoop was heavily promoted, the transcripts didn’t exactly contain any bombshells — in fact, Zuckerberg himself promoted the story in a post on his personal Facebook page, which pretty much guarantees there was nothing earth-shattering in the text.

That said, however, a number of observers highlighted one comment they found troubling: when the Facebook CEO was asked whether he was concerned about the company being broken up by government regulators, he responded that he could see federal authorities — and here he mentioned Elizabeth Warren specifically — trying such a gambit, and that if necessary he would oppose it. And then Zuckerberg said: “At the end of the day, if someone’s going to try to threaten something that existential, you go to the mat and you fight.” Based on the context of the quote, it seems clear that the Facebook CEO meant he would fight the government’s attempt in the courts. In the full transcript, he prefaces his comment by saying one of the things he loves and appreciates about the US is “that we have a really solid rule of law,” and that he doesn’t think such a case would survive a court challenge (and he is probably right).

On Twitter and elsewhere, however, the reference to Warren and her desire to break up the company was boiled down to the point where it appeared that Zuckerberg sees Warren herself — and her presidential candidacy — as an existential threat. The Facebook CEO’s comment brought up what some saw as a disturbing scenario: if you almost single-handedly controlled the world’s largest information distributor — one that hundreds of millions of people rely on for their news, and one that has been implicated in spreading misinformation and propaganda during an election — how might you respond to something you perceive as an existential threat to your company?
