Section 230 gets its day in court

For a law whose central clause contains just twenty-six words, Section 230 of the Communications Decency Act of 1996 has generated vast amounts of debate over the past few years, thanks in part to criticism from both sides of the political spectrum. Conservative politicians say the law—which shields online services from liability for the content they host—allows social networks like Twitter and Facebook to censor right-wing voices, while liberals say Section 230 gives the social platforms an excuse not to remove offensive speech and disinformation. Donald Trump and Joe Biden have both spoken out against the law, and promised to change it. This week, the Supreme Court is hearing oral arguments in two cases that could alter or even dismantle Section 230.

On Tuesday, the court’s nine justices heard arguments in the first case, Gonzalez v. Google. The family of Nohemi Gonzalez, a US citizen who was killed in an Isis attack in Paris in 2015, claims that YouTube violated the federal Anti-Terrorism Act by recommending videos featuring terrorist groups, and thereby helped cause Gonzalez’s death. On Wednesday, the court heard arguments in the second case, Twitter v. Taamneh, which also involves a terrorism-related death: in that case, the family of Nawras Alassaf, who was killed in a terrorist attack in Istanbul in 2017, claims that Twitter, Facebook, and YouTube recommended terrorism-related content and thus contributed to his death. After a lower court ruled that the companies could be held liable, Twitter asked the Supreme Court to decide whether Section 230 protects them.

The clause at the heart of Section 230 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In practice, this has meant that services such as Twitter, Facebook, and YouTube are not held liable for what their users post, whether links, videos, or any other content (with some exceptions, such as federal criminal law and intellectual property claims). The question before the Supreme Court is whether that protection extends to content these services recommend, or promote to users via their algorithms. Section 230, the plaintiffs argue in Gonzalez, “does not contain specific language regarding recommendations, and does not provide a distinct legal standard governing recommendations.”

One issue the Supreme Court justices began to grapple with during Tuesday’s arguments is whether there is any way to hold the platforms accountable for recommended content when the same kinds of algorithms are used to rank search results and other responses to user input. “From what I understand, it’s based upon what the algorithm suggests the user is interested in,” Justice Clarence Thomas said. “Say you get interested in rice pilaf from Uzbekistan. You don’t want pilaf from some other place, say, Louisiana.” Recommendation algorithms, he suggested, are at the heart of how a search engine operates. So how does one make Google liable for one and not the other?

John Bergmayer, legal director of Public Knowledge, told the Berkman Klein Center that algorithmic recommendations “fit the common law understanding of publication. There is no principled way to distinguish them from other platform activities that most people agree should be covered by 230. The attempt to distinguish search results from recommendations is legally and factually wrong.” If the Supreme Court comes to the wrong conclusion, Bergmayer said, it could limit the usefulness or even viability of many services. The internet, he argued, “might become more of a broadcast medium, rather than a venue where people can make their views known and communicate with each other freely. And useful features of platforms may be shut down.”

Julia Angwin, a journalist and co-founder of The Markup, disagrees. She wrote in her inaugural column for the New York Times that while tech companies claim any limitation to Section 230 could “break the internet and crush free speech,” this isn’t necessarily true. “What’s needed is a law drawing a distinction between speech and conduct,” she said. Based on his comments about previous cases involving Section 230, Justice Thomas is itching to try his hand at finding such a distinction. In a statement last March, he said that “assuming Congress does not step in to clarify Section 230’s scope, we should do so,” adding that he found it hard to see why the law should protect Facebook from liability for its own “acts and omissions.”

In a podcast discussion of the two cases being heard by the Supreme Court, Evelyn Douek—a professor of law at Stanford who specializes in online content—suggested that both seem like a stretch, because neither one mentions any specific content recommended by YouTube, Facebook, or Twitter that allegedly caused the terrorist deaths in question. Her guest, Daphne Keller, the director of platform regulation at the Stanford Cyber Policy Center, agreed. “I don’t even have a good theory about why they would choose such exceedingly convoluted cases,” Keller said. “Maybe it’s just that Justice Thomas had been champing at the bit for so long they finally felt they had to take something, and they didn’t realize what a mess of a case they were taking.”

Even if the Supreme Court decides that Section 230 doesn’t protect the platforms when it comes to terrorist content, that doesn’t mean platforms like Facebook and Twitter are out of options. Online speech experts say they could argue with some justification that the First Amendment protects them against legal liability for the work of their recommendation algorithms. “To the extent that people want to force social media companies to leave certain speech up, or to boost certain content, or ensure any individual’s continuing access to a platform, their problem isn’t Section 230, it’s the First Amendment,” Mary Anne Franks, a professor of law at the University of Miami, said during a conversation on CJR’s Galley discussion platform in 2021.

One problem with that theory, however, is that online platforms might not bother fighting such cases at all, given the difficulty of proving that their behavior is protected by the First Amendment. Instead, they may simply remove content preemptively, for fear that a court might find them liable. The consequences of this “could be catastrophic,” the Washington Post argues. “Platforms would likely abandon systems that suggest or prioritize information altogether, or just sanitize their services to avoid carrying anything close to objectionable.” The result, the Post editorial says, could create “a wasteland.”
