Section 230, the platforms, and the Supreme Court

Note: This was originally published as the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer.

For the past several years, critics at both ends of the political spectrum have argued that Section 230 of the Communications Decency Act of 1996 gives social-media platforms such as Facebook, Twitter, and YouTube too much protection from legal liability for the content that appears on their networks. Right-wing critics argue that Section 230 allows social-media companies to censor conservative thinkers and groups without recourse by removing their content (even though there is no evidence that this occurs), while liberal critics say the platforms use Section 230 as an excuse not to remove things they should be taking down, such as misinformation. Before the 2020 election, Joe Biden said the provision “should be revoked immediately,” and he has made similar statements about abolishing Section 230 since he took office.

This week, the Supreme Court said it plans to hear two cases that seek to chip away at Section 230’s legal protections. One case claims that Google’s YouTube service violated the federal Anti-Terrorism Act by recommending videos featuring the ISIS terrorist group, and that these videos helped lead to the death of Nohemi Gonzalez, a 23-year-old US citizen who was killed in an ISIS attack in Paris in 2015. In the lawsuit, filed in 2016, Gonzalez’s family claims that while Section 230 protects YouTube from liability for hosting such content, it doesn’t protect the company from liability for promoting that content with its algorithms. The second case involves Twitter, which was also sued for violating the Anti-Terrorism Act; the family of Nawras Alassaf claims that ISIS-related content on Twitter contributed to his death in a terrorist attack in 2017.

The Supreme Court decided not to hear a similar case in 2020, which claimed that Facebook was responsible for attacks in Israel because the social network promoted posts about the terrorist group Hamas. In March, the court also refused to review a decision that found Facebook was not liable for helping a man traffic a woman for sex. While Justice Clarence Thomas agreed with the decision not to hear that case, he also wrote that the court should consider the issue of “the proper scope of immunity” under Section 230. “Assuming Congress does not step in to clarify Section 230’s scope, we should do so in an appropriate case,” Thomas wrote. “It is hard to see why the protection that Section 230 grants publishers against being held strictly liable for third parties’ content should protect Facebook from liability for its own ‘acts and omissions.’”

Thomas has made similar comments in a number of other decisions. In 2020, the Supreme Court declined to hear a case in which Enigma Software argued that Malwarebytes, an internet security company, should be liable for calling Enigma’s products malware. Although he agreed with that decision, Thomas wrote at length about what he described as a movement to use Section 230 to “confer sweeping immunity on some of the largest companies in the world.” He also suggested he agreed with an opinion from a lower-court judge in a case in which Facebook was sued over terrorist content. The opinion said it “strains the English language to say that in targeting and recommending these writings to users… Facebook is acting as ‘the publisher of information provided by another information content provider,'” which is what Section 230 provides legal protection for.

Jeff Kosseff, a cybersecurity law professor at the US Naval Academy and the author of a book on Section 230, told the Washington Post that with the Supreme Court considering these questions, “the entire scope of Section 230 could be at stake.” This will be the first time the court directly evaluates the legal protection afforded by Section 230, Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the Post. In particular, it will be the first time the court has considered whether there is a distinction between content that a platform hosts and content that its algorithms recommend. Goldman calls this a “false dichotomy,” and says that recommending content is one of the traditional editorial functions of a social-media network. In that sense, he told the Post, “the question presented goes to the very heart of Section 230.”

While Section 230 gets most of the attention, it isn’t the only protection the platforms have, something critics of the law sometimes forget. A feature on hate speech in the New York Times described Section 230 as the main reason such speech exists online; the paper later added a correction clarifying that the First Amendment also protects online speech. Even if the Supreme Court decides Section 230 doesn’t protect the platforms when it comes to terrorist content, Facebook and Twitter could argue with some justification that the First Amendment does. As Mary Anne Franks, a professor of law at the University of Miami, said during a discussion of Section 230 on CJR’s Galley platform last year: “To the extent that people want to force social media companies to leave certain speech up, or to boost certain content, or ensure any individual’s continuing access to a platform, their problem isn’t Section 230—it’s the First Amendment.”

This argument is at the heart of another case the Supreme Court was recently asked to hear, involving a Florida law designed to control how the platforms moderate content. The Eleventh Circuit Court of Appeals struck down the law as unconstitutional in May, ruling that moderation decisions are an exercise of the platforms’ First Amendment rights. A similar law passed in Texas, however, was upheld earlier this month in a decision that explicitly rejected the First Amendment defense. Now the Supreme Court gets to decide whether the platforms’ moderation and content choices are protected by Section 230, by the First Amendment, by both, or by neither.

Here’s more on the platforms and liability:

Free expression: Jack Dorsey, then CEO of Twitter, and Mark Zuckerberg, CEO of Facebook, warned the Senate in 2020 that curtailing the protections of Section 230 could harm free expression on the internet. Dorsey said it could “collapse how we communicate on the Internet” and leave “only a small number of giant and well-funded” tech firms, while Zuckerberg said “without Section 230, platforms could potentially be held liable for everything people say” and could “face liability for doing even basic moderation, such as removing hate speech and harassment.”

Out of date: Michael Smith, professor of information technology at Carnegie Mellon, and Marshall Van Alstyne, a business professor at Boston University, wrote in an essay for Harvard Business Review last year that Section 230 needs to be updated because it was originally drafted “a quarter century ago during a long-gone age of naïve technological optimism and primitive technological capabilities,” and its protections are now “desperately out of date.” When you grant platforms complete immunity for the content that their users post, Smith and Van Alstyne argue, “you also reduce their incentives to proactively remove content causing social harm.”

Narrow path: Daphne Keller, a former associate counsel at Google who directs the Program on Platform Regulation at Stanford’s Cyber Policy Center, wrote in a paper published by the Knight First Amendment Institute at Columbia University that the desire to regulate recommendation or amplification algorithms is understandable, but that a workable law is a long way off. “Some versions of amplification law would be flatly unconstitutional in the US,” she writes. “Others might have a narrow path to constitutionality, but would require a lot more work than anyone has put into them so far.”

Sowing seeds: In 2019, Eric Goldman argued that while Section 230 protects giant platforms such as Facebook and Twitter, it also sows the seeds of their eventual destruction, by making it easier for startups to compete. “Due to Section 230’s immunity, online republishers of third-party content do not have to deploy industrial-grade content filtering or moderation systems, or hire lots of content moderation employees, before launching new startups,” Goldman says. “This lowers startup costs generally; in particular, it helps these new market entrants avoid making potentially wasted investments in content moderation before they understand their audience’s needs.”

Other notable stories:

Al Jazeera reports that Jean Damascène Mutuyimana, Niyodusenga Schadrack, and Jean Baptiste Nshimiyimana, three Rwandan journalists with the YouTube channel Iwacu TV who had been detained for four years, have been freed. On Wednesday, a court ruled that “there is no evidence to prove that their publication incited violence.” The three were arrested in October 2018 on charges of spreading false information with the intention of inciting violence, and of tarnishing the country’s image.

The Taliban shut down two news websites in Afghanistan on Monday, the Committee to Protect Journalists reported. The Hasht-e Subh Daily and Zawia News were closed for engaging in “false propaganda” against the Taliban, according to a tweet by Anayatullah Alokozay, a Taliban spokesman. Both outlets, which are operated by Afghan journalists living in exile, confirmed that the Taliban had deactivated their websites. Hasht-e Subh has since resumed operations under a different name, while Zawia News has shifted its content to its parent company, Zawia Media.

Facebook is ending its newsletter subscription service, Bulletin, after fifteen months in operation. According to the New York Times, executives told staff in July that they would shift resources away from the newsletter. The service aimed to compete with Substack by attracting both emerging and high-profile writers and helping them to build a following with Facebook’s publishing and legal support. Last year, Facebook said it committed $5 million to Bulletin’s local news writers and offered writers contracts extending into 2024.

Latinos are underrepresented in the media industry, according to the Government Accountability Office’s latest report on Latino representation in film, television, and other publishing entities. The report, released last week, found that Latinos make up twelve percent of the media workforce but only four percent of media management, and that when Latinos do get jobs in the industry, they are often placed in service roles.

In the New Yorker, Kevin Lozano writes about Mark Bergen’s recently published book “Like, Comment, Subscribe,” which looks at the history and evolution of YouTube. While YouTube has more than two billion users, and is one of the most popular sites among teenagers in particular, Lozano says the Google-owned service is often overlooked when reporters examine the flaws of social-media giants such as Facebook and Twitter. Lozano argues that despite its scandals, YouTube has managed to survive fairly unscathed because it has become “too useful and too ubiquitous to fail.”

LaFontaine Oliver, the current president and CEO of Baltimore’s NPR station, WYPR, is the new CEO of New York Public Radio, Gothamist reports. The role has been vacant for nearly a year, since Goli Sheikholeslami, NYPR’s former CEO, left to lead Politico Media Group. Oliver will oversee WNYC, WQXR, Gothamist, WNYC Studios, the Jerome L. Greene Performance Space, and New Jersey Public Radio. He is the first Black person to serve as CEO of New York Public Radio.
