Cloudflare, Kiwi Farms, and the challenges of deplatforming

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

In August, local police arrived at Clara Sorrenti’s apartment in London, Ontario, with a search warrant, which they used to confiscate her computer, her cellphone, and some other possessions. Sorrenti, a trans political commentator who streams on Amazon’s Twitch network, says she was held for 11 hours and questioned about an email that a number of local city councillors said they had received, which used her former name. The email contained a photo of a handgun and allegedly made threats of harm. Sorrenti, who was released without charges, believes the fake email was a “swatting” attempt—a tactic some online trolls use to attack their enemies by calling in threats designed to trigger a visit from police or SWAT teams. Although the identity of the email’s sender remains unknown, Sorrenti had warned local police that a swatting attempt might occur because of the abuse she had received from users of an online forum known as Kiwi Farms. She said she was repeatedly doxxed (had her personal information, including her physical address, posted online) and also had a number of her online accounts hacked by unknown actors.

Ben Collins and Kat Tenbarge of NBC News describe Kiwi Farms as “an internet message board known for being an epicenter of vicious, anti-trans harassment campaigns.” The forum, previously known as CWCki Forums, is an offshoot of 8chan, another notoriously lawless online community that helped give birth to the QAnon conspiracy movement. Collins and Tenbarge say Kiwi Farms has become known for targeting trans and gay personalities by doxxing and swatting, and is also infamous for collecting and archiving the racist and homophobic “manifestos” written by mass shooters. After being swatted, Sorrenti and her supporters started lobbying Cloudflare, a company that provides hosting and security services to websites, asking it to cut off Kiwi Farms. At first, the company said it would not do so: Matthew Prince, the CEO, wrote in a blog post that removing services from even reprehensible content “is the equivalent argument in the physical world that the fire department shouldn’t respond to fires in the homes of people who do not possess sufficient moral character,” calling it “a dangerous precedent.”

Just a few days later, however, Prince changed his mind, and wrote in a new blog post that Cloudflare had removed its security protections from Kiwi Farms, opening the site up to attacks such as distributed denial of service, or DDoS (Prince also noted that Cloudflare had never provided hosting services to Kiwi Farms). “This is an extraordinary decision for us to make and, given Cloudflare’s role as an Internet infrastructure provider, a dangerous one that we are not comfortable with,” Prince wrote. The decision was made not because of Sorrenti’s lobbying campaign, he said, but because “the rhetoric on the Kiwifarms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life.” Cloudflare’s about-face was hailed by Sorrenti and others as a victory for human rights, among other things, since it likely means that Kiwi Farms has been removed from the internet. But it also raises difficult questions, including who gets to decide what content we see.

This is a dilemma that Matthew Prince of Cloudflare has faced several times in the past. In 2017, the company cut off The Daily Stormer, a neo-Nazi website, and Prince wrote at the time that he thought doing so was both the right decision to make, and also a dangerous one. “You, like me, may believe that the Daily Stormer’s site is vile. You may believe it should be restricted. You may think the authors of the site should be prosecuted,” he wrote. “But having the mechanism of content control be vigilante hackers launching DDoS attacks subverts any rational concept of justice.” Two years later, Cloudflare cut off 8chan, because Prince said it helped inspire a mass shooter in El Paso who killed 20 people. Prince reiterated that he felt “uncomfortable” about deciding what content should be available and what should not. “Cloudflare is not a government,” he wrote. “While we’ve been successful as a company, that does not give us the political legitimacy to make determinations on what content is good and bad. Nor should it.”

In his Platformer newsletter, Casey Newton defended Prince’s reasoning to some extent, saying the decisions about what content to remove shouldn’t be made at the level of an infrastructure provider. “Generally speaking, you don’t want Comcast deciding what belongs on Instagram,” Newton wrote. However, he said Prince’s arguments were also convenient for Cloudflare, because they allowed the company to avoid having to make difficult moderation decisions, which in turn allowed it to “keep out of hot-button cultural debates; and stay off the radar of regulators who are increasingly skeptical of tech companies moderating too little.” The bottom line, Newton argued, is that Cloudflare’s previous position “arguably made it complicit in whatever happened to poor Sorrenti, and anyone else the mob might decide to target. (Three people targeted by Kiwi Farms have died by suicide, according to Gizmodo.)” Some might argue that removing such content is a slippery slope, but Will Oremus, a Washington Post technology writer, argued that 8chan, The Daily Stormer, and Kiwi Farms don’t look that slippery at all.

As Newton suggested in his analysis, Cloudflare isn’t the only company struggling with content-moderation dilemmas that have attracted the attention of regulators. Facebook has been down that particular road many times, whether for removing a famous photo of a Vietnam War victim or for not taking down disinformation about COVID. At one point, the company was simultaneously the target of legislation proposed by Republican members of Congress, who felt it took down too much content (primarily posts from conservative sources, they said), and of legislation proposed by Democratic members of Congress, who felt it was not removing enough. Then there is the contentious “deplatforming” of Donald Trump and others, something both Facebook and Twitter have been criticized for. Elon Musk has said that if he acquires Twitter (which remains in doubt), he would restore Trump’s account, which reinforces how arbitrary these kinds of platform content-moderation decisions often are.

If there’s one thing that platforms—whether it’s Facebook, Twitter, Cloudflare, YouTube, or Amazon—could offer, it’s more transparency on why such decisions get made. In his post on removing The Daily Stormer in 2017, Prince wrote that without some kind of clear framework—either from government or industry—as a guide, “a small number of companies will largely determine what can and cannot be online.” There is plenty of talk about the First Amendment, Prince said, but equally important is due process, which “at its most basic, means that you should be able to know the rules a system will follow if you participate in that system [and] requires that decisions be public and not arbitrary.” Despite the “transparency reports” that companies such as Facebook and Twitter release annually, there seems to be a lot of room for improvement in that area.

Here’s more on deplatforming:

Stochastic terror: Alejandra Caraballo, an attorney at Harvard Law School’s Cyberlaw Clinic, told NBC News that she worries the Kiwi Farms “playbook”—in which trolls use doxxing and access to other forms of online data to target individuals they believe are on the wrong side of certain cultural issues such as sexuality or abortion—will be expanded as political rhetoric around those issues heats up in advance of the 2024 election. “This is stochastic terror that’s being implemented as part of the culture war,” Caraballo told NBC. “Kiwi Farms’ goal is a world where LGBTQ users are not going to be as out and open on social media—they’re going to live in fear of threats and harassment.”

Hiding in plain site: Alex Stamos, director of the Stanford Internet Observatory and the former head of security for Facebook, said on Twitter that while he understood the situation Cloudflare found itself in, the company’s defense of its initial approach to Kiwi Farms was wrong. “I certainly understand the impulses reflected [in Prince’s blog post],” he wrote. “Few commentators on tech policy have a consistent position on platform responsibility versus net neutrality, and Cloudflare falls right into that difficult intersection.” Stamos argued the company did more than just provide security for Kiwi Farms, however, since Cloudflare’s structure also effectively hides a site’s true location.
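To illustrate the architectural point Stamos is making, here is a minimal sketch, in Python, of why a public DNS lookup for a Cloudflare-proxied domain reveals only Cloudflare’s edge addresses rather than the origin server. The domain name is hypothetical, and the IP ranges are a small illustrative subset of the list Cloudflare publishes; this is not Cloudflare’s own tooling, just an assumption-laden demonstration of the reverse-proxy effect.

```python
# Sketch: a DNS lookup for a Cloudflare-proxied domain returns Cloudflare
# edge IPs, not the origin server's address. The ranges below are only an
# illustrative subset of Cloudflare's published IPv4 ranges (assumption:
# the authoritative list lives at cloudflare.com/ips).
import socket
import ipaddress

CLOUDFLARE_RANGES = [
    ipaddress.ip_network("104.16.0.0/13"),
    ipaddress.ip_network("172.64.0.0/13"),
    ipaddress.ip_network("188.114.96.0/20"),
]


def resolve_ipv4(domain: str) -> set[str]:
    """Return the IPv4 addresses that a public DNS answer exposes."""
    infos = socket.getaddrinfo(domain, 443, family=socket.AF_INET)
    return {info[4][0] for info in infos}


def looks_like_cloudflare(ip: str) -> bool:
    """True if the address falls inside one of the sample Cloudflare ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_RANGES)


if __name__ == "__main__":
    # Hypothetical proxied domain, used purely for illustration.
    for ip in sorted(resolve_ipv4("example-proxied-site.com")):
        label = "Cloudflare edge" if looks_like_cloudflare(ip) else "possibly origin"
        print(f"{ip}  ->  {label}")
```

Because the origin address never appears in the public DNS answer, anyone wanting to pressure or attack the underlying host has to go through Cloudflare, which is part of why, as Stamos suggests, the company’s role goes beyond simply absorbing DDoS traffic.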

Libs of TikTok: Children’s hospitals across the US are “facing growing threats of violence, driven by an online anti-LGBTQ campaign attacking the facilities for providing care to transgender kids and teens,” the Washington Post reports. The campaign has been led by a Twitter account called Libs of TikTok, which has more than 1.3 million followers and is run by Chaya Raichik, a former Brooklyn real estate agent. Twitter has allowed the account to remain online despite criticism from a number of quarters, including its own employees, who the Post says have been “voicing concerns in internal Slack channels that it’s only a matter of time before the posts lead to someone getting killed.”

Gift of the Gab: In 2018, I wrote for CJR about the moves to deplatform Gab, a right-wing service that saw itself as an alternative to Twitter. “In Gab’s case, the service has been rejected by hosts such as Joyent and Microsoft’s Azure, which ended its contract with Gab earlier this year, and it has also been blocked by payment processors PayPal and Stripe,” I wrote. “On the weekend, after a user of Gab allegedly opened fire and killed 11 people at a Pittsburgh synagogue, domain registrar GoDaddy cut the service off and told it to find another registrar. So, even if Gab manages to find a new host for the network, it would be difficult for users to find it just by typing in a web address.” In 2019, however, Gab managed to find alternative hosting and domain registration, and it remains online.

Other notable stories:

Police in Las Vegas arrested Clark County Public Administrator Robert Telles on Wednesday on suspicion of murder in the stabbing death of Las Vegas Review-Journal investigative reporter Jeff German, the Review-Journal reported. “German’s investigation of Telles this year contributed to the Democrat’s primary election loss, and German was working on a potential follow-up story about Telles before he was killed,” the paper wrote. Las Vegas police had interviewed Telles and searched his home earlier that day, then returned in tactical gear that evening, after which Telles was “wheeled out of the home on a stretcher and loaded into an ambulance,” the paper reported.

Vice Media is “exploring a deal with MBC, a media giant partly owned by the Saudi government, to start a new content partnership in the region,” the New York Times reported Wednesday, quoting two people with knowledge of the talks. “The deal, which may include the creation of a media brand focused on lifestyle coverage and training local media workers, could be worth at least $50 million over multiple years, one of the people said.” In April last year, Vice’s decision to open a commercial office in the Saudi capital, Riyadh, became a point of contention inside the company, the Times said. In a call with staff, one producer called that decision “morally bankrupt.”

A Delaware court has denied Elon Musk’s attempt to push back the date of the trial over his delayed Twitter acquisition, which is currently scheduled to start October 17 in Delaware’s Chancery Court, The Verge reports. However, the judge agreed that Musk could incorporate claims made by Peiter Zatko, the former Twitter security chief turned whistleblower, into his case. Meanwhile, Bob Iger, former CEO of Disney, told the Code conference that he decided not to acquire Twitter in 2016 in part because an investigation into the company’s user base showed that “a substantial portion were not real.”

Google News Showcase, a feature that pays publishers for their news content, is “almost a year behind its intended launch schedule in the US, as negotiations with some media outlets have bogged down,” the Wall Street Journal reported. Some publishers felt Google wasn’t offering enough; in one case, Gannett was offered $6 million a year as part of a multi-year deal, the WSJ wrote, but the newspaper chain asked for $300 million a year. Meanwhile, “some publishers want to wait and see the fate of legislation in Congress that would give publishers a stronger negotiating hand with tech platforms,” the WSJ said.

Shailesh Prakash, the longtime head of technology for the Washington Post, is leaving the company for a new executive role at Google, Sarah Fischer reported for Axios. Prakash has led the Post’s publishing arm, Arc XP, since its inception in 2015, as well as its ad tech arm, Zeus. Axios reported earlier this year that the Post decided not to sell Arc XP, despite having conversations with a number of parties. “It’s unclear if Prakash’s departure is tied to the fact that a spin-off didn’t happen,” Fischer wrote.

Muck Rack, which has compiled an automated database of journalists and their coverage areas and contact information—a tool used by the public relations and marketing industry—has raised $180 million in financing, its first outside funding, TechCrunch reported. “The money is coming from a single, big-name investor, Susquehanna Growth Equity, which is taking a minority stake in the company,” the site wrote. Gregory Galant and Lee Semel, who founded the company in 2009, will continue to control it.

Twitter is expanding Birdwatch, its crowdsourced misinformation-debunking product, Gizmodo reports. “Beginning this week, Twitter will begin to accept 1,000 new contributors per week to Birdwatch, adding to the roughly 15,000 it already has,” the site wrote. “Contributors’ work will be more visible on timelines, with Twitter aiming to eventually roll out the feature to 50% of U.S. users.” Birdwatch was launched as a pilot product in 2021 and uses a community approach to reduce misinformation on the platform. “Birdwatch contributors, who are anonymous, are able to write notes that appear below tweets or link to outside sources,” Gizmodo said.
