Facebook shuts down research, blames user privacy rules

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last October, Facebook warned a group of social scientists from New York University that their research — known as the Ad Observatory, part of the Cybersecurity for Democracy Project — was in breach of the social network’s terms of service, because it used software to “scrape” information from Facebook without the consent of the service’s users. The company said that unless the researchers stopped using the browser extension they developed, or changed the way that it acquired information, they would be subject to “additional enforcement action.” Late Tuesday night, Facebook followed through on this threat by blocking the group from accessing any of the platform’s data, and also shutting down the researchers’ personal accounts and pages. In a blog post, the company said it was forced to do so because the browser extension violated users’ privacy. “While the Ad Observatory project may be well-intentioned, the ongoing and continued violations of protections against scraping cannot be ignored,” Facebook said.

The NYU researchers responded that they have taken every precaution they can to avoid pulling in personally identifiable information from users — including names, user ID numbers, and Facebook friend lists — and also pointed out that the thousands of users who signed up to help the Ad Observatory project installed the group’s browser extension willingly, to help the scientists research the impact of the social network’s ad-targeting algorithms. “Facebook is silencing us because our work often calls attention to problems on its platform,” Laura Edelson, one of the NYU researchers, told Bloomberg News in an email. “Worst of all, Facebook is using user privacy, a core belief that we have always put first in our work, as a pretext for doing this.” Edelson also said on Twitter that the Facebook shutdown has effectively cut off more than two dozen other researchers and journalists who got access to Facebook advertising data through the NYU project.

Unauthorized access to private user data is a sensitive topic for Facebook. In the Cambridge Analytica scandal of 2018, a political consulting firm acquired personally identifiable information on more than 80 million people from a researcher who gained access to it through a seemingly harmless Facebook app. The resulting furor eventually led to a $5 billion settlement with the Federal Trade Commission for breaches of privacy, and the company promised it would never share the personal information of its users with third parties without stringent controls. The ripple effects of the FTC order — combined with the passage of the European Union’s General Data Protection Regulation, or GDPR — led to severe restrictions on the social network’s API (application programming interface), which other web services and software use to exchange data with the social network. And many of those restrictions also affected researchers like those at NYU.

Continue reading “Facebook shuts down research, blames user privacy rules”

The Straw Hat Riot of 1922

We all know that fashions were different in earlier times, but who knew that something as simple as the date on which someone chose to wear a hat could cause a massive riot, leading to dozens of arrests and injuries? That’s what happened in New York City in 1922, during the infamous “Straw Hat” riots, which started when gangs of hooligans began attacking anyone wearing a straw hat, and lasted for more than a week. Why did they attack people wearing these hats? Because at the time, it was considered unseemly, or even ridiculous, to wear such a hat after September 15th. For some reason, that year the ridicule turned to violence. The New York Times reported:

“Gangs of young hoodlums ran riot in various parts of the city last night, smashing unseasonable straw hats and trampling them in the street. In some cases, mobs of hundreds of boys and young men terrorized whole blocks. A favorite practice of the gangsters was to arm themselves with sticks, some with nails at the tip, and compel men wearing straw hats to run a gauntlet. Sometimes the hoodlums would hide in doorways and dash out, ten or twelve strong, to attack.”


Section 230 critics are forgetting about the First Amendment

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

A recurring theme in political circles is the idea that giant digital platforms such as Facebook, Twitter, and YouTube engage in bad behavior—distributing disinformation, allowing hate speech, removing conservative opinions, and so on—in part because they are protected from legal liability by Section 230 of the Communications Decency Act, which says they aren’t responsible for content posted by their users. Critics on both sides of the political aisle argue that this protection either needs to be removed or significantly amended because the social networks are abusing it. Former president Donald Trump signed an executive order in an attempt to get the FCC to do something about Section 230, although his efforts went nowhere, and Section 230 also plays a role in his recent lawsuits against Facebook, Google, and Twitter for banning him. President Joe Biden hasn’t pushed anyone to do anything specific yet, but he has said that the clause should be “revoked immediately.”

One of the most recent attempts to change Section 230 comes from Democratic Senator Amy Klobuchar, who has proposed a bill that would carve out an exception for medical misinformation during a health crisis, making the platforms legally liable for distributing anything the government defines as untrue. While this may seem like a worthwhile goal, given the kind of rampant disinformation being spread about vaccines on platforms like Facebook and Google’s YouTube, some freedom of speech advocates argue that even well-intentioned laws like Klobuchar’s could backfire badly and have dangerous consequences. Similar concerns have been raised about a suite of proposed bills introduced by a group of Republican members of Congress, which involve a host of “carve-outs” for Section 230 aimed at preventing platforms from removing certain kinds of content (mostly conservative speech), and forcing them to remove other kinds (cyber-bullying, doxxing, etc.).

To talk about these and related issues, we’ve been interviewing a series of experts in law and technology using CJR’s Galley discussion platform, including Makena Kelly, a policy reporter for The Verge covering topics like net neutrality, data privacy, antitrust, and internet culture; Jeff Kosseff, an assistant professor of cybersecurity law at the United States Naval Academy, and author of “The Twenty-Six Words That Created the Internet,” a history of Section 230; Mike Masnick, who runs the technology analysis site Techdirt and co-founded a think tank called the Copia Institute; Mary Anne Franks, professor of law at the University of Miami, and president of the Cyber Civil Rights Initiative; James Grimmelmann, a law professor at Cornell Tech; and Eric Goldman, a professor of law at Santa Clara University.

Continue reading “Section 230 critics are forgetting about the First Amendment”