Facebook “transparency report” turns out to be anything but

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, Facebook released a report detailing some of the most popular content shared on the site in the second quarter of this year. The report is a first for the social network, and part of what the company has said is an attempt to be more transparent about its operations: Guy Rosen, Facebook’s vice president of integrity, described the content review as part of “a long journey” to be “by far the most transparent platform on the internet.” If that is the case, however, the story behind the creation of the report shows the company still has a long way to go to reach that goal.

To take just one example, Facebook’s new content report appears to be, at least in part, a coordinated response to critical reporting from Kevin Roose, a New York Times technology columnist, who has been tracking the posts that get the most engagement on Facebook for some time, using the company’s own CrowdTangle tool, and has consistently found that right-wing pages get the most interaction from users.

This isn’t something Facebook likes to hear, apparently, so the content report tries to do two things to contradict that impression. The first is to argue that engagement (the number of likes, comments, and shares a post gets, which is the metric Roose uses for his Top 10 lists) isn’t the most important way of looking at content, so the report focuses instead on “reach,” or how many people saw a certain post. The second is to show that even the most popular content amounts to only a tiny fraction of what gets seen on the platform (less than 0.1 percent, according to the report). As Robyn Caplan, a researcher with Data & Society, has pointed out, this seems to be an attempt to show that disinformation on the platform isn’t a big deal because so few people see it.

Continue reading “Facebook “transparency report” turns out to be anything but”

Apple’s plan to scan images on users’ phones sparks backlash

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Earlier this month, Apple announced a series of steps it is taking to help keep children safe online. One of those new additions is a feature for Siri, Apple’s intelligent assistant, that will automatically suggest a help-line number if someone searches for child-exploitation material, and another is a new feature that scans images shared through iMessage, to warn children before they send or view sexually explicit pictures in a chat window. Neither of these new features sparked much controversy, since virtually everyone agrees that online sharing of child sexual-abuse material is a significant problem that needs to be solved, and that technology companies need to be part of that solution. The third plank in Apple’s new approach to dealing with this kind of content, however, triggered a huge backlash: rather than simply scanning photos that are uploaded to Apple’s servers in the cloud, the company said it will start scanning the photos that users have on their phones to see whether they match an international database of child-abuse content.

As Alex Stamos, former Facebook security chief, pointed out in an interview with Julia Angwin, founder and editor of The Markup, scanning uploaded photos to see if they include pre-identified examples of child sexual-abuse material has been going on for a decade or more, ever since companies like Google, Microsoft, and Facebook started offering cloud-based image storage. The process relies on a database of photos maintained by the National Center for Missing and Exploited Children, each of which comes with a unique digital fingerprint known as a “hash.” Cloud companies compute the same kind of hash for images uploaded to their servers, compare it against the database, and then flag and report the ones that match. Federal law doesn’t require companies to search for such images — and until now, Apple has not done so — but it does require them to report such content if they find it.
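
For a rough sense of how that matching step works, here is a minimal sketch in Python. It is not Apple’s or any cloud provider’s actual implementation: real systems use perceptual hashes such as Microsoft’s PhotoDNA or Apple’s NeuralHash, so that resized or re-encoded copies of an image still match, and the hash values and function names below are purely hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the NCMEC hash database. Real systems store
# perceptual hashes rather than cryptographic digests; SHA-256 is used
# here only to illustrate the compare-and-flag step.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(path: Path) -> str:
    """Compute a SHA-256 digest of the raw image bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_matches(uploads: list[Path]) -> list[Path]:
    """Return the uploaded files whose hashes appear in the known database."""
    return [p for p in uploads if hash_image(p) in KNOWN_HASHES]
```

The point of the design is that only fingerprints are compared: the provider never has to look at an image itself unless its hash matches the database.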

What Apple plans to do is to implement this process on a user’s phone, before anything is uploaded to the cloud. The company says this is a better way of cracking down on this kind of material, but its critics say it is not only a significant breach of privacy, but also opens a door to other potential invasions by governments and other state actors that can’t easily be closed. The Electronic Frontier Foundation called the new feature a “backdoor to your private life,” and Mallory Knodel, chief technology officer at the Center for Democracy and Technology, told me in an interview on CJR’s Galley discussion platform that this ability could easily be expanded to other forms of content “by Apple internal policy as well as US government policy, or any government orders around the world.” Although Apple often maintains that it cares more about user privacy than any other technology company, Knodel and other critics note that the company still gave the Chinese government virtually unlimited access to user data for citizens in that country.

Continue reading “Apple’s plan to scan images on users’ phones sparks backlash”

Facebook’s excuses for shutting down research ring hollow

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, Facebook shut down the personal accounts of several researchers affiliated with New York University, claiming that their work—including a browser extension called Ad Observer, which allows users to share the ads that they are shown in their Facebook news feeds—violated the social network’s privacy policies. The company said that while it wants to help social scientists with their work, it can’t allow user information to be shared with third parties, in part because of the consent decree it signed with the Federal Trade Commission as part of a $5 billion settlement stemming from the 2018 Cambridge Analytica scandal. Researchers, including some of those who were involved in the NYU project, said Facebook’s behavior was not surprising, given the company’s long history of dragging its feet when it comes to sharing information. And not long after Facebook used the FTC consent decree as a justification for the shutdown, the federal agency took the unusual step of making public a letter it sent to Mark Zuckerberg, Facebook’s CEO, stating that if the company had contacted the FTC about the research, “we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest.”

To discuss how Facebook responded in this case, its track record when it comes to social-science research, and the way that other platforms such as Twitter treat researchers, CJR brought together a number of experts using our Galley discussion platform. The group included Laura Edelson, a doctoral candidate in computer science at NYU and one of the senior scientists on the Ad Observatory team; Jonathan Mayer, a professor at Princeton and former chief technologist with the Federal Communications Commission; Julia Angwin, founder and editor-in-chief of The Markup, a data-driven investigative reporting startup that has a similar ad research tool called Citizen Browser; Neil Chilson, a fellow at the Charles Koch Institute and former chief technologist at the Federal Trade Commission; Nathalie Marechal of Ranking Digital Rights; and Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University.

Edelson has said the drastic action Facebook took against her and the rest of the team was the culmination of a series of escalating threats about the group’s research (they are currently lobbying the company to get their accounts reinstated), but that she also has good relationships with some people at the social network. “Facebook’s behavior toward our group has been… complicated,” she said. Since the group studies the safety and efficacy of Facebook’s systems around political ads and misinformation, Edelson said “there is always going to be an inherent tension there,” but that there are several people she has worked with at Facebook who are “smart and dedicated.” One thing that makes the company’s behavior somewhat confusing is that the user information Facebook says it is trying to protect is the names of advertisers in its political ad program, which are publicly available through its own Ad Library. “Those are, technically speaking, Facebook user names,” Edelson said. “We think they are public, and Facebook is saying they are not.”

Continue reading “Facebook’s excuses for shutting down research ring hollow”

Facebook shuts down research, blames user privacy rules

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last October, Facebook warned a group of social scientists from New York University that their research — known as the Ad Observatory, part of the Cybersecurity for Democracy Project — was in breach of the social network’s terms of service, because it used software to “scrape” information from Facebook without the consent of the service’s users. The company said that unless the researchers stopped using the browser extension they developed, or changed the way that it acquired information, they would be subject to “additional enforcement action.” Late Tuesday night, Facebook followed through on this threat by blocking the group from accessing any of the platform’s data, and also shutting down the researchers’ personal accounts and pages. In a blog post, the company said it was forced to do so because the browser extension violated users’ privacy. “While the Ad Observatory project may be well-intentioned, the ongoing and continued violations of protections against scraping cannot be ignored,” Facebook said.

The NYU researchers responded that they have taken all the precautions they can to avoid pulling in personally identifiable information from users — including names, user ID numbers, and Facebook friend lists — and also pointed out that the thousands of users who signed up to help the Ad Observatory project installed the group’s browser extension willingly, to help the scientists research the impact of the social network’s ad-targeting algorithms. “Facebook is silencing us because our work often calls attention to problems on its platform,” Laura Edelson, one of the NYU researchers, told Bloomberg News in an email. “Worst of all, Facebook is using user privacy, a core belief that we have always put first in our work, as a pretext for doing this.” Edelson also said on Twitter that the Facebook shutdown has effectively cut off more than two dozen other researchers and journalists who got access to Facebook advertising data through the NYU project.

Unauthorized access to private user data is a sensitive topic for Facebook. In the Cambridge Analytica scandal of 2018, a political consulting firm acquired personally identifiable information on more than 80 million people from a researcher who gained access to it through a seemingly harmless Facebook app. The resulting furor eventually led to a $5 billion settlement with the Federal Trade Commission for breaches of privacy, and the company promised it would never share the personal information of its users with third parties without stringent controls. The ripple effects of the FTC order — combined with the European Union’s General Data Protection Regulation, or GDPR — led to severe restrictions on the social network’s API (application programming interface), which other web services and software use to exchange data with the social network. And many of those restrictions also affected researchers like those at NYU.

Continue reading “Facebook shuts down research, blames user privacy rules”

The Straw Hat Riot of 1922

We all know that fashions were different in earlier times, but who knew something as simple as when someone chose to wear a hat could cause a massive riot, leading to dozens of arrests and injuries? That’s what happened in New York City in 1922, during the infamous “Straw Hat” riots, which started when gangs of hooligans began attacking anyone wearing a straw hat, and lasted for more than a week. Why did they start attacking people wearing these hats? Because at the time, it was considered unseemly or even ridiculous to wear such a hat after September 15th. For some reason that year, the ridicule turned to violence. The New York Times reported:

“Gangs of young hoodlums ran riot in various parts of the city last night, smashing unseasonable straw hats and trampling them in the street. In some cases, mobs of hundreds of boys and young men terrorized whole blocks. A favorite practice of the gangsters was to arm themselves with sticks, some with nails at the tip, and compel men wearing straw hats to run a gauntlet. Sometimes the hoodlums would hide in doorways and dash out, ten or twelve strong, to attack.”


Section 230 critics are forgetting about the First Amendment

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

A recurring theme in political circles is the idea that giant digital platforms such as Facebook, Twitter, and YouTube engage in bad behavior—distributing disinformation, allowing hate speech, removing conservative opinions, and so on—in part because they are protected from legal liability by Section 230 of the Communications Decency Act, which says they aren’t responsible for content posted by their users. Critics on both sides of the political aisle argue that this protection either needs to be removed or significantly amended because the social networks are abusing it. Former president Donald Trump signed an executive order in an attempt to get the FCC to rein in Section 230, although his efforts went nowhere, and the clause plays a role in his recent lawsuits against Facebook, Google, and Twitter for banning him. President Joe Biden hasn’t proposed any specific changes yet, but he has said that the clause should be “revoked immediately.”

One of the most recent attempts to change Section 230 comes from Democratic Senator Amy Klobuchar, who has proposed a bill that would carve out an exception for medical misinformation during a health crisis, making the platforms legally liable for distributing anything the government defines as untrue. While this may seem like a worthwhile goal, given the kind of rampant disinformation being spread about vaccines on platforms like Facebook and Google’s YouTube, some freedom of speech advocates argue that even well-intentioned laws like Klobuchar’s could backfire badly and have dangerous consequences. Similar concerns have been raised about a suite of proposed bills introduced by a group of Republican members of Congress, which involve a host of “carve-outs” for Section 230 aimed at preventing platforms from removing certain kinds of content (mostly conservative speech), and forcing them to remove other kinds (cyber-bullying, doxxing, etc.).

To talk about these and related issues, we’ve been interviewing a series of experts in law and technology using CJR’s Galley discussion platform, including Makena Kelly, a policy reporter for The Verge covering topics like net neutrality, data privacy, antitrust, and internet culture; Jeff Kosseff, an assistant professor of cybersecurity law at the United States Naval Academy and author of “The Twenty-Six Words That Created the Internet,” a history of Section 230; Mike Masnick, who runs the technology analysis site Techdirt and co-founded a think tank called the Copia Institute; Mary Anne Franks, a professor of law at the University of Miami and president of the Cyber Civil Rights Initiative; James Grimmelmann, a law professor at Cornell Tech; and Eric Goldman, a professor of law at Santa Clara University.

Continue reading “Section 230 critics are forgetting about the First Amendment”