Facebook “transparency report” turns out to be anything but

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, Facebook released a report detailing some of the most popular content shared on the site in the second quarter of this year. The report is a first for the social network, and part of what the company has said is an attempt to be more transparent about its operations: Guy Rosen, Facebook’s vice president of integrity, described the content review as part of “a long journey” to be “by far the most transparent platform on the internet.” If that is the case, however, the story behind the creation of the report shows the company still has a long way to go to reach that goal.

To take just one example, Facebook’s new content report appears to be, at least in part, a coordinated response to critical reporting from Kevin Roose, a New York Times technology columnist who has been tracking the posts that get the most engagement on Facebook for some time, using the company’s own CrowdTangle tool, and who has consistently found that right-wing pages draw the most interaction from users.

This isn’t something Facebook likes to hear, apparently, so the content report tries to do two things to counter that impression. First, it argues that engagement (the likes, shares, comments, and other interactions a post receives, which Roose uses as the metric for his Top 10 lists) isn’t the most important way of looking at content, and so it focuses instead on “reach,” or how many people saw a certain post. Second, it tries to show that even the most popular content amounts to only a tiny fraction of what gets seen on the platform (less than 0.1 percent, according to the report). As Robyn Caplan, a researcher with Data & Society, has pointed out, this seems to be an attempt to show that disinformation on the platform isn’t a big deal because so few people see it.


Apple’s plan to scan images on users’ phones sparks backlash

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Earlier this month, Apple announced a series of steps it is taking to help keep children safe online. One is a feature for its Siri intelligent assistant and its search tools that will automatically point people toward help-line resources if they search for child-exploitation material, and another is a new feature that scans images shared through iMessage, to make sure children aren’t sharing unsafe pictures of themselves in a chat window. Neither of these features sparked much controversy, since virtually everyone agrees that the online sharing of child sexual-abuse material is a significant problem that needs to be solved, and that technology companies need to be part of that solution. The third plank in Apple’s new approach to dealing with this kind of content, however, triggered a huge backlash: rather than simply scanning photos after they are uploaded to Apple’s servers in the cloud, the company said it will start scanning the photos that users have on their phones to see whether they match an international database of child-abuse content.

As Alex Stamos, the former Facebook security chief, pointed out in an interview with Julia Angwin, founder and editor of The Markup, scanning uploaded photos to see whether they include pre-identified examples of child sexual-abuse material has been going on for a decade or more, ever since companies like Google, Microsoft, and Facebook started offering cloud-based image storage. The process relies on a database of photos maintained by the National Center for Missing and Exploited Children, each of which comes with a unique digital fingerprint known as a “hash.” Cloud companies compare those hashes against the hashes of images uploaded to their servers, and then flag and report the ones that match. Federal law doesn’t require companies to search for such images (and until now, Apple has not done so), but it does require them to report such content if they find it.
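For readers curious about the mechanics, here is a minimal, purely illustrative sketch of hash-based matching. The hash values and function names are made up for the example, and it uses an ordinary cryptographic hash (SHA-256) that only catches exact copies; production systems such as Microsoft’s PhotoDNA rely on perceptual hashes that can still match an image after resizing or re-encoding, and the real database of fingerprints is not public.

```python
import hashlib

# Illustrative stand-in for the (non-public) database of known-image
# fingerprints derived from NCMEC's collection. The value is invented.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(image_bytes: bytes) -> str:
    # Real systems use perceptual hashes (e.g., PhotoDNA) that tolerate
    # re-compression and resizing; SHA-256 here only matches exact copies.
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag(image_bytes: bytes) -> bool:
    """Return True if an uploaded image matches a known fingerprint and
    should be flagged for human review and reporting."""
    return image_hash(image_bytes) in KNOWN_HASHES

# Example: scan a batch of hypothetical uploads and collect the matches.
uploads = {"photo1.jpg": b"...image data...", "photo2.jpg": b"...image data..."}
flagged = [name for name, data in uploads.items() if should_flag(data)]
print(flagged)
```

The controversy described below is not about this matching step itself, which cloud providers have run server-side for years, but about where it happens: Apple proposed moving the comparison onto the user’s device, before anything is uploaded.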

What Apple plans to do is implement this process on a user’s phone, before anything is uploaded to the cloud. The company says this is a better way of cracking down on this kind of material, but its critics say it is not only a significant breach of privacy but also opens a door, one that can’t easily be closed, to other potential intrusions by the US government and other state actors. The Electronic Frontier Foundation called the new feature a “backdoor to your private life,” and Mallory Knodel, chief technology officer at the Center for Democracy and Technology, told me in an interview on CJR’s Galley discussion platform that the capability could easily be expanded to other forms of content “by Apple internal policy as well as US government policy, or any government orders around the world.” Although Apple often maintains that it cares more about user privacy than any other technology company, Knodel and other critics note that the company has nonetheless given the Chinese government virtually unlimited access to user data for citizens in that country.


Facebook’s excuses for shutting down research ring hollow

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, Facebook shut down the personal accounts of several researchers affiliated with New York University, claiming that their work, including a browser extension called Ad Observer that allows users to share the ads they are shown in their Facebook news feeds, violated the social network’s privacy policies. The company said that while it wants to help social scientists with their work, it can’t allow user information to be shared with third parties, in part because of the consent decree it signed with the Federal Trade Commission as part of a $5 billion settlement in the Cambridge Analytica case in 2019. Researchers, including some of those involved in the NYU project, said Facebook’s behavior was not surprising, given the company’s long history of dragging its feet when it comes to sharing information. And not long after Facebook used the FTC consent decree as a justification for the shutdown, the agency took the unusual step of making public a letter it sent to Mark Zuckerberg, Facebook’s CEO, stating that if the company had contacted the FTC about the research, “we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest.”

To discuss how Facebook responded in this case, its track record when it comes to social-science research, and the way other platforms such as Twitter treat researchers, CJR brought together a number of experts on our Galley discussion platform. The group included Laura Edelson, a doctoral candidate in computer science at NYU and one of the senior scientists on the Ad Observatory team; Jonathan Mayer, a professor at Princeton and former chief technologist with the Federal Communications Commission; Julia Angwin, founder and editor-in-chief of The Markup, a data-driven investigative reporting startup with a similar ad-research tool called Citizen Browser; Neil Chilson, a fellow at the Charles Koch Institute and former chief technologist at the Federal Trade Commission; Nathalie Marechal of Ranking Digital Rights; and Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University.

Edelson has said the drastic action Facebook took against her and the rest of the team was the culmination of a series of escalating threats over the group’s research (the researchers are currently lobbying the company to get their accounts reinstated), but that she also has good relationships with some people at the social network. “Facebook’s behavior toward our group has been… complicated,” she said. Since the group studies the safety and efficacy of Facebook’s systems around political ads and misinformation, Edelson said “there is always going to be an inherent tension there,” but added that several people she has worked with at Facebook are “smart and dedicated.” One thing that makes the company’s behavior confusing is that the user information Facebook says it is trying to protect consists of the names of advertisers in its political ad program, which are publicly available through its own Ad Library. “Those are, technically speaking, Facebook user names,” Edelson said. “We think they are public, and Facebook is saying they are not.”
