Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer
Earlier this month, Apple announced a series of steps it is taking to help keep children safe online. One of those new additions is a feature for Siri, its intelligent assistant, which will automatically suggest a help-line number if someone searches for child-exploitation material, and another is a new feature that scans images shared through iMessage, to warn children before they send or view sexually explicit pictures in a chat window. Neither of these new features sparked much controversy, since virtually everyone agrees that the online sharing of child sexual-abuse material, or CSAM, is a significant problem that needs to be solved, and that technology companies need to be part of the solution. The third plank in Apple’s new approach to dealing with this kind of content, however, triggered a huge backlash: rather than simply scanning photos after they have been uploaded to Apple’s servers in the cloud, the company said it will start scanning the photos on users’ phones to see whether they match an international database of child-abuse content.
As Alex Stamos, former Facebook security chief, pointed out in an interview with Julia Angwin, founder and editor of The Markup, scanning uploaded photos to see if they include pre-identified examples of CSAM has been going on for a decade or more, ever since companies like Google, Microsoft, and Facebook started offering cloud-based image storage. The process relies on a database of photos maintained by the National Center for Missing and Exploited Children, each of which comes with a unique digital fingerprint known as a “hash.” Cloud companies compute the same kind of hash for every image uploaded to their servers, compare it against the database, and flag and report the ones that match. Federal law doesn’t require companies to search for such images — and until now, Apple has not done so — but it does require them to report such content if they find it.
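To make that process concrete, here is a minimal sketch of server-side hash matching, written in Python with hypothetical names throughout. Note that production systems use perceptual hashes such as Microsoft’s PhotoDNA, which tolerate resizing and re-encoding; the exact SHA-256 hash below is for illustration only.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the NCMEC hash list. Real systems use
# perceptual hashes (for example, Microsoft's PhotoDNA) so that resized
# or re-encoded copies still match; the exact SHA-256 fingerprint used
# here would miss even a trivially altered copy.
KNOWN_HASHES: set[str] = {
    hashlib.sha256(b"bytes of a known abuse image").hexdigest(),
}

def fingerprint(path: Path) -> str:
    """Hash the raw bytes of an uploaded file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_upload(path: Path) -> bool:
    """Return True if an uploaded image matches the known-image list and
    should be flagged for review and, if confirmed, reported."""
    return fingerprint(path) in KNOWN_HASHES
```

The important point is where this code runs: until now, it has run on servers the companies own, against photos users have already chosen to upload.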
What Apple plans to do is implement this process on a user’s phone, before anything is uploaded to the cloud. The company says this is a better way of cracking down on this kind of material, but its critics say it not only represents a significant breach of privacy but also opens a door to other potential invasions by governments and other state actors that can’t easily be closed. The Electronic Frontier Foundation called the new feature a “backdoor to your private life,” and Mallory Knodel, chief technology officer at the Center for Democracy and Technology, told me in an interview on CJR’s Galley discussion platform that this ability could easily be expanded to other forms of content “by Apple internal policy as well as US government policy, or any government orders around the world.” Although Apple often maintains that it cares more about user privacy than any other technology company, Knodel and other critics note that the company still gave the Chinese government virtually unlimited access to the data of users in that country.
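As a loose sketch of that on-device approach, with every name below hypothetical: the phone checks each photo against the hash list before upload and attaches the result to the upload, and Apple has said its servers act only after an account crosses a threshold of roughly thirty matches. The company’s real protocol hides the per-image result inside an encrypted “safety voucher” using private set intersection and threshold secret sharing, none of which this toy version attempts to reproduce.

```python
from dataclasses import dataclass, field

# Apple has described a reporting threshold on the order of 30 matches;
# the exact number here is illustrative.
MATCH_THRESHOLD = 30

@dataclass
class SafetyVoucher:
    # In Apple's published design this payload is encrypted so that the
    # server learns nothing about individual images until the account
    # crosses the threshold; this toy version stores the result in the clear.
    image_id: str
    matched: bool

@dataclass
class ReviewQueue:
    vouchers: list[SafetyVoucher] = field(default_factory=list)

    def receive(self, voucher: SafetyVoucher) -> None:
        self.vouchers.append(voucher)

    def should_review(self) -> bool:
        # The server acts only once enough matches accumulate for one account.
        return sum(v.matched for v in self.vouchers) >= MATCH_THRESHOLD

def prepare_upload(image_id: str, image_hash: str,
                   known_hashes: set[str]) -> SafetyVoucher:
    """Run the database comparison on the phone, before anything is uploaded."""
    return SafetyVoucher(image_id=image_id, matched=image_hash in known_hashes)
```

Even in this toy form, the critics’ objection is visible: once the matching code and the hash list ship with the operating system, the only thing limiting what the list contains is policy, not technology.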
India, as Stamos points out, “has incredibly broad laws that make speech illegal, such as laws around blasphemy that we don’t have. They have already been creating bills that would require the filtering of speech that is considered illegal in India. One of my concerns would be that those bills will now include that phones that are sold in India have the ability to filter out that content.” And it’s not just the potential for abuse by authoritarian states that has some worried about Apple’s new feature. Both the Obama and Trump administrations sought, and in some cases obtained, access to phone calls and other records belonging to journalists they suspected of being involved in government leaks, and Apple’s new abilities raise the possibility that future subpoenas could contain even more requests for back-door access to reporters’ mobile devices — all in the name of national security.
Apple has tried to reassure those concerned about its new powers by saying it would refuse any government order to scan for images other than child sexual-abuse material. But critics of the new policy point out that the only thing standing between governments and the ability to search a user’s device for any kind of content is the company’s verbal commitment that it would never allow this. Knodel also said the way Apple is planning to scan iMessages breaks its end-to-end encryption, and that this is another potential back door that governments will likely want to make use of, not just for Apple’s services but also for other end-to-end-encrypted services such as Facebook’s WhatsApp. And it’s not just external critics who are against the plan: according to Reuters, Apple employees “have flooded an Apple internal Slack channel with more than 800 messages on the plan announced a week ago.” Many said they were afraid the feature could be exploited by governments looking to find other material for censorship or arrests, according to workers who saw the discussion.
Stamos, who now runs the Stanford Internet Observatory, and other experts in the security and privacy field say one of the biggest flaws in Apple’s strategy isn’t the specific ways in which it wants to police CSAM, but the fact that the company has been virtually absent until now from the debate over how best to do that kind of scanning at scale while protecting privacy. “I am both happy to see Apple finally take some responsibility for the impacts of their massive communication platform, and frustrated with the way they went about it. They both moved the ball forward technically while hurting the overall effort to find policy balance,” Stamos said in a Twitter thread. “With this announcement they just busted into the balancing debate and pushed everybody into the furthest corners with no public consultation.”
Here’s more on Apple and privacy:
Backwards: Technology analyst Ben Thompson writes in his Stratechery newsletter that Apple has made a significant error in the way it structured its search for child-abuse imagery, because it inverts the traditional privacy model, where what is on a user’s phone is private, and whatever is uploaded to the cloud is not. “Apple’s choices in this case go in the opposite direction: instead of adding CSAM-scanning to iCloud Photos in the cloud that they own-and-operate, Apple is compromising the phone that you and I own-and-operate, without any of us having a say in the matter,” he says. “Yes, you can turn off iCloud Photos to disable Apple’s scanning, but that is a policy decision; the capability to reach into a user’s phone now exists, and there is nothing an iPhone user can do to get rid of it.”
Open letter: More than eight thousand security and privacy experts, cryptographers, researchers, professors, legal experts, and Apple users have signed a letter criticizing the company for its plan. “Apple’s proposed measures could turn every iPhone into a device that is continuously scanning all photos and messages that pass through it in order to report any objectionable content to law enforcement, setting a precedent where our personal devices become a radical new tool for invasive surveillance, with little oversight to prevent eventual abuse and unreasonable expansion of the scope of surveillance,” it says.
Start again: Stamos says the company needs to unwind its plan and start again, and this time just scan photos in the cloud the way everyone else does. “If they believe that sharing of photos on iCloud is a real risk for people to share child sexual abuse material—and I think that is probably an accurate belief,” he told Angwin, “then they could decide not to make shared photo albums end-to-end encrypted, and they could scan them on the server side just like everybody else does. My real fear is that there’s a lot of opportunity to use machine learning to keep people safe and that Apple has completely poisoned the well on this, where now you will never get the privacy advocates to accept anything and you will have a massive amount of paranoia from the general public.”
Other notable stories:
The Wall Street Journal reports that as BuzzFeed was exploring plans to go public earlier this year by merging with a SPAC, or special purpose acquisition company, it started to get pushback from executives at NBCUniversal, its largest investor. According to the Journal, the broadcasting company “thought they were getting a bad deal,” because the arrangement BuzzFeed proposed valued the company at $1.5 billion, less than the valuation it had when NBCUniversal invested. “The broadcaster ultimately approved the deal after reaching an agreement in April with BuzzFeed chief executive Jonah Peretti that guaranteed it concessions while still leaving it facing a loss of roughly $100 million,” the Journal reported.
For the first time, Facebook has released a list of what it says are the most popular posts, pages, and links on the social network, ranked in terms of “reach,” or the number of people who saw them. Although the company positioned the report as part of its attempts to be more transparent about what happens on its site, Protocol notes that “the information contained in it also serves another purpose: countering the idea that far-right pages and accounts dominate the site in the U.S.” Kevin Roose of the New York Times has reported that Facebook started working on the list in part because the company disliked a list that he puts together of the pages with the most engagement, using Facebook’s CrowdTangle tracking tool, which often shows pages run by right-wing commentators like Ben Shapiro at the top. Roose called the company’s list of most popular posts “a tremendously weird document.”
The organization that represents British newspaper editors has withdrawn its claim that the UK media is not racist or bigoted, according to a report in the Guardian, after six months of what the newspaper called “pressure from journalists of color who said it did not reflect their experience of the industry.” The Society of Editors said it plans to “work to improve diversity in the industry,” and is no longer standing by statements made in March by Ian Murray, its former director. Murray disputed claims by the Duchess of Sussex that negative coverage of her relationship with her husband, Prince Harry, was motivated by her skin color.
According to a series of tweets on Wednesday from Lyta Gold, managing editor of Current Affairs magazine, she and the other writers and editors at the leftist publication were fired by Nathan J. Robinson, the magazine’s founder and publisher, because they tried to organize the publication as a co-op. According to Gold, Robinson said he supported the effort at an all-staff meeting two weeks ago, but changed his mind later. In a series of emails to staff, Gold said, Robinson admitted that he wanted to remain in control of the organization. “I think I should be on top of the org chart,” he wrote, “with everyone else selected by me and reporting to me.” Robinson later posted a long apology on Facebook saying he had handled the situation badly, but that he tried to reorganize the staff because the publication was “adrift” and he was trying to get it “back on track.”
Reeves Wiedeman writes for New York magazine about his attempts to track down a mysterious figure who has been stealing book manuscripts for years. “A clever thief adopting multiple aliases, targeting victims around the world, and acting with no clear motive. The manuscripts weren’t being pirated, as far as anyone could tell. Was the thief simply an impatient reader? A strung-out writer in need of ideas?” Among the details that gave the scammer away in one instance: An assistant at the talent agency WME “realized her boss was being impersonated because she would never say ‘please’ or ‘thank you.’”
The New Republic profiled a group called Distributed Denial of Secrets, or DDoSecrets (a name that plays on the common hacker attack known as a distributed denial of service, or DDoS), which wants to be a successor to WikiLeaks. “In June 2020, in a release known as BlueLeaks, the group published 269 gigabytes of law enforcement data, which exposed police malfeasance and surveillance overreach across the United States. DDoSecrets also published incriminating records from overseas tax shelters, from the social media site Gab, and from a Christian crowdfunding site often used by the far right.”
Ross Barkan writes for CJR about the media’s complicity in the myth of Andrew Cuomo. “Cuomo’s enormous popularity in the early stages of the pandemic granted him enough residual goodwill to survive the first wave of scandal in February and March, to not resign when senators and representatives like Chuck Schumer and Alexandria Ocasio-Cortez told him to leave,” Barkan writes. “And this was possible because the media, at every turn, fueled the Cuomo myth.” The New York Times, CNN, and MSNBC all helped inflate Cuomo’s reputation, he says. “CNN, most notoriously, allowed Chris Cuomo, the governor’s brother, to interview him repeatedly in prime-time.”
The Daily Beast has named Tracy Connor as its next top editor, replacing former editor Noah Shachtman, who recently left to take the top editorial post at Rolling Stone magazine. Connor, 54, has been acting in the role since her predecessor left, and will take over immediately, the company said. She has worked at both the New York Post and the New York Daily News, and prior to joining the Daily Beast was part of the investigative unit at NBC News, where she helped lead an investigation into sexual abuse committed by former US gymnastics doctor Larry Nassar. Connor said she wanted to “double down on investigations and impact.”