As the mushroom cloud continued to spread over the weekend from Friday evening’s nuclear blast (the news that personal data on more than 50 million Facebook users wound up in the hands of a Trump-linked data firm called Cambridge Analytica), one consistent theme amid all the noise and smoke was an increasingly defensive argument from senior Facebook executives: a) what happened wasn’t technically a data “breach,” and b) it happened a long time ago, before the company tightened its data-usage policies, so it has no bearing on current events like the Trump election campaign.
What’s interesting is that the response to the Cambridge Analytica incident—shock, horror, the pointing of accusatory fingers and threats of regulation—says a lot about the way that attitudes toward Facebook and what it does have shifted over time. The honeymoon isn’t just over at this point; both sides are looking at hiring expensive lawyers and taking each other to divorce court.
Facebook breach: This is a major breach that must be investigated. It’s clear these platforms can’t police themselves. I've called for more transparency & accountability for online political ads. They say “trust us.” Mark Zuckerberg needs to testify before Senate Judiciary.
— Amy Klobuchar (@amyklobuchar) March 17, 2018
At the risk of sounding like a Facebook apologist, both of the points made by Facebook’s former ad executive Andrew “Boz” Bosworth and Chief Security Officer Alex Stamos have a certain amount of truth to them. The data wasn’t obtained by hackers breaking into a database illegally, so it wasn’t technically a breach. Cambridge Analytica got the data because an academic researcher sold it, even though Facebook’s rules say you’re not supposed to do that, and the firm then failed to delete it.
On the second point, Facebook is right that the API access the researcher took advantage of, which gave him data not just on the users who signed up for his quiz but on their friends as well, was tightened up in 2014, after a number of privacy researchers and others pointed out that it could be misused.
In one of the many responses to the Facebook/Cambridge Analytica incident, Benedict Evans, who works for the Silicon Valley venture capital firm Andreessen Horowitz, defended the social network by pointing out that people used to complain Facebook censored the News Feed too much and was too stingy with its data, and that the conversation has now completely flipped:
https://twitter.com/BenedictEvans/status/975054282771722240
As a VC staffer, Evans is naturally inclined to defend a great Silicon Valley success story like Facebook (which Andreessen Horowitz invested in, and on whose board AH co-founder Marc Andreessen sits). But that’s not to say he doesn’t have a point.
Not that long ago, Facebook was criticized for removing posts too often and infringing on people’s free-speech rights, but now people seem to want it to do a lot more to remove offensive speech, fake news, and so on. And when it comes to the company’s data, one major complaint was that Facebook’s API was too locked down, not open enough, and that it should make it easier for others (including users) to get their data out. Now the criticism seems to be that it didn’t lock it down soon enough, or tight enough.
As Harvey Dent says in the movie The Dark Knight, you either die a hero or you live long enough to see yourself become the villain, and that’s where Facebook is now: all of the things it used to do that many people celebrated as a triumph of social technology (including the ability to target individuals based on their personal data, something the Obama campaign was praised for doing) are now seen as the fruit of a poisoned tree, in part because we understand what Russian trolls and other governments can do with such data. Our innocence has been lost, and perhaps that’s ultimately a good thing.