Facebook’s defense boils down to “Everyone was doing it back then”

It seems as though hardly a week goes by without Facebook coming under fire for playing fast and loose with people’s data, and this week is no exception. In some cases, the social network has immediately fallen on its sword and begged for forgiveness, but in the latest case it appears to be arguing that this is just how the industry works, and therefore it shouldn’t be accused of bad behavior. For many, that attitude is not going to sit well, and it’s likely to increase pressure for some kind of regulatory oversight.

The New York Times reported over the weekend that Facebook has been sharing the personal data of possibly millions of users—and all of their friends—with device makers like Apple, BlackBerry and Samsung. According to the Times, the social network did so without asking for permission or informing users that phone manufacturers were storing some or all of that data. As the story describes it:

Facebook allowed the device companies access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing, The New York Times found.

In previous cases, including the Cambridge Analytica data leak, messages of apology came from either Facebook CEO Mark Zuckerberg or COO Sheryl Sandberg, or someone of that caliber. But in the latest case, the response from the social network came from a vice-president of product partnerships—and the message was significantly less conciliatory. Entitled “Why We Disagree with The New York Times,” the post says the process of allowing device makers to extract data was necessary in order to let those devices incorporate Facebook-like functions, in the days before there were standalone apps.

The company also says data was shared only when users gave explicit permission, even though this directly contradicts the reporting by the Times. According to the newspaper, information was shared during its tests even when users had turned data-sharing off. As former FTC chief technologist Ashkan Soltani pointed out on Twitter, this seems to contradict Zuckerberg’s previous statements during the Cambridge Analytica uproar that Facebook shut down all such data sharing in 2014.

In a series of tweets posted by the main Facebook account following the Times story, the company made the case that it restricted the amount of data shared with device makers, and noted that it launched the device-integrated APIs in order to help get Facebook onto mobile devices before app stores had been invented. And finally, the company argued that “this was standard industry practice.” That argument isn’t likely to hold much water with some Facebook critics, however.

Even one of Facebook’s most powerful executives—Adam Mosseri, until recently the man in charge of the all-powerful News Feed—admitted recently that this kind of attitude has been part of the problem not just with the social network but with many other Silicon Valley companies as well. Mosseri told attendees at a CJR event in San Francisco in May that Facebook and many other companies have suffered from a blind spot that caused them to ignore the potential downsides of a given technology.

For better or worse, events like the Cambridge Analytica leak have highlighted how much of what Facebook saw as standard industry practice now looks like a red flag, pointing to the social network’s cavalier attitude toward user information and privacy. And if both of these recent cases suggest that the company has breached its 2011 consent decree with the FTC, that is likely to ramp up calls for the government to step in and take some kind of action.

https://twitter.com/jimray/status/1003613006654332928

Facebook cuts its losses by killing its Trending Topics feature

Trending Topics is no more. Facebook said Friday that it has decided to sunset the feature—which was introduced in 2014 as a way of competing with Twitter as a source of breaking news—because users no longer seem interested in it. But that’s probably not the only reason Facebook decided to kill the section, which consisted of a list of trending keywords in the upper right-hand corner of the site. Trending Topics has also been the source of a significant amount of controversy in the past over what shows up in it, and how the company decides to moderate or filter it to remove certain terms.

That process used to be handled by a team of human editors who were hired for the task, until one staffer confessed to Gizmodo in 2016 that editors were in the habit of manually removing conservative news sites from the ranking. The resulting storm of criticism from conservative media companies and politicians led to the firing of almost all of the human editors and a meeting between CEO Mark Zuckerberg and several prominent conservative commentators such as Glenn Beck.

The ranking process, meanwhile, was handed over almost entirely to Facebook’s all-powerful algorithms. But the section continued to draw complaints, both for what it included and for what it left out. Among other things, it promoted a conspiracy theory about the 9/11 attacks, as well as a number of other false and misleading stories. Although Facebook improved the algorithm, the feature never seemed to take off with users, perhaps in part because of the controversy.

In a way, this was a taste of what was to come for the social network. The term “fake news” became more and more popular, and soon Facebook was put under the spotlight for its role in promoting conspiracy theories and other misinformation spread during the 2016 election by the Internet Research Agency, an infamous group of online trolls with links to the Russian government. Since then, the social network has repeatedly said it is committed to focusing only on high-quality news sources, including those it believes are the most trusted by a broad range of users.

In place of the trending section, Facebook says it will be introducing several new experiments, including:

  • A “Breaking News” Label: The company says it’s currently running a test with 80 publishers across North and South America as well as Europe, India and Australia that lets publishers place a “breaking news” indicator on their posts in News Feed, combined with breaking news notifications.
  • Today In: Facebook says it is experimenting with a new dedicated section on the site that is called Today In, which the company says will “connect people to the latest breaking and important news from local publishers in their city,” as well as providing updates from local officials and groups.
  • News Video in Watch: As CJR reported after an interview with Facebook’s Head of News, Campbell Brown, the site is also rolling out a new dedicated section in Watch, its video feature, that will provide live video news coverage and analysis provided by a range of media partners.

It remains to be seen whether the new “Breaking News” category will become as clogged with questionable content as the old Trending Topics section was. Presumably Facebook is devoting considerably more resources to the new feature, but that isn’t likely to stop certain news sites and publishers from complaining if their articles aren’t highlighted and those from other news sites are. Picking winners and losers is never an easy game for a platform to play, but Facebook is in that role whether it wants to be or not. Now it has to figure out how to live up to its commitments without starting another PR firestorm.

What can social media do for democracy?

Given the seemingly never-ending litany of transgressions we find all around us on social-media platforms—whether it’s Facebook giving up data to Cambridge Analytica and being manipulated by Russian trolls, or Twitter’s complicity in racism and online harassment—it’s difficult to imagine a case being made that social media in general is anything but a looming danger to society and democracy.

Despite this, however, Ethan Zuckerman—who runs the Center for Civic Media at MIT and teaches at MIT’s Media Lab—did his best to put together a list of the ways in which social media can or should help democracy and society, in a post he published Wednesday on his blog and at Medium. Whether his argument ultimately succeeds or not is hard to say, but it’s a worthwhile question to ask. As Zuckerman puts it:

I’m interested in what social media should do for us as citizens in a democracy. We talk about social media as a digital public sphere, invoking Habermas and coffeehouses frequented by the bourgeoisie. Before we ask whether the internet succeeds as a public sphere, we ought to ask whether that’s actually what we want it to be.

Zuckerman uses as his template an essay that journalism professor Michael Schudson wrote as part of his 2008 book Why Democracies Need an Unlovable Press, in which he argues that good journalism can accomplish a number of things that are worthwhile for society—including informing the public, investigating important issues, analyzing complex topics and serving as a tool for social empathy.

So what can social media do? Zuckerman says that at their best, social networks can also inform us about significant news events, as Twitter and Facebook did during the Arab Spring in Egypt and the killing of Michael Brown by police in Ferguson, Missouri. And they can amplify important voices, he says. “By sharing content with small personal networks on social media, individuals signal the issues they see as most important and engage in a constant process of self-definition.” He argues social media can also show us diverse views and perspectives, and provide a place for informed debate.

Anyone who has spent any time on Twitter—or Facebook for that matter—discussing issues like the 2016 election of Donald Trump or the rise of the “alt right” in US politics might laugh at the idea that social platforms can show us diverse views or be a place for informed debate. And Zuckerman admits that every beneficial aspect he mentions can have a significant downside:

The tools that allow marginalized people to report their news and influence media are the same ones that allow fake news to be injected into the media ecosystem. Amplification is a technique used by everyone from Black Lives Matter to neo-Nazis. The bad news is that making social media work better for democracy likely means making it work better for the Nazis as well.

In the end, Zuckerman argues that if we are to expect better from platforms like Facebook and Twitter, then we need to know what it is we want them to do—what service do we think they can or should perform in a civil society? “If our response to the shortcomings of contemporary social media is to move beyond the idea that we should burn it all down,” he writes, “then it’s critical that we ask what social media can do for democracy and demand that it play its part.” Whether the platforms will listen is a different question.