Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer
A year and a half ago, Mark Zuckerberg floated what seemed like a crazy idea. In an interview with Ezra Klein of Vox Media, the Facebook CEO said he was thinking about creating a kind of Supreme Court for the social network—an independent body that would adjudicate some of the hard decisions about what kinds of content should or shouldn’t be allowed on the site’s pages, decisions Facebook routinely gets criticized for. Imagine, Zuckerberg said, “some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech.” It wasn’t just a pipe dream or an offhand comment: for better or worse, Facebook has been hard at work over the past year creating just such an animal, which it is now calling an “Oversight Board.” This week, it took another in a series of steps toward that goal, publishing the charter that will govern the board’s actions, along with a document showing how it incorporated feedback from experts and interested parties around the world, and a description of how the governance process will work for this third-party entity.
As part of the roadshow for this effort, Facebook held an invitation-only conference call with journalists, in which the head of the governance team took pains to describe just how much work the company did to gather as much feedback as possible on the idea. Facebook held six in-depth workshops and 22 roundtables with journalists, privacy experts, digital-rights activists, and constitutional scholars from 88 different countries, and received more than 1,200 written submissions—all of which sounds very impressive until you remember that Facebook has more than 35,000 employees and revenues of more than $56 billion. And what did the company come up with? The charter describes an independent body that will start with 11 members and eventually number as many as 40, who will hear cases in groups of five. Some cases will be referred by Facebook, while others will come from appeals launched by users whose content has been removed for a variety of reasons.
In an attempt to keep the board as independent as possible, Facebook says it will appoint two co-chairs for the board, who will then be free to select whomever they wish to fill out the rest of the board membership. And Facebook will not compensate board members directly, for fear of the perception of a conflict of interest—compensation will come from a trust that the company will set up (and fund), which will also be run independently. The charter specifically states that members can’t be removed because of specific decisions they make, but can only be disqualified if they breach the code of conduct set out in the charter. But most important of all, Facebook says, decisions made by the board are binding, which means they can’t be overruled by the company unless the changes that would be required to comply actually violate the law, or unless the board recommends something that is technically impossible.
In a blog post he published to coincide with the release of the charter, Zuckerberg said that while Facebook makes decisions every day about what kind of speech it will and won’t tolerate, “I don’t believe private companies like ours should be making so many important decisions about speech on our own.” Hence, the Oversight Board, and the promise of an appeal process that is at least notionally independent from Facebook (which, it should be noted, is controlled almost single-handedly by Zuckerberg, thanks to his ownership of multi-voting shares). Although skepticism abounds—not surprisingly, given some of the company’s past commitments that have failed to come to fruition—there is also some grudging admiration for what the company is trying to do. In a Twitter thread, law professor and free-speech expert Kate Klonick said that while there’s a chance all these good intentions could turn out to be vaporware, “at the very least, so far, it’s a bigger & more rigorous commitment of time, money, & platform power than anything that’s come before.”
Will the decisions made by the Oversight Board actually change the way Facebook operates in ways that matter? Or will it be just a kind of fig leaf that the company holds up so that it can avoid the threat of imminent regulation? There are forces within Congress that would very much like to remove the protection that Facebook (and other platforms) have under Section 230 of the Communications Decency Act, which keeps them from being sued for content they host or moderation decisions they make. And what are the larger implications of a company like Facebook making decisions about what limits should be placed on free speech, even if those choices are rubber-stamped by a theoretically independent body? We are all about to find out the answers to those questions, whether we like it or not.
Here’s more on Facebook, free speech and the Oversight Board:
Jellyfish skeleton: I spoke with Kate Klonick in an in-depth interview on CJR’s Galley platform recently, and we discussed the proposed Facebook “Supreme Court” idea. Klonick said that she is cautiously optimistic, and that she likes to describe the idea as “trying to retro-fit a skeletal system for a jellyfish. A private transnational company voluntarily creating an independent body and process to oversee a fundamental human right [is] really a very daunting idea.”
Sheer complexity: When Facebook released a draft version of its charter for the Oversight Board earlier this year, Issie Lapowsky of Wired wrote that comparing it to the Supreme Court actually “minimizes the sheer complexity of what Facebook is setting out to accomplish.” The Supreme Court hands down rulings only for the US, but Facebook’s version would be choosing from several million cases every week, and its decisions would affect 2.3 billion Facebook users—a population roughly seven times the size of the US.
Unanswered questions: I spoke with Jillian York, the international director for freedom of expression at the Electronic Frontier Foundation, in a recent Galley interview, and we talked about the Oversight Board. York said she has been calling on the platforms to do something similar, “but of course, the devil is in the details.” Having an external body that can assess content decisions is clearly good, she said, but there are still many unanswered questions.
Other notable stories:
CNN was widely criticized by journalists and others for booking former Trump campaign manager Corey Lewandowski, who had just admitted in testimony before the House Judiciary Committee that he had no compunction about lying to the media. “Corey Lewandowski confessed to gaslighting the press. CNN booked him hours later anyway,” said a Vox headline.
The Washington Post has launched an advertising network for publishers called Zeus Prime that the paper says will allow it and other media outlets to sell automated ads in real time, in much the same way that large players like Google do. The company is pitching the network as a way for publishers to keep more of the advertising revenue they generate, and promises higher CPM (cost per thousand) rates than they can currently get.
Facebook and Google’s parent company, Alphabet, are cozying up to publishers and media companies by offering them features they have long requested, according to a report in The Wall Street Journal, moves that many see as an attempt by the two tech giants to avoid potential government regulation.
Medium, the publishing platform run by former Twitter CEO Evan Williams, has launched a “Save to Medium” feature that mimics tools like Instapaper and Pocket, allowing users to click a browser button and save an article to their Medium account. Such tools are seen as controversial by some publishers because they strip the advertising from pages that are saved.
The Observatory on Social Media at Indiana University has released a free tool to allow journalists and others to detect potential disinformation spreading on Twitter. The tool, called BotSlayer, can be configured to follow certain searches or keywords and uses an “anomaly detection” algorithm to flag suspected bot activity. The Observatory also has several other tools aimed at tracking disinformation, including Hoaxy.
Wudan Yan writes for CJR about how some journalists, when writing about climate change, focus on lifestyle changes such as flying less, when the single biggest action someone can take to reduce their carbon footprint is to have fewer children. According to some recent estimates, a single child produces about 58 tons of carbon dioxide a year, or about 20 times as much as a single transatlantic flight generates.
The New York Times looked at how the Chinese government and its agents unleashed a storm of Twitter trolls in an attempt to discredit the protesters in Hong Kong. Some of the accounts, which the paper says numbered more than 200,000 at one point, started by posting innocuous articles about Chinese topics, but then gradually shifted to posting propaganda aimed at painting the protesters as dangerous terrorists. Others were apparently fakes acquired on the black market.
Jill Geisler of Loyola University in Chicago talks with CJR editor Kyle Pope about why some journalistic outlets are reluctant to take a side in reporting about climate change, and how they and others often shy away from collaborating with projects like CJR and The Nation’s Covering Climate Now for a number of reasons, including “Not Invented Here Syndrome.”