Did the Facebook Oversight Board drop the ball on Trump?

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, after months of deliberation, the Facebook Oversight Board — the theoretically independent body that adjudicates cases in which content has been removed by the social network — released its decision on the banning of Donald Trump. The former president’s account was blocked indefinitely following the attack on the US Capitol building on January 6th, after Facebook decided he was using it to incite violence. The company referred that decision to the Oversight Board for review, and the board announced its ruling last week: a two-part decision that upheld the ban on Trump as justified, but also noted that Facebook doesn’t have a policy that allows it to ban accounts indefinitely. So the board gave the company six months to either come up with such a policy or impose a time limit on the Trump ban. Some observers saw this as the Oversight Board “punting” on the choice of whether to ban the former president, rather than making a hard-and-fast decision. Others argued that paying so much attention to the rulings of a quasi-independent body created by the company it is supposed to oversee gave the board (and Facebook) too much credit, and was a distraction from the company’s real problems.

Is the Oversight Board a valid mechanism for making these kinds of content decisions, or is Facebook just trying to pass the buck and avoid responsibility? Did the board’s ruling on Trump’s ban reinforce this latter idea, or is the board actively fighting to ensure that it’s not just a rubber stamp for Facebook’s decisions? To answer these and other related questions, we’re talking this week with a number of experts on Galley, CJR’s digital discussion platform, including Nathalie Maréchal, a policy analyst at Ranking Digital Rights; Kate Klonick, a law professor at St. John’s in New York who has been following the Oversight Board since its inception; and Evelyn Douek, a lecturer at Harvard Law School and an affiliate at the Berkman Klein Center for Internet and Society. Maréchal, for one, falls firmly into the camp that believes the Oversight Board is mostly a distraction from the important questions about Facebook and its control over speech and privacy. “The more time and energy we all spend obsessing over content moderation, the less time and energy we have scrutinizing the Big Tech Companies’ business models and working to pass legislation that actually protects us from these companies’ voraciousness,” she said.

The most important thing that the government could do to rein in Facebook’s power, Maréchal says, is “pass federal privacy legislation that includes data minimization (they can only collect data that they actually need to perform the service requested by the user), purpose limitation (they can only use data for the specific purpose they collected it for — i.e., not for advertising), and a robust enforcement mechanism that includes penalties that go beyond fines, since we know that fines are just a slap-on-the-wrist cost of doing business.” Klonick, for her part, is a little more willing to see the Oversight Board as a potential positive influence on Facebook — although not a solution to all of the company’s myriad problems, by any means. And Klonick knows the board more intimately than most. In 2019, she convinced the company to give her access to those inside Facebook who were responsible for creating the board and developing its mandate and governance structure, and she later wrote about this process for The New Yorker and others.

Klonick says she got her own funding for the project, and that Facebook had no control over the finished product. She was also aware that the company probably agreed to give her access in part because it hoped to gain some legitimacy in the process, and she was mindful of the “traps of access journalism” while working on her piece. Based on her knowledge and understanding of the board, Klonick says she believes that, far from punting the decision on Trump, the board’s ruling shows that “they are taking their role very, very seriously — and they’re also not willing to simply carry water for hard decisions that Facebook doesn’t want to make.” When it comes to legitimacy, she says, that’s something the Oversight Board will have to earn over time, by making decisions that are well tested and reasoned, and by forcing (or convincing) Facebook to abide by them. Even the United Nations faced an uphill battle in achieving legitimacy for its various tribunals in the early years of its life, Klonick notes, and the Supreme Court wasn’t seen as the all-powerful body we think of it as now until well into its existence.

In order to be effective, she says, the Oversight Board needs three specific forms of independence: 1) the financial kind, so that it isn’t beholden to Facebook for funding; 2) the intellectual kind, meaning it can’t be filled with Facebook staffers or former employees, or anyone else who is biased in favor of the company; and 3) the judicial kind, which means it must be able to choose its own cases and arrive at decisions itself, without being influenced by Facebook. Klonick says she believes the board has achieved each of these for the most part, and also that it is staffed by “incredibly high profile people who have put their reputations on the line and DO NOT WANT to compromise their reputations by being seen as Facebook shills.” As for the board being a distraction from the other important problems that need solving, such as Facebook’s market power, Klonick doesn’t buy it: “You can’t have a one-size-fits-all approach to these problems,” she says. “The issues involved in content moderation at a global scale and censoring or not censoring people is not going to be solved by breaking up Facebook at a very fundamental level.”

Here’s more on Facebook and the board:

Run the clock: In Vanity Fair, Nick Bilton writes that while the Oversight Board gave the company six months to either come up with a policy or put a time limit on Trump’s ban, the belief among a number of people who have spent time with Mark Zuckerberg, Facebook’s chief executive, is that he won’t act at all — for now. “As one person who knows Zuckerberg and is familiar with the company’s internal operations explained, it’s easier for Facebook to run out the clock until its verdict isn’t as heated. That doesn’t mean Zuckerberg won’t find himself backed into a corner eventually. But in the immediate future, the consensus is that very little will come from Facebook regarding Trump’s online fate.”

A shambles: One member of the Oversight Board told Fox News Sunday that the company’s internal rules for banning content are a “shambles,” and that the company needs to fix the process to have credibility in enforcing them. Michael McConnell, a former federal judge and the co-chairman of the board, said Facebook’s rules are “not transparent. They are unclear. They are internally inconsistent.” What the board is trying to do, he said, is “bring some of the most important principles of the First Amendment, of free expression law globally, into this operation. Facebook exercises too much power [and] it is the job of the Oversight Board to try to bring some discipline to that process.”

More experts: Last year, we had a Galley discussion with a number of other experts on platform governance and moderation, including Daphne Keller, a director at the Stanford Center for Internet and Society and former deputy legal counsel at Google; Steven Levy, Wired magazine editor-at-large and author of the recent book “Facebook: The Inside Story”; David Kaye, then the UN’s special rapporteur for freedom of expression; Alex Stamos, director of the Stanford Internet Observatory and former head of security at Facebook; Emily Bell, director of the Tow Center for Digital Journalism at Columbia University’s journalism school; and Rebecca MacKinnon, a co-founder of Global Voices and founding director of the Ranking Digital Rights project.

Other notable stories:

A seven-month investigation by the Associated Press and the Oxford Internet Institute found what it calls an army of fake accounts that have retweeted Chinese diplomats and state media tens of thousands of times, “covertly amplifying propaganda that can reach hundreds of millions of people — often without disclosing the fact that the content is government-sponsored.” Some of the accounts, many impersonating UK citizens, racked up more than 16,000 retweets and replies before Twitter suspended them in response to the investigation.

Immigration dominated early coverage of the Biden administration, especially among outlets that appeal to right-leaning readers, according to a new study by the Pew Research Center. Immigration was one of the five topics most covered by 25 major news outlets during the administration’s first 60 days, the Pew research found, accounting for 11 percent of all stories — exceeded only by health care, not including COVID coverage (17 percent), and the economy (22 percent). The study also found that coverage of the administration’s handling of the immigration issue during those first 60 days was more negative than its coverage overall.

The Knight Foundation has announced $3 million in grants that will go to four organizations for research and other projects related to using artificial intelligence in the media, especially local news. The four organizations are the Associated Press, which will create a training and development program for 50 local news entities; the NYC Media Lab, which will develop a platform for information about AI in journalism; the Brown Institute at Columbia, which will create new audience strategies for AI in local news; and the Partnership on AI, which will research the ethical challenges news organizations face when making use of artificial intelligence.

Bruce D. Brown, executive director of the Reporters Committee for Freedom of the Press, writes for CJR about the Trump administration’s seizure of phone records belonging to several Washington Post journalists, which only recently came to light. “Advance notice of toll-records requests is a crucial protection for newsgathering because it gives news organizations the ability to negotiate over the scope of the demand and possibly to challenge it in court,” Brown says. “Delayed notification means that, whatever a judge might say, the Justice Department already has the material, and that bell can’t be unrung.”

Employees at Apple circulated an open letter asking the company to look into why it hired Antonio García Martínez, a former Facebook advertising product manager, despite what they said were racist and misogynistic comments he made in “Chaos Monkeys,” a book he wrote about his time in Silicon Valley. The letter said employees were concerned because the comments “directly oppose Apple’s commitment to Inclusion & Diversity,” and that the authors are “profoundly distraught by what this hire means for Apple’s commitment to its inclusion goals.” Late Wednesday, Apple announced that García Martínez was no longer employed by the company.

Laura Wagner writes for Defector about the upheaval at The Appeal, saying the “reckoning was a long time coming.” On Monday, staffers at the nonprofit news site formed a union, and five minutes later, management announced layoffs. Later, managers appeared to do a U-turn, pausing the planned layoffs pending discussions with staff, and saying they “enthusiastically” recognized the union. Defector says it reviewed dozens of internal documents, emails, and messages, and spoke to nearly 30 current and former workers, all of whom “described a demoralizing workplace culture — especially for women and people of color — with little to no protection for staff and an extremely high turnover rate.”

Instagram removed posts and blocked hashtags about the Al-Aqsa Mosque, one of Islam’s holiest sites, because the Facebook-owned service’s moderation system mistakenly associated it with terrorism, according to a report from BuzzFeed News. For the past week, the mosque has been the site of repeated violence between Israeli police and Palestinians, many of whom were visiting the mosque to pray during the holy period of Ramadan. Employees of Instagram informed management of the error, according to BuzzFeed. Facebook has also been accused of censoring content related to the conflict between Israeli security forces and Palestinians.

Matt Maddock, a Republican state representative in Michigan, has introduced a bill that would require fact checkers to register with the state. The “Fact Checker Registration Act” defines a fact checker as someone who publishes in print or online, is paid by a fact-checking organization, and is a member of the International Fact-Checking Network, which is run by the Poynter Institute. The bill requires qualifying fact checkers to post a $1 million bond, and states that any “affected person” could bring a civil action for wrongful conduct, which might result in the bond being forfeited for “demonstrable harm.”

Rest of World profiles Alt News, a fact-checking organization in India, which finds itself engaged in a game of Whac-A-Mole trying to stamp out misinformation in that country. “Misinformation is a challenge globally, but in India, it’s practically baked into the ruling party’s communications. And while the platforms that are host to this misinformation, like Facebook and Twitter, have made attempts to curtail it, it hasn’t been enough to stem the tide. The average Indian media consumer is inundated with misinformation from the time they open the day’s paper to when they lie in bed scrolling on their smartphones at night.”