Last year, The Guardian published leaked documents it said were internal Facebook rule books on how and when to moderate inappropriate content. The list of permitted terms caused significant controversy because it included threats of violence towards women, children and various ethnic groups, which Facebook said should be allowed to remain as long as they were not too specific. Harassment of white men, however, was not to be tolerated, because the rules classified them as a “protected group.” The guidelines sparked an ongoing debate over how Facebook decides which kinds of speech it will censor and which it won’t.
On Tuesday, the giant social network finally gave in to pressure from critics and published the community standards guidelines it says it uses to make most of its content decisions, with categories ranging from “violence and criminal behavior” to “integrity and authenticity.” The company said in a post introducing the rules that it generally errs on the side of allowing content, even when some find it objectionable, “unless removing that content can prevent a specific harm.” Facebook also said that it often allows content that technically violates its standards “if we feel that it is newsworthy, significant, or important to the public interest.”
One of the top questions we’re asked is how do we decide what content is allowed on Facebook. To provide clarity and invite feedback and participation from our community, we are publishing our internal enforcement guidelines for the first time today. https://t.co/CdN2WLstSG
— Meta (@Meta) April 24, 2018
Some of the company’s rules are fairly straightforward, such as not allowing people to sell drugs or firearms. But much of what the social network is trying to do amounts to nailing Jell-O to the wall, especially when it comes to censoring speech around violence. The blog post says that Facebook considers “the language, context and details” in order to determine when content represents a “credible threat to public or personal safety.” But drawing those kinds of sharp lines is incredibly difficult, especially given the billions of posts that Facebook handles every day, which helps explain why the company gets so much criticism from users.
In an attempt to address some of those complaints, Facebook also announced it is introducing an official appeal process that will allow users to protest the removal of content or the blocking of accounts. Until now, anyone who had content removed had to try to reach a support person by emailing a general Facebook address or by posting on social media. But Facebook says it is rolling out an official process that will allow users to request a review of the decision and get a response within 24 hours. Appeals will start with content involving nudity, hate speech and graphic violence, with other content types added later.
Facebook’s new transparency around such issues is admirable, but it still raises troubling questions about how much power the social network has over the speech and behavior of billions of people. The First Amendment technically only applies to government action, but when an entity of Facebook’s size and influence decides to ban or censor content, it has almost as much impact as if a government did it.
Here are some links to more information on Facebook’s latest moves:
- Facebook has experts: Monika Bickert, Facebook’s VP of Global Policy Management, describes how community standards decisions are made: “We have people in 11 offices around the world, including subject matter experts on issues such as hate speech, child safety and terrorism. Many of us have worked on the issues of expression and safety long before coming to Facebook.” Bickert says that as a criminal prosecutor, she worked on everything from child safety to counterterrorism, and other members of the team include a former rape crisis counselor, a human-rights lawyer and an academic who studies hate speech.
- Not enough: Malkia Cyril, a Black Lives Matter activist and executive director of the Center for Media Justice, was part of a group of civil-rights organizations that pushed Facebook to make its moderation system less racially biased. She tells The Washington Post that the company’s latest moves don’t go far enough in dealing with white supremacy and hate on the social network. “This is just a drop in the bucket,” she says. “What’s needed now is an independent audit to ensure that the basic civil rights of users are protected.”
- Protected but still sensitive: As Wired magazine points out, Facebook doesn’t have to remove any of the offensive or disturbing content on its network if it doesn’t want to, thanks to Section 230 of the Communications Decency Act, which protects online services such as Google, Twitter and Facebook from legal consequences for the actions of their users or the content they post. But all of the major platforms have been trying to boost their efforts at removing the worst of the material they host, in part to try to stave off potential regulation.
- The advisory team: As part of Facebook’s attempts to be more transparent about how it makes such decisions, the company allowed a number of journalists to sit in on one of the social network’s weekly community standards meetings, where the team of advisers decides what content meets the guidelines and what doesn’t. HuffPost says the attendees included people “who specialize in public policy, legal matters, product development and communication,” and notes there was very little mention of what other large platforms such as Google do when it comes to removing offensive or disturbing content.
Other notable stories:
- MSNBC host Joy Reid claims that hackers planted a number of anti-gay posts on the blog she mothballed last year following similar allegations, by infiltrating the Internet Archive, the only place where the blog is still available (the Archive is an ongoing attempt to preserve a copy of as many websites as possible). The Archive, however, says that after investigating the claims it could find no evidence that the blog was tampered with.
- CJR’s Alexandra Neason writes about a group of high-school students who were frustrated by the limitations of the Freedom of Information Act and so decided to write their own bill, known as the Civil Rights Cold Case Records Collection Act, to make it easier to get documents related to civil rights-era crimes from the FBI and other agencies, without having them tied up in red tape or redacted to the point where they’re unusable.
- Google is rolling out its new subscription tool, which it calls Subscribe with Google, and its first launch partner is the McClatchy newspaper chain. The search giant says the tool allows people to subscribe to newspapers and other online publications with just two clicks, after which Google highlights content from those publications in search results for subscribers. McClatchy plans to implement the tool on all 30 of its local newspaper sites, according to Digiday.
- In a fundraising email sent to his supporters, Donald Trump says he won’t be attending the annual White House Correspondents’ Dinner because he doesn’t want to be “stuck in a room with a bunch of fake news liberals who hate me.” Instead, the president says he will be holding a rally in Michigan “to spend my evening with my favorite deplorables who love our movement and love America.”
- In a Rolling Stone magazine feature, Ben Wofford writes about how Sinclair Broadcast Group is trying to build what amounts to a national network of hundreds of conservative-leaning, Fox News-style TV stations in small and medium-sized towns across the country, and how the Trump administration is making it easier for the company to do so. “Everything the FCC has done is custom-built for the business plan of one company, and that’s Sinclair,” one FCC commissioner told the magazine.