Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.
Anyone who has been paying attention over the past year is probably well aware that Facebook has a problem with misinformation, but a number of recent events have highlighted an issue that could be even more problematic for the company and its users: namely, harassment and other forms of abusive conduct in private groups. In the latest incident, ProPublica reported on Monday that members of a Facebook group frequented by current and former Customs and Border Protection agents joked about the deaths of migrants, talked about throwing burritos at members of Congress who were visiting a detention facility in Texas, and posted a drawing of Rep. Alexandria Ocasio-Cortez engaged in oral sex with a migrant. According to ProPublica, the group, which has 9,500 members, is called “I’m 10-15,” after the code CBP agents use when they have migrants in custody.
It’s not clear whether the administrator of the Facebook group made any effort to restrict membership to current or former Customs and Border Protection agents, so it’s probably not fair to conclude that the views expressed in the group are indicative of how a majority of CBP employees feel about immigrants. But that didn’t stop many commentators on Twitter and elsewhere, including a number of members of Congress, from calling for an investigation into attitudes at the agency. “It’s clear there is a pervasive culture of dehumanization of immigrants at CBP,” Democratic presidential candidate Kamala Harris said on Twitter, while Ocasio-Cortez said “This isn’t about ‘a few bad eggs.’ This is a violent culture,” and described CBP as “a rogue agency.” If agents are threatening violence against members of Congress, Ocasio-Cortez asked, “how do you think they’re treating caged children+families?”
In response to the ProPublica story, a spokesman for CBP said the agency has initiated an investigation into the “disturbing social media activity,” and Border Patrol Chief Carla Provost said the posts are “completely inappropriate and contrary to the honor and integrity I see—and expect—from our agents.” But the Border Patrol group is only the latest in a series of such examples to come to light. A recent investigation by Reveal, the digital publishing arm of the Center for Investigative Reporting, found that hundreds of current and retired law enforcement officers from across the US are members of extremist groups on Facebook, including groups that espouse racist, misogynistic, and anti-government views.
Some might argue that abusive comments made in a private Facebook group aren’t that different from hateful remarks made in an email thread. But email providers don’t host such discussions the way Facebook hosts private groups, and the social network’s algorithm also recommends groups to users based on their past behavior and interests. These private groups are just part of a broader, and growing, problem for Facebook. The company also offers private, encrypted discussions via its WhatsApp service, which has been implicated in the genocide in Myanmar, in which hundreds of thousands of Rohingya Muslims were driven from their homes, tortured, and killed. Facebook CEO Mark Zuckerberg has said private communication is the future of the social network, which means these kinds of problems could escalate and multiply.
Facebook has been working on creating what it calls a Supreme Court-style social media council, or series of councils, made up of third-party experts who could make decisions about what to do with problematic content. Will these councils be able to see, let alone regulate, the kind of hate speech or abusive content that occurs in private and secret groups, or in encrypted conversations on WhatsApp? And if so, how will they determine what is appropriate and what isn’t? Will those decisions be based on Facebook’s standards, local laws, or universal human rights principles? That’s unclear. But the company will have to find answers to these questions soon, as more and more attention is focused on the potential downsides of its private, encrypted future.
Here’s more on Facebook and its content moderation problems:
- Front-line workers: Whether it’s groups or regular Facebook content, one of the main weapons the company has against the problem is the thousands of moderators it employs to review flagged content, most of whom work for third-party contractors. In a new book, researcher Sarah Roberts writes about why artificial intelligence is not a solution to the content moderation problem.
- The privacy dilemma: I wrote for CJR’s print edition about the challenges that Facebook faces as it tries to come up with a process for deciding what kinds of content it should host and what is unacceptable. Misinformation researcher Renee DiResta told me that hoaxes, hate speech, and propaganda could become even more difficult to track and remove as more discussion moves into private groups.
- Crackdown backfiring: Facebook has been trying to take action against groups that contain problematic content by removing and even banning them, but some of those efforts are backfiring as trolls figure out how to game the process. Some groups have been forced to go private after malicious actors posted hate speech or offensive content and then reported the groups to Facebook moderators in an attempt to get them removed for misconduct.
- A fine for failure: Germany’s Federal Office of Justice has handed down a fine of $2.3 million against Facebook for not acting quickly enough to remove hateful and illegal content. Under the country’s Network Enforcement Act (known as NetzDG), companies like Facebook have to remove posts that contain hate speech or incite violence within 24 hours or face fines of as much as $35 million. They are also required to file reports on their progress every six months.
Other notable stories:
- A report from the Atlantic Council’s Digital Forensic Research Lab says that some of the accounts and pages that were recently removed from Facebook for what the social network calls “inauthentic behavior” were operated on behalf of a private Israeli public-relations firm called The Archimedes Group, and appeared to be trying to stir up dissent in Honduras, as well as Panama and Mexico.
- Soraya Roberts, a culture columnist with Longreads, writes about the controversy sparked on media Twitter when New York Times writer Taffy Brodesser-Akner confessed in an interview that she makes $4 a word for her celebrity profiles. Roberts says the real point of the uproar was that one journalist makes several times what the majority do, even as the industry complains that it “has nothing left to give.”
- CJR is publishing articles from the latest print issue of the magazine on our website this week, including a piece on what remains of the free press in Turkey, a special report on Benjamin Netanyahu’s relationship with the media in Israel, and a note from CJR’s editor Kyle Pope about how, in the current environment, all news is global.
- A report from a group of newspapers including The Guardian and The New York Times says that China’s border authorities routinely install an app on the smartphones of travelers entering the Xinjiang region, including journalists. The app gathers personal data from phones, including text messages and contacts, and checks whether devices contain pictures, videos, documents, and audio files that match a list of more than 73,000 items stored within the app, including any mention of the Dalai Lama.
- Vice News says that Google’s Jigsaw unit (formerly known as Google Ideas), which was originally designed to help promote a free and independent internet, has turned into a “toxic mess.” Among other things, anonymous sources who spoke with Vice said that founder Jared Cohen “has a white savior complex,” and that the mission of the Google unit is “to save the day for the poor brown people.”
- In a recent interview, British politician Boris Johnson, a leading candidate to take over from Prime Minister Theresa May, talked about his hobby of making model buses out of old wine crates, a comment many observers found a bit bizarre. Glyn Moody of Techdirt thinks part of the reason Johnson confessed to such a strange pursuit was that it was a way of gaming the Google search algorithm to bury unflattering news items from Johnson’s past.
- Virginia has become one of the first states to impose criminal penalties on the distribution of non-consensual “deepfake” images and video. The new law amends an existing statute that defines the distribution of nude or sexual imagery without the subject’s consent (sometimes called revenge porn) as a Class 1 misdemeanor, adding the category of “falsely created videographic or still image” to the text.
- Media Matters writes about how a rumor spread during recent protests in Portland, Oregon, that anti-fascist demonstrators were throwing milkshakes made with quick-drying cement at right-wing groups. The rumor was picked up by alt-right commentators including Jack Posobiec, the site says, and eventually made its way into headlines at a number of news outlets, including Fox News, despite the fact that there was no evidence it was true.
- A programming note from CJR: The daily newsletter will be taking a hiatus for the July 4th holiday, but will return next week.