War in Ukraine is the latest platform moderation challenge

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

On March 10, a Reuters headline announced that Facebook would temporarily allow users to post calls for the death of Vladimir Putin, Russia’s president, and also “calls for violence against Russians” (Reuters later modified its headline to clarify that only posts calling for “violence against invading Russians” would be allowed under the new rules). These kinds of posts would normally fall into what Meta calls “T1 violent speech,” which is automatically removed, without exception. A few days later, Nick Clegg, head of global affairs for Meta, the parent company of Facebook, said the new rules would not allow users to call for the death of Putin or Alexander Lukashenko, the president of Belarus. Clegg also said that calling for violence against Russians would only be allowed for users in Ukraine, and only when “the context is the Russian invasion.”

Ryan Mac, Mike Isaac, and Sheera Frenkel pointed out in a New York Times story on Wednesday that the rules allowing calls for violence against Putin and Lukashenko were actually changed on February 26, two days after Russian troops first entered Ukraine, according to documents reviewed by the newspaper. “After reports suggesting the policy reversal would allow users to call for violence against all Russians—which Russian authorities called ‘extremist’—Meta reversed itself,” the Times reported. According to an internal memo seen by Bloomberg, Clegg told staff that “circumstances in Ukraine are fast moving. We try to think through all the consequences, and we keep our guidance under constant review because the context is always evolving.”

Allowing users to post calls for violence isn’t the only example of normally forbidden content that platforms like Facebook now allow because there is a war in Ukraine. As Will Oremus noted in a Washington Post piece, if you posted content praising a neo-Nazi militia before the Russian army invaded Ukraine, Facebook would probably have blocked your post, or even suspended your account. But not now: the company changed the rules so that supporters of Ukraine could post about that country’s Azov battalion, a unit of the Ukrainian army that has a history of being associated with neo-Nazis (which has helped fuel Putin’s claim that his aim is to de-Nazify Ukraine).

According to the Times, the many changes to what content is and isn’t allowed have caused confusion inside Facebook and Instagram. In what the paper described as an unusual step, Meta “suspended some of the quality controls that ensure that posts from users in Russia, Ukraine and other Eastern European countries meet its rules.” The company temporarily stopped tracking whether the workers who monitor Facebook and Instagram posts were accurately enforcing its content guidelines, according to sources. Meta did this, the sources said, because those workers couldn’t keep up with the shifting rules about what kinds of posts were allowed about the war in Ukraine.

The result of the reversals on what is permitted “has been internal confusion, especially among the content moderators who patrol Facebook and Instagram for text and images with gore, hate speech and incitements to violence,” the Times wrote. While some content is removed automatically by algorithms and other software, much of it is left to human moderators, who are on contract. To make matters worse for them, the Times said that in many cases they are given less than 90 seconds “to decide whether images of dead bodies, videos of limbs being blown off, or outright calls to violence violate the platform’s rules.” Moderators also said they were shown posts about the war in Chechen, Kazakh or Kyrgyz, despite not knowing those languages.

The moves by platforms such as Facebook suggest that the rule books governing who can say what online “need a new chapter on geopolitical conflicts,” Oremus wrote. The companies and their defenders may feel that their approach to Ukraine is the correct one, but “they haven’t clearly articulated the basis on which they’ve taken that stand, or how that might apply in other settings, from Kashmir to Nagorno-Karabakh, Yemen and the West Bank.” Katie Harbath, a former public policy director at Facebook, told Oremus “we’re all thinking about the short term” in Ukraine, rather than the underlying principles that should guide how platforms approach wars.

Emerson Brooking, a fellow at the Atlantic Council’s Digital Forensic Research Lab, wrote in a piece for Tech Policy Press that moderation is supposed to stem the spread of violent content, but “wars are exercises in violence, fueled by cycles of hate. Accordingly, social media companies will never be able to write a sufficiently nuanced wartime content policy that somehow elides violence, hate, and death.” Meta’s struggles, he argued, “demonstrate an irreconcilable tension in trying to adapt content moderation policy to major conflict.”

Here’s more on the platforms and the war:

Scale: Contributing to the moderation challenges at a company like Meta or Google is the vast scale of these platforms. Evelyn Douek, a lecturer at Harvard Law School and research scholar at the Knight First Amendment Institute, gave a talk last year at Stanford called “The Administrative State of Content Moderation,” in which she noted that in the 30 minutes it took for her to give the presentation, Facebook would have taken down 615,417 pieces of content, and YouTube about 271,440 videos and channels.
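
To make those figures concrete, here is a minimal back-of-envelope sketch in Python (my own extrapolation from the numbers above, not something from Douek’s talk) that converts the 30-minute totals into per-second and per-day removal rates:

```python
# Back-of-envelope sketch: turn the 30-minute removal totals cited by
# Evelyn Douek into per-second and per-day rates. The counts below are
# taken from the paragraph above; the daily figures are extrapolations.
WINDOW_SECONDS = 30 * 60  # the 30-minute talk

removals = {
    "Facebook pieces of content": 615_417,
    "YouTube videos and channels": 271_440,
}

for platform, count in removals.items():
    per_second = count / WINDOW_SECONDS
    per_day = per_second * 86_400  # seconds in a day
    print(f"{platform}: ~{per_second:,.0f}/second, ~{per_day:,.0f}/day")
```

At that pace, Facebook alone would be removing roughly 340 pieces of content every second, around 30 million per day, which is the scale across which any wartime rule change has to propagate.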

Blunders: In addition to its moderation challenges, Meta has made some high-profile mistakes, the Times noted. For example, it allowed a group called the Ukrainian Legion to run ads on its platforms this month in an attempt to recruit foreign fighters to assist the Ukrainian army, a violation of international law. Meta later removed the ads—which were shown to people in the United States, Ireland, Germany and elsewhere—because the group may have misrepresented its ties to the Ukrainian government.

Orders: Google allegedly told translators not to use the word “war” to describe what’s happening in Ukraine, according to the Intercept—another example of how the legal requirements of operating in a country can hamstring platforms and lead to censorship. “An internal email sent by management at a firm that translates corporate texts and app interfaces for Google and other clients said that the attack on Ukraine could no longer be referred to as a war but rather only vaguely as ‘extraordinary circumstances’,” the Intercept wrote.

Erasure: In 2014, The Atlantic wrote about how Facebook’s decision to shut down dozens of pages set up by dissidents in Syria “dealt a significant blow to peaceful activists who have grown reliant on Facebook for communication and uncensored—if bloody and graphic—reporting on the war’s atrocities.” Eliot Higgins, a former blogger who founded a crowdsourced investigative journalism network called Bellingcat, also complained that Facebook was making it difficult to document atrocities in Syria because it kept removing the evidence.

Other notable stories:

Neeraj Khemlani, co-president of CBS News, said that the network hired Mick Mulvaney, a former Trump official, as a commentator as part of an effort to “hire more Republicans to gain ‘access’ ahead of a ‘likely’ Democratic midterm wipeout,” the Washington Post reported. The network’s decision to hire Mulvaney as a paid contributor “is drawing backlash within the company because of his history of bashing the press and promoting the former president’s fact-free claims,” the Post said. “I know everyone I talked to today was embarrassed about the hiring,” said a CBS News employee.

Russian media are running a video interview with a refugee from Mariupol, Ukraine, but the interview was set up by the FSB, Russia’s security service, according to a report from Mediazona, an independent media outlet started by two members of the band Pussy Riot. “In fact, the media had nothing to do with the interview that was distributed to state agencies by the FSB press service with a request to omit the source. Following her arrival in Russia, the woman was subjected to a long interrogation, her phone was searched, and she has been unable to contact any of her family members for more than a week.”

Meta is paying Targeted Victory, a major Republican consulting firm, as part of a campaign to turn the public against TikTok, according to a report by Taylor Lorenz, an online culture reporter for the Washington Post. “The campaign includes placing op-eds and letters to the editor in major regional news outlets, promoting dubious stories about alleged TikTok trends that actually originated on Facebook, and pushing to draw political reporters and local politicians into helping take down its biggest competitor,” Lorenz reported.

Apple and Meta provided customer data to hackers who masqueraded as law enforcement officials, Bloomberg reported, citing three people with knowledge of the matter who chose to remain anonymous. The two companies handed over basic subscriber details, such as a customer’s address, phone number and IP address, in mid-2021 in response to the forged “emergency data requests.” Normally, such data is provided only in response to a search warrant or subpoena signed by a judge, according to the people.

Molly White, a software engineer who runs a site critical of cryptocurrency hype called Web3 Is Going Just Great, collaborated with fifteen other researchers and journalists to annotate an article by Kevin Roose, a New York Times reporter who wrote an introduction to cryptocurrency and the movement known as Web3. Roose wrote that his piece was intended to be a “sober, dispassionate explanation of what crypto actually is,” but the group that annotated it called it a “thinly-veiled advertisement for cryptocurrency that appeared to have received little in the way of fact-checking or critical editorial scrutiny.”

The Wrap reports that Truth Social, the social-networking app started by Donald Trump, has seen “a 93 percent drop in signups and similarly steep decline in traffic after a rocky rollout last month fraught with technical issues and an extensive waiting list for new signups to actually use the service.” The outlet says the Twitter lookalike saw weekly installs decline by more than 800,000 since its launch week, according to Sensor Tower, and installs from the Apple app store this month have fallen to about 60,000 per week.

Justin Hendrix of Tech Policy Press interviewed Jane Lytvynenko, a senior research fellow at Harvard’s Shorenstein Center, about the role of the social media platforms and news media in confronting disinformation during the war in Ukraine. Lytvynenko said she’s concerned about “the inequality in information environments when it comes to Western countries and primarily the English language, and the inequality in literally everywhere else where social media companies have not invested as much into moderation efforts.”

Priyanjana Bengani and Jon Keegan wrote for CJR about a checklist they created to help journalists and researchers try to find out who published a website. “This is meant to be used in conjunction with offline reporting techniques,” they said, and following the checklist “does not guarantee that you can unmask an owner of a website who does not want to be found, but it can help surface crucial clues and connections that can act as leads for further reporting.”
