Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.
Like every other large social platform, YouTube has come under fire for not doing enough to remove videos that contain hate speech and disinformation, and the Google-owned company has said repeatedly that it is trying to get better at doing so. But in some cases, removing videos because they contain graphic imagery of violence can be a bad thing, at least when it comes to documenting war crimes in a country like Syria. That’s the case that Syrian human-rights activist and video archivist Hadi Al Khatib makes in a video that the New York Times published on Wednesday in its Opinion section. Khatib co-produced the clip with Dia Kayyali, who works for Witness, an organization that helps people use digital tools to document human rights violations. In the video, Khatib notes that videos of bombings the Syrian government has carried out on its own people—including attacks with barrel bombs, which Human Rights Watch and other groups consider to be a war crime—are important evidence, but that YouTube has removed more than 200,000 such videos.
“I’m pleading for YouTube and other companies to stop this censorship,” Khatib says in the piece. “All these takedowns amount to erasing history.” Facebook and Twitter have similar policies, and both have also removed videos flagged as violent or as propaganda even when those videos contained evidence of government attacks in Syria and elsewhere. The problem, Kayyali says, is that most of the large social platforms use artificial intelligence to detect and remove content, and an automated filter can’t tell the difference between ISIS propaganda and a video documenting government atrocities. Many of the platforms have been leaning even harder on automated filters because they are under increasing pressure from governments in the US and elsewhere to remove content more quickly. Facebook CEO Mark Zuckerberg bragged to Congress last year that the company’s automated systems take down more than 90 percent of the terrorism-related content posted to the service before it is ever flagged by a human being.
Khatib runs a project called the Syrian Archive, which has been tracking and preserving as many videos of war crimes in that country as it can. But YouTube’s policies are not making it easy, he says. User-generated content is a crucial part of documenting what is happening in Syria, Khatib notes, because getting access to the parts of the country where such attacks take place is extremely dangerous, even for experienced aid agencies, journalists, and human-rights organizations. YouTube hasn’t just been removing videos, either: since 2017, it has taken down a number of accounts that were trying to document the Syrian conflict, including pages run by groups such as the Syrian Observatory for Human Rights, the Violation Documentation Center, and the Aleppo Media Center. Khatib says YouTube reinstated some of the videos it took down after he complained earlier this year, but hundreds of thousands remain unavailable.
The Syrian activist isn’t the only one raising a warning flag about this problem. Eliot Higgins, the investigative journalist formerly known as Brown Moses, who now runs the crowdsourced journalism project Bellingcat, started raising the issue in 2014, when he said Facebook was taking down pages and accounts that were documenting Syrian government attacks using the banned nerve agent sarin. In many cases, both YouTube and Facebook have been targeted by pro-government forces who falsely flag and report videos in the hope of getting them taken down. It’s not just Syrian content that is being removed, either: Khatib says activists in Sudan, Yemen, and Burma have had similar problems with important material disappearing. And the major web platforms now share a database of flagged terrorist content to use for their automated removals, through a partnership called the Global Internet Forum to Counter Terrorism, but the exact criteria used to decide what constitutes a terrorist video are not public.
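To see why an automated filter behaves this way, it helps to make the mechanics concrete. The sketch below is illustrative only, not GIFCT’s actual system: it assumes simple exact-file hashing (real systems match perceptual fingerprints of known content, and their matching criteria are not public), and every name in it is hypothetical.

```python
import hashlib

# Hypothetical shared database: digests of files a consortium has flagged.
# (Placeholder entry; real databases hold fingerprints of known content.)
SHARED_HASH_DB = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_digest(path: str) -> str:
    """Compute a SHA-256 digest of an uploaded file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def should_auto_remove(path: str) -> bool:
    """An upload matching any shared digest is removed automatically."""
    return file_digest(path) in SHARED_HASH_DB
```

The limitation Kayyali describes falls directly out of this design: the lookup carries no context, so the same footage reads identically whether it was uploaded as propaganda or as documentation.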
In his Times video op-ed, Khatib recommends that Google, Facebook, and Twitter hire content moderators in the countries where they are removing such videos, so they can understand the context behind what is being removed. The platforms, he says, could also work with researchers and archivists to assess these takedowns and reverse them when necessary.
Here’s more on the platforms and takedowns:
- Simple solutions: A report that Khatib co-authored with the Electronic Frontier Foundation and Witness discusses takedowns affecting Syria as well as groups in Chechnya and Turkey, and warns: “The temptation to look to simple solutions to the complex problem of extremism online is strong, but governments and companies alike must not be hasty in rushing to solutions that compromise freedom of expression, the right to assembly, and the right to access information.”
- One hour: The European Union is considering a new content-takedown law that would require platforms like Facebook and Google to remove terrorist content and hate speech within one hour of it being flagged. The legislation would also force them to use filters to ensure removed content isn’t re-uploaded, and if they fail to do either of these things, governments could fine them up to 4 percent of their global annual revenue. For a company like Facebook, which reported roughly $56 billion in revenue in 2018, that could mean a fine of more than $2 billion (a back-of-the-envelope calculation follows this list).
- Santa Clara Principles: New America’s Open Technology Institute released a report earlier this year that looked at how well the major platforms have been sticking to the Santa Clara Principles, recommendations made by the group and other organizations last year to get Google, Facebook, and Twitter to be more transparent about why they remove content. All three companies report takedowns, and in some cases say who asked for the removal (if it was a government), but they say little about why.
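For the one-hour item above, the fine math is simple enough to spell out. A quick sketch, assuming the 4 percent cap in the draft law and Facebook’s reported 2018 revenue as the base:

```python
# Back-of-the-envelope: the proposed EU cap is 4 percent of global annual revenue.
MAX_FINE_RATE = 0.04
FACEBOOK_2018_REVENUE = 55.8e9  # reported 2018 revenue, roughly $55.8 billion

max_fine = FACEBOOK_2018_REVENUE * MAX_FINE_RATE
print(f"Maximum fine: ${max_fine / 1e9:.2f} billion")  # -> Maximum fine: $2.23 billion
```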
Other notable stories:
- Facebook plans to launch its News tab on Friday, according to a report in the Washington Post. The paper says the new feature will offer stories from hundreds of news organizations, some of which, including the Post itself, the Wall Street Journal, and BuzzFeed, will be paid fees for supplying content to the service. The New York Times is likely to contribute to the feature but is still negotiating with Facebook over the terms of its participation, according to the Post report.
- Facebook isn’t alone in building a news aggregator: The Information reports that CNN plans to launch a news aggregation service featuring content from a range of outlets, some of which may be paid for their articles. The project, code-named NewsCo, comes just a few months after Rupert Murdoch’s News Corp. announced it was working on a news-aggregation service called Knewz. The Information says the CNN service would likely mix subscription-based and advertising-based content.
- Sarah Lacy, a former TechCrunch journalist who started her own subscription-based news site, Pando Daily, in 2011, says she has sold the site and is getting out of journalism to run Chairman Mom, a digital community for mothers. Lacy sold the company to BuySellAds, an advertising firm that also acquired the website Digg last year for an undisclosed sum.
- Corey Hutchins writes for CJR about the first year of the Colorado Sun, a digital publication created in the wake of mass layoffs at the Denver Post, which led to a dramatic editorial rebellion against the paper’s owner, Alden Global Capital. Eventually, 10 journalists defected from the newspaper to launch the Sun, thanks in part to startup funding provided by Civil, the blockchain-powered platform for journalism.
- Medium, the content-hosting company founded by former Twitter CEO Evan Williams, says it is changing the way it compensates writers. In 2017, the company launched its Medium Partner Program, which paid writers based on the number of “claps” or likes their content received from readers. Medium says that system paid out more than $6 million to over 30,000 writers, but it is switching to a new model that will reward writers based on reading time, which it says is “a closer measure of quality.”
- The White House has said it won’t be renewing subscriptions to the New York Times and the Washington Post, after Donald Trump called the two “fake news” during an interview on Fox News’ Hannity program. The president described the Times as “a fake newspaper” and said “we don’t even want it in the White House anymore,” adding “we’re going to probably terminate that and The Washington Post.”
- Around 800 journalists, filmmakers, and media CEOs signed an open letter published in newspapers across Europe on Wednesday, urging governments to ensure that Google and other tech firms comply with a new EU rule that requires them to pay publishers a fee when they use even short excerpts of their stories. Google said recently that it will not pay the fees, and will instead remove excerpts and images from its search results. “The law risks being stripped of all meaning before it even comes into force,” the letter said, calling Google’s move “a fresh insult to national and European sovereignty.”
- Storyful, the social-media verification company owned by News Corp., has launched an investigative unit designed to help news organizations comb through social networks to find stories or shore up existing projects. The unit has already worked on stories published in the Wall Street Journal and the Times of London and broadcast on Sky News, according to a report by the Nieman Journalism Lab.