Late Tuesday, the terrorist group known as ISIS released a video that appeared to show members of the group beheading freelance journalist James Foley, who was kidnapped almost two years ago while reporting in Syria. As they so often do, screenshots and links to the video circulated rapidly through social media — even as some journalists begged others to stop sharing them — while Twitter and YouTube tried to remove them as quickly as possible. But as well-meaning as their behavior might be, do we really want those platforms to be the ones deciding what content we can and cannot see? It’s not an easy question.
When I asked that question on Twitter, Nu Wexler — a member of Twitter’s public policy team — said the screenshots were removed at the request of Foley’s relatives, in accordance with a new company policy that allows images to be taken down at a family’s request, although Twitter “will consider public interest factors such as the newsworthiness of the content.” A number of people had their accounts suspended after sharing the images, including Zaid Benjamin of Radio Sawa, but media outlets that posted the photos were not.
It’s easy to understand why the victim’s family and friends wouldn’t want the video or screenshots circulating, just as the families of Wall Street Journal reporter Daniel Pearl — who was beheaded on video by Al-Qaeda in 2002 — and businessman Nick Berg didn’t want their sons’ deaths broadcast across the internet. And it’s not surprising that many of those who knew Foley, including a number of journalists, would implore others not to share those images, especially since doing so could be seen as promoting (even involuntarily) the interests of ISIS.
Who decides what qualifies as violence?
For whatever it’s worth, I think we owe it to Foley — and others who risk their lives to report the news — to watch the video, out of respect for their commitment. But regardless, shouldn’t that be our choice to make? Should Twitter and YouTube be so quick to remove content simply because it is violent? And who defines what violence is? What if it were a photo of a young Vietnamese girl who had been burned by napalm, or a man being shot by police?
Some of those who responded to my question argued that images of someone being beheaded are a fairly obvious case for censorship, if only because they are shocking and repulsive — and because Twitter in particular now shows users photos and videos automatically, unlike in the past, when you had to click on a link (a change Twitter ironically made to increase engagement with multimedia content). TV networks don’t show violent or graphic images, the argument goes, so why should Twitter or YouTube?
The difference, of course, is that while Twitter may seem more like TV all the time — as Zach Seward at Quartz describes it — it’s supposed to be a channel that we control, not one moderated by unseen editors somewhere. Twitter has become a global force in part because it is a source of real-time information about conflicts like the Arab Spring in Egypt or the police action in Ferguson, and the company has repeatedly staked its reputation on being the “free-speech wing of the free-speech party.”
Sad that after a year+ of incitement to genocide, jihadi stuff is now being mass scrubbed from Twitter/FB because an American was killed.
Twitter’s management has been struggling for some time to find a happy medium between censorship and free speech when it comes to ISIS, a group known for its aggressive use of social media to promote its cause — accounts associated with the group have been suspended a number of times, but more keep appearing. Some, including MSNBC host Ronan Farrow, have argued that the company and other social platforms should do a lot more to keep terrorist propaganda and other such content out of their networks.
How does Twitter define free speech?
A source at Twitter said that ISIS is an especially difficult case, because the group is on a U.S. government list of terrorist organizations, and it’s considered a criminal offense to provide “aid or comfort” to such groups — something that could theoretically cover giving them a platform on social media. Then again, the Palestinian group Hamas is defined by many as a terrorist group, and it posts on Twitter regularly, including an infamous exchange with the official Twitter account of the Israeli army in 2012.
I deleted the link to the Foley video, but what is the logic? We have been linking to hundreds of ISIS videos beheading FSA & other Syrians
After Ronan Farrow compared ISIS content to the radio broadcasts that many believe helped fuel the 1994 genocide in Rwanda, sociologist Zeynep Tufekci argued that in some cases social platforms probably should remove violent content, because of the risk that distributing it will help fuel similar behavior. But others, including First Look Media’s Glenn Greenwald, said leaving those decisions up to corporations like Twitter or YouTube is the last thing a free society should want.
In some ways, it’s a lot easier to let Twitter or YouTube or Facebook decide what content we should and shouldn’t see, since that protects us from being exposed to violent imagery and repulsive behavior. But it can also prevent us from learning things that need to be known, as investigative blogger Brown Moses says happens when Facebook removes content posted by dissident groups in Syria. Shouldn’t that be our decision as users?