Disinformation on social media adds to the fog of war surrounding Israel and Palestine

On Tuesday, a blast hit the Al Ahli Hospital in Gaza, apparently killing hundreds of people, including patients and other civilians who had been using the building as a shelter from Israeli missile attacks. Within minutes of the first news report, accusations were flying on social media: some blamed Israel, in some cases claiming to have video evidence to prove it; Israel said that the blast was the result of a failed missile launch by Islamic Jihad, a group allied with Hamas. Amid a firehose of outrage and takes, journalists worked to verify—in some cases publicly and in real time—what had actually happened, wading through testimony and images from sources of varying reliability that said wildly different things at different times.

An official Israeli account on X tweeted a video purporting to bolster its claim that Islamic Jihad was responsible, but took it down after users pointed out that its timestamp didn’t match the apparent time of the hospital blast. Later, Israel said that its intelligence services had intercepted a conversation between two Hamas operatives referring to a failed Islamic Jihad strike, and released what it claimed was audio of the discussion. Yesterday morning, Shashank Joshi, defense editor at The Economist, said that the evidence he had seen so far was more consistent with a failed missile launch than with an Israeli strike, but cautioned that this was “NOT conclusive by any means.” (A user accused Joshi of relying on evidence provided by the Israel Defense Forces; Joshi replied that “the relevant image being analyzed, published this morning” was actually posted by an account “thought to be associated with Hamas.”) Other analysts reached a similar conclusion, as did the US government, according to the White House. But other observers remained skeptical, pointing out, for example, that the IDF has wrongly blamed Islamic Jihad in the past. At the time of writing, the online debate raged on.

Since Hamas attacked Israel on October 7, a string of incidents has challenged journalists and other professional fact-checkers; the blast at the hospital was only the latest example. A document appearing to show that the Biden administration had given Israel eight billion dollars in funding turned out to have been doctored. Video footage that some said showed a Hamas fighter shooting down an Israeli helicopter was taken from a video game. A report on mass desertions from the IDF was said to have come from an Israeli TV station—one that shut down in 2019. A video of a young boy lying in a pool of blood, surrounded by men in Israeli military fatigues, was offered as evidence of brutality—but was in reality a behind-the-scenes shot from a Palestinian movie.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

As my colleague Jon Allsop noted in this newsletter last week, the tsunami of content claiming to be from the conflict has also included genuine social-media posts from the combatants themselves. Distinguishing the real from the doctored has not been easy. Hamas itself uploaded a number of video clips of the initial wave of attacks, many of which, CNN reported, appeared to have been “heavily edited.” Much of this content was initially uploaded to the messaging service Telegram, the one major social network that hasn’t banned Hamas, which is a proscribed terrorist organization in a number of countries, including the US. Often, however, such content has made its way from Telegram to platforms such as Meta’s Facebook and Instagram, and X (formerly known as Twitter), which have then struggled to detect it and either remove it or add context before it goes viral.

As Axios noted recently, many of the major platforms have scaled back their moderation of misinformation and other hateful and violent content over the past year. They are now scrambling to adjust to the unfolding crisis in the Middle East, and the waves of fakes and graphic imagery that have come with it. Meta said that it has developed a “special operations center” staffed with experts, including fluent Hebrew and Arabic speakers; TikTok said that it plans to add more moderators who speak those two languages. YouTube told Axios that it has removed “tens of thousands of harmful videos and terminated hundreds of channels” since the conflict began. Over at X—whose gutting of its content-moderation staff has been much discussed since Elon Musk acquired the platform last year—Linda Yaccarino, the CEO, sent leaders of the European Union a letter detailing the firm’s efforts to tackle war-related disinformation after EU policymakers opened an investigation into its hosting and distribution of such content. (This was one of the bloc’s first enforcement actions under its newly passed Digital Services Act, which I wrote about recently in this newsletter.)

Although all of the platforms have failed to some extent in their attempts to remove misinformation about the conflict, various experts have said that X has been among the worst, if not the worst, for misinformation and disinformation. In the aftermath of the initial Hamas attack, Shayan Sardarizadeh, a journalist with the BBC’s Verify service, said in a post on X that he had been fact-checking on the network for years, and that while there has always been plenty of misinformation during major events, the “deluge of false posts” since the war broke out was unlike anything he had seen before. Many of those posts were boosted by X users with blue checkmarks, which were once handed out to verify the identities of public figures (including many journalists) but have become a paid-for premium feature under Musk.

In the days that followed, Yael Eisenstat, a former senior policy official at Facebook and current vice president of the Anti-Defamation League (which Musk has accused of driving advertisers away from X), told the Washington Post that while it was hard to find anti-Semitic statements or outright calls for violence on YouTube and even on Meta’s platforms, it was “totally easy” to find them on X. Mike Rothschild, a researcher focused on conspiracy theories and social media, told Bloomberg that the attack was “the first real test of Elon Musk’s version of Twitter, and it failed spectacularly,” adding that it’s now almost impossible to tell “what’s a fact, what’s a rumor, what’s a conspiracy theory, and what’s trolling.” Musk’s changes to the service haven’t just made X unhelpful during a time of crisis, Rothschild said, but have “made it actively worse.”

Justin Peden, a researcher known as “the Intel Crab,” posted on X that while news outlets with reporters on the ground in Israel and Gaza struggled to reach audiences in the aftermath of the attack, “xenophobic goons are boosted by the platform’s CEO”—a reference to a post, since deleted, in which Musk vouched for the usefulness of two accounts that have shared misinformation and, in some cases, anti-Semitic content. Emerson Brooking, a researcher at the Atlantic Council’s Digital Forensic Research Lab, told Wired that because X now shares advertising revenue with premium users based on engagement, those users have an incentive to maximize view counts, irrespective of the truth. And analysts at the Center for Strategic and International Studies noted that X is a very different platform now than it was when Russia invaded Ukraine last year, before Musk acquired it. (In addition to the changes noted above, X has since stopped labeling accounts affiliated with Iranian, Russian, and Chinese state media, and has removed headlines from news links.)

X now has a feature called Community Notes that allows approved users to add fact-checking context to posts on the service—but researchers who specialize in misinformation say that the feature has been overwhelmed by the sheer quantity of fakes and hoaxes that need to be addressed. Ben Goggin, a deputy tech editor at NBC News, said last week that he had reviewed a hundred and twenty posts on X that shared fake news and found that only 8 percent had Community Notes appended to them; 26 percent had suggested notes that had yet to be approved, while 66 percent had neither. And a recent investigation by Wired found that Community Notes “appears to be not functioning as designed, may be vulnerable to coordinated manipulation by outside groups, and lacks transparency about how notes are approved.”

Last week, Charlie Warzel wrote for The Atlantic that Musk has turned X into “a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.” He has a point. The platform gained much of its reputation as a source of real-time, on-the-ground news during events such as the Arab Spring in Egypt in the early 2010s. But its performance during the Israel-Hamas war so far shows that it has become a fun-house-mirror version of itself: a circus of posts that present themselves as accurate and newsworthy but are, in reality, the opposite. If misinformation creates a fog of war, X does not seem interested in dispelling it.

Warzel’s article was headlined “This War Shows Just How Broken Social Media Has Become.” Indeed, his broader point is that the entire social-media landscape—the global town square, as he calls it—is now a virtual minefield. If conflicts like the current one in the Middle East are lenses through which we understand our information environment, he wrote, “then one must surmise that, at present, our information environment is broken.” One only needed to follow the debate over the hospital blast in real time to know this. At the heart of it all, lives continue to be lost.
