Ukraine, viral media, and the scale of war

If there’s one thing Twitter and Facebook and Instagram and TikTok are good at, it’s distributing content and making it go viral, and Russia’s invasion of Ukraine is no exception to that rule. Every day, there are new images and videos, and some become that day’s trending topic: the video clip of Ukrainian president Zelensky in military fatigues, speaking defiantly about resisting Russia’s attack; photos of Kyiv’s mayor, a six-foot-seven-inch former heavyweight boxing champion, in army fatigues; a man standing in front of a line of Russian tanks, an echo of what happened in China’s Tiananmen Square during the uprising of 1989; the old Ukrainian woman who told Russian soldiers to put sunflower seeds in their pockets, so sunflowers would grow on their graves; the soldiers on Snake Island who told a Russian warship to “fuck off.” The list goes on.

Not surprisingly, some of these viral images are fake, or cleverly designed misinformation and propaganda. But even if the inspiring pictures of Ukrainians resisting Russia are real (or mostly real, like the photo of Kyiv’s mayor in army fatigues, which was taken during a training exercise in 2021), what are we supposed to learn from them? They seem to tell us a story, with a clear and pleasing narrative arc: Ukrainians are fighting back! Russia is on the ropes! The Washington Post writes that the social-media wave “has blunted Kremlin propaganda and rallied the world to Ukraine’s side.” Has it? Perhaps. But will any of that actually affect the outcome of this war, or is it just a fairy tale we are telling ourselves because it’s better than the reality?

The virality of the images may drive attention, but, from a journalism perspective, it often does a poor job of representing the stakes and the scale at hand. Social media is a little like pointillism—a collection of tiny dots that theoretically combine to reveal a broader picture. But over the long term, war defies this kind of approach. The 40-mile-long convoy of Russian military vehicles is a good example: frantic tweets about it fill Twitter, as though users are getting ready for some epic battle that will win the war, but the next day the convoy has barely moved. Are some Ukrainians fighting back? Yes. But just because we see one dead soldier beside a burned-out tank doesn’t mean Ukraine is going to win, whatever “win” means. As Ryan Broderick wrote in his Garbage Day newsletter, “winning a content war is not the same as winning an actual war.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Ukraine, viral media, and the scale of war”

Resurrected bill raises red flags, including for journalists

In 2020, members of Congress introduced a bill they said would help rid the internet of child sexual-abuse material (CSAM). The proposed legislation was called the EARN IT Act—an abbreviation for the full name, which was the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act. In addition to establishing a national commission on online child sexual exploitation prevention to come up with the best practices for eliminating such content, the bill stated that any online platforms hosting child sexual-abuse material would lose the protection of Section 230 of the Communications Decency Act, which gives electronic service providers immunity from prosecution for most of the content that is posted by their users.

The bill immediately came under fire from a number of groups—including the Electronic Frontier Foundation, the Freedom of the Press Foundation, and others—who said it failed on several levels. For example, as Mike Masnick of Techdirt noted, Section 230 doesn’t protect electronic platforms from liability for illegal content such as child sexual-abuse material, so passing a law stripping that protection for such content is redundant and unnecessary. Critics of the bill also said it could cause online services to stop offering end-to-end encryption, which is used by activists and journalists around the world, because offering encryption could be treated as a red flag by those investigating CSAM.

In the end, the bill was dropped. But it was resurrected earlier this year, reintroduced by Richard Blumenthal and Lindsey Graham (the House has revived its version as well), and many groups say the current version is as bad as the original, if not worse. The EFF said the bill would still “pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe.” The group says the act would allow “private actors to scan every message sent online and report violations to law enforcement,” and potentially allow anything hosted online—including backups, websites, cloud photos, and more—to be scanned by third parties.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Resurrected bill raises red flags, including for journalists”

Of platforms, publishers, and responsibility

Last week, criticism of Spotify for hosting the Joe Rogan podcast—and thereby enabling the distribution of misinformation about COVID, among other things—accelerated after music legend Neil Young chose to remove all of his work from the streaming service. “I am doing this because Spotify is spreading fake information about vaccines—potentially causing death to those who believe the disinformation being spread by them,” Young wrote in a letter on his website (which has since been removed). He was followed by a number of other artists, including fellow Canadian Joni Mitchell, Nils Lofgren, and the other former members of Crosby, Stills, Nash, and Young. Prince Harry, the Duke of Sussex, and his wife, Meghan Markle, also registered their concerns about the service, which they have partnered with for a series of podcasts.

Throughout this process, Spotify’s position has remained steadfast: the company has said it is sorry for any harm caused by Rogan’s podcast and plans to add content warnings and other measures, but it maintains that it is a platform and not a publisher—in other words, simply a conduit for content produced by artists such as Rogan, not a publisher that makes choices about which specific kinds of content to include. Daniel Ek, co-founder and CEO of Spotify, wrote in a blog post that the company supports “creator expression,” and that there are plenty of artists and statements carried on the service that he disagrees with. “We know we have a critical role to play in supporting creator expression while balancing it with the safety of our users,” he said. “In that role, it is important to me that we don’t take on the position of being content censor.”

The only problem with Spotify’s platform defense—at least as it pertains to Joe Rogan—is that it isn’t true (even some Spotify employees called it “a dubious assertion,” according to the LA Times). Rogan’s podcast isn’t available through other services such as YouTube Music or Amazon Music; he has an exclusive contract with Spotify, a relationship the company paid $100 million for. In that sense, Spotify is his publisher. As Elizabeth Spiers, former editor of the New York Observer, pointed out, this is a clear editorial choice the company has made, just as the New York Times or the Washington Post chooses whom to give a column to. If those columnists decide to say something wrong or dangerous, responsibility for that lies with the paper.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Of platforms, publishers, and responsibility”

The FTC’s second try at an antitrust case against Facebook gets the green light

Last June, James Boasberg, a judge on the U.S. District Court for the District of Columbia, threw out an antitrust case that was filed by the Federal Trade Commission against Facebook (which has since changed its corporate name to Meta). The lawsuit alleged that the company has an illegal monopoly on social networking services, that it built this monopoly in part by acquiring competing services such as WhatsApp and Instagram, and that it uses its monopoly position in an anti-competitive way against other companies. In his dismissal of the case, Boasberg said that the federal regulator had failed to provide enough tangible evidence that Facebook had anything approaching a monopoly over a discrete market segment known as social networking (a similar antitrust lawsuit filed by 40 state attorneys general was also dismissed by Boasberg last June, but the states have not yet filed an appeal).

The judge left the door open for the FTC, however, telling the agency it was welcome to try again, if and when it accumulated the evidence he sought. On Tuesday, Boasberg ruled that the majority of a new FTC lawsuit can proceed, based on evidence of a monopoly position provided by the agency in its revised submission. The judge said the FTC’s first attempt at a lawsuit “stumbled out of the starting blocks,” but that the facts provided by the agency this time were “far more robust and detailed than before, particularly in regard to the contours of defendant’s alleged monopoly.” Boasberg blocked another part of the case, which alleged that Facebook harmed competitors by illegally restricting access to its platform—he said Facebook “abandoned the policies in 2018, and its last alleged enforcement was even further in the past.”

Although some critics of the FTC’s case, including technology analyst Ben Thompson, have questioned the accuracy of the agency’s attempts to define a specific market for “personal social networking” over which Facebook allegedly has a monopoly, Boasberg found no fault with this market definition. In his first ruling, he said that “while there are certainly bones one could pick with the FTC’s market-definition allegations, the Court does not find them fatally devoid of meat.” In terms of whether Facebook has anything approaching a monopoly, the judge seemed to be convinced in his latest decision by the addition of data from Comscore, a traffic-measurement company, which said “Facebook’s share of DAUs [daily active users] of apps providing personal social networking services in the United States has exceeded 70 percent since 2016.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “The FTC’s second try at an antitrust case against Facebook gets the green light”