Florida, Texas, and the fight to control platform moderation

On May 23, the US Court of Appeals for the 11th Circuit struck down most of the provisions of a social-media law that the state of Florida enacted in 2021, which would have made it an offense for any social-media company to “deplatform” the account of “any political candidate or journalistic enterprise,” punishable by fines of up to $250,000 per day. In their 67-page decision, the 11th Circuit judges ruled that any moderation decisions made by social-media platforms such as Twitter and Facebook, including the banning of certain accounts, are effectively acts of speech, and therefore are protected by the First Amendment. Last week, however, the US Court of Appeals for the 5th Circuit came to almost the exact opposite conclusion, in a decision related to a social-media law that the state of Texas enacted last year. That law banned the major platforms from removing any content based on “the viewpoint of the user or another person [or] the viewpoint represented in the user’s expression or another person’s expression.”

In the 5th Circuit opinion, the court ruled that while the First Amendment guarantees every person’s right to free speech, it doesn’t guarantee corporations the right to “muzzle speech.” The Texas law, the judges said, “does not chill speech; if anything, it chills censorship. We reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” The court dismissed many of the arguments technology companies such as Twitter and Facebook made in defense of their right to moderate content, arguing that to allow such moderation would mean that “email providers, mobile phone companies, and banks could cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business.” The appeals court seemed to endorse a definition used in the Texas law, which states that the social-media platforms “function as common carriers,” in much the same way that telephone and cable operators do.

NetChoice and the Computer and Communications Industry Association—trade groups that represent Facebook, Twitter, and Google—argued that the social-media platforms should have the same right to edit content that newspapers have, but the 5th Circuit court rejected this idea. “The platforms are not newspapers,” Judge Andrew Oldham wrote in the majority opinion. “Their censorship is not speech.” Given the conflicting rulings from the 11th Circuit and the 5th Circuit, Ashley Moody, the Attorney General of Florida, on Wednesday asked the Supreme Court to decide whether states have the right to regulate how social-media companies moderate content. The answer will affect not just Florida and Texas, but dozens of other states—including Oklahoma, Indiana, Ohio, and West Virginia—that have either passed or are considering social-media laws that explicitly prevent the platforms from moderating content, laws with names such as the Internet Freedom Act and the Social Media Anti-Censorship Bill.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer


Facebook and paying for news

On June 9, Keach Hagey and Alexandra Bruell—two Wall Street Journal reporters who cover the major digital platforms—reported that Facebook, a subsidiary of Meta Platforms, was “re-examining its commitment to paying for news,” according to several unnamed sources who were described as being familiar with Facebook’s plans. The potential loss of those payments, the Journal reporters wrote, was “prompting some news organizations to prepare for a potential revenue shortfall of tens of millions of dollars.” The Journal story echoed a report published in May by The Information, a subscription-only site that covers technology; in that piece, reporters Sylvia Varnham O’Regan and Jessica Toonkel said Meta was “considering reducing the money it gives news organizations as it reevaluates the partnerships it struck over the past few years,” and that this reevaluation was part of a rethinking of “the value of including news in its flagship Facebook app.”

Meta wouldn’t comment to either the Journal or The Information, and a spokesperson told CJR the company “doesn’t comment on speculation.” But the loss of payments from Meta could have a noticeable impact on some outlets. According to the Journal report, since the original payment deals were announced in 2019, Meta has paid the Washington Post more than $15 million per year, the New York Times over $20 million per year, and the Journal more than $10 million per year (the payments to the Journal are part of a broader deal with Dow Jones, the newspaper’s parent company, which is said to be worth more than $20 million per year). The deals, which are expected to expire this year, were part of a broader system of payments Meta made to a number of news outlets, including Bloomberg, ABC News, USA Today, Business Insider, and the right-wing news site Breitbart News. Smaller deals were typically for $3 million or less, the Journal said.

The payments were announced as part of the launch of the “News tab,” a dedicated section of the Facebook app where readers can find news from the outlets that partnered with Meta (higher payments were made to those with paywalls, according to a number of reports). The launch was a high-profile affair, including a one-on-one interview between Robert Thomson, CEO of News Corp.—parent company of Dow Jones and the Journal—and Mark Zuckerberg, the CEO of Meta. Emily Bell, director of the Tow Center for Digital Journalism at Columbia, wrote for CJR that the meeting was like “a Camp David for peace between the most truculent old media empire and one of its most noxious disruptors,” and wondered how much it had cost for News Corp. to forget about its long-standing opposition to Facebook’s media strategy. The event was “a publicity coup for Facebook; it tamed the biggest beast in the journalism jungle,” Bell wrote.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer


Of Substack, apps, and strategy

Substack, a hosting and publishing platform for email newsletters, took what seemed like an innocuous step last week: it launched a standalone smartphone app. Not surprising, perhaps, since almost every content startup has an app. Substack’s app, however, is somewhat different, since the company is a middleman that stands between writers and their audiences, rather than a startup offering a service directly to consumers. Those differences have led to questions about Substack’s long-term strategy, and whether that strategy is good or bad for the writers who use the platform. Some of the concern stems from the fact that Substack has raised over $80 million in venture financing from a range of venture-capital firms, including Andreessen Horowitz, a leading Silicon Valley firm. The funding has given Substack a theoretical market value of $650 million, but that level of investment can put pressure on companies to meet aggressive growth targets.

Substack’s founders, for their part, argue that the app is just an extension of the company’s original goals. Hamish McKenzie, Chris Best, and Jairaj Sethi wrote in a blog post on the Substack site that their intention in starting the company was to “build an alternative media ecosystem based on different laws of physics, where writers are rewarded with direct payments from readers, and where readers have total control over what they read.” The app, they argue, builds on those ideas, in that it is designed for “deep relationships, an alternative to the mindless scrolling and cheap dopamine hits that lie behind other home screen icons.” Among other things, they say the app will amplify the network effects that already exist on Substack, “making it easier for writers to get new subscribers, and for readers to explore and sample Substacks they might otherwise not have found.”

Casey Newton, a technology writer who publishes a newsletter called Platformer (which is hosted on Substack), writes that the app is a symbol of “the moment in the life of a young tech company when its ambitions grow from niche service provider to a giant global platform.” Newton writes that it is possible the Substack app could help writers build growing businesses by advertising their publications to likely readers (the company says a person who has a credit card on file with Substack is 2.5 times more likely to subscribe to a new publication than someone who doesn’t). But it is equally possible, he says, that the app “makes publications feel like cheap, interchangeable widgets: an endless pile of things to subscribe to, overwhelming readers with sheer volume.” In other words, an app that serves Substack’s interests rather than those of its newsletter authors.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer


As Ukraine war continues, Russia becomes increasingly isolated

Since the invasion of Ukraine began two weeks ago, Russia has found itself cut off from the rest of the world, not only economically but also in a number of other important ways. In some cases, Russia is the one severing those ties, as it did recently when it banned Facebook because the company refused to stop fact-checking Russian media outlets such as Russia Today and Sputnik (so far, Russian citizens are still allowed to use WhatsApp and Instagram). Twitter has also reportedly been partially blocked in the country, while other companies have voluntarily withdrawn their services. YouTube has banned RT and Sputnik, as has the entire EU. TikTok said on Sunday that while it is still available in Russia, it will no longer allow users to livestream or upload video from the country, due in part to a flood of disinformation, and to a new “fake news” law in Russia that carries stiff penalties.

Traditional media companies have also withdrawn their services, and in some cases their journalists, from the country since the invasion, in part because of the “fake news” law. Bloomberg News and the BBC were among the first to stop producing journalism from within Russia last week. John Micklethwait, editor in chief of Bloomberg, wrote in a note to staff that the Russian law seemed designed to “turn any independent reporter into a criminal purely by association” and as a result made it “impossible to continue any semblance of normal journalism inside the country.” The New York Times said Tuesday that it had decided to pull its journalists out of Russia, in part because of the uncertainty created by the new law, which makes it a punishable offense to refer to the invasion of Ukraine in a news story as a “war.”

It’s not just individual social networks or journalism outlets; several network connectivity providers have also withdrawn their services from Russia. These are the giant telecom firms that supply the “backbone” connections between countries and the broader internet, and their withdrawal means Russia is increasingly isolated from any information on the war that doesn’t come from inside the country or from Russian state media. Lumen, formerly known as CenturyLink, pulled the plug on Russia on Wednesday, withdrawing service from customers such as national internet provider Rostelecom, as well as a number of leading Russian mobile operators. Competitor Cogent Communications did the same with its broadband network last week.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer


Ukraine, viral media, and the scale of war

If there’s one thing Twitter and Facebook and Instagram and TikTok are good at, it’s distributing content and making it go viral, and Russia’s invasion of Ukraine is no exception to that rule. Every day, there are new images and videos, and some become that day’s trending topic: the video clip of Ukrainian president Zelensky in military fatigues, speaking defiantly about resisting Russia’s attack; photos of Kyiv’s mayor, a six-foot-seven-inch former heavyweight boxing champion, in army fatigues; a man standing in front of a line of Russian tanks, an echo of what happened in China’s Tiananmen Square during an uprising in 1989; the old Ukrainian woman who told Russian soldiers to put sunflower seeds in their pockets, so sunflowers would grow on their graves; the soldiers on Snake Island who told a Russian warship to “fuck off.” The list goes on.

Not surprisingly, some of these viral images are fake, or cleverly designed misinformation and propaganda. But even if the inspiring pictures of Ukrainians resisting Russia are real (or mostly real, like the photo of Kyiv’s mayor in army fatigues, which was actually taken during a training exercise in 2021), what are we supposed to learn from them? They seem to tell us a story, with a clear and pleasing narrative arc: Ukrainians are fighting back! Russia is on the ropes! The Washington Post writes that the social-media wave “has blunted Kremlin propaganda and rallied the world to Ukraine’s side.” Has it? Perhaps. But will any of that actually affect the outcome of this war, or is it just a fairy tale we are telling ourselves because it’s better than the reality?

The virality of the images may drive attention, but, from a journalism perspective, it often does a poor job of representing the stakes and the scale at hand. Social media is a little like pointillism—a collection of tiny dots that theoretically combine to reveal a broader picture. But over the long term, war defies this kind of approach. The 40-mile-long convoy of Russian military vehicles is a good example: frantic tweets about it fill Twitter, as though users are getting ready for some epic battle that will win the war, but the next day the convoy has barely moved. Are some Ukrainians fighting back? Yes. But just because we see one dead soldier beside a burned-out tank doesn’t mean Ukraine is going to win, whatever “win” means. As Ryan Broderick wrote in his Garbage Day newsletter, “winning a content war is not the same as winning an actual war.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer


Resurrected bill raises red flags, including for journalists

In 2020, members of Congress introduced a bill they said would help rid the internet of child sexual-abuse material (CSAM). The proposed legislation was called the EARN IT Act, an acronym for its full name: the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act. In addition to establishing a national commission on online child sexual-exploitation prevention to come up with best practices for eliminating such content, the bill stated that any online platform hosting child sexual-abuse material would lose the protection of Section 230 of the Communications Decency Act, which gives electronic service providers immunity from liability for most of the content posted by their users.

The bill immediately came under fire from a number of groups—including the Electronic Frontier Foundation and the Freedom of the Press Foundation—who said it failed on a number of levels. For example, as Mike Masnick of Techdirt noted, Section 230 already doesn’t protect electronic platforms from liability for illegal content such as child sexual-abuse material, so a law stripping them of that protection is redundant and unnecessary. Critics of the bill also said it could cause online services to stop offering end-to-end encryption, which is used by activists and journalists around the world, because offering encryption could be treated as a red flag by those investigating CSAM.

In the end, the bill was dropped. But it was resurrected earlier this year, reintroduced by Senators Richard Blumenthal and Lindsey Graham (the House has revived its version as well), and many groups say the current version is as bad as the original, if not worse. The EFF said the bill would still “pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe.” The group says the act would allow “private actors to scan every message sent online and report violations to law enforcement,” and potentially allow anything hosted online—including backups, websites, cloud photos, and more—to be scanned by third parties.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer


Of platforms, publishers, and responsibility

Last week, criticism of Spotify for hosting the Joe Rogan podcast—and thereby enabling the distribution of misinformation about COVID, among other things—accelerated after music legend Neil Young chose to remove all of his work from the streaming service. “I am doing this because Spotify is spreading fake information about vaccines—potentially causing death to those who believe the disinformation being spread by them,” Young wrote in a letter on his website (which has since been removed). He was followed by a number of other artists, including fellow Canadian Joni Mitchell, Nils Lofgren, and the other former members of Crosby, Stills, Nash, and Young. Prince Harry, the Duke of Sussex, and his wife, Meghan Markle, also registered their concerns about the service, with which they have partnered for a series of podcasts.

Throughout this process, Spotify’s position has remained steadfast: it has said it is sorry for any harm caused by Rogan’s podcast and plans to add content warnings and other measures, but it has also maintained that it is a platform and not a publisher—in other words, simply a conduit for content produced by artists such as Rogan, rather than an outlet that makes choices about which specific kinds of content to carry. Daniel Ek, co-founder and CEO of Spotify, wrote in a blog post that the company supports “creator expression,” and that there are plenty of artists and statements carried on the service that he disagrees with. “We know we have a critical role to play in supporting creator expression while balancing it with the safety of our users,” he said. “In that role, it is important to me that we don’t take on the position of being content censor.”

The only problem with Spotify’s platform defense—at least as it pertains to Joe Rogan—is that it isn’t true (even some Spotify employees called it “a dubious assertion,” according to the LA Times). Rogan’s podcast isn’t available through any other service, such as YouTube Music or Amazon Music. He has an exclusive contract with Spotify, a relationship the company paid $100 million for. In that sense, Spotify is his publisher. As Elizabeth Spiers, former editor of the New York Observer, pointed out, this is a clear editorial choice the company has made, just as the New York Times or the Washington Post chooses whom to give a column to. If those columnists say something wrong or dangerous, responsibility for it lies with the paper.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer


The FTC’s second try at an antitrust case against Facebook gets the green light

Last June, James Boasberg, a judge on the US District Court for the District of Columbia, threw out an antitrust case that was filed by the Federal Trade Commission against Facebook (which has since changed its corporate name to Meta). The lawsuit alleged that the company has an illegal monopoly on social networking services, that it built this monopoly in part by acquiring competing services such as WhatsApp and Instagram, and that it uses its monopoly position in an anti-competitive way against other companies. In his dismissal of the case, Boasberg said the federal regulator had failed to provide enough tangible evidence that Facebook had anything approaching a monopoly over a discrete market segment known as social networking (a similar antitrust lawsuit filed by 40 state attorneys general was also dismissed by Boasberg last June, but the states have not yet filed an appeal).

The judge left the door open for the FTC, however, telling the agency it was welcome to try again if and when it accumulated the evidence he sought. On Tuesday, Boasberg ruled that the majority of a new FTC lawsuit can proceed, based on evidence of a monopoly position provided by the agency in its revised submission. The judge said the FTC’s first attempt at a lawsuit “stumbled out of the starting blocks,” but that the facts provided by the agency this time were “far more robust and detailed than before, particularly in regard to the contours of defendant’s alleged monopoly.” Boasberg blocked another part of the case, which alleged that Facebook harmed competitors by illegally restricting access to its platform—he said Facebook “abandoned the policies in 2018, and its last alleged enforcement was even further in the past.”

Although some critics of the FTC’s case, including technology analyst Ben Thompson, have questioned the accuracy of the agency’s attempts to define a specific market for “personal social networking” over which Facebook allegedly has a monopoly, Boasberg found no fault with this market definition. In his first ruling, he said that “while there are certainly bones one could pick with the FTC’s market-definition allegations, the Court does not find them fatally devoid of meat.” As for whether Facebook has anything approaching a monopoly, the judge seemed to be convinced in his latest decision by the addition of data from Comscore, a traffic-measurement company, which said “Facebook’s share of DAUs [daily active users] of apps providing personal social networking services in the United States has exceeded 70 percent since 2016.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer
