Medium has pivoted so many times it has now come full circle

Note: This post was originally published as the daily newsletter of the Columbia Journalism Review, where I am the chief digital writer

When it opened to the public in 2013, Medium seemed to have a bright future. It was founded by Evan Williams, a co-founder and former CEO of Twitter, and also a co-founder of Blogger, one of the first self-publishing platforms of the early social web. Those ventures made him rich: when Twitter went public in 2013, Williams’ stake gave him a net worth of almost $2 billion. What could be better than a digital media platform owned by a billionaire with such an illustrious track record? Williams made it clear that he intended to reinvent publishing, and possibly journalism, by giving writers an easy way to publish their work. The company has stumbled a number of times on the way to realizing that dream, however: it has hired editors and writers and encouraged publications to move onto its platform, only to yank the rug out from under them, not just once but multiple times.

This week, it announced another pivot that calls into question the future of the publications it launched with much fanfare two years ago. Williams announced in an email to staff on Tuesday that the company is changing its editorial strategy to focus on finding and developing promising new writers from the pool of users who publish on its platform. He also announced a voluntary severance program for staff who “would rather get off this crazy ride,” which offers a lump sum of five months’ salary and six months’ worth of health benefits. According to The Information, the company told employees the buyout plan came about as a result of a recent union drive, which fell short of certification by one vote. What will become of Medium’s existing magazines, including OneZero, which covers technology, and Zora, which tells the stories of Black women, is unclear. Williams is vague on that point, saying only that “it will take a lot more experimentation to figure out what their role is.”

Two Medium staffers, who didn’t want to be named because they said it might jeopardize their jobs, told CJR it’s not clear what the Williams email means for them or their future, and one said they are considering the buyout package. “If they really wanted us to stay, they would have offered a plan and some assurances as to what happens if we stay,” the staffer said. “But they did not.” Another said they would likely stick around and try to carve out a future at Medium. “It’s not like Medium is the only bad place in media,” they said. “Everywhere is the bad place.” Staffers say that when Medium launched its owned-and-operated magazines, it gave them ample resources and editorial independence, and allowed editors to commission freelancers at relatively high rates. The bet seemed to be that high-quality content from professionals would buttress the user-generated content coming in from the platform, but that clearly hasn’t panned out, and editors say freelance rates have plummeted.

Continue reading “Medium has pivoted so many times it has now come full circle”

Facebook goes after Substack

Note: This was originally published in the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

If you’re an independent writer or journalist, Facebook would like you to know that it wants to help you. With what? Just about everything: it wants to give you easy-to-use writing and publishing tools, so you can create websites and newsletters and publish them in multiple places (including on Facebook, of course), and it wants to help you connect those sites and newsletters to groups that it will also help you create (on Facebook, naturally). It also wants to give you tools to attract subscribers to your writing, other ways of generating revenue (i.e., ads), and all kinds of other non-specific helpful advice. We know all this because Campbell Brown, Facebook’s head of news partnerships, and Anthea Watson Strong, the company’s product manager for news, wrote a blog post in which they described all of these features and the ways in which they want to “empower independent writers, helping them reach new audiences and grow their businesses.” But the part that really caught the attention of those in the media is that Facebook says it is going to jump-start this new program by paying a “small subset” of independent writers.

Nowhere in this long statement of intent does anyone mention the name Substack, which is probably not surprising, because what Facebook is offering sounds like a carbon copy of what Substack provides to independent writers and journalists: a platform for their posts and newsletters, one in which Substack not only provides back-office support for subscriptions, but also doles out cash to a select group of writers in order to convince them to try out the platform. This “Substack Pro” program has been the source of some controversy recently, because some of the writers the company has chosen to fund have expressed opinions that some find offensive. For example, Scott Alexander, who writes a blog known as Astral Codex Ten (and was the subject of a controversial New York Times profile), has written positively about the idea of “human biodiversity,” which is often a code word for pseudo-scientific racism and/or eugenics. The Substack Pro list (which the company has not made public) reportedly also includes Freddie deBoer, who promised he would retreat from public writing after he falsely accused another journalist of rape, but has since restarted his political blog.

While he isn’t being paid by Substack, Glenn Greenwald — the former Intercept writer — has used Substack to write about New York Times journalist Taylor Lorenz, making it clear that he doesn’t think the harassment she has faced is as important as the harassment he has faced. And Graham Linehan has reportedly used his Substack newsletter to mock, mis-gender, and harass trans women. Substack co-founder Hamish McKenzie said in a blog post that decisions about whom to pay are based not on the content of what writers publish, but merely on whether they are likely to be successful (i.e., generate revenue), and that these are therefore not editorial decisions; critics have pointed out, however, that these are exactly the kinds of editorial decisions traditional media outlets often make. In any event, several writers and journalists have said they are leaving Substack, because they don’t want the revenue they generate from subscriptions to be used to fund opinions they disagree with. In response, Substack co-founder Chris Best tweeted “defund the thought police” (Substack’s founders have since tried to clarify that the Pro program supports a wide range of writers).

Continue reading “Facebook goes after Substack”

Facebook asks court to dismiss the FTC’s antitrust complaint

Note: This was originally published in the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Last fall, after more than a year-and-a-half’s worth of Congressional committee hearings and investigations into the power of technology platforms like Google, Facebook, and Twitter, the government released a comprehensive report on alleged anti-competitive conduct by the companies. The 450-page report, which called for a number of “structural remedies” including breakups of the companies, also helped lend momentum to an almost unprecedented number of state and federal antitrust actions. One of those was a lawsuit launched against Facebook by the Federal Trade Commission, backed by an investigation it and 49 states conducted, alleging a wide range of monopolistic behavior. At the time, the company responded with a blog post and public statement calling the lawsuit “revisionist history,” and arguing that it “ignores reality” when it comes to the nature of its business. On Wednesday, the company released a much more comprehensive response: a 54-page defense and a request to dismiss the suit.

“No government lawsuit similar to this one has been brought in the 130-year history of the Sherman Act, and for good reason,” Facebook’s statement begins. The FTC “has not alleged facts amounting to a plausible antitrust case,” it says; instead, the company alleges the federal regulator has chosen to file a case that “ignores its own prior decisions, controlling precedent, and the limits of its statutory authority.” In order to make a plausible case for antitrust action, Facebook’s defense argues, the FTC would have to prove a) that Facebook dominates a defined market, b) that it has the power in that market to raise prices or restrict output, and c) that it has maintained that monopoly power in ways that harm competition and/or injure consumers. The government’s complaint fails, the company says, “because the FTC has not pleaded facts sufficient to satisfy any of the three required elements of a claim.”

When it comes to the market the company allegedly dominates in an anti-competitive way, the FTC’s case argues that the relevant market is “personal social networking,” i.e. the sharing of photos and other personal information with family and friends. As some technology journalists like newsletter author Casey Newton have pointed out, this is a shaky definition on a number of counts: for one thing, the FTC’s claim carefully ignores popular apps like TikTok, which has managed to build a massive amount of market share — 800 million users or so by the end of last year — despite Facebook’s alleged monopolistic behavior. Such a market definition, the company argues, is also contradicted by the FTC’s own allegations, which accuse the social network of restricting access to its data platform in order to keep out competitors that are not themselves included in the regulator’s definition of Facebook’s market.

Continue reading “Facebook asks court to dismiss the FTC’s antitrust complaint”

What should we do about the algorithmic amplification of disinformation?

Note: This was originally published in the daily newsletter of the Columbia Journalism Review, where I am the chief digital writer

From the results of the 2020 presidential election to the alleged dangers of the COVID vaccine, disinformation continues to have a significant effect on almost every aspect of our lives, and some of the biggest sources of disinformation are the social platforms that we spend a large part of our lives using — Facebook, Twitter, YouTube, etc. On these platforms, conspiracy theories and hoaxes are distributed at the speed of light, thanks to the recommendation algorithms that all of these services use. But the algorithms themselves, and the inputs they use to choose what we see in our feeds, are opaque, known only to senior engineers within those companies, or to malicious actors who specialize in “computational propaganda” and have learned to weaponize those algorithms. Is there anything we as a society can do about this problem, apart from hoping that Facebook and its ilk will figure out some kind of automated solution, even if that goes against their financial interests, as it almost certainly will?

We invited some veteran disinformation researchers and other experts to discuss this topic and related issues on CJR’s Galley discussion platform this week, including: Joan Donovan, who runs the Technology and Social Change research project at Harvard’s Shorenstein Center; Sam Woolley, an assistant professor in both the School of Journalism and the School of Information at the University of Texas at Austin; Anne Washington, an assistant professor of data policy at New York University and an expert in data governance issues; Khadijah Abdurahman, an independent researcher specializing in content moderation and surveillance in Ethiopia; Irene Pasquetto, who studies information ethics and digital curation as an assistant professor at the University of Michigan’s School of Information; and Lilly Irani, an associate professor in the department of communication at the University of California, San Diego.

Donovan, whose specialty is media manipulation, disinformation, and adversarial movements that target journalists, says she believes the US needs legislation similar to the Glass-Steagall Act, which put limits on what banks and investment companies could do. This kind of law would “define what these businesses can do and lay out some consumer protections, coupled with oversight of human and civil rights violations by tech companies,” Donovan says. “The pandemic and the election revealed just how broken our information ecosystem is when it comes to getting the right information in front of the right people at the right time.” The way that Facebook and other platforms operate, she says, means that “those with money and power were able to exert direct influence over content moderation decisions.”

Continue reading “What should we do about the algorithmic amplification of disinformation?”