The giant “acoustic mirrors” that once protected Britain

If you’re driving along the coast of England, you might see giant concrete blocks with concave openings. What are they? Acoustic mirrors. More than 100 years ago, these mirrors were built along the coast with the intention of detecting the sound of approaching German zeppelins. Invented by William Sansome Tucker, and operated at varying scales between around 1915 and 1935, the acoustic mirrors could detect an aircraft from up to 24 kilometers away, giving British defences enough time to prepare a counterattack. The concave structures focused incoming sound waves to a single point, where a microphone was positioned. Not only could they announce the arrival of an aircraft, they could also determine an incoming plane’s bearing to an accuracy of 1.5 degrees. Their development continued until the mid-1930s, when the invention of radar made them obsolete.
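Back-of-the-envelope arithmetic shows why a 24-kilometer detection range mattered. A minimal sketch (the zeppelin cruising speed here is an assumed illustrative figure, not from the original account):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def warning_minutes(detection_km: float, aircraft_kmh: float) -> float:
    """Rough warning time: how long the aircraft needs to cover the
    detection distance, corrected for the fact that it kept moving
    while its engine noise travelled to the mirror."""
    detection_m = detection_km * 1000
    sound_delay_s = detection_m / SPEED_OF_SOUND_M_S
    aircraft_m_s = aircraft_kmh / 3.6
    # By the time the sound arrives, the aircraft has closed some distance.
    remaining_m = detection_m - sound_delay_s * aircraft_m_s
    return remaining_m / aircraft_m_s / 60

# A zeppelin cruising at an assumed ~100 km/h, heard from 24 km out,
# leaves the defenders on the order of a quarter hour.
print(round(warning_minutes(24, 100), 1))
```

Faster aircraft were exactly what doomed the mirrors: double the speed and the warning window is more than halved, which is part of why radar supplanted them.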

This internet service provider’s security keys are generated by a wall of lava lamps

You might think the best security keys would be generated by computers, but in the case of Cloudflare, which caches and distributes data for thousands of large companies, you would only be half right. Computers, being deterministic devices, struggle to generate true randomness, so Cloudflare uses physical objects to generate “entropy,” which in cryptography means unpredictability. Encryption keys need to be unpredictable, or else an attacker can look for patterns. That’s where lava lamps come in: the churn of their wax blobs is inherently unpredictable. Cloudflare has two other randomness generators: the first, in the company’s London office, is known as the “Chaotic Pendulums” and features giant grandfather-clock-style pendulums; the second, under construction in the company’s Austin office, is called “Suspended Rainbows” and generates entropy from patterns of light projected on the walls, the ceiling, and the floor.
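The principle is simple to sketch: photograph the unpredictable scene, hash the pixels down to a fixed-size seed, and mix in the operating system’s own entropy so neither source has to be trusted alone. This is a hedged toy illustration of the general technique, not Cloudflare’s actual pipeline; the function names are my own:

```python
import hashlib
import os

def entropy_from_image(image_bytes: bytes) -> bytes:
    """Condense an unpredictable image (say, a photo of a lava-lamp wall)
    into a 32-byte seed. Hashing does not create entropy; it only
    distills whatever unpredictability the input already carries."""
    return hashlib.sha256(image_bytes).digest()

def mixed_seed(image_bytes: bytes) -> bytes:
    """Defense in depth: hash the camera-derived seed together with the
    OS entropy pool, so a failure of either source alone does not make
    the output predictable."""
    return hashlib.sha256(entropy_from_image(image_bytes) + os.urandom(32)).digest()

frame = os.urandom(1024)  # stand-in for a real camera frame
seed = mixed_seed(frame)
print(len(seed))  # 32-byte seed, ready to feed a CSPRNG
```

The design choice worth noting is the final mixing step: even if the camera feed were frozen or tampered with, the output would remain as unpredictable as the OS pool, and vice versa.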

Note: This is a version of my personal newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.

When is a library not a library? When it’s online, apparently

In March of 2020, the Internet Archive, a nonprofit created by entrepreneur Brewster Kahle, launched a new feature called the National Emergency Library. Since COVID-19 restrictions had made it difficult or impossible for people to buy books or visit libraries in person, the Archive removed any limits on the digital borrowing of the more than three million books in its database, and made them all publicly available, for free. The project was supported by a number of universities, researchers, and librarians, but some of the authors and publishers who owned the copyright to these books saw it not as a public service, but as theft. Four publishers—Hachette, HarperCollins, John Wiley & Sons, and Penguin Random House—filed a lawsuit. The Internet Archive shut the project down, and returned to its previous Controlled Digital Lending program, which allows only one person to borrow a digital copy of a book at any given time. But the lawsuit continued, with the publishers arguing that any digital lending by the Archive was copyright infringement.
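The mechanism at the heart of controlled digital lending, one digital checkout per physical copy owned, is easy to model. A toy sketch (the class and method names are my own, purely illustrative of the policy, not any library’s software):

```python
class ControlledLending:
    """Toy model of controlled digital lending: each scanned title can be
    checked out by at most as many readers as the library physically owns
    copies of (often exactly one)."""

    def __init__(self):
        self.owned = {}        # title -> physical copies the library holds
        self.checked_out = {}  # title -> current digital borrower count

    def add_title(self, title: str, copies: int = 1) -> None:
        self.owned[title] = copies
        self.checked_out.setdefault(title, 0)

    def borrow(self, title: str) -> bool:
        # All copies out: the borrower must wait, just as with print.
        if self.checked_out.get(title, 0) >= self.owned.get(title, 0):
            return False
        self.checked_out[title] += 1
        return True

    def return_copy(self, title: str) -> None:
        if self.checked_out.get(title, 0) > 0:
            self.checked_out[title] -= 1
```

The National Emergency Library amounted to deleting the cap in `borrow`, which is precisely the change the publishers objected to.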

Last week, Judge John G. Koeltl of the Southern District of New York ruled in favor of the publishers and dismissed every aspect of the Archive’s defense, including the claim that it is protected by the fair use exception in copyright law. Koeltl wrote that fair use protects transformative versions of copyrighted works, but that the Archive’s copies don’t qualify. The Archive tried to make the case that its digital lending is transformative because it “facilitates new and expanding interactions between library books and the web,” the judge noted. But he added that an infringing use does not become transformative simply by “making an invaluable contribution to the progress of science and cultivation of the arts.” A Google book-scanning project was found to be protected by fair use in a 2014 legal decision, but Koeltl pointed out that Google used the scans to create a database that could be searched, and thereby increased the utility of the books, rather than distributing complete digital copies. Any “alleged benefits” from the Archive’s lending “cannot outweigh the market harm to the publishers,” Koeltl wrote.

The scanning and lending of digital books is just one part of what the Internet Archive does. Kahle founded the Archive in 1996 in the hope, he told TechRadar, that it would become a modern version of the ancient Library of Alexandria and provide “universal access to all knowledge.” The Archive has created digital copies of more than seven hundred billion webpages, which are available for free through a service called the Wayback Machine. It has also archived millions of audio files, video games, and other software. A number of libraries, including some that have partnered with the Internet Archive, have offered a version of controlled digital lending for some time, based on the theory that limiting digital borrowing to a single copy of a book is similar to what libraries do with physical books. But publishers and authors were critical of it even before the current lawsuit—in 2018, the Authors Guild called the Archive’s lending program “a flagrant violation of copyright law”—and, until now, the legality of this model had never been tested in the courts.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

The weird science behind what we call “glitter”

Each December, surrounded by wonderlands of white paper snowflakes, bright red winterberries, and forests of green conifers reclaiming their ancestral territory from inside the nation’s living rooms and hotel lobbies, children and adults delight to see the true harbinger of the holidays: aluminum metalized polyethylene terephthalate. Aluminum metalized polyethylene terephthalate settles over store windows like dazzling frost. It flashes like hot, molten gold across the nail plates of young women. It sparkles like pure precision-cut starlight on an ornament of a North American brown bear driving a car towing a camper van. Indeed, in Clement Clarke Moore’s seminal Christmas Eve poem, the eyes of Saint Nicholas himself are said to twinkle like aluminum metalized polyethylene terephthalate.

An updated history of a viral Internet video

In July, Defector published a story about an ancient internet video called “Basketball (so funny you’ll pee your pants).avi,” based on extensive archival research and interviews with the people involved. The video was filmed at The Shipley School in Bryn Mawr, Pa., in the mid-1990s, during a basketball game against Delco Christian. It features a Shipley player heaving the ball across the length of the court, where it collides with a small child. Footage of the freak accident was submitted to America’s Funniest Home Videos, and eventually made its way across Web 1.0 video sites and peer-to-peer networks. It is one of the earliest viral videos on the internet. But recently, the story got a lot more complicated.

He won $30 million playing the lottery, and then he lost everything

One June morning in 2017, an Albanian American real-estate broker named Viktor Gjonaj parked outside a strip mall in Sterling Heights, a small suburb on the outskirts of Detroit. He hurried into the claim office of the Michigan Lottery. Gjonaj, who is 6 foot 5, loomed over the front desk and announced that he had won the Daily 4 lottery draw, worth $5,000. But Gjonaj did not have one winning ticket. He had 500. Skeptical lottery officials checked his tickets carefully. Each was genuine and contained the four winning numbers, but it was extremely unusual for someone to play the same numbers 500 times in one day. There were other red flags. Most people who present themselves at lottery claim centers are ecstatic, yet this winner waited for his prizes with the impatience of someone picking up dry cleaning.

The man who wants to make a do-it-yourself euthanasia machine

In a workshop in Rotterdam in the Netherlands, Philip Nitschke—“Dr. Death” or “the Elon Musk of assisted suicide” to some—is overseeing the last few rounds of testing on his new Sarco machine before shipping it to Switzerland, where he says its first user is waiting. This is the third prototype that Nitschke’s nonprofit, Exit International, has 3D-printed and wired up. Number one has been exhibited in Germany and Poland. “Number two was a disaster,” he says. Now he’s ironed out the manufacturing errors and is ready to launch: “This is the one that will be used.” A coffin-size pod with Star Trek stylings, the Sarco is the culmination of Nitschke’s 25-year campaign to “demedicalize death” through technology. Sealed inside the machine, a person who has chosen to die must answer three questions: Who are you? Where are you? And do you know what will happen when you press that button? Here’s what will happen: The Sarco will fill with nitrogen gas. Its occupant will pass out in less than a minute and die by asphyxiation in around five.

Meta, The Wire, and some fabricated emails

Last week, The Wire—an independent news outlet based in India—reported that Amit Malviya, the social-media manager for India’s ruling BJP party, was able to remove images from Instagram without having to go through the normal moderation channels. As evidence, The Wire published an internal Instagram report that appeared to corroborate its reporting, with timestamps for when the images were removed, and a note that the usual moderation process wasn’t required because they were flagged by Malviya. When Meta, the parent company of both Instagram and Facebook, denied that this was possible, The Wire published a second story, including a screenshot of what it said was an email from Andy Stone, a spokesman for Meta. In the email, Stone seemed upset about the leak of the original report, and asked his staff to put the journalists who published The Wire’s initial story on a watchlist.

In a response to that story, Guy Rosen, chief information security officer at Meta, wrote that the email from Stone also appeared to have been fabricated. The Wire then published a third story, in which it described the technical method it used to verify the email, and included a video showing the process. The story also had screenshots of emails sent by two unnamed internet security experts, who said they had reviewed a copy of the Stone email and the process The Wire used to verify it, and they were convinced that it was genuine. Some reporters, however, noted that the emails from the experts were dated in 2021, not 2022. Devesh Kumar, the Wire reporter who handled the verification story, said this was a simple mistake due to a glitch in his operating system.

In an interview with Platformer, Casey Newton’s technology newsletter, Jahnavi Sen, deputy editor of The Wire, said someone from the site met with one of the original sources for the report about Instagram, and that this source verified their identity by providing a number of documents, including their work badge and pay slips. Kumar told Platformer that when The Wire approached its original source about the Instagram takedowns, the source sent a copy of the internal report within 20 minutes. When The Wire reached out to a different source, that person said they didn’t know anything about the Instagram report, but that they “had insight into the discussions happening internally.” Seven minutes later, the source responded with the email allegedly from Stone.

Section 230, the platforms, and the Supreme Court

For the past several years, critics on both ends of the political spectrum have argued that Section 230 of the Communications Decency Act of 1996 gives social-media platforms such as Facebook, Twitter, and YouTube too much protection from legal liability for the content that appears on their networks. Right-wing critics argue that Section 230 allows social-media companies to censor conservative thinkers and groups without recourse, by removing their content (even though there is no evidence that this occurs), and liberal critics say the platforms use Section 230 as an excuse not to remove things they should be taking down, such as misinformation. Before the 2020 election, Joe Biden said he would abolish Section 230 if he became president, and he has made similar statements since he took office, saying the provision “should be revoked immediately.”

This week, the Supreme Court said it plans to hear two cases that are looking to chip away at Section 230 legal protections. One case claims that Google’s YouTube service violated the federal Anti-Terrorism Act by recommending videos featuring the ISIS terrorist group, and that these videos helped lead to the death of Nohemi Gonzalez, a 23-year-old US citizen who was killed in an ISIS attack in Paris in 2015. In the lawsuit, filed in 2016, Gonzalez’s family claims that while Section 230 protects YouTube from liability for hosting such content, it doesn’t protect the company from liability for promoting that content with its algorithms. The second case involves Twitter, which was also sued for violating the Anti-Terrorism Act; the family of Nawras Alassaf claimed ISIS-related content on Twitter contributed to his death in a terrorist attack in 2017.

The Supreme Court decided not to hear a similar case in 2020, which claimed that Facebook was responsible for attacks in Israel, because the social network promoted posts about the terrorist group Hamas. In March, the court also refused to review a decision which found Facebook was not liable for helping a man traffic a woman for sex. While Justice Clarence Thomas agreed with the decision not to hear that case, he also wrote that the court should consider the issue of “the proper scope of immunity” under Section 230. “Assuming Congress does not step in to clarify Section 230’s scope, we should do so in an appropriate case,” Thomas wrote. “It is hard to see why the protection that Section 230 grants publishers against being held strictly liable for third parties’ content should protect Facebook from liability for its own ‘acts and omissions.’”

Elon Musk, and the desire to believe in tech saviors

On July 12, in a lawsuit in Delaware’s Chancery Court, Twitter accused Elon Musk of failing to complete his $44 billion acquisition of the company, an offer he initially made in April. Musk subsequently filed a countersuit, in which he alleged that Twitter was not telling the truth about some aspects of its business, including the number of fake and automated accounts on the service. Although the case won’t be heard until October 17, some evidence has been filed in court, as a result of motions by Twitter or Musk. In one such motion that was filed last week, Twitter’s legal team claimed Musk has not turned over all of his text messages related to the deal, as required by the court. In particular, Twitter’s lawyers said there are “substantial gaps… corresponding to critical time periods,” including the period in which Musk was allegedly reconsidering the purchase.

As part of its submission, Twitter entered several pages’ worth of text messages it had received from Musk, including some from technology investors who appeared to be desperate to get a piece of the Twitter deal. “You have my sword,” Jason Calacanis, an angel investor and entrepreneur, said in one text message, in what seemed to be a reference to the movie The Lord of the Rings. Antonio Gracias, another investor and a former member of the Tesla board of directors, told Musk in a message that free speech is “a principle we need to defend with our lives or we are lost to the darkness.” Other texts to Musk included suggestions about what the sender believed were the best ways to fix what’s wrong with Twitter (Mathias Döpfner, CEO of Axel Springer, argued that it would be best if he ran the company). One unnamed texter, identified only as TJ, exhorted Musk to “buy Twitter and delete it” and “please do something to fight woke-ism.”

In a column for The Atlantic, Charlie Warzel argued that the texts with Musk “shatter the myth of the tech genius.” The unavoidable conclusion, he says, is just how “unimpressive, unimaginative, and sycophantic the powerful men in Musk’s contacts appear to be. Whoever said there are no bad ideas in brainstorming never had access to Elon Musk’s phone.” According to one former social-media executive who spoke with Warzel, “the dominant reaction from all the threads I’m in is Everyone looks fucking dumb.” Another common reaction, this executive said, is to ask: “Is this really how business is done? There’s no real strategic thought or analysis. It’s just emotional and done without any real care for consequence.” In one text, Larry Ellison, the CEO of Oracle, says he is in for “a billion … or whatever you recommend”; in another, Marc Andreessen, a top Silicon Valley venture investor, says $250 million is available “with no additional work required.”

TikTok and Congress try to cut a deal

In June, BuzzFeed News published an investigative report based on leaked audio from more than 80 internal meetings at TikTok, the popular Chinese-owned video-sharing app. Emily Baker-White of BuzzFeed wrote that the recordings—along with fourteen statements from nine TikTok employees—showed that China-based employees of the company “repeatedly accessed nonpublic data about US users of the video-sharing app between September 2021 and January 2022.” As Baker-White pointed out, this directly contradicted a senior TikTok executive’s sworn testimony in an October 2021 Senate hearing, in which the executive said that a “world-renowned, US-based security team” decided who would have access to US customer data. The reality illustrated by BuzzFeed’s recordings, Baker-White wrote, was “exactly the type of behavior that inspired former president Donald Trump to threaten to ban the app in the United States.”

That proposed ban never materialized, although Trump did issue an executive order banning US corporations from doing business with ByteDance. Joe Biden revoked the order, but concerns about TikTok’s Chinese ownership remained. Biden asked the Commerce Department to launch national security reviews of apps with links to foreign adversaries, including China, and BuzzFeed’s reporting about TikTok’s access to US data fueled those concerns. According to the Times, Marco Rubio, the Republican senator from Florida, met with Jake Sullivan, Biden’s national security adviser, last year, and expressed concern about China’s impact on US industrial policy, including Beijing’s influence over TikTok. Sullivan reportedly said he shared those concerns.

On Monday, the Times reported that the Biden administration and TikTok had drafted a preliminary agreement to resolve national security concerns posed by the app. The two sides have “more or less hammered out the foundations of a deal in which TikTok would make changes to its data security and governance without requiring its owner, ByteDance, to sell it,” the Times wrote, while adding that the Biden government and TikTok’s owners were “still wrangling over the potential agreement.” According to the Times, US Deputy Attorney General Lisa Monaco has concerns that the terms of the deal are not tough enough on China, and the Treasury Department is skeptical that the proposed agreement can sufficiently resolve national security issues. The Biden administration’s policy towards Beijing, the Times wrote, “is not substantially different from the posture of the Trump White House, reflecting a suspicion of China.”

The social-media platforms and the Big Lie

In August, the major social-media platforms released statements about how they intended to handle misinformation in advance of the November 8 midterms, and for the most part Meta (the parent company of Facebook), Twitter, Google, and TikTok each said it would be business as usual—in other words, that they weren’t planning to change much. As the midterms draw closer, however, a coalition of about 60 civil rights organizations says business as usual is not enough, and that the social platforms have not done nearly enough to stop continued misinformation about “the Big Lie”—that is, the unfounded claim that the 2020 election was somehow fraudulent. Jessica González, co-chief executive of the advocacy group Free Press, which is helping to lead the Change the Terms coalition, told the Washington Post: “There’s a question of: Are we going to have a democracy? And yet, I don’t think they are taking that question seriously. We can’t keep playing the same games over and over again, because the stakes are really high.”

González and other members of the coalition say they have spent months trying to convince the major platforms to do something to combat election-related disinformation, but their lobbying campaigns have had little or no impact. Naomi Nix reported for the Post last week that members of Change the Terms have sent multiple letters and emails, and raised their concerns through Zoom meetings with platform executives, but have seen little action as a result, apart from statements about how the companies plan to do their best to stop election misinformation. In April, the same 60 social-justice groups called on the platforms to “Fix the Feed” before the elections. Among their requests were that the companies change their algorithms in order to “stop promoting the most incendiary, hateful content”; that they “protect people equally,” regardless of what language they speak; and that they share details of their business models and moderation.

“The ‘big lie’ has become embedded in our political discourse, and it’s become a talking point for election-deniers to preemptively declare that the midterm elections are going to be stolen or filled with voter fraud,” Yosef Getachew, a media and democracy program director at the government watchdog Common Cause, told the Post in August. “What we’ve seen is that Facebook and Twitter aren’t really doing the best job, or any job, in terms of removing and combating disinformation that’s around the ‘big lie.’ ” According to an Associated Press report in August, Facebook “quietly curtailed” some of the internal safeguards designed to smother voting misinformation. “They’re not talking about it,” Katie Harbath, a former Facebook policy director who is now CEO of Anchor Change, a technology policy advisory firm, told the AP. “Best case scenario: They’re still doing a lot behind the scenes. Worst case scenario: They pull back, and we don’t know how that’s going to manifest itself for the midterms on the platforms.”

Florida, Texas, and the fight to control platform moderation

On May 23, the US Court of Appeals for the 11th Circuit struck down most of the provisions of a social-media law that the state of Florida enacted in 2021, which would have made it an offense for any social-media company to “deplatform” the account of “any political candidate or journalistic enterprise,” punishable by fines of up to $250,000 per day. In their 67-page decision, the 11th Circuit judges ruled that any moderation decisions made by social-media platforms such as Twitter and Facebook, including the banning of certain accounts, are effectively acts of speech, and therefore are protected by the First Amendment. Last week, however, the US Court of Appeals for the 5th Circuit came to almost the exact opposite conclusion, in a decision related to a social-media law that the state of Texas enacted last year. The law banned the major platforms from removing any content based on “the viewpoint of the user or another person [or] the viewpoint represented in the user’s expression or another person’s expression.”

In the 5th Circuit opinion, the court ruled that while the First Amendment guarantees every person’s right to free speech, it doesn’t guarantee corporations the right to “muzzle speech.” The Texas law, the judges said, “does not chill speech; if anything, it chills censorship. We reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” The court dismissed many of the arguments technology companies such as Twitter and Facebook made in defense of their right to moderate content, arguing that to allow such moderation would mean that “email providers, mobile phone companies, and banks could cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business.” The appeals court seemed to endorse a definition used in the Texas law, which states that the social media platforms “function as common carriers,” in much the same way that telephone and cable operators do.

NetChoice and the Computer and Communications Industry Association—trade groups that represent Facebook, Twitter, and Google—argued that the social-media platforms should have the same right to edit content that newspapers have, but the 5th Circuit court rejected this idea. “The platforms are not newspapers,” Judge Andrew Oldham wrote in the majority opinion. “Their censorship is not speech.” Given the conflicting conclusions of the 11th Circuit and 5th Circuit decisions, Ashley Moody, the attorney general of Florida, on Wednesday asked the Supreme Court to decide whether states have the right to regulate how social media companies moderate. The answer will affect not just Florida and Texas, but dozens of other states—including Oklahoma, Indiana, Ohio, and West Virginia—that have either passed or are considering social-media laws that explicitly prevent the platforms from moderating content, laws with names such as the Internet Freedom Act and the Social Media Anti-Censorship Bill.

Facebook and paying for news

On June 9, Keach Hagey and Alexandra Bruell—two Wall Street Journal reporters who cover the major digital platforms—reported that Facebook, a subsidiary of Meta Platforms, was “re-examining its commitment to paying for news,” according to several unnamed sources who were described as being familiar with Facebook’s plans. The potential loss of those payments, the Journal reporters wrote, was “prompting some news organizations to prepare for a potential revenue shortfall of tens of millions of dollars.” The Journal story echoed a report published in May by The Information, a subscription-only site that covers technology; in that piece, reporters Sylvia Varnham O’Regan and Jessica Toonkel said Meta was “considering reducing the money it gives news organizations as it reevaluates the partnerships it struck over the past few years,” and that this reevaluation was part of a rethinking of “the value of including news in its flagship Facebook app.”

Meta wouldn’t comment to either the Journal or The Information, and a spokesperson told CJR the company “doesn’t comment on speculation.” But the loss of payments from Meta could have a noticeable impact for some outlets. According to the Journal report, for the past two years—since the original payment deals were announced in 2019—Meta has paid the Washington Post more than $15 million per year, the New York Times over $20 million per year, and the Journal more than $10 million per year (the payments to the Journal are part of a broader deal with Dow Jones, the newspaper’s parent, which is said to be worth more than $20 million per year). The deals, which are expected to expire this year, were part of a broader system of payments Meta made to a number of news outlets, including Bloomberg, ABC News, USA Today, Business Insider, and the right-wing news site Breitbart News. Smaller deals were typically for $3 million or less, the Journal said.

The payments were announced as part of the launch of the “News tab,” a dedicated section of the Facebook app where readers can find news from the outlets that partnered with Meta (higher payments were made to those with paywalls, according to a number of reports). The launch was a high-profile affair, including a one-on-one interview between Robert Thomson, CEO of News Corp.—parent company of Dow Jones and the Journal—and Mark Zuckerberg, the CEO of Meta. Emily Bell, director of the Tow Center for Digital Journalism at Columbia, wrote for CJR that the meeting was like “a Camp David for peace between the most truculent old media empire and one of its most noxious disruptors,” and wondered how much it had cost for News Corp. to forget about its long-standing opposition to Facebook’s media strategy. The event was “a publicity coup for Facebook; it tamed the biggest beast in the journalism jungle,” Bell wrote.

Of Substack, apps, and strategy

Substack, a hosting and publishing platform for email newsletters, took what seemed like an innocuous step last week: it launched a standalone smartphone app. Not surprising, perhaps, since almost every content startup has an app. Substack’s app, however, is somewhat different, since the company is a middleman between writers and their audiences, rather than a startup offering a service directly to consumers. That difference has led to questions about Substack’s long-term strategy, and whether that strategy is good or bad for the writers who use the platform. Some of the concern stems from the fact that Substack has raised over $80 million in venture financing from a range of VC groups, including Andreessen Horowitz, a leading Silicon Valley venture powerhouse. The funding has given Substack a theoretical market value of $650 million, but that level of investment can put pressure on companies to meet aggressive growth targets.

Substack’s founders, for their part, argue that the app is just an extension of those goals. Hamish McKenzie, Chris Best, and Jairaj Sethi wrote in a blog post on the Substack site that their intention in starting the company was to “build an alternative media ecosystem based on different laws of physics, where writers are rewarded with direct payments from readers, and where readers have total control over what they read.” The app, they argue, builds on those ideas, in that it is designed for “deep relationships, an alternative to the mindless scrolling and cheap dopamine hits that lie behind other home screen icons.” Among other things, they say the app will amplify the network effects that already exist on Substack, “making it easier for writers to get new subscribers, and for readers to explore and sample Substacks they might otherwise not have found.”

Casey Newton, a technology writer who publishes a newsletter called Platformer (which is hosted on Substack), writes that the app is a symbol of “the moment in the life of a young tech company when its ambitions grow from niche service provider to a giant global platform.” Newton writes that it is possible that the Substack app could help writers build growing businesses by advertising their publications to likely readers (the company says that a person who has a credit card on file with Substack is 2.5 times more likely to subscribe to a new publication than someone who doesn’t). But it is equally possible, he says, that the app “makes publications feel like cheap, interchangeable widgets: an endless pile of things to subscribe to, overwhelming readers with sheer volume.” In other words, an app that serves Substack’s interests rather than those of its newsletter authors.


As Ukraine war continues, Russia becomes increasingly isolated

Since the invasion of Ukraine began two weeks ago, Russia has found itself cut off from the rest of the world not only economically but also in a number of other important ways. In some cases, Russia is the one that has been severing those ties, as it did recently when it banned Facebook, because the company refused to stop fact-checking Russian media outlets such as Russia Today and Sputnik (so far, Russian citizens are still allowed to use WhatsApp and Instagram). Twitter has also reportedly been partially blocked in the country, while other companies have voluntarily withdrawn their services. YouTube has banned RT and Sputnik, as has the entire EU. TikTok said on Sunday that while it is still available in Russia, it will no longer allow users to livestream or upload video from that country, due in part to a flood of disinformation, and to the arrival of a new “fake news” law in Russia that carries stiff penalties.

Traditional media companies have also withdrawn their services, and in some cases their journalists, from the country since the invasion, in part because of the fake news law. Bloomberg News and the BBC were among the first to stop producing journalism from within Russia last week. John Micklethwait, editor in chief of Bloomberg, wrote in a note to staff that the Russian law seemed designed to “turn any independent reporter into a criminal purely by association” and as a result made it “impossible to continue any semblance of normal journalism inside the country.” The New York Times said Tuesday that it had decided to pull its journalists out of Russia, in part because of the uncertainty created by the new law, which makes it a punishable offence to refer to the invasion of Ukraine in a news story as a “war.”

It’s not just individual social networks or journalism outlets; several network connectivity providers have also withdrawn their services from Russia. These are the giant telecom firms that supply the “backbone” connections between countries and the broader internet, and removing them means Russia is increasingly isolated from any information on the war that doesn’t come from inside the country or from Russian state media. Lumen, formerly known as CenturyLink, pulled the plug on Russia on Wednesday, withdrawing service from customers such as national internet provider Rostelecom, as well as a number of leading Russian mobile operators. Competitor Cogent Communications did the same with its broadband network last week.


Ukraine, viral media, and the scale of war

If there’s one thing Twitter and Facebook and Instagram and TikTok are good at, it’s distributing content and making it go viral, and Russia’s invasion of Ukraine is no exception to that rule. Every day, there are new images and videos, and some become that day’s trending topic: the video clip of Ukrainian president Zelensky in military fatigues, speaking defiantly about resisting Russia’s attack; photos of Kyiv’s mayor, a six-foot-seven-inch former heavyweight boxing champion, in army fatigues; a man standing in front of a line of Russian tanks, an echo of what happened in China’s Tiananmen Square during the 1989 uprising; the old Ukrainian woman who told Russian soldiers to put sunflower seeds in their pockets, so sunflowers would grow on their graves; the soldiers on Snake Island who told a Russian warship to “fuck off.” The list goes on.

Not surprisingly, some of these viral images are fake, or cleverly designed misinformation and propaganda. But even if the inspiring pictures of Ukrainians rebelling against Russia are real (or mostly real, like the photo of Kyiv’s mayor in army fatigues, which was taken during a training exercise in 2021), what are we supposed to learn from them? They seem to tell us a story, with a clear and pleasing narrative arc: Ukrainians are fighting back! Russia is on the ropes! The Washington Post writes that the social-media wave “has blunted Kremlin propaganda and rallied the world to Ukraine’s side.” Has it? Perhaps. But will any of that actually affect the outcome of this war, or is it just a fairy tale we are telling ourselves because it’s better than the reality?

The virality of the images may drive attention, but, from a journalism perspective, it often does a poor job of representing the stakes and the scale at hand. Social media is a little like pointillism—a collection of tiny dots that theoretically combine to reveal a broader picture. But over the long term, war defies this kind of approach. The 40-mile-long convoy of Russian military vehicles is a good example: frantic tweets about it fill Twitter, as though users are getting ready for some epic battle that will win the war, but the next day the convoy has barely moved. Are some Ukrainians fighting back? Yes. But just because we see one dead soldier beside a burned-out tank doesn’t mean Ukraine is going to win, whatever “win” means. As Ryan Broderick wrote in his Garbage Day newsletter, “winning a content war is not the same as winning an actual war.”


Resurrected bill raises red flags, including for journalists

In 2020, members of Congress introduced a bill they said would help rid the internet of child sexual-abuse material (CSAM). The proposed legislation was called the EARN IT Act—an abbreviation for the full name, the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act. In addition to establishing a national commission on online child sexual exploitation prevention to come up with best practices for eliminating such content, the bill stated that any online platforms hosting child sexual-abuse material would lose the protection of Section 230 of the Communications Decency Act, which gives electronic service providers immunity from liability for most of the content posted by their users.

The bill immediately came under fire from a number of groups—including the Electronic Frontier Foundation, the Freedom of the Press Foundation, and others—who said it failed on a number of levels. For example, as Mike Masnick of Techdirt noted, Section 230 doesn’t protect electronic platforms from liability for illegal content such as child sexual-abuse material, so passing a law removing that protection is redundant and unnecessary. Critics of the bill also said it could cause online services to stop offering end-to-end encryption, used by activists and journalists around the world, because using encryption is a potential red flag for those investigating CSAM.

In the end, the bill was dropped. But it was resurrected earlier this year, reintroduced by Richard Blumenthal and Lindsey Graham (the House has revived its version as well), and many groups say the current version is as bad as the original, if not worse. The EFF said the bill would still “pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe.” The group says the act would allow “private actors to scan every message sent online and report violations to law enforcement,” and potentially allow anything hosted online—including backups, websites, cloud photos, and more—to be scanned by third parties.
