When is a library not a library? When it’s online, apparently

In March of 2020, the Internet Archive, a nonprofit created by entrepreneur Brewster Kahle, launched a new feature called the National Emergency Library. Since COVID-19 restrictions had made it difficult or impossible for people to buy books or visit libraries in person, the Archive removed any limits on the digital borrowing of the more than three million books in its database, and made them all publicly available, for free. The project was supported by a number of universities, researchers, and librarians, but some of the authors and publishers who owned the copyright to these books saw it not as a public service, but as theft. Four publishers—Hachette, HarperCollins, John Wiley & Sons, and Penguin Random House—filed a lawsuit. The Internet Archive shut the project down, and returned to its previous Controlled Digital Lending program, which allows only one person to borrow a digital copy of a book at any given time. But the lawsuit continued, with the publishers arguing that any digital lending by the Archive was copyright infringement.

Last week, Judge John G. Koeltl of the Southern District of New York ruled in favor of the publishers and rejected every aspect of the Archive’s defense, including the claim that it is protected by the fair use exception in copyright law. Koeltl wrote that fair use protects transformative versions of copyrighted works, but that the Archive’s copies don’t qualify. The Archive tried to make the case that its digital lending is transformative because it “facilitates new and expanding interactions between library books and the web,” the judge noted. But he added that an infringing use does not become transformative simply by “making an invaluable contribution to the progress of science and cultivation of the arts.” A Google book-scanning project was found to be protected by fair use in a 2014 legal decision, but Koeltl pointed out that Google used the scans to create a searchable database, and thereby increased the utility of the books, rather than distributing complete digital copies. Any “alleged benefits” from the Archive’s lending “cannot outweigh the market harm to the publishers,” Koeltl wrote.

The scanning and lending of digital books is just one part of what the Internet Archive does. Kahle founded the Archive in 1996; he told TechRadar he hoped it would become a modern version of the ancient Library of Alexandria and provide “universal access to all knowledge.” The Archive has created digital copies of more than seven hundred billion webpages, which are available for free through a service called the Wayback Machine. It has also archived millions of audio files, video games, and other software. A number of libraries, including some that have partnered with the Internet Archive, have offered a version of controlled digital lending for some time, based on the theory that limiting digital borrowing to a single copy of a book is similar to what libraries do with physical books. But publishers and authors were critical of it even before the current lawsuit—in 2018, the Authors Guild called the Archive’s lending program “a flagrant violation of copyright law”—and, until now, the legality of the model had never been tested in the courts.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “When is a library not a library? When it’s online, apparently”

The weird science behind what we call “glitter”

Each December, surrounded by wonderlands of white paper snowflakes, bright red winterberries, and forests of green conifers reclaiming their ancestral territory from inside the nation’s living rooms and hotel lobbies, children and adults delight to see the true harbinger of the holidays: aluminum metalized polyethylene terephthalate. Aluminum metalized polyethylene terephthalate settles over store windows like dazzling frost. It flashes like hot, molten gold across the nail plates of young women. It sparkles like pure precision-cut starlight on an ornament of a North American brown bear driving a car towing a camper van. Indeed, in Clement Clarke Moore’s seminal Christmas Eve poem, the eyes of Saint Nicholas himself are said to twinkle like aluminum metalized polyethylene terephthalate.

An updated history of a viral Internet video

In July, Defector published a story about an ancient internet video called “Basketball (so funny you’ll pee your pants).avi,” based on extensive archival research and interviews with the people involved. The video was filmed at The Shipley School in Bryn Mawr, Pa., in the mid-90s, during a basketball game against Delco Christian. It features a Shipley player heaving the ball across the length of the court, where it collides with a small child. Footage of the freak accident was submitted to America’s Funniest Home Videos, and eventually made its way across Web 1.0 video sites and peer-to-peer networks. It is one of the earliest viral videos on the internet. But recently, the story got a lot more complicated.

Note: This is a version of my personal newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.

Continue reading “The weird science behind what we call ‘glitter’”

He won $30 million playing the lottery, and then he lost everything

One June morning in 2017, an Albanian American real-estate broker named Viktor Gjonaj parked outside a strip mall in Sterling Heights, a suburb on the outskirts of Detroit. He hurried into the claim office of the Michigan Lottery. Gjonaj, who is 6 foot 5, loomed over the front desk and announced that he had won the Daily 4 lottery draw, worth $5,000. But Gjonaj did not have one winning ticket. He had 500. Skeptical lottery officials checked his tickets carefully. Each was genuine and contained the four winning numbers, but it was extremely unusual for someone to play the same numbers 500 times in one day. There were other red flags. Most people who present themselves at lottery claim centers are ecstatic, yet this winner waited for his prizes with the impatience of someone picking up dry cleaning.

The man who wants to make a do-it-yourself euthanasia machine

In a workshop in Rotterdam in the Netherlands, Philip Nitschke—“Dr. Death” or “the Elon Musk of assisted suicide” to some—is overseeing the last few rounds of testing on his new Sarco machine before shipping it to Switzerland, where he says its first user is waiting. This is the third prototype that Nitschke’s nonprofit, Exit International, has 3D-printed and wired up. Number one has been exhibited in Germany and Poland. “Number two was a disaster,” he says. Now he’s ironed out the manufacturing errors and is ready to launch: “This is the one that will be used.” A coffin-size pod with Star Trek stylings, the Sarco is the culmination of Nitschke’s 25-year campaign to “demedicalize death” through technology. Sealed inside the machine, a person who has chosen to die must answer three questions: Who are you? Where are you? And do you know what will happen when you press that button? Here’s what will happen: The Sarco will fill with nitrogen gas. Its occupant will pass out in less than a minute and die by asphyxiation in around five.

Note: This is a version of my personal newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.

Continue reading “He won $30 million playing the lottery, and then he lost everything”

Meta, The Wire, and some fabricated emails

Last week, The Wire—an independent news outlet based in India—reported that Amit Malviya, the social-media manager for India’s ruling BJP party, was able to remove images from Instagram without having to go through the normal moderation channels. As evidence, The Wire published an internal Instagram report that appeared to corroborate its reporting, with timestamps for when the images were removed, and a note that the usual moderation process wasn’t required because they were flagged by Malviya. When Meta, the parent company of both Instagram and Facebook, denied that this was possible, The Wire published a second story, including a screenshot of what it said was an email from Andy Stone, a spokesman for Meta. In the email, Stone seemed upset about the leak of the original report, and asked his staff to put the journalists who published The Wire’s initial story on a watchlist.

In a response to that story, Guy Rosen, chief information security officer at Meta, wrote that the email from Stone also appeared to have been fabricated. The Wire then published a third story, in which it described the technical method it used to verify the email, and included a video showing the process. The story also had screenshots of emails sent by two unnamed internet security experts, who said they had reviewed a copy of the Stone email and the process The Wire used to verify it, and they were convinced that it was genuine. Some reporters, however, noted that the emails from the experts were dated in 2021, not 2022. Devesh Kumar, the Wire reporter who handled the verification story, said this was a simple mistake due to a glitch in his operating system.

In an interview with Platformer, Casey Newton’s technology newsletter, Jahnavi Sen, deputy editor of The Wire, said someone from the site met with one of the original sources for the report about Instagram, and that this source verified their identity by providing a number of documents, including their work badge and pay slips. Kumar told Platformer that when The Wire approached its original source about the Instagram takedowns, the source sent a copy of the internal report within 20 minutes. When The Wire reached out to a different source, they said they didn’t know anything about the Instagram report, but “they had insight into the discussions happening internally.” Seven minutes later, the source responded with the email allegedly from Stone.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Meta, The Wire, and some fabricated emails”

Section 230, the platforms, and the Supreme Court

For the past several years, critics on both ends of the political spectrum have argued that Section 230 of the Communications Decency Act of 1996 gives social-media platforms such as Facebook, Twitter, and YouTube too much protection from legal liability for the content that appears on their networks. Right-wing critics argue that Section 230 allows social-media companies to censor conservative thinkers and groups without recourse, by removing their content (even though there is no evidence that this occurs), and liberal critics say the platforms use Section 230 as an excuse not to remove things they should be taking down, such as misinformation. Before the 2020 election, Joe Biden said he would abolish Section 230 if he became president, and he has made similar statements since he took office, saying the provision “should be revoked immediately.”

This week, the Supreme Court said it plans to hear two cases that seek to chip away at Section 230’s legal protections. One case claims that Google’s YouTube service violated the federal Anti-Terrorism Act by recommending videos featuring the ISIS terrorist group, and that these videos helped lead to the death of Nohemi Gonzalez, a 23-year-old US citizen who was killed in an ISIS attack in Paris in 2015. In the lawsuit, filed in 2016, Gonzalez’s family claims that while Section 230 protects YouTube from liability for hosting such content, it doesn’t protect the company from liability for promoting that content with its algorithms. The second case involves Twitter, which was also sued for violating the Anti-Terrorism Act; the family of Nawras Alassaf claimed ISIS-related content on Twitter contributed to his death in a terrorist attack in 2017.

The Supreme Court decided not to hear a similar case in 2020, which claimed that Facebook was responsible for attacks in Israel because the social network promoted posts about the terrorist group Hamas. In March, the court also refused to review a decision that found Facebook was not liable for helping a man traffic a woman for sex. While Justice Clarence Thomas agreed with the decision not to hear that case, he also wrote that the court should consider the issue of “the proper scope of immunity” under Section 230. “Assuming Congress does not step in to clarify Section 230’s scope, we should do so in an appropriate case,” Thomas wrote. “It is hard to see why the protection that Section 230 grants publishers against being held strictly liable for third parties’ content should protect Facebook from liability for its own ‘acts and omissions.’”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Section 230, the platforms, and the Supreme Court”

Elon Musk, and the desire to believe in tech saviors

On July 12, in a lawsuit in Delaware’s Chancery Court, Twitter accused Elon Musk of failing to complete his $44 billion acquisition of the company, an offer he initially made in April. Musk subsequently filed a countersuit, in which he alleged that Twitter was not telling the truth about some aspects of its business, including the number of fake and automated accounts on the service. Although the case won’t be heard until October 17, some evidence has been filed in court as a result of motions by Twitter or Musk. In one such motion, filed last week, Twitter’s legal team claimed Musk had not turned over all of his text messages related to the deal, as required by the court. In particular, Twitter’s lawyers said there are “substantial gaps… corresponding to critical time periods,” including the period in which Musk was allegedly reconsidering the purchase.

As part of its submission, Twitter entered several pages’ worth of text messages it had received from Musk, including some from technology investors who appeared desperate to get a piece of the Twitter deal. “You have my sword,” Jason Calacanis, an angel investor and entrepreneur, said in one text message, in what seemed to be a reference to The Lord of the Rings. Antonio Gracias, another investor and a former member of the Tesla board of directors, told Musk in a message that free speech is “a principle we need to defend with our lives or we are lost to the darkness.” Other texts to Musk included suggestions about what the sender believed were the best ways to fix what’s wrong with Twitter (Mathias Döpfner, CEO of Axel Springer, suggested that he himself should run the company). One unnamed texter, identified only as TJ, exhorted Musk to “buy Twitter and delete it” and “please do something to fight woke-ism.”

In a column for The Atlantic, Charlie Warzel argued that the texts with Musk “shatter the myth of the tech genius.” The unavoidable conclusion, he wrote, is just how “unimpressive, unimaginative, and sycophantic the powerful men in Musk’s contacts appear to be. Whoever said there are no bad ideas in brainstorming never had access to Elon Musk’s phone.” According to one former social-media executive who spoke with Warzel, “the dominant reaction from all the threads I’m in is Everyone looks fucking dumb.” Another common reaction, this executive said, is to ask: “Is this really how business is done? There’s no real strategic thought or analysis. It’s just emotional and done without any real care for consequence.” In one text, Larry Ellison, the CEO of Oracle, says he is in for “a billion … or whatever you recommend”; in another, Marc Andreessen, a top Silicon Valley venture investor, says $250 million is available “with no additional work required.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Elon Musk, and the desire to believe in tech saviors”

TikTok and Congress try to cut a deal

In June, BuzzFeed News published an investigative report based on leaked audio from more than 80 internal meetings at TikTok, the popular Chinese-owned video-sharing app. Emily Baker-White of BuzzFeed wrote that the recordings—along with fourteen statements from nine TikTok employees—showed that China-based employees of the company “repeatedly accessed nonpublic data about US users of the video-sharing app between September 2021 and January 2022.” As Baker-White pointed out, this directly contradicted a senior TikTok executive’s sworn testimony in an October 2021 Senate hearing, in which the executive said that a “world-renowned, US-based security team” decided who would have access to US customer data. The reality illustrated by BuzzFeed’s recordings, Baker-White wrote, was “exactly the type of behavior that inspired former president Donald Trump to threaten to ban the app in the United States.”

That proposed ban never materialized, although Trump did issue an executive order banning US corporations from doing business with ByteDance. Joe Biden revoked the order, but concerns about TikTok’s Chinese ownership remained. Biden asked the Commerce Department to launch national security reviews of apps with links to foreign adversaries, including China, and BuzzFeed’s reporting about TikTok’s access to US data fueled those concerns. According to the New York Times, Marco Rubio, the Republican senator from Florida, met with Jake Sullivan, Biden’s national security adviser, last year, and expressed concern about China’s impact on US industrial policy, including Beijing’s influence over TikTok. Sullivan reportedly said he shared those concerns.

On Monday, the Times reported that the Biden administration and TikTok had drafted a preliminary agreement to resolve national security concerns posed by the app. The two sides have “more or less hammered out the foundations of a deal in which TikTok would make changes to its data security and governance without requiring its owner, ByteDance, to sell it,” the Times wrote, while adding that the Biden administration and TikTok’s owners were “still wrangling over the potential agreement.” According to the Times, US Deputy Attorney General Lisa Monaco has concerns that the terms of the deal are not tough enough on China, and the Treasury Department is skeptical that the proposed agreement can sufficiently resolve national security issues. The Biden administration’s policy towards Beijing, the Times wrote, “is not substantially different from the posture of the Trump White House, reflecting a suspicion of China.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “TikTok and Congress try to cut a deal”

The social-media platforms and the Big Lie

In August, the major social-media platforms released statements about how they intended to handle misinformation in advance of the November 8 midterms, and for the most part Meta (the parent company of Facebook), Twitter, Google, and TikTok said it would be business as usual—in other words, that they weren’t planning to change much. As the midterms draw closer, however, a coalition of about 60 civil rights organizations says business as usual is not enough, arguing that the platforms have done far too little to stop continued misinformation about “the Big Lie”—the unfounded claim that the 2020 election was somehow fraudulent. Jessica González, co-chief executive of the advocacy group Free Press, which is helping to lead the Change the Terms coalition, told the Washington Post: “There’s a question of: Are we going to have a democracy? And yet, I don’t think they are taking that question seriously. We can’t keep playing the same games over and over again, because the stakes are really high.”

González and other members of the coalition say they have spent months trying to convince the major platforms to combat election-related disinformation, but their lobbying campaigns have had little or no impact. Naomi Nix reported for the Post last week that members of Change the Terms have sent multiple letters and emails, and raised their concerns in Zoom meetings with platform executives, but have seen little action as a result, apart from statements about how the companies plan to do their best to stop election misinformation. In April, the same 60 social-justice groups called on the platforms to “Fix the Feed” before the elections. Among their requests were that the companies change their algorithms in order to “stop promoting the most incendiary, hateful content”; that they “protect people equally,” regardless of what language they speak; and that they share details of their business models and moderation practices.

“The ‘big lie’ has become embedded in our political discourse, and it’s become a talking point for election-deniers to preemptively declare that the midterm elections are going to be stolen or filled with voter fraud,” Yosef Getachew, a media and democracy program director at the government watchdog Common Cause, told the Post in August. “What we’ve seen is that Facebook and Twitter aren’t really doing the best job, or any job, in terms of removing and combating disinformation that’s around the ‘big lie.’ ” According to an Associated Press report in August, Facebook “quietly curtailed” some of the internal safeguards designed to smother voting misinformation. “They’re not talking about it,” Katie Harbath, a former Facebook policy director who is now CEO of Anchor Change, a technology policy advisory firm, told the AP. “Best case scenario: They’re still doing a lot behind the scenes. Worst case scenario: They pull back, and we don’t know how that’s going to manifest itself for the midterms on the platforms.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “The social-media platforms and the Big Lie”

Florida, Texas, and the fight to control platform moderation

On May 23, the US Court of Appeals for the 11th Circuit struck down most of the provisions of a social-media law that the state of Florida enacted in 2021, which would have made it an offense for any social-media company to “deplatform” the account of “any political candidate or journalistic enterprise,” punishable by fines of up to $250,000 per day. In their 67-page decision, the 11th Circuit judges ruled that any moderation decisions made by social-media platforms such as Twitter and Facebook, including the banning of certain accounts, are effectively acts of speech, and therefore are protected by the First Amendment. Last week, however, the US Court of Appeals for the 5th Circuit came to almost the exact opposite conclusion, in a decision related to a social-media law that the state of Texas enacted last year. The law banned the major platforms from removing any content based on “the viewpoint of the user or another person [or] the viewpoint represented in the user’s expression or another person’s expression.”

In the 5th Circuit opinion, the court ruled that while the First Amendment guarantees every person’s right to free speech, it doesn’t guarantee corporations the right to “muzzle speech.” The Texas law, the judges said, “does not chill speech; if anything, it chills censorship. We reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” The court dismissed many of the arguments technology companies such as Twitter and Facebook made in defense of their right to moderate content, arguing that to allow such moderation would mean that “email providers, mobile phone companies, and banks could cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business.” The appeals court seemed to endorse a definition used in the Texas law, which states that the social-media platforms “function as common carriers,” in much the same way that telephone and cable operators do.

NetChoice and the Computer and Communications Industry Association—trade groups that represent Facebook, Twitter, and Google—argued that the social-media platforms should have the same right to edit content that newspapers have, but the 5th Circuit court rejected this idea. “The platforms are not newspapers,” Judge Andrew Oldham wrote in the majority opinion. “Their censorship is not speech.” Given the conflicting 11th Circuit and 5th Circuit decisions, Ashley Moody, the attorney general of Florida, on Wednesday asked the Supreme Court to decide whether states have the right to regulate how social-media companies moderate content. The answer will affect not just Florida and Texas, but dozens of other states—including Oklahoma, Indiana, Ohio, and West Virginia—that have either passed or are considering social-media laws that explicitly prevent the platforms from moderating content, laws with names such as the Internet Freedom Act and the Social Media Anti-Censorship Bill.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Florida, Texas, and the fight to control platform moderation”

Facebook and paying for news

On June 9, Keach Hagey and Alexandra Bruell—two Wall Street Journal reporters who cover the major digital platforms—reported that Facebook, a subsidiary of Meta Platforms, was “re-examining its commitment to paying for news,” according to several unnamed sources who were described as being familiar with Facebook’s plans. The potential loss of those payments, the Journal reporters wrote, was “prompting some news organizations to prepare for a potential revenue shortfall of tens of millions of dollars.” The Journal story echoed a report published in May by The Information, a subscription-only site that covers technology; in that piece, reporters Sylvia Varnham O’Regan and Jessica Toonkel said Meta was “considering reducing the money it gives news organizations as it reevaluates the partnerships it struck over the past few years,” and that this reevaluation was part of a rethinking of “the value of including news in its flagship Facebook app.”

Meta wouldn’t comment to either the Journal or The Information, and a spokesperson told CJR the company “doesn’t comment on speculation.” But the loss of payments from Meta could have a noticeable impact on some outlets. According to the Journal report, for the past two years—since the original payment deals were announced in 2019—Meta has paid the Washington Post more than $15 million per year, the New York Times over $20 million per year, and the Journal more than $10 million per year (the payments to the Journal are part of a broader deal with Dow Jones, the newspaper’s parent company, which is said to be worth more than $20 million per year). The deals, which are expected to expire this year, were part of a broader system of payments Meta made to a number of news outlets, including Bloomberg, ABC News, USA Today, Business Insider, and the right-wing news site Breitbart News. Smaller deals were typically for $3 million or less, the Journal said.

The payments were announced as part of the launch of the “News tab,” a dedicated section of the Facebook app where readers can find news from the outlets that partnered with Meta (higher payments were made to those with paywalls, according to a number of reports). The launch was a high-profile affair, including a one-on-one interview between Robert Thomson, CEO of News Corp.—parent company of Dow Jones and the Journal—and Mark Zuckerberg, the CEO of Meta. Emily Bell, director of the Tow Center for Digital Journalism at Columbia, wrote for CJR that the meeting was like “a Camp David for peace between the most truculent old media empire and one of its most noxious disruptors,” and wondered how much it had cost for News Corp. to forget about its long-standing opposition to Facebook’s media strategy. The event was “a publicity coup for Facebook; it tamed the biggest beast in the journalism jungle,” Bell wrote.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Facebook and paying for news”