He won $30 million playing the lottery, and then he lost everything

One June morning in 2017, an Albanian American real-estate broker named Viktor Gjonaj parked outside a strip mall in Sterling Heights, a suburb on the outskirts of Detroit. He hurried into the claim office of the Michigan Lottery. Gjonaj, who is 6 foot 5, loomed over the front desk and announced that he had won the Daily 4 lottery draw, worth $5,000. But Gjonaj did not have one winning ticket. He had 500. Skeptical lottery officials checked his tickets carefully. Each was genuine and contained the four winning numbers, but it was extremely unusual for someone to play the same numbers 500 times in one day. There were other red flags. Most people who present themselves at lottery claim centers are ecstatic, yet this winner waited for his prizes with the impatience of someone picking up dry cleaning.

The man who wants to make a do-it-yourself euthanasia machine

In a workshop in Rotterdam in the Netherlands, Philip Nitschke—“Dr. Death” or “the Elon Musk of assisted suicide” to some—is overseeing the last few rounds of testing on his new Sarco machine before shipping it to Switzerland, where he says its first user is waiting. This is the third prototype that Nitschke’s nonprofit, Exit International, has 3D-printed and wired up. Number one has been exhibited in Germany and Poland. “Number two was a disaster,” he says. Now he’s ironed out the manufacturing errors and is ready to launch: “This is the one that will be used.” A coffin-size pod with Star Trek stylings, the Sarco is the culmination of Nitschke’s 25-year campaign to “demedicalize death” through technology. Sealed inside the machine, a person who has chosen to die must answer three questions: Who are you? Where are you? And do you know what will happen when you press that button? Here’s what will happen: The Sarco will fill with nitrogen gas. Its occupant will pass out in less than a minute and die by asphyxiation in around five.

Note: This is a version of my personal newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.

Continue reading “He won $30 million playing the lottery, and then he lost everything”

Meta, The Wire, and some fabricated emails

Last week, The Wire—an independent news outlet based in India—reported that Amit Malviya, the social-media manager for India’s ruling party, the BJP, was able to remove images from Instagram without having to go through the normal moderation channels. As evidence, The Wire published an internal Instagram report that appeared to corroborate its reporting, with timestamps for when the images were removed, and a note that the usual moderation process wasn’t required because they were flagged by Malviya. When Meta, the parent company of both Instagram and Facebook, denied that this was possible, The Wire published a second story, including a screenshot of what it said was an email from Andy Stone, a spokesman for Meta. In the email, Stone seemed upset about the leak of the original report, and asked his staff to put the journalists who published The Wire’s initial story on a watchlist.

In a response to that story, Guy Rosen, chief information security officer at Meta, wrote that the email from Stone also appeared to have been fabricated. The Wire then published a third story, in which it described the technical method it used to verify the email, and included a video showing the process. The story also had screenshots of emails sent by two unnamed internet security experts, who said they had reviewed a copy of the Stone email and the process The Wire used to verify it, and they were convinced that it was genuine. Some reporters, however, noted that the emails from the experts were dated in 2021, not 2022. Devesh Kumar, the Wire reporter who handled the verification story, said this was a simple mistake due to a glitch in his operating system.

In an interview with Platformer, Casey Newton’s technology newsletter, Jahnavi Sen, deputy editor of The Wire, said someone from the site met with one of the original sources for the report about Instagram, and that this source verified their identity by providing a number of documents, including their work badge and pay slips. Kumar told Platformer that when The Wire approached its original source about the Instagram takedowns, the source sent a copy of the internal report within 20 minutes. When The Wire reached out to a different source, they said they didn’t know anything about the Instagram report, but “they had insight into the discussions happening internally. Seven minutes later, the source responded with the email allegedly from Stone.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Meta, The Wire, and some fabricated emails”

Section 230, the platforms, and the Supreme Court

For the past several years, critics on both sides of the political spectrum have argued that Section 230 of the Communications Decency Act of 1996 gives social-media platforms such as Facebook, Twitter, and YouTube too much protection from legal liability for the content that appears on their networks. Right-wing critics argue that Section 230 allows social-media companies to censor conservative thinkers and groups without recourse, by removing their content (even though there is no evidence that this occurs), and liberal critics say the platforms use Section 230 as an excuse not to remove things they should be taking down, such as misinformation. Before the 2020 election, Joe Biden said he would abolish Section 230 if he became president, and he has made similar statements since he took office, saying the clause “should be revoked immediately.”

This week, the Supreme Court said it plans to hear two cases that are looking to chip away at Section 230 legal protections. One case claims that Google’s YouTube service violated the federal Anti-Terrorism Act by recommending videos featuring the ISIS terrorist group, and that these videos helped lead to the death of Nohemi Gonzalez, a 23-year-old US citizen who was killed in an ISIS attack in Paris in 2015. In the lawsuit, filed in 2016, Gonzalez’s family claims that while Section 230 protects YouTube from liability for hosting such content, it doesn’t protect the company from liability for promoting that content with its algorithms. The second case involves Twitter, which was also sued for violating the Anti-Terrorism Act; the family of Nawras Alassaf claimed ISIS-related content on Twitter contributed to his death in a terrorist attack in 2017.

The Supreme Court decided not to hear a similar case in 2020, which claimed that Facebook was responsible for attacks in Israel, because the social network promoted posts about the terrorist group Hamas. In March, the court also refused to review a decision which found Facebook was not liable for helping a man traffic a woman for sex. While Justice Clarence Thomas agreed with the decision not to hear that case, he also wrote that the court should consider the issue of “the proper scope of immunity” under Section 230. “Assuming Congress does not step in to clarify Section 230’s scope, we should do so in an appropriate case,” Thomas wrote. “It is hard to see why the protection that Section 230 grants publishers against being held strictly liable for third parties’ content should protect Facebook from liability for its own ‘acts and omissions.’”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Section 230, the platforms, and the Supreme Court”

Elon Musk, and the desire to believe in tech saviors

On July 12, in a lawsuit in Delaware’s Chancery Court, Twitter accused Elon Musk of failing to complete his $44 billion acquisition of the company, an offer he initially made in April. Musk subsequently filed a countersuit, in which he alleged that Twitter was not telling the truth about some aspects of its business, including the number of fake and automated accounts on the service. Although the case won’t be heard until October 17, some evidence has been filed in court as a result of motions by Twitter or Musk. In one such motion, filed last week, Twitter’s legal team claimed Musk had not turned over all of his text messages related to the deal, as required by the court. In particular, Twitter’s lawyers said there are “substantial gaps… corresponding to critical time periods,” including the period in which Musk was allegedly reconsidering the purchase.

As part of its submission, Twitter entered several pages’ worth of text messages it had received from Musk, including some from technology investors who appeared to be desperate to get a piece of the Twitter deal. “You have my sword,” Jason Calacanis, an angel investor and entrepreneur, said in one text message, in what seemed to be a reference to The Lord of the Rings. Antonio Gracias, another investor and a former member of the Tesla board of directors, told Musk in a message that free speech is “a principle we need to defend with our lives or we are lost to the darkness.” Other texts to Musk included suggestions about what the sender believed were the best ways to fix what’s wrong with Twitter (Mathias Döpfner, CEO of Axel Springer, argued that it would be best if he himself ran the company). One unnamed texter, identified only as TJ, exhorted Musk to “buy Twitter and delete it” and “please do something to fight woke-ism.”

In a column for The Atlantic, Charlie Warzel argued that the texts with Musk “shatter the myth of the tech genius.” The unavoidable conclusion, he says, is just how “unimpressive, unimaginative, and sycophantic the powerful men in Musk’s contacts appear to be. Whoever said there are no bad ideas in brainstorming never had access to Elon Musk’s phone.” According to one former social-media executive who spoke with Warzel, “the dominant reaction from all the threads I’m in is Everyone looks fucking dumb.” Another common reaction, this executive said, is to ask: “Is this really how business is done? There’s no real strategic thought or analysis. It’s just emotional and done without any real care for consequence.” In one text, Larry Ellison, the co-founder and chairman of Oracle, says he is in for “a billion … or whatever you recommend”; in another, Marc Andreessen, a top Silicon Valley venture investor, says $250 million is available “with no additional work required.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Elon Musk, and the desire to believe in tech saviors”

TikTok and Congress try to cut a deal

In June, BuzzFeed News published an investigative report based on leaked audio from more than 80 internal meetings at TikTok, the popular Chinese-owned video-sharing app. Emily Baker-White of BuzzFeed wrote that the recordings—along with fourteen statements from nine TikTok employees—showed that China-based employees of the company “repeatedly accessed nonpublic data about US users of the video-sharing app between September 2021 and January 2022.” As Baker-White pointed out, this directly contradicted a senior TikTok executive’s sworn testimony in an October 2021 Senate hearing, in which the executive said that a “world-renowned, US-based security team” decided who would have access to US customer data. The reality illustrated by BuzzFeed’s recordings, Baker-White wrote, was “exactly the type of behavior that inspired former president Donald Trump to threaten to ban the app in the United States.”

That proposed ban never materialized, although Trump did issue an executive order banning US corporations from doing business with ByteDance. Joe Biden revoked the order, but concerns about TikTok’s Chinese ownership remained. Biden asked the Commerce Department to launch national security reviews of apps with links to foreign adversaries, including China, and BuzzFeed’s reporting about TikTok’s access to US data fueled those concerns. According to the Times, Marco Rubio, the Republican senator from Florida, met with Jake Sullivan, Biden’s national security adviser, last year, and expressed concern about China’s impact on US industrial policy, including Beijing’s influence over TikTok. Sullivan reportedly said he shared those concerns.

On Monday, the Times reported that the Biden administration and TikTok had drafted a preliminary agreement to resolve national security concerns posed by the app. The two sides have “more or less hammered out the foundations of a deal in which TikTok would make changes to its data security and governance without requiring its owner, ByteDance, to sell it,” the Times wrote, while adding that the administration and ByteDance were “still wrangling over the potential agreement.” According to the Times, US Deputy Attorney General Lisa Monaco has concerns that the terms of the deal are not tough enough on China, and the Treasury Department is skeptical that the proposed agreement can sufficiently resolve national security issues. The Biden administration’s policy towards Beijing, the Times wrote, “is not substantially different from the posture of the Trump White House, reflecting a suspicion of China.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “TikTok and Congress try to cut a deal”

The social-media platforms and the Big Lie

In August, the major social-media platforms released statements about how they intended to handle misinformation in advance of the November 8 midterms, and for the most part Meta (the parent company of Facebook), Twitter, Google, and TikTok said it would be business as usual—in other words, that they weren’t planning to change much. As the midterms draw closer, however, a coalition of about 60 civil rights organizations says that business as usual is not enough, and that the social platforms have not done nearly enough to stop continued misinformation about “the Big Lie”—that is, the unfounded claim that the 2020 election was somehow fraudulent. Jessica González, co-chief executive of the advocacy group Free Press, which is helping to lead the Change the Terms coalition, told the Washington Post: “There’s a question of: Are we going to have a democracy? And yet, I don’t think they are taking that question seriously. We can’t keep playing the same games over and over again, because the stakes are really high.”

González and other members of the coalition say they have spent months trying to convince the major platforms to do something to combat election-related disinformation, but their lobbying campaigns have had little or no impact. Naomi Nix reported for the Post last week that members of Change the Terms have sent multiple letters and emails, and raised their concerns through Zoom meetings with platform executives, but have seen little action as a result, apart from statements about how the companies plan to do their best to stop election misinformation. In April, the same 60 social-justice groups called on the platforms to “Fix the Feed” before the elections. Among their requests were that the companies change their algorithms in order to “stop promoting the most incendiary, hateful content”; that they “protect people equally,” regardless of what language they speak; and that they share details of their business models and moderation practices.

“The ‘big lie’ has become embedded in our political discourse, and it’s become a talking point for election-deniers to preemptively declare that the midterm elections are going to be stolen or filled with voter fraud,” Yosef Getachew, a media and democracy program director at the government watchdog Common Cause, told the Post in August. “What we’ve seen is that Facebook and Twitter aren’t really doing the best job, or any job, in terms of removing and combating disinformation that’s around the ‘big lie.’ ” According to an Associated Press report in August, Facebook “quietly curtailed” some of the internal safeguards designed to smother voting misinformation. “They’re not talking about it,” Katie Harbath, a former Facebook policy director who is now CEO of Anchor Change, a technology policy advisory firm, told the AP. “Best case scenario: They’re still doing a lot behind the scenes. Worst case scenario: They pull back, and we don’t know how that’s going to manifest itself for the midterms on the platforms.”

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “The social-media platforms and the Big Lie”

Florida, Texas, and the fight to control platform moderation

On May 23, the US Court of Appeals for the 11th Circuit struck down most of the provisions of a social-media law that the state of Florida enacted in 2021, which would have made it an offense for any social-media company to “deplatform” the account of “any political candidate or journalistic enterprise,” punishable by fines of up to $250,000 per day. In their 67-page decision, the 11th Circuit judges ruled that any moderation decisions made by social-media platforms such as Twitter and Facebook, including the banning of certain accounts, are effectively acts of speech, and therefore are protected by the First Amendment. Last week, however, the US Court of Appeals for the 5th Circuit came to almost the exact opposite conclusion, in a decision related to a social-media law that the state of Texas enacted last year. The law banned the major platforms from removing any content based on “the viewpoint of the user or another person [or] the viewpoint represented in the user’s expression or another person’s expression.”

In the 5th Circuit opinion, the court ruled that while the First Amendment guarantees every person’s right to free speech, it doesn’t guarantee corporations the right to “muzzle speech.” The Texas law, the judges said, “does not chill speech; if anything, it chills censorship. We reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” The court dismissed many of the arguments that technology companies such as Twitter and Facebook made in defense of their right to moderate content, arguing that to allow such moderation would mean that “email providers, mobile phone companies, and banks could cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business.” The appeals court seemed to endorse a definition used in the Texas law, which states that the social-media platforms “function as common carriers,” in much the same way that telephone and cable operators do.

NetChoice and the Computer and Communications Industry Association—trade groups that represent Facebook, Twitter, and Google—argued that the social-media platforms should have the same right to edit content that newspapers have, but the 5th Circuit court rejected this idea. “The platforms are not newspapers,” Judge Andrew Oldham wrote in the majority opinion. “Their censorship is not speech.” Given the conflicting 11th Circuit and 5th Circuit decisions, Ashley Moody, the attorney general for Florida, on Wednesday asked the Supreme Court to decide whether states have the right to regulate how social-media companies moderate content. The answer will affect not just Florida and Texas, but dozens of other states—including Oklahoma, Indiana, Ohio, and West Virginia—that have either passed or are considering social-media laws that explicitly prevent the platforms from moderating content, laws with names such as the Internet Freedom Act and the Social Media Anti-Censorship Bill.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Florida, Texas, and the fight to control platform moderation”

Facebook and paying for news

On June 9, Keach Hagey and Alexandra Bruell—two Wall Street Journal reporters who cover the major digital platforms—reported that Facebook, a subsidiary of Meta Platforms, was “re-examining its commitment to paying for news,” according to several unnamed sources who were described as being familiar with Facebook’s plans. The potential loss of those payments, the Journal reporters wrote, was “prompting some news organizations to prepare for a potential revenue shortfall of tens of millions of dollars.” The Journal story echoed a report published in May by The Information, a subscription-only site that covers technology; in that piece, reporters Sylvia Varnham O’Regan and Jessica Toonkel said Meta was “considering reducing the money it gives news organizations as it reevaluates the partnerships it struck over the past few years,” and that this reevaluation was part of a rethinking of “the value of including news in its flagship Facebook app.”

Meta wouldn’t comment to either the Journal or The Information, and a spokesperson told CJR the company “doesn’t comment on speculation.” But the loss of payments from Meta could have a noticeable impact on some outlets. According to the Journal report, for the past two years—since the original payment deals were announced in 2019—Meta has paid the Washington Post more than $15 million per year, the New York Times over $20 million per year, and the Journal more than $10 million per year (the payments to the Journal are part of a broader deal with Dow Jones, the newspaper’s parent, which is said to be worth more than $20 million per year). The deals, which are expected to expire this year, were part of a broader system of payments Meta made to a number of news outlets, including Bloomberg, ABC News, USA Today, Business Insider, and the right-wing news site Breitbart News. Smaller deals were typically for $3 million or less, the Journal said.

The payments were announced as part of the launch of the “News tab,” a dedicated section of the Facebook app where readers can find news from the outlets that partnered with Meta (higher payments were made to those with paywalls, according to a number of reports). The launch was a high-profile affair, including a one-on-one interview between Robert Thomson, CEO of News Corp.—parent company of Dow Jones and the Journal—and Mark Zuckerberg, the CEO of Meta. Emily Bell, director of the Tow Center for Digital Journalism at Columbia, wrote for CJR that the meeting was like “a Camp David for peace between the most truculent old media empire and one of its most noxious disruptors,” and wondered how much it had cost for News Corp. to forget about its long-standing opposition to Facebook’s media strategy. The event was “a publicity coup for Facebook; it tamed the biggest beast in the journalism jungle,” Bell wrote.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Facebook and paying for news”

Of Substack, apps, and strategy

Substack, a hosting and publishing platform for email newsletters, took what seemed like an innocuous step last week: it launched a standalone smartphone app. Not surprising, perhaps, since almost every content startup has an app. Substack’s app, however, is somewhat different, since the company is a middleman that stands in between writers and their audiences, rather than a startup offering a service directly to consumers. Those differences have led to questions about Substack’s long-term strategy, and whether that strategy is good or bad for the writers who use it. Some of the concern stems from the fact that Substack has raised over $80 million in venture financing from a range of VC groups, including Andreessen Horowitz, a leading Silicon Valley venture powerhouse. The funding has given Substack a theoretical market value of $650 million, but that level of investment can put pressure on companies to meet aggressive growth targets.

Substack’s founders, for their part, argue that the app is simply an extension of the company’s original mission. Hamish McKenzie, Chris Best, and Jairaj Sethi wrote in a blog post on the Substack site that their intention in starting the company was to “build an alternative media ecosystem based on different laws of physics, where writers are rewarded with direct payments from readers, and where readers have total control over what they read.” The app, they argue, builds on those ideas, in that it is designed for “deep relationships, an alternative to the mindless scrolling and cheap dopamine hits that lie behind other home screen icons.” Among other things, they say the app will amplify the network effects that already exist on Substack, “making it easier for writers to get new subscribers, and for readers to explore and sample Substacks they might otherwise not have found.”

Casey Newton, a technology writer who publishes a newsletter called Platformer (which is hosted on Substack), writes that the app is a symbol of “the moment in the life of a young tech company when its ambitions grow from niche service provider to a giant global platform.” Newton writes that it is possible that the Substack app could help writers build growing businesses by advertising their publications to likely readers (the company says that a person who has a credit card on file with Substack is 2.5 times more likely to subscribe to a new publication than someone who doesn’t). But it is equally possible, he says, that the app “makes publications feel like cheap, interchangeable widgets: an endless pile of things to subscribe to, overwhelming readers with sheer volume.” In other words, an app that serves Substack’s interests rather than those of its newsletter authors.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “Of Substack, apps, and strategy”

As Ukraine war continues, Russia becomes increasingly isolated

Since the invasion of Ukraine began two weeks ago, Russia has found itself cut off from the rest of the world not only economically but also in a number of other important ways. In some cases, Russia is the one that has been severing those ties, as it did recently when it banned Facebook, because the company refused to stop fact-checking Russian state media outlets such as Russia Today and Sputnik (so far, Russian citizens are still allowed to use WhatsApp and Instagram). Twitter has also reportedly been partially blocked in the country, while other companies have voluntarily withdrawn their services. YouTube has banned RT and Sputnik, as has the EU. TikTok said on Sunday that while it is still available in Russia, it will no longer allow users to livestream or upload video from that country, due in part to a flood of disinformation, and to the arrival of a new “fake news” law in Russia that carries stiff penalties.

Traditional media companies have also withdrawn their services, and in some cases their journalists, from the country since the invasion, in part because of the fake news law. Bloomberg News and the BBC were among the first to stop producing journalism from within Russia last week. John Micklethwait, editor in chief of Bloomberg, wrote in a note to staff that the Russian law seemed designed to “turn any independent reporter into a criminal purely by association” and as a result made it “impossible to continue any semblance of normal journalism inside the country.” The New York Times said Tuesday that it had decided to pull its journalists out of Russia, in part because of the uncertainty created by the new law, which makes it a punishable offense to refer to the invasion of Ukraine in a news story as a “war.”

It’s not just individual social networks or journalism outlets; several network connectivity providers have also withdrawn their services from Russia. They’re the giant telecom firms that supply the “backbone” connections between countries and the broader internet, and removing them means Russia is increasingly isolated from any information on the war that doesn’t come from inside the country or from Russian state media. Lumen, formerly known as CenturyLink, pulled the plug on Russia on Wednesday, withdrawing service from customers such as national internet provider Rostelecom, as well as a number of leading Russian mobile operators. Competitor Cogent Communications did the same with its broadband network last week.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Continue reading “As Ukraine war continues, Russia becomes increasingly isolated”