Crypto and the rise of “speculative communities”

Max Read, a former editor-in-chief of Gawker, writes a newsletter called Read Max, and in a recent edition he discusses a book he read and reviewed, called “Speculative Communities: Living With Uncertainty in a Financialized World,” by a London-based sociologist named Aris Komporozos-Athanasiou. It’s about more than just crypto, but it explains a lot about the rise of that kind of niche community:

Where prediction once reigned, speculation now dominates. You can see this at the most literal level in the rise of gig-platform apps like Uber, where the once-simple acts of hailing or driving a cab become adventures in speculation—wagers on whether the price of a ride will rise or fall in the next five minutes—but you can also see it on a discursive level in social media, where users stake out speculative positions (called “takes”) on volatile reputational marketplaces. You even see it, Komporozos-Athanasiou argues, in the success of “populist” politicians and initiatives from the Greek bailout referendum to Brexit to Trump, votes for which can be understood as speculative wagers on “possible, yet uncertain, outcomes.”

[H]omo economicus is an isolated individual, while homo speculans, in Komporozos-Athanasiou’s formulation, is a member of a “speculative community.” The delegitimation of neoliberal reason not only increases volatility, it also undermines the previous regime’s insistence on atomized individuals and family units. “Struggles of speculation and insurance,” then, “are experienced more intensely but also more collectively.” Here Speculative Communities draws on Benedict Anderson’s famous study of the origins of nationalism, Imagined Communities, which argued that the collapse of anciens régimes around the world and the rise of print-media capitalism in the wake of the industrial revolution created new uncertainties around which the “imagined communities” of nationalism could coalesce.

Hamlet is actually about doomscrolling on Twitter

This is the only-slightly-tongue-in-cheek thesis of Allegra Rosenberg, writing in Ryan Broderick’s excellent “Garbage Day” newsletter. In a section of the newsletter, Allegra talks about reading and trying to memorize Hamlet’s soliloquy, how it reminds her of both Twitter and Derrida, and a recent edition of Charlie Warzel’s Galaxy Brain newsletter in which he discusses theories of contemporary society posed by L.M. Sacasas:

In Hamlet’s keen analysis from inside his own cloud of hesitation, it is the fear of the unknown which prevents him or anyone from taking the craved-for plunge into the sweet release of death. It makes us rather bear the ills we have / than fly to those we know not of. He understood how stuckness self-perpetuates. The equally frustrating presentness perpetuates, too, in Sacasas’ contemporary formulation: when everything is commentary, what else is there to comment on, but prior commentary?

He says: “We’re not building toward new ideas; we’re relating things that just happened to other things that happened before that” — and thus the native hue of resolution / Is sicklied o’er with the pale cast of thought. The internet is a forest of inscriptions, so dense that we are far too caught up in infinite fractal brambles of things said and done to actually make any real choices, and/or to understand our situation insofar as we can affect it. 

This got me thinking about Jacques Derrida (I know…). In Archive Fever, a later work dealing with the looming digital age, he speaks about how the titular fever — what he identifies as a death or destruction drive — allows in itself for the ongoing existence of the archive: “There would indeed be no archive desire without the radical finitude, without the possibility of a forgetfulness which does not limit itself to repression.” Basically the only reason we’re stuck in the “doom loop” of forever talking about the past, as Warzel puts it, is because the internet contains both the constant production of the past as well as an intense feeling of ephemerality.

Lumière brothers short from 1895 upscaled to 4K

The 1895 short film “Arrival of a Train at La Ciotat” is probably one of the most famous — and earliest — film clips in history. It was directed and produced by Auguste and Louis Lumière, brothers who were among the first to create such films. The 50-second silent movie consists of a single continuous, real-time shot of a train, pulled by a steam locomotive, entering the Gare de La Ciotat, the station of a coastal town in southern France near Marseille. There were rumors that when it was first shown, the audience screamed and some fled from their seats, convinced a real train was coming at them, but many film historians doubt that this actually happened.

In 2020, YouTuber Denis Shiryaev wanted to update the look of the clip, so — with the help of several neural networks — he upscaled it to 4K resolution and 60 frames per second.
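For anyone curious what “upscaling with neural networks” involves in practice, here is a minimal sketch of the general idea, using OpenCV’s deep super-resolution module with a pre-trained EDSR model. This is not Shiryaev’s actual pipeline (he used several networks, including ones for frame interpolation to reach 60 FPS), and the model file and input file names below are assumptions for illustration.

```python
# Minimal sketch of neural-network upscaling of video frames, using OpenCV's
# dnn_superres module (from opencv-contrib-python) and a pre-trained EDSR model.
# Illustrative only: not Shiryaev's pipeline. "EDSR_x4.pb" and the input clip
# are assumed file names, and frame interpolation to 60 FPS is a separate step
# that is not shown here.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # assumed path to a pre-trained 4x EDSR model
sr.setModel("edsr", 4)       # upscale each frame by a factor of 4

cap = cv2.VideoCapture("arrival_of_a_train.mp4")   # assumed input clip
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    upscaled = sr.upsample(frame)   # run the super-resolution network
    if writer is None:
        h, w = upscaled.shape[:2]
        writer = cv2.VideoWriter("arrival_upscaled.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 cap.get(cv2.CAP_PROP_FPS), (w, h))
    writer.write(upscaled)

cap.release()
if writer is not None:
    writer.release()
```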

Have the dangers of social media been overstated?

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

On April 11, The Atlantic published an essay by Jonathan Haidt entitled “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” In the piece, Haidt—a social psychologist at the New York University Stern School of Business, and the co-author of a book called The Coddling of the American Mind—argued that social media platforms such as Facebook, Twitter, and YouTube have constructed a modern-day Tower of Babel. The societal chaos that these kinds of services have unleashed, Haidt wrote, has “dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.” The arguments made in the piece were similar to those in an earlier Atlantic essay by Haidt and Tobias Rose-Stockwell, about the “dark psychology” of social networks, and how they have created a world in which “networks of partisans co-create worldviews that can become more and more extreme, disinformation campaigns flourish [and] violent ideologies lure recruits.”

Haidt’s essay was the latest in a long series of research papers and articles on the ills of social media, and specifically the idea that services such as Facebook and Twitter have fractured, polarized, and enraged Americans, because of the way recommendation and targeting algorithms work. Concepts such as the “filter bubble,” the “echo chamber,” and the idea that social networks can “radicalize” otherwise normal users—turning them into right-wing conspiracy theorists—have become commonplace. “Something went terribly wrong, very suddenly,” when services such as Facebook and Twitter became widespread, Haidt argued in his most recent essay. “We are disoriented, unable to speak the same language or recognize the same truth. We are cut off from one another and from the past. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.”

As repetitive as some of Haidt’s arguments about social media were, his Atlantic essay did introduce something relatively new to the field. After some criticism of his conclusions, and of some of the research he relied on for his piece, Haidt and Chris Bail—a professor of sociology and public policy at Duke University, where he directs the Polarization Lab—created a collaborative Google document they titled Social Media and Political Dysfunction: A Collaborative Review. The idea behind it, the two men explained in a preface, was to collect research that might help to shed light on the question: “Is social media a major contributor to the rise of political dysfunction seen in the USA and some other democracies?” (This is the third such collaborative document Haidt has created to track research on related topics; he created two with Jean Twenge, a professor of psychology at San Diego State University—one that collects research related to adolescent mood disorders, and one that does the same for social media and mental health.)

Continue reading “Have the dangers of social media been overstated?”

I worship at the temple of everything

From Heather Havrilesky’s excellent newsletter:

“I’m not stooping to lick mud puddles anymore. I worship at the temple of everything now, updrafts of wet oak tree and bruised lip and salty oyster shell, hints of sheer rock cliff and band director and broken typewriter and my dad’s sad stories about the Great Flood, the one that swept everything away, the one that took everything, the kitchen table and the chicken coop and the tattered books, the crocheted blankets and the boxes of love letters, the pickled cabbage, the black rosary beads, the love worn chair, the long exhale of smoke across the garden at twilight, the years of waiting, of saying too little, of backing away slowly, of disappearing for good, everything.”

The Estemere Mansion in Palmer Lake, Colorado

This fascinating estate was right across the street from our Airbnb in the tiny mountain town of Palmer Lake, Colorado. According to what I’ve been able to find out online, it’s called the Estemere Mansion, and it was built in 1887 by a dentist from Baltimore named William Finley Thompson. He was hoping that the Palmer Lake area would become a tourist destination, and that people would be looking for luxurious estates, so he built a 7,000-square-foot mansion with 18 rooms — including eleven bedrooms — and six fireplaces, on an estate that includes a large garden, a carriage house and a chapel. He eventually went bankrupt and moved back to the Baltimore area.

In 1898, Eben Smith, who made his fortune in gold milling, bought the estate for $5,000. It was primarily a residence, though during the 1920s and ’30s it was also the site of a number of schools, including one for underprivileged students and one for children with special needs. Roger and Kim Ward bought the property in 1998, and it underwent significant renovations, including the addition of structural support, new carpeting and flooring, a new gas furnace, and an entrance into the tower. The Wards put the estate up for sale in 2020 for $2.5 million, after Kim started suffering from dementia, but I haven’t been able to find out whether it sold, and if so, to whom.

The courts, the platforms, and regulating speech

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

On Tuesday, the Supreme Court issued an order blocking a Texas law that would prevent large social platforms such as Facebook, Twitter, and YouTube from removing content, except in extreme cases (content that involves the sexual exploitation of children, criminal activity, or threats of violence). The order was brief, because it was triggered by an emergency application from two organizations opposed to the law. NetChoice, a coalition of online service companies, and the Computer & Communications Industry Association—a group whose members include Google, Facebook, and Twitter—asked the Supreme Court for an emergency decision because they argued that the law is “an unprecedented assault on the editorial discretion of private websites” and also a breach of the platforms’ First Amendment rights.

Even as it issued the order, however, the Supreme Court noted that the case is still before an appeals court in Texas, and that the issues at the center of the case are so critical that they will likely need to be considered at length by the Supreme Court at some point. “This application concerns issues of great importance that will plainly merit this Court’s review,” the decision states. In a dissenting opinion issued as part of the Supreme Court’s decision, Justice Alito said social media platforms have “transformed the way people communicate with each other,” but that “it is not at all obvious how our existing precedents, which predate the age of the internet, should apply.” To some, this seemed to open the door to a challenge to the platforms’ First Amendment rights.

Last week, meanwhile, an appeals court in Florida blocked most of the provisions in a similar state law that would have prevented the platforms from removing accounts belonging to politicians. In its decision, the court stated that “it is substantially likely that social-media companies—even the biggest ones—are private actors whose rights the First Amendment protects [and] that their so-called content-moderation decisions constitute protected exercises of editorial judgment.” Specifically, the court said that prohibiting companies from removing content was not allowed, but provisions in the law that require the platforms to provide clear standards for content and allow users to access their data likely don’t violate the First Amendment and can be implemented.

Continue reading “The courts, the platforms, and regulating speech”

Facebook, data sharing, and broken promises

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

Meta, the parent company of Facebook, said on Monday that it plans to share more data about political ad targeting on its platform with social scientists and other researchers, as part of what the company calls its Open Research and Transparency project. According to CNN, Meta will provide “detailed targeting information for social issue, electoral or political ads” on the platform to “vetted academic researchers.” Jeff King, Meta’s vice president of business integrity, said in a statement that the information could include the different categories of interest that were used to target users, such as environmentalism or travel. Starting in July, the New York Times reported, the company’s publicly available advertising library will include a summary of this targeting information, including a user’s location. King said that by sharing the data, Meta hoped “to help people better understand the practices used to reach potential voters on our technologies.”

Monday’s announcement, including King’s reassurance, gave the impression that Meta wants to be as transparent as possible about its ad targeting and other data-related practices. Researchers who have dealt with the platform in the past tell a different story, however, including Nathaniel Persily, a law professor at Stanford who co-founded and co-chaired Social Science One, a highly touted data-sharing partnership with Facebook that Persily said he resigned from in frustration. Persily and others say they have spent years trying to get Meta to provide even the smallest amount of useful information for research purposes, but even when the company does so, the data is either incomplete—Meta admitted last year that it supplied researchers with faulty data, omitting about 40 percent of its user base—or the restrictions placed on how it can be used are too onerous. In either case, researchers say the resulting research is almost useless.

In some cases, Meta has shut down potentially promising research because the process didn’t comply with its rules. Last August, the company blocked an NYU research effort called the Ad Observatory, part of the Cybersecurity for Democracy Project, because it said the group was using a browser extension to “scrape” information from Facebook without the consent of users. The company not only blocked the research group from getting any data, but also shut down the researchers’ personal accounts. Laura Edelson, a post-doctoral researcher at New York University who worked on the project, and Damon McCoy, an associate professor of computer science and engineering at NYU, wrote in Scientific American that Facebook wants people to see it as transparent, “but in reality, it has set up nearly insurmountable roadblocks for researchers seeking shareable, independent sources of data.” (Edelson also talked with CJR last September about the shutdown of her research and the implications for social science.)

Continue reading “Facebook, data sharing, and broken promises”

Elon Musk, Twitter, and the spam bot problem

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

Elon Musk’s bid to acquire Twitter for $44 billion is only a month old, but it has already had more high-speed twists and turns than any Coney Island rollercoaster. After Musk filed notice of his offer with the Securities and Exchange Commission on April 13, Twitter’s board of directors implemented a “poison pill” defense, which would have flooded the market with cheap stock if Musk went ahead with his bid. Only a few days later, Twitter accepted his offer, in part because it was well above the stock’s recent trading price. This triggered a wave of speculation about what Musk planned to do with the service; among other things, he said that he would make the service’s recommendation algorithm public, and confirmed last week that he would reverse the permanent ban on Donald Trump’s Twitter account.

Then late last week (on Friday the 13th, no less) came a series of tweets in which Musk declared that his offer for Twitter was “on hold” until he could verify the company’s recent statement that spam bots and other fake accounts make up less than five percent of Twitter’s total user base. At a technology conference in Miami on Monday, Musk expanded on this concern, saying he believed that the true number of spam or fake accounts could be 20 percent of Twitter’s total user base or higher, although he didn’t provide any evidence to support his estimate. Musk also said a deal for Twitter at a lower price “wouldn’t be out of the question.” (Twitter’s share price is currently in the $36 range, more than 30 percent below where it was after Musk filed his offer.) The company responded that it plans to “enforce the merger agreement.”

Some observers believe Musk’s concern about the percentage of fake accounts is a ruse to either back out of the takeover deal or at least negotiate a lower price. Matt Levine, an opinion columnist for Bloomberg, wrote recently that he doesn’t believe Musk really cares about spam bots. “I think it is important to be clear here that Musk is lying,” he said. Musk “has produced no evidence at all that Twitter’s estimates are wrong, and certainly not that they are materially wrong or made in bad faith,” Levine wrote. He added that the only way Musk could get out of the deal would be to prove that such a mistake would have a “material adverse effect” on the business, which he called “vanishingly unlikely” (although Musk did question whether advertisers are getting what they paid for, which he said was “fundamental to the financial health of Twitter”).

Continue reading “Elon Musk, Twitter, and the spam bot problem”

A kayak trip up Grindstone Creek

I often bring my kayak with me when we go to different places, because there’s almost always a lake or river or creek worth paddling around, and it’s a great way to see different aspects of the places we visit. Last year when we came in the spring, I paddled around a huge wetland called Cootes’ Paradise and saw a ton of turtles and hawks and other wildlife. So this past weekend, when we went to our daughter and son-in-law’s place in Ancaster, Ontario — which is just outside Hamilton — I looked for a different place nearby where I could take the kayak and see some wildlife and natural scenery.

Hamilton has historically been a pretty industrial city, with a number of giant steel mills that belch smoke as you drive by. But the city has tried to make things a little nicer in various ways, and one of those is Bayfront Park, a lovely park right by the bay (obviously). So I checked out a few websites, and one talked about paddling from Bayfront across the bay to Grindstone Creek, which winds its way past the Botanical Gardens and through a wetland area.

Continue reading “A kayak trip up Grindstone Creek”

Elon Musk, Donald Trump, and the future of Twitter

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

On April 14, Elon Musk filed a notice with the Securities and Exchange Commission saying he intended to acquire Twitter for $43 billion, and since then, average Twitter users and media analysts alike have speculated about his motivation for the acquisition and his plans for the company. For the most part, Musk has talked in general terms about his desire to own Twitter, describing it as being like a town square and expressing concern about how it handles free speech. He has also said that he will be happy if the far right and the far left are equally upset by the way he runs the company, but some note that he has responded more favorably to conservative and even right-wing commentators like Mike Cernovich, who helped promote the Pizzagate conspiracy theory. In a recent response to Cernovich, Musk said Twitter “has a strong left-wing bias” (although social-media researchers say this is not accurate).

On Tuesday, Musk provided one of the first concrete examples of what he plans to do if he acquires the company, and—whether by design or by accident—it seemed to cater to conservative users. When Musk first indicated he was interested in buying Twitter, right-wing commentators were excited by the possibility he might reverse the company’s ban on Donald Trump, whose account was permanently banned following the January 6 attack on the Capitol because his tweets promoted violence. At a Financial Times conference on Tuesday, Musk said he plans to restore Trump’s account if he acquires Twitter. He called the ban “a mistake because it alienated a large part of the country and did not ultimately result in Donald Trump not having a voice,” the New York Times reported. Musk added that the ban was “morally wrong and flat-out stupid” and that “permanent bans just fundamentally undermine trust in Twitter.”

Jack Dorsey, a co-founder and former CEO of Twitter, appears to agree with Musk, saying on Tuesday that permanent suspensions of individual users “are a failure” of the company and “don’t work” (Trump, for his part, has said that he won’t rejoin Twitter even if his account is reinstated). Dorsey, who was running the company when Trump was banned, said last year that the decision, while difficult, was ultimately the right one, but on Tuesday he said that “it was a business decision [and] I still believe that permanent bans of individuals are directionally wrong.” Musk and Dorsey aren’t the only ones who feel this way: Gilad Edelman, writing in Wired, argued that they both have a point. “It’s probably not a good idea for important platforms to be in the business of frequently banning users for life,” he said, especially one like Twitter, which Edelman says “occupies a unique place in American political life.”

Continue reading “Elon Musk, Donald Trump, and the future of Twitter”