The platforms and the challenges of the next election

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

With the midterm elections approaching in the US, the major social platforms have all released new statements about how they are planning to handle any misinformation and abuse that might arrive later this year. As Sarah Roach, Nat Rubio-Licht and Issie Lapowsky put it in Protocol’s “Source Code” newsletter on Wednesday: “It’s mid-August of an election year in America, which can only mean one thing: It’s time for every social media company to announce how it plans to combat whatever fresh hell November has in store.” Based on what Meta (the parent company of Facebook), Twitter, Google, and TikTok have said about their midterm plans so far, the order of the day appears to be “stay the course.” In other words, none of the platforms appear to be making any dramatic departures from the way they handled the last election and its aftermath. Depending on your perspective, that could be either a good thing or a bad thing.

Kurt Wagner and Alex Barinka wrote for Bloomberg that “after years of revising and updating its election strategy, Meta is pulling out a familiar playbook for the US midterms, sticking with many of the same tactics it used during the 2020 general election to handle political ads and fight misinformation. That largely means focusing on scrubbing misinformation about voting logistics and restricting any new political ads in the week prior to Election Day.” Nick Clegg, the head of global affairs at Meta and a former deputy prime minister of the United Kingdom, wrote on the company’s blog on Tuesday that its approach to the 2022 US midterms “is consistent with the policies and safeguards we had in place during the 2020 US presidential election,” and that Facebook has “hundreds of people” working to prevent misinformation and abuse. Clegg also said the company is sticking to its plan to review Donald Trump’s ban in January 2023, even if Trump declares his intention to run in the next election.

Not everyone is happy about Facebook’s decision to go forward with the same policies and practices it used in 2020, however. After Clegg posted the company’s plans on Twitter, NYU’s Center for Social Media and Politics responded that by most accounts, Facebook’s misinformation policy “worked fairly well—until they disbanded the election integrity unit and slowed enforcement after Election Day. Let’s hope they don’t make the same mistake again.” Kayla Gogarty, deputy research director at Media Matters for America, said that she is “always skeptical of Facebook’s ad restrictions. Following the 2020 election, it banned ads about social issues, elections, and politics, but let The Daily Wire earn millions of impressions on ads that seemingly fit that criteria.”

Facebook’s rules around political advertising are among the most controversial aspects of its policies related to the election. Few question the company’s decision to remove posts that mislead people about when or where to vote, or that call for election violence. Blocking political ads in the week prior to the election also seems fairly uncontroversial, although it has caused problems in the past, including when The Daily Wire was allowed to run ads despite the ban. But there are those who believe Facebook shouldn’t allow political advertising at all, and others who question why the company chooses not to fact-check political ads. Facebook says this policy is “grounded in Facebook’s fundamental belief in free expression,” but Yael Eisenstat, the former head of election integrity for Facebook, told NPR that the company opted not to fact-check because “they needed to preserve their power with the incumbent, and so they put that priority over what many people in the company believed would actually protect our democracy.”

Unlike Facebook, neither Twitter nor TikTok allows political advertising, although for different reasons. TikTok, the Chinese-owned video app that has become one of the most popular social tools in the world, says that it bans political ads because its users love “the app’s light-hearted and irreverent feeling,” and political advertising doesn’t fit that experience. In 2021, however, the Washington Post noted that partisan influencers were “flying under the radar on the social network, exposing a critical blindspot in the company’s rules.” A report from the Mozilla Foundation described more than a dozen examples of influencers on the platform with financial ties to political organizations who posted without disclosing that their messages were sponsored. TikTok has said it plans to crack down on undisclosed sponsorships of that kind and to do more fact-checking.

The New York Times recently reported that TikTok has a problem with election misinformation, both around the world and, increasingly, in the US as well. “In Germany, TikTok accounts impersonated prominent political figures during the country’s last national election,” the paper wrote. “In Colombia, misleading TikTok posts falsely attributed a quotation from one candidate to a cartoon villain [and] in the Philippines, TikTok videos amplified sugarcoated myths about the country’s former dictator. Now, similar problems have arrived in the United States.” The Times said TikTok is “shaping up to be a primary incubator of baseless and misleading information, in many ways as problematic as Facebook and Twitter,” because “the same qualities that allow TikTok to fuel viral dance fads… can also make inaccurate claims difficult to contain.”

Twitter, meanwhile, banned political advertising of any kind in 2019. The company says on its site that it prohibits the promotion of political content “based on our belief that political message reach should be earned, not bought.” The plan for the upcoming elections, Twitter wrote on its company blog last week, is to label misinformation, and then show users a prompt when they attempt to like or share those tweets. Unfortunately for Twitter, some research shows that its labels do very little to stop users from sharing the tweets in question, at least where Trump is concerned, and in some cases may even have helped the information spread faster than it would have otherwise. In cases where there is potential for harm associated with a false claim, however, the company says such tweets “may not be liked or shared to prevent the spread of the misleading information.”

Here’s more on the platforms:

Engineering Pt. 1: Google announced its own plan to handle election misinformation, which, not surprisingly, involves algorithms: “By using our latest AI model, Multitask Unified Model (MUM), our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact,” Pandu Nayak, vice president of search, wrote on the company’s blog. He went on to say that Google is also working on filling what researchers call “data voids,” which occur when there isn’t enough reliable information about a breaking news topic. The company plans to expand its use of content advisories in situations when a topic is rapidly evolving.
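For readers who want a concrete sense of what “consensus” means in this context, here is a minimal, purely illustrative sketch in Python. It is not Google’s implementation; the source names and the agreement threshold are invented for the example. It simply treats a claim as established only when several distinct trusted sources assert the same thing.

from collections import Counter

# Hypothetical inputs: (source, claim) pairs, already extracted and normalized.
claims = [
    ("site-a.example", "polls close at 8pm"),
    ("site-b.example", "polls close at 8pm"),
    ("site-c.example", "polls close at 8pm"),
    ("site-d.example", "polls close at 9pm"),
]

# Invented list of "high-quality" sources and an arbitrary agreement threshold.
TRUSTED = {"site-a.example", "site-b.example", "site-c.example", "site-d.example"}

def consensus_facts(claims, trusted=TRUSTED, threshold=3):
    """Return claims asserted by at least `threshold` distinct trusted sources."""
    support = Counter()
    seen = set()
    for source, claim in claims:
        if source in trusted and (source, claim) not in seen:
            seen.add((source, claim))
            support[claim] += 1
    return [claim for claim, count in support.items() if count >= threshold]

print(consensus_facts(claims))  # prints ['polls close at 8pm']

A real system would, of course, have to decide which sources count as high quality and how to tell when two differently worded claims describe the same fact, which is where a model like MUM comes in.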

Engineering Pt. 2: When asked if he thought Trump was more or less of a risk to public safety now than when his account was banned, Clegg told Politico: “Look, I work for an engineering company. We’re an engineering company. We’re not going to start providing a running commentary on the politics of the United States.” Of Trump’s ban, he said the company “will look at the situation as best as we can understand it” but that “getting Silicon Valley companies to provide a running commentary on political developments in the meantime is not really going to … help illuminate that decision when we need to make it.”

Everybody hurts: Niam Yaraghi, a fellow at the Brookings Institution, argued that Twitter’s ban on political ads “hurts our democracy.” It is difficult to “untangle electioneering activities from issue-based advocacy. Healthcare, education, business, entertainment, and religion are all intertwined with politics,” he wrote. The inherent difficulty in defining electoral advocacy and separating it from issue advocacy, “makes it almost impossible to implement such a ban effectively,” Yaraghi wrote, and even if social media companies could successfully define these terms, “the benefits of such a policy are unclear.”

Other notable stories:

Salma al-Shehab, a Saudi student at Leeds University in the UK who returned home for a holiday, has been sentenced to 34 years in prison for having a Twitter account and for following and retweeting dissidents and activists, the Guardian reported. The case “marks the latest example of how the crown prince Mohammed bin Salman has targeted Twitter users in his campaign of repression,” the newspaper wrote. Al-Shehab, 34, a mother of two young children, was initially sentenced to three years in prison for allegedly using Twitter to “cause public unrest and destabilise civil and national security,” but the court handed down the longer sentence because she also allegedly “assisted those who seek to cause public unrest and destabilise civil and national security.”

The Financial Times reported that young adults in the UK spend more time on TikTok than watching broadcast television, according to a new report from Ofcom, the British media regulator. “In its annual survey of consumption trends, the media regulator found that those aged 16 to 24 spent an average of 53 minutes a day viewing traditional broadcast TV, just a third of the level a decade ago,” the FT wrote. “By contrast, people over the age of 65 spent seven times as long in front of channels such as BBC One or ITV, viewing almost six hours’ worth of broadcast TV a day—a figure that has risen since 2011.”

Davey Alba and Jack Gillum write for Bloomberg that Google Maps routinely misleads people looking for abortion providers. “When users type the words ‘abortion clinic’ into the Maps search bar, crisis pregnancy centers account for about a quarter of the top 10 search results on average across all 50 US states, plus Washington, D.C.,” Alba and Gillum reported, based on data Bloomberg collected in July. “In 13 states, including Arkansas, South Carolina and Idaho where the procedure is newly limited, five or more of the top 10 results were for CPCs, not abortion clinics.”

Google has agreed to pay $60 million in penalties as a result of a battle with Australia’s competition regulator over allegations that Google misled users on how it collected their personal location data, The Guardian reported. “In April last year, the federal court found Google breached consumer laws by misleading some local users into thinking the company was not collecting personal data about their location via mobile devices with Android operating systems,” the paper reported.

Penn Entertainment, a casino operator, is acquiring the remaining shares of Barstool Sports that it doesn’t already own, giving it control of the sports-focused social media service, Bloomberg reported. “In a filing Wednesday, Penn said it exercised call rights and would complete the purchase of the remaining Barstool shares by February 2023,” the news service wrote. In 2020, Penn agreed to buy a 36 percent stake in Barstool for $161.2 million; under the terms of the latest deal, which is detailed in Penn’s second-quarter results, the company will buy the rest of Barstool for $387 million.

Nieman Reports writes about four independent digital journalism outlets it says are “the vanguard of next-generation Turkish journalism.” They include Kapsül, a newsletter that started in early 2020 and now has 54,000 subscribers; Medyascope, which was founded in 2015 by Ruşen Çakir, a journalist who worked for some of Turkey’s biggest outlets; Sözcü, which has one of the highest circulations in the country, according to Reuters; and Podfresh, which hosts more than 350 independent podcasts, accounting for almost 20 percent of the shows produced regularly in Turkey.

Music artists Swizz Beatz and Timbaland sued Triller, a social video-sharing app, alleging the social media platform owes them more than $28 million after acquiring Verzuz, a live-streaming music series started by Beatz and Timbaland, Taylor Lorenz reported for the Washington Post. “Triller acquired Verzuz, a webcast series pitting musical acts against one another, in January 2021 for an undisclosed sum,” Lorenz wrote. The lawsuit alleges that Triller began missing payments in January 2022. Triller has also failed to pay some Black creators it signed deals with, Lorenz reported earlier.

The Financial Times announced that it has made three new appointments to expand its financial coverage in the US. Jennifer Hughes, who is currently Asia Finance and Markets editor at Reuters’ Breakingviews in Hong Kong, becomes the FT‘s new US markets editor; Eric Platt, currently the FT‘s US markets editor based in New York, is the new senior corporate finance correspondent; and Tabby Kinder, currently the FT‘s Asia financial correspondent, based in Hong Kong, has been appointed West Coast financial editor.
