Facebook and Twitter continue to profit from Chinese propaganda

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Last week, both Facebook and Twitter removed a number of accounts and pages they said were part of a propaganda effort tied to the Chinese government, aimed at spreading disinformation about the ongoing protests in Hong Kong. According to Facebook, Twitter was the first to detect the campaign, and it then alerted its social-networking counterpart about “inauthentic behavior” on its platform. Facebook removed seven pages, three groups and five accounts that it said engaged in a number of deceptive tactics, including posing as news organizations. Twitter, meanwhile, said that it had removed more than 900 accounts originating from China, which it said were attempting to sow political discord, “including undermining the legitimacy and political positions of the protest movement” (although the owner of at least one account included in that total denied being part of any coordinated Chinese propaganda campaign).

In a related move, Twitter said that it would no longer accept advertising and promoted tweets from state-owned media entities such as Xinhua or China Daily. Facebook, however, did not say anything of the sort, although a spokesman said that the company continues to “look at our policies as they relate to state-owned media.” As it stands now, despite the action it took against the Chinese government-funded accounts engaging in inauthentic behavior, Facebook seems to have no problem continuing to promote ads bought by the country’s state media. Xinhua placed four ads on Monday, according to BuzzFeed, saying the police have been “very restrained” in handling the riots and calling the police “heroes” for standing up to the protesters. Other state outlets, meanwhile, have been running ads promoting the benefits of the detention and re-education camps China has set up for Uighur Muslims.

Despite its ban on state-owned media, Twitter has also apparently continued to run ads and promoted tweets from China’s state-run media outlets, according to BuzzFeed reporter Ryan Mac. In its announcement about the ban, the company said that advertisers would have to remove their existing campaigns after 30 days, and that it would not accept any new ones, but the ones BuzzFeed found appeared to be brand new campaigns. Some of them involve harmless positive statements about Chinese culture, but others promote anti-US sentiment, and one says that Hong Kong “used to be a paradise” and is now “engulfed in chaos.” When asked about the campaigns, a Twitter spokesman would not comment other than to point CJR to the part of the company’s previous statement where it said it would take 30 days to remove state-funded ads.

Continue reading “Facebook and Twitter continue to profit from Chinese propaganda”

Facebook goes back to the future by hiring journalists for news tab

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Journalists who cover Facebook get used to feeling a sense of deja vu, since the social networking behemoth often tends to revisit things it has tried to do once—or even multiple times—in the past. The company says that’s because it is committed to “iterating” (as tech founders like to call it), which means trying the same thing over and over until it comes out right. The idea of employing journalists to curate the news definitely falls into that category. Facebook has said it is planning to roll out a new standalone tab for news, for which it is cutting lucrative deals with a number of leading publishers like The New York Times and Washington Post. And it is also hiring a handful of professional editors to curate the top headlines. But will the social network manage to make this unlikely marriage of humans and algorithms work any better than it did the last time?

Facebook’s previous attempt to curate the news turned into what could only be described as a fiasco. The company hired human editors to help select headlines for its “trending topics” feature, an algorithm-driven list that began in 2014 as an attempt to compete with Twitter as a breaking news platform. All seemed to be going well, until Gizmodo ran a story in 2016 that quoted some of the company’s hired editors admitting that they often deliberately excluded some conservative websites from the trending topics lineup. The truth of the matter turned out to be much more nuanced than the headline portrayed it (as even the editor of the piece later admitted), but the damage was done. Conservatives soon howled that Facebook was biased against them, and the company scrambled to apologize and make amends. The human editors were fired, and eventually the feature was shut down completely.

This was arguably the genesis of the long-standing conspiracy theory that Facebook is biased against conservatives, something that has been raised time and time again by pundits—not to mention the White House and Congress—despite the fact that there is absolutely no evidence to support it (and in fact significant evidence to the contrary). The idea of a separate news tab has also been tried before, although in a slightly different way. In 2018, Facebook ran an experiment in six countries where it removed news from the News Feed completely, and put it all in a separate tab called Explore. This also failed miserably, as several Facebook executives admitted, and eventually the experiment was scrapped. “People don’t want two separate feeds,” said Chris Cox, who at the time was CEO Mark Zuckerberg’s second-in-command. One big problem with the tab: virtually no one ever went there, which (needless to say) left news publishers concerned about the impact on their traffic.

Continue reading “Facebook goes back to the future by hiring journalists for news tab”

Could WordPress + Tumblr create an alternative to Facebook?

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

When Verizon announced earlier this week that it was selling Tumblr, the blogging platform Yahoo acquired in 2013 for $1.1 billion, most of the attention focused on the price: according to Axios, the communications conglomerate sold Tumblr for just $3 million (Vox says closer to $2 million). In other words, Yahoo vaporized about 99 percent of the platform’s theoretical value in the six years it owned the company. But apart from this massive bonfire of value, one of the most interesting things about the Tumblr sale was the acquirer: Automattic, the parent company of WordPress. If Tumblr was the Coney Island freak show of the blogosphere, WordPress is the more dependable cousin—the one with a steady job. Could the combination of the two bring back the glory days of independent blogging? Some are clearly hoping that it will, and if anyone has a chance of pulling it off, it’s probably WordPress.

More than 35 percent of the world’s 1 million most popular websites run on the company’s publishing software (about ten times the number that use its nearest competitor). That list includes many leading publishers such as The New Yorker, TechCrunch, the BBC and Variety magazine. But the software behind all of these sites isn’t the product of some massive corporation like Microsoft: founder Matt Mullenweg cobbled it together in 2003, when he was just 19 years old. Even more surprising, the core of WordPress is still open source, meaning anyone can help develop it, and any user can download, install and run it for free. Automattic helps manage the free version, but also sells a for-pay version and related services to large publishers. The company is valued at over $1 billion.

In an interview with The Verge on Tuesday, Mullenweg, who is now CEO of Automattic, makes it clear the purchase of Tumblr wasn’t just an attempt to cash in on a Verizon fire sale. Part of his motivation, he suggests, was to try to bring back some of the magic of the old days of blogging, when the web seemed to be mostly made up of individuals writing on their own websites instead of just posting to a Facebook news feed. And Mullenweg clearly sees the open-source, do-it-yourself ethos of Tumblr and WordPress as an alternative to the centralized control of a social-networking behemoth like Facebook. “I would love for Tumblr to become a social alternative,” he says. “It has the fun and friendliness of some of the other networks we use, but without that democracy destroying…” The sentence is left unfinished, but it’s obvious who he’s talking about.

Continue reading “Could WordPress + Tumblr create an alternative to Facebook?”

Casey Newton on dismantling the platforms and taking Facebook’s cash

Note: This is something I originally wrote for the New Gatekeepers blog at the Columbia Journalism Review, where I’m the chief digital writer

Most technology journalists were naive in the early days of the social web, Verge senior editor Casey Newton admitted in a recent interview with CJR, in the sense that most of the coverage of Facebook, Twitter, and YouTube focused on their benefits rather than the potential for harassment, abuse, and disinformation. “Yeah, I think we were naive,” Newton said in an interview on CJR’s discussion platform, Galley. “There had never been social networks with billions of users before, and it was difficult to predict the consequences that would come with global scale. The ability for anyone to beam a message instantly to hundreds of millions of people was new in human history, and for a while it wasn’t clear how that power would be used.”

For the most part, said Newton—a former senior writer at CNET and reporter for the San Francisco Chronicle—journalists in Silicon Valley covered the social platforms either as success stories or as business stories, writing about IPOs and valuations. Some reporters and academics focused on the darker aspects of these networks, Newton said, but for most “that narrative was secondary to the question of whether these businesses would survive and thrive.” That all changed with the election in 2016, he said, when it became obvious how easily social platforms could be exploited by foreign states to spread propaganda. “We saw how weak the platform defenses were,” he said. “What had looked like fun distractions turned out to be far more consequential. And we’ve been catching up to those consequences ever since.”

I asked Newton whether he thought Facebook co-founder and CEO Mark Zuckerberg or Twitter co-founder and CEO Jack Dorsey should be held personally responsible for not foreseeing some of the issues that have been caused by their platforms. Zuckerberg admitted in an interview last year that for the first 10 years of the company’s life, all he thought about were the positive aspects of connecting billions of people in real time. And as a followup question, I asked Newton whether he thought the government should be regulating and/or breaking up Facebook, Google, and other mega-platforms.

Continue reading “Casey Newton on dismantling the platforms and taking Facebook’s cash”

The myth of social platform anti-conservative bias refuses to die

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Despite an almost total lack of evidence to support the theory, alt-right groups and mainstream conservatives alike—including the ones that currently occupy the White House—continue to promote the idea that Facebook, Twitter and Google are somehow biased against them. It’s a conspiracy theory that has cropped up in a variety of ways since at least 2016, and has led to some almost farcical situations, including a Congressional hearing in which the right-wing YouTube hosts known as Diamond and Silk argued that the platforms were censoring them, despite the fact that they had a large and growing following. In a similar way, Donald Trump has repeatedly made the case that Twitter is somehow throttling his reach on the service, despite the fact that the president has more than 60 million followers.

In the latest move in this long and tiresome parade of grievances, sources tell Politico that the White House is circulating drafts of a proposed executive order that would address allegations of anti-conservative bias by social media companies. This comes just a month after Trump pledged to explore “all regulatory and legislative solutions” to the issue. Those comments were made when the president announced a Social Media Summit, which was supposed to look at the topic of anti-conservative bias. But the event turned into a sideshow featuring a rogues’ gallery of alt-right names, including Diamond and Silk, a meme-maker known as Carpe Donktum, and a reporter from Infowars. None of the social platforms were invited.

Politico’s sources didn’t have any real details about what the proposed executive order might say, or what penalties it might invoke for alleged anti-conservative bias, which suggests that it could be a lot of smoke and mirrors. An unnamed White House official was quoted as saying that “if the internet is going to be presented as this egalitarian platform and most of Twitter is liberal cesspools of venom, then at least the president wants some fairness in the system.” This phrasing calls to mind the Fairness Doctrine, an old FCC requirement that forced broadcast networks to air opposing viewpoints on important political topics. That rule was eventually seen as being in conflict with the First Amendment, and it’s likely that any executive order compelling the social platforms to say or not say certain things would face a similar roadblock from freedom-of-speech advocates.

Continue reading “The myth of social platform anti-conservative bias refuses to die”

What responsibility do hosting companies have for sites like 8chan?

Note: This is something I originally wrote for the New Gatekeepers blog at the Columbia Journalism Review, where I’m the chief digital writer

Over the August 4th weekend, another mass shooting took place in which the shooter posted material related to his attack — including written “manifestos,” as well as images and, in some cases, live-streamed video — to the controversial online community 8chan. The gunman in the latest case, who killed 20 people in a Walmart in El Paso, Texas, posted his alleged justification for the rampage on 8chan’s message boards, and so did the killer in the Christchurch mosque shootings in New Zealand in March, and the shooter who opened fire on a mosque near San Diego, Calif. in April. Commenters on the 8chan threads for these acts referred to each of the shooters as “our guy,” and in some cases talked about the killings as a “high score,” the way someone playing a video game would.

Until late Sunday night, 8chan used the services of a company called Cloudflare, which runs a network of powerful internet “proxy” servers that can balance the traffic going to such sites when there is a sudden onslaught of visitors — either because a piece of content has become popular, or because malicious users are directing a “denial of service” attack at the site by hitting it with an automated deluge of traffic. When 8chan’s role in the latest mass shooting came to light, reporters asked Cloudflare whether the company planned to continue providing these services to the site, and Cloudflare said yes, arguing that it isn’t up to the company to decide what kinds of content are appropriate. But by late Sunday, Cloudflare CEO Matthew Prince had changed his mind, and said 8chan would be blocked from using the service.

This isn’t the first time this issue has come up for Cloudflare. In 2017, the company went through a similar debate before cutting off neo-Nazi website The Daily Stormer, which routinely promotes racism and white supremacist ideology. Prince finally decided to block the site from Cloudflare’s service, but wrote a long and thoughtful blog post about how he didn’t think his company and others like it — those that provide hosting services and other utilities — should have the power to effectively remove certain websites from the public internet. “Due Process requires that decisions be public and not arbitrary,” Prince said. “Law enforcement, legislators, and courts have the political legitimacy and predictability to make decisions on what content should be restricted. Companies should not.” Prince said something very similar in a blog post about 8chan, as well as in interviews, as did legal experts such as Kate Klonick of Yale Law School, an expert in censorship and online misinformation.

A provider like Cloudflare can’t block a site from the internet completely, but removing its services means 8chan could be crippled fairly easily by a denial-of-service attack or some other exploit. In effect, it makes the site much less stable, which in turn makes it less likely to have as much reach. And Cloudflare isn’t the only one that has taken action: Google removed 8chan from its search index in 2015, which means that anyone searching for it gets links to Wikipedia entries and news stories about it rather than a link to the site itself. Of course, the content often leaks out even when the sites themselves are taken down: the conservative news site The Drudge Report, for example, posted a version of the El Paso killer’s manifesto even though most other sites refused to even link to it. And Gizmodo notes that while Cloudflare may have removed 8chan, the proxy service and other hosting services continue to support a wide range of other objectionable and hate-filled sites.

As was the case with The Daily Stormer, the removal of service by companies like Cloudflare usually results in a scramble to come up with alternative hosting and DoS protection. Much like the neo-Nazi site, 8chan fairly quickly signed up with a Cloudflare-like provider called Bitmitigate — which is a subsidiary of Epik, a company whose founder bragged about helping to host The Daily Stormer after it was taken offline. But even an internet utility has to rely on other utilities for its livelihood, which in turn makes its content and services vulnerable. In the case of Bitmitigate, a company called Voxility owns the internet infrastructure that allows the caching or proxy service to function, and after its role was pointed out on Twitter (by Alex Stamos, former director of security at Facebook, among others) the company said it had removed Bitmitigate from its service.

In some ways, the responsibility that social networks like Facebook and YouTube have for offensive content is more obvious than it is for a service provider like Cloudflare. Facebook and Twitter and Google not only help to distribute such content, but their content-promoting algorithms make sure plenty of people see it, which is an editorial function like the one newspapers used to fulfill. Cloudflare and similar hosting services are more like the power company, which operates the grid that keeps the lights on, or the phone network that connects users and allows them to call each other. Should the power company be deciding which companies or homes to supply electricity to? Should the phone company be cutting off users who choose to talk about offensive subjects using their network?

None of these analogies are totally accurate, but they help show why providers like Cloudflare have a difficult time removing services even from obvious online cesspools like 8chan, and why questions are often raised when payment processors like PayPal or Visa make it impossible to donate to certain entities (as they did with WikiLeaks). Do we want a utility provider to be making those kinds of decisions? And if not, then who does? And based on what criteria? These are the kinds of questions that 8chan — and the role it has played in mass shootings — have forced us to begin to grapple with.

Facebook’s 3rd party fact-checking program falls short

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

In December of 2016, in the wake of a firestorm of criticism about online disinformation and Facebook’s role in spreading it during the 2016 election, the social network reached out to a number of independent fact-checking organizations and created the Facebook Third Party Fact-Checking project. When these outside agencies debunked a news story or report, Facebook promised to make this ruling obvious to users, and to down-rank the story or post in its all-powerful News Feed algorithm so fewer people would see it. But even though the project has grown to the point where there are now 50 partner organizations fact-checking around the world, it’s still very much an open question how useful or effective the program actually is at stopping the spread of misinformation.

One of those raising questions is a relatively new Facebook fact-checking partner in the UK, known as Full Fact, a non-profit entity that recently published an in-depth report on the first six months of its involvement in the program. The group says its overall conclusion is that the third-party fact-checking project is worthwhile, but it has a number of criticisms to make about the way the program works. For example, Full Fact says the way Facebook rates misinformation needs to change, because the terminology and categories it applies aren’t granular enough to encompass the many different kinds of misinformation. It also says that while the company has expanded to fact-check in 42 different languages, Facebook has so far failed to scale up the speed with which it flags and responds to fact checks. According to the group, it fact-checked just 96 claims in six months (and was paid $171,800 under the terms of its partnership contract).

One of the group’s other concerns is more fundamental: namely, that Facebook simply doesn’t provide enough transparency or clarity on the impact of the fact-checking that groups like Full Fact do. How many users did the debunks or fact-checks reach? How many clicked on the related links from the info pane? Did this slow or even halt the spread of that misinformation? Facebook doesn’t divulge enough data to even begin to answer those questions. Its only response to the Full Fact report and its 11 recommendations was to tell the group that it is “encouraged that many of the recommendations in the report are being actively pursued by our teams as part of continued dialogue with our partners, and we know there’s always room to improve.” There was no response to the criticism about a lack of data.

Continue reading “Facebook’s 3rd party fact-checking program falls short”