When you play a game, it’s handy to have a score, so you know how you did compared to all the other players. But what if the score is one that Facebook assigns you based on your estimated “trustworthiness,” and the criteria behind the score are kept secret from you? That appears to be the case, according to a report from The Washington Post on Tuesday. A Facebook product manager in charge of fighting misinformation (there’s a job title for the ages) told the paper that the social network has developed the ranking system over the past year as it has tried to deal with “fake news” on the platform. Every user is given a score between zero and one, which indicates whether they are considered to be trustworthy when it comes to either posting content or flagging already posted content as fake.
It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” product manager Tessa Lyons told the Post. The trustworthiness score is designed in part to guard against this kind of gaming of the process. Facebook also took pains to point out that there is no single “reputation score” given to users, and that the trustworthiness ranking is not an absolute indicator of a person’s credibility. It is just one measurement among thousands of behavioral clues, Lyons said, which are used to determine whether a post is legitimate and/or whether a post was flagged improperly.
The speed with which Facebook tried to reassure users that they don’t have a single reputation score isn’t surprising, given all the attention on what is happening with social activity in China. There, the government is assigning all Chinese citizens a “social credit score” based on their behavior both online and offline, including what they share via networks like WeChat (which is a little like Facebook, Instagram, Snapchat, and PayPal combined into a single app). This social credit score can then be used to determine who gets access to certain services, including schools. No one is suggesting that anything quite so dystopian is going on at Facebook, but still the idea of being assigned a secret trustworthiness score by a network that controls the information diet of more than two billion people feels a tad uncomfortable.
After initially refusing to take action against notorious conspiracy monger Alex Jones—even after virtually every major platform had removed him for publishing hate speech—Twitter seemed to reluctantly admit that it had a problem, and put Jones in “Twitter jail” by suspending his account for seven days. Now, in a wide-ranging interview with the Washington Post, part-time CEO Jack Dorsey has promised to look closely at ways to solve Twitter’s troll problem, up to and including changing some of the “incentives” that are built into the social network and the way it rewards users for certain kinds of behavior.
Dorsey’s promises might seem like a magnanimous gesture, a kind of “we will stop at nothing” declaration of purpose. But for anyone who has paid attention to Twitter for more than a nanosecond, there are a couple of significant issues with what the Twitter CEO said. For example, the Post notes that “Dorsey said Twitter hasn’t changed its incentives, which were originally designed to nudge people to interact and keep them engaged, in the 12 years since Twitter was founded.”
In other words, only now, more than a decade after Twitter was founded, is Dorsey finally willing to take a hard look at some of the potential negative effects of the technology he and his company created, years after those problems were first brought to their attention. What took so long? The most obvious answer, as with Facebook’s deliberate attempt to ignore similar problems, is that avoiding or side-stepping those negative aspects was far more lucrative than trying to solve them—harassment and flame wars and misinformation are also known as engagement.
But that’s only part of the problem with Dorsey’s mea culpa. The second statement in the interview that should set off warning bells is when the Post says the Twitter CEO is thinking about “redesigning key elements of the social network, including the like button and the way Twitter displays users’ follower counts,” because they no longer reflect what the social network wants people to do.
All that’s necessary, Dorsey appears to be saying, is a few tweaks to Twitter’s design interface—maybe highlight some things in a different typeface, make a button a little larger or give it a different name—and boom! Problem solved. It’s like seeing racism and homophobia and other forms of harassment as byproducts of a poorly designed user interface, or some kind of bug in the software, and believing that if we could just do enough A/B testing, we could solve it once and for all.
The reality is that Twitter’s problems (and Facebook’s, and Instagram’s, and even Pinterest’s) are not flaws in programming, or reactions to design incentives, they are the result of deep-seated social, cultural, and psychological issues, some of which have been around for hundreds—if not thousands—of years. The walls of the ancient ruins in Pompeii are covered in political graffiti that could have been taken directly from Twitter (“All the deadbeats vote for Thucydides”).
The idea that a tweak in how the network prioritizes retweets or labels favorites is going to alter that kind of behavior is absurd, especially coming from Twitter, which seems to have spent a majority of its time as a company almost completely in the dark about how or why people use it.
It may even be the case, as John Herrman argues in The New York Times, that Twitter is simply too large to function in the way it wants to, as a kind of town square where ideas compete with one another and everyone’s speech has exactly the same weight as everyone else’s. It’s possible that human beings aren’t designed to work properly in a “community” that consists of 350 million people. But regardless of whether the problem is unsolvable or not, the idea that Twitter can do so by turning a few software dials is nonsense on stilts.
If you’re a journalist, chances are you’ve either read or been forwarded links to a story in The Australian, a Sydney-based newspaper, containing some explosive commentary from Facebook CEO Mark Zuckerberg behind its paywall (the Daily Beast has a summary of it). According to the story, Campbell Brown—Facebook’s head of news partnerships—said in a meeting with the paper’s senior executives that “Mark doesn’t care about publishers,” and also warned that if media companies didn’t work with the giant social network on business model solutions, “in a few years, I’ll be holding your hand with your dying business, like in a hospice.”
These comments were held up by some as conclusive proof that Facebook hates journalists and can’t wait for the industry to die. After all, the sentiment seemed to fit right in with some of the social network’s recent moves, which have reduced traffic to media outlets by significant amounts. Some even believe the company is trying to deliberately distance itself from media because it is such a political and social minefield, and that all of this represents a retreat from having to deal with journalism altogether. But would Facebook really come out and say it doesn’t care if the media dies?
Sources at Facebook, not surprisingly perhaps, say Brown’s comments were taken out of context and in some cases appear to have been manufactured wholesale. “These quotes are simply not accurate and don’t reflect the discussion we had in the meeting,” Brown said in a prepared statement. The company says they don’t reflect its actual thinking either about journalists or the media industry as a whole. No one has used the term “fake news,” but it’s obvious people within Facebook are thinking it. The social network says it has a recording of the meeting that proves its case, but so far the company hasn’t released it.
As usual when Facebook is involved, there are a number of layers to this latest dust-up. One is that Facebook probably is trying to distance itself from the media—whenever it gets involved, it raises issues like the company’s role in misinformation, censorship, and other unpleasantness. Also, as Josh Benton has pointed out at Nieman Lab, it appears that Facebook really is a lot less interested in driving traffic to publishers, based on the available evidence from publishers like Quartz.
On top of that, Campbell Brown’s bald statement that “we are not interested in talking about your referrals any more” has the ring of truth, given some of what she said at a Recode conference earlier this year, when she told publishers they could basically take it or leave it. “If anyone feels this isn’t the right platform for them, they should not be on Facebook,” she said at the time. Some journalists even appear to support her latest comments as a no-holds-barred assessment of where things stand.
Whether Facebook is making the changes it has (de-emphasizing traffic to media outlets, etc.) because it literally hates the media and wants it to die is anybody’s guess, of course, but the fact remains they are happening. So the comments from Brown might have seemed like a veiled threat, but they could also have been just a statement of fact: If Facebook won’t provide the revenue or the traffic necessary for some outlets to survive, publishers might start going on life support. Journalists wish this weren’t true, but are afraid that it might be.
Infowars conspiracy theorist Alex Jones has been blocked, banned or removed from a host of platforms, including Facebook, Spotify, YouTube, Apple, and even MailChimp. One major social service has so far refused to join the anti-Jones bandwagon, however: Twitter CEO Jack Dorsey took to his own service on Tuesday to reiterate that he has no plans to ban Jones or his ilk. Why has the company chosen this path when everyone else seems convinced banning him is the right thing to do?
In part, it could be about boosting engagement and revenue—the same answer that many give when asked why Twitter allows the troll-in-chief, Donald Trump, to remain on the network. But the answer also likely has a lot to do with the company’s history as a social platform, and its vision of itself as a bastion for free speech.
You can see this in Dorsey’s responses to the Infowars controversy. His first message is simple: Twitter hasn’t banned or suspended Jones or Infowars because they haven’t violated Twitter’s rules of behavior. In a followup message, he suggests that having Jones on the service is the best approach, because that allows journalists to “document, validate and refute such information directly so people can form their own opinions.” This approach is what “serves the public conversation best,” says Dorsey.
Many journalists responded antagonistically to this, since it implied journalists should be cleaning up the platform instead of the company itself. “You know, Jack, our days are pretty full as it is without cleaning up your website for you pro bono,” said the Portland Press Herald. On a deeper level, however, Dorsey’s message fits with his view of what Twitter is—an information network populated in part by journalists, who perform a kind of crowdsourced fact-checking service, and thereby create a marketplace of ideas where controversial views are encouraged and free speech reigns.
This is markedly different from what Facebook has been trying to do since it first appeared on the world stage. Although CEO Mark Zuckerberg likes to talk about free speech, Facebook’s purpose has always been much more about community, about building connections between family members and friends. Free speech has always taken a back seat to those goals, and to the goal of building a multibillion-dollar revenue generating machine—in fact, Facebook has shown time and time again that it is more than happy to take down or block content for a variety of reasons, including government pressure.
Twitter, by contrast, has always seen itself as “the free-speech wing of the free-speech party,” as former Twitter executive Tony Wang put it in 2012. From the beginning, the company’s focus has been protecting the right of users to say whatever they wanted, even if it was problematic—as it did in 2013 when it fought a French demand to censor homophobic and anti-Semitic comments. The company has also fought numerous attempts by various governments to block or censor content, although it does censor certain kinds of posts where it is required to do so by law (including pro-Nazi sentiment in Germany).
This helps explain why Twitter has tried to define what is and isn’t acceptable so narrowly, saying tweets have to contain explicit statements of violence towards specific individuals before they contravene the rules. In a sense, the company is trapped in the utopian vision of the future it had when it started: That giving people the tools to share information in real time would create a kind of intellectual meritocracy where the best information would win. To some, that now seems like a hopelessly naive way to look at the internet, given overwhelming evidence that networks like Twitter and Facebook have enabled hate speech and harassment and even contributed to violence on a scale never before possible.
Free speech and censorship are hot topics in North America, with heated debates over issues such as Facebook’s decision to delete pages belonging to conspiracy theorist Alex Jones, Twitter’s refusal to ban neo-Nazis, and whether Google should remove controversial or offensive YouTube videos. But none of these topics stir much interest in China, according to a recent piece by Li Yuan, a technology writer for The New York Times—mostly because an entire generation has never heard of Facebook, Twitter or Google, and censorship isn’t something they seem to care much about.
For anyone with an interest in the open, uncensored web, Li’s portrayal of how millennials and their ilk in China experience the Internet is likely to be profoundly depressing. She mentions an 18-year-old named Wei Dilong, who lives in a city in southern China and likes basketball, hip-hop music and Hollywood superhero movies. He has never heard of Google or Twitter, and has a hunch that Facebook might be a bit like Baidu, the Chinese search engine. Wen Shengjian, a 14-year-old who likes playing basketball, said he had heard of Google, Facebook, Twitter and Instagram, but said a friend of his father’s told him they were blocked because some of their content wasn’t appropriate for the development of socialism with Chinese characteristics. “I don’t need them,” Wen said.
Li’s piece raises the possibility that the Chinese government has achieved some or all of its original goal in blocking certain sites and services, and heavily censoring others: It has managed to keep almost an entire generation away from content it disapproves of, and has replaced Western apps and services with its own heavily censored versions, to the point where young Chinese men and women show little or no interest in—or even awareness of—the alternative. According to Li, two economists found most college students were not interested in uncensored sites even when they were given free tools to access them.
The Knight First Amendment Institute on Tuesday called on Facebook to add a special amendment to its terms of service that would create a “safe harbor” for journalists and researchers, allowing them to do things other users are forbidden from doing, including creating fake accounts and using automated tools to harvest user data. It may seem like a reasonable request, but it’s likely to be highly contentious, if only because those are the exact same things that blew up in the company’s face with the Cambridge Analytica fiasco and the Internet Research Agency, the infamous Russian “troll farm.”
As reasonable as the Institute’s request may be, however, there’s an inherent problem at the heart of its proposal: Namely, who gets to decide who is deserving of protection? Having Facebook choose which researchers qualify might not raise too many red flags, but giving a private corporation the ability to say who is or isn’t an approved journalist would be hugely controversial, as evidenced by the controversy over Facebook’s recent attempts to rank “trusted” news outlets. Also, what’s to prevent bad actors from pretending to be journalists or researchers in order to get around the rules?
The Institute’s proposal and letter to Facebook CEO Mark Zuckerberg suggest that the research or journalism in question would have to be designed to “inform the general public about matters of public concern,” including issues like echo chambers, misinformation, and discrimination. The proposal says researchers and journalists would have to take steps to protect user privacy and to not mislead users about the purpose of their work, and wouldn’t be able to sell or transfer any data they acquired.
Jameel Jaffer, the Institute’s executive director, said in an email that the group isn’t asking Facebook to decide who is and who isn’t a journalist. “We’re asking it to decide, with respect to any given investigative project, whether the purpose of the project is to inform the general public about matters of public concern, and whether the project appropriately protects the privacy of Facebook’s users and the integrity of Facebook’s platform,” he said. While there are risks in asking the platform to do so, Jaffer said it would be better than journalists and researchers being blocked from doing their work.
Digital journalism and research “are crucial to the public’s understanding of Facebook’s platform and its influence on our society,” the Institute says in its proposal. But Facebook’s terms of service “limit this kind of journalism and research because they ban tools that are often necessary to it.” The statement goes on to point out that journalists and researchers who use these tools risk not only having their accounts suspended or disabled, but also risk civil and criminal liability under the Computer Fraud and Abuse Act.
Kashmir Hill, who works on investigative projects for Gizmodo, says in a piece published Tuesday that Facebook tried to shut down a tool the site came up with to do research into the social network’s “People You May Know” feature. The tool still remains up and active, but Facebook made it clear that the kind of automated data collection Gizmodo was trying to do was a breach of its terms. The Knight Institute said Hill is one of the journalists it is representing in its attempt to get a safe harbor exemption, along with Kate Conger of The New York Times and award-winning journalist Cameron Hickey.
So far, Facebook’s response to the Institute’s request suggests it wants to appear concerned about the problem without sending any signal whatsoever about whether it intends to help. Campbell Brown, the social network’s Head of News, said in a statement that journalists and researchers “play a critical role in helping people better understand companies and their products—as well as holding us accountable when we get things wrong,” and that Facebook recognizes its rules “sometimes get in the way of this work.” But the company said nothing about what, if anything, it plans to do about that problem.
After almost a decade of giving China the cold shoulder, Google appears to be planning to re-enter the country officially, even if doing so means agreeing to the government’s demands for wholesale censorship of topics such as human rights and democracy. The story was initially reported by The Intercept, but multiple sources have now confirmed Google has been working on a Chinese search app (using the code name Dragonfly) for over a year, as well as a news app. Both would block sites that don’t comply with the country’s censorship rules, effectively making them part of China’s “Great Firewall.”
The news has caused some consternation in political circles, but also within Google, in much the same way the company’s work for the US Department of Defense did earlier this year. In that case, Google said it would not renew the contract in question, but it is less likely to back down in the case of China, which represents a huge market opportunity (Facebook is also said to have worked on a feature that would allow the Chinese government to censor content on the network). According to The Intercept:
“When a person carries out a search, banned websites will be removed from the first page of results, and a disclaimer will be displayed stating that ‘some results may have been removed due to statutory requirements.’ Examples cited in the documents of websites that will be subject to the censorship include those of British news broadcaster BBC and the online encyclopedia Wikipedia.”
Moving back into China with a service that implements government censorship would be a significant reversal for Google, which pulled out of the country completely eight years ago. The final straw was a series of hacks aimed at prominent Gmail accounts, but Google’s decision to leave also appeared to be driven in part by concerns about how it was playing into the hands of a totalitarian state by doing business in the country. At the time, co-founder Sergey Brin spoke about how China’s tactics reminded him of the methods used by the government of the former Soviet Union, where he lived as a child.
When Google was still active in the country, the company’s argument was that withdrawing would be worse than continuing to do business with a repressive government, since it would deprive Chinese citizens of a useful service. But that argument is significantly less persuasive now that China’s Baidu has effectively become the local version of Google search, as well as of many of Google’s other services. Now, an attempt to move back into the country would look more like a crass commercial gesture.
Google watchers and others are also concerned that if the company accedes to the Chinese government’s demands, it will make it easier for others to do so, and will also embolden other totalitarian states to ask for their own custom censorship services from Google and other tech giants.
“Any move by Google to provide government censored search services to China would not only be evil, but also incredibly dangerous,” wrote Lauren Weinstein, a long-time technology commentator who has worked as a consultant for the company. If it goes ahead with the plans, he said, “Google will not only have gone directly and catastrophically against its most fundamental purposes and ideals, but will have set the stage for similar demands for vast Google-enabled mass censorship from other countries around the world.”
Facebook has put on what amounts to a full-court press over the past several days, a move that appears to be aimed at convincing Congress it is working hard to crack down on misinformation ahead of the upcoming US midterm elections. But is it really? Tuesday’s announcement that the company shut down 32 accounts for what it calls “inauthentic behavior” sounded impressive, and the blog post describing the move was filled with colorful details. On closer examination, however, the shutdown looks like fairly small potatoes, which makes the whole thing feel more like a PR campaign than anything substantive.
For a social network that has 2.2 billion users every day uploading more than ** posts and other content, 30 pages and accounts amount to a tiny molecule in a vast ocean of information. Even the most engaging post from that entire network garnered a relatively puny ** followers, and most of the content posted by the pages in question didn’t have anything to do with politics or even broader social issues related to the election.
Facebook made a point of saying that it wanted to be as transparent as possible about the steps it was taking, noting that it had shared details with Congress and with other tech companies, as well as with researchers such as the Digital Forensic Research Lab, and publishing a series of blog posts written by senior executives. And yet, this is the same company that has been repeatedly criticized by the UK government for not sharing enough information about its connections to Cambridge Analytica and that company’s use of private data. In a recent report, the UK’s commission on disinformation said:
“What we found, time and again, during the course of our inquiry, was the failure on occasions of Facebook and other tech companies, to provide us with the information that we sought. We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry.”
It’s easy to see why Facebook might be interested in at least giving the impression that it is hard at work fighting misinformation and malicious behavior. The federal grilling it got in the aftermath of the 2016 election about the activities of the Internet Research Agency, a Russian-operated troll farm, forced CEO Mark Zuckerberg and other senior executives to embark on what some called the 2018 Facebook Apology Tour, during which dozens of senators and representatives took turns admonishing them for allowing their platform to be used in an attempt to destabilize American democracy.
This experience was more than just embarrassing. It raised the possibility that Congress could decide to regulate the social network in a variety of unpleasant ways, up to and including limiting the protection it currently enjoys under Section 230 of the Communications Decency Act—the clause that effectively gives Facebook and other social platforms immunity from liability for anything posted by their users.
A recent discussion paper circulated among members of Congress and the tech community by Democratic Senator Mark Warner, vice-chairman of the Senate Intelligence Committee, raises that as one of a number of potential regulatory moves—along with forcing the platforms to label automated accounts, requiring them to put a price tag on the user data they collect, and implementing a privacy framework similar to the European Union’s GDPR or General Data Protection Regulation. The proposals have no real regulatory weight, but they are still signposts that indicate where some politicians would like to go.