Facebook has put on what amounts to a full-court press over the past several days, a move that appears to be aimed at convincing Congress it is working hard to crack down on misinformation ahead of the upcoming US midterm elections. But is it really? Tuesday’s announcement that the company shut down 32 pages and accounts for what it calls “inauthentic behavior” sounded impressive, and the blog post describing the move was filled with colorful details. On closer examination, however, the shutdown looks like fairly small potatoes, which makes the whole thing feel more like a PR campaign than anything substantive.
For a social network whose 2.2 billion users upload more than ** posts and other pieces of content every day, 32 pages and accounts amount to a tiny molecule in a vast ocean of information. Even the most popular page in that entire network garnered a relatively puny ** followers, and most of the content posted by the pages in question didn’t have anything to do with politics or even broader social issues related to the election.
Facebook made a point of saying that it wanted to be as transparent as possible about the steps it was taking, noting that it had shared details with Congress and with other tech companies, as well as with researchers such as the Digital Forensic Research Lab, and publishing a series of blog posts written by senior executives. And yet, this is the same company that has been repeatedly criticized by the UK government for not sharing enough information about its connections to Cambridge Analytica and that company’s use of private data. In a recent report, the UK parliamentary committee investigating disinformation said:
“What we found, time and again, during the course of our inquiry, was the failure on occasions of Facebook and other tech companies, to provide us with the information that we sought. We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry.”
It’s easy to see why Facebook might be interested in at least giving the impression that it is hard at work fighting misinformation and malicious behavior. The federal grilling it got in the aftermath of the 2016 election about the activities of the Internet Research Agency, a Russian-operated troll farm, forced CEO Mark Zuckerberg and other senior executives to embark on what some called the 2018 Facebook Apology Tour, during which dozens of senators and representatives took turns admonishing them for allowing their platform to be used in an attempt to destabilize American democracy.
This experience was more than just embarrassing. It raised the possibility that Congress could decide to regulate the social network in a variety of unpleasant ways, up to and including limiting the protection it currently enjoys under Section 230 of the Communications Decency Act. That’s the clause which effectively gives Facebook and other social platforms immunity from liability for anything posted by their users.
A recent discussion paper circulated among members of Congress and the tech community by Democratic Senator Mark Warner, vice-chairman of the Senate Intelligence Committee, raises that as one of a number of potential regulatory moves—along with forcing the platforms to label automated accounts, requiring them to put a price tag on the user data they collect, and implementing a privacy framework similar to the European Union’s GDPR or General Data Protection Regulation. The proposals have no real regulatory weight, but they are still signposts that indicate where some politicians would like to go.