The major tech companies did what they probably hoped was the requisite amount of bowing and scraping before the assembled members of both the House and the Senate intelligence committees on Wednesday, after being called on the carpet for their role in distributing Russian-backed ads and fake news during the 2016 election. But tangible commitments from the tech giants were few and far between.
Once the political rhetoric was swept away, representatives from Facebook, Google, and Twitter admitted they make money (in some cases quite a lot of it, as Facebook reported a record profit of $4.7 billion for the latest quarter) from their advertising businesses. And because of the structure of their platforms, all admitted that some of that money inevitably comes from fake accounts, including—as it turns out—agents of the Russian government.
Google, in fact, said that while Twitter has banned the Russian government-backed media outlet RT from advertising on its platform, the search giant had no plans to stop RT from advertising on YouTube, which has reportedly become a significant part of the Russian outlet’s media campaign. Why? Because RT hasn’t breached Google’s rules, the company said.
In a nutshell, the trio were adamant (in a deferential way, of course) that while they look and behave very much like media companies, they will resist attempts to force them to abide by the same rules that govern media companies. Each committed to adding disclosures to its ads, in the hope that doing so might blunt the need for the legislation the Senate is currently working on.
The three repeated many of the same tropes in their testimony that they trotted out in the Senate Judiciary Committee hearing on Tuesday. Namely, that malicious behavior by fake accounts created by Russian troll farms was relatively minor compared to the vast scale of their platforms, that they recognize how disturbing these incidents were—and they feel terrible about it—and that they are working hard to prevent it from happening again.
In all three cases, the companies appear to be trying desperately to have their cake and eat it too: arguing that the number of fake accounts or dubious ads or malicious actors represents only a tiny fraction of the activity on their platforms (0.004 percent, according to Facebook) while telling advertisers and corporate users how effective their advertising and reach are.
As more than one senator pointed out during the question-and-answer portion of the hearings, one of the best advertisements for the effectiveness of the platforms is the amount of influence that Russia’s troll farms were able to purchase for so little money. And advertisers are getting that message loud and clear.
Facebook, for example, admitted at the hearings that almost 150 million users were exposed to the fake ads and accounts that were created by the Kremlin-backed entity known as the Internet Research Agency, after initially saying just a few million were exposed (and even earlier claiming there was no evidence of Russian involvement at all). And what did all of that exposure cost the Russian outfit? About $100,000.
In some cases, campaigns that cost just $1,200—several of which were displayed by senators during the hearings and released to the public afterwards, including pages with names like South United and Blacktivist—earned the fake accounts huge followings and heavy engagement.
https://twitter.com/dnvolz/status/925796721002799105
While Facebook in particular tried hard to keep the conversation focused on the advertising issue, several members of the Senate committee pointed out that a far larger problem is the reach and influence of so-called “organic” posts—which don’t cost anyone anything, and as a result are far more difficult to track (according to Facebook’s general counsel).
This is a crucial point. Unlike traditional media outlets, where advertising and editorial are kept relatively separate, one of the core features of a social network like Facebook is that virtually any piece of content on the platform can become an ad. That feature has helped the company pull tens of billions of dollars of advertising away from traditional media entities, to the point where it and Google now control a majority of the digital ad business.
And what exactly are the platforms doing to try to prevent similar problems in the future? That question was repeated over and over throughout the proceedings, but the answer isn’t at all clear, and it only got murkier as the hearings continued.
All three of the companies said they are working on improving their automated systems so they can detect fake or malicious accounts better and faster—Twitter claimed it has gotten twice as good as it used to be, and now challenges 4 million potentially fake accounts every week. Facebook talked about partnering with other companies on a cyber-threat team, and said it is doubling the number of people it has working on security to 20,000.
When pressed, however, all three admitted that they probably haven’t discovered all of the malicious activity on their platforms, and that there is likely to be much more to come—including more Russian-linked activity. And what, if anything, should Congress be doing about that? Shrugs all around (but deferential shrugs, of course).
Each of the platforms also demurred when pressed on some of the steps that senators and members of the House committee thought might be worthwhile, such as notifying users who had been the target of fake ads and accounts. Too difficult, Facebook said.
The tech platforms each have their own reasons for trying to jiu-jitsu their way out of the government’s clutches. In Twitter’s case, it is desperately trying to hang on to its status as an anonymous network that stresses free speech, something that came under fire repeatedly during the hearings. But a lack of action could inflame the desire of some legislators to regulate the tech giants, since many believe they are already too powerful.
And what form might that legislation take? That remains to be seen, but proposals from critics have so far run the gamut from requiring better advertising disclosure, to subjecting some or all of the tech giants to the full weight of US antitrust law, to fine-tuning the “safe harbor” that internet giants currently enjoy when it comes to offensive content.
As Senator Dianne Feinstein put it during the hearings: “You created these platforms, and now they’re being misused. And you have to be the ones who do something about it—or we will.” And that is likely to strike fear into the heart of even the most powerful tech giant.