Note: This post was originally published as the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer.
The history of Congressional hearings into the inner workings of Facebook, Twitter, and Google, which dates back to before 2017, isn’t exactly filled with penetrating insights or dogged investigation. For the most part, it’s been a series of sideshow carnival-style events, with a lot of grandstanding by senators and members of Congress designed to get airtime on TV news shows and/or help with re-election bids, not to mention finger-wagging about non-existent fears, such as the alleged bias that social platforms like Facebook have against conservative voices. For every hard-hitting question about the ways in which these networks distort information or use personal data for ad targeting, there have been dozens of poorly informed inquiries like Republican Senator Orrin Hatch’s infamous question about how Facebook makes money if it doesn’t sell personal data. “We sell ads, Senator,” chief executive Mark Zuckerberg replied, overjoyed at seeing such a softball pitch.
Given that backdrop, the likelihood of yet another Congressional hearing producing anything of substance was extremely low, especially since the one that just concluded on Tuesday, titled “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds,” didn’t involve any of the chief executives of Facebook, Twitter, or Google. The fact that there were no high-profile names attached helps explain why there were no front-page headlines with quotes from those involved, or video clips of senior executives being grilled by a senator. In advance of the hearing, some argued that the lack of big names might actually be a positive development, since there was less chance of the whole thing turning into a circus. So was this hearing notable for its depth or perspicacity? Not really. If anything, there was less outrage than there probably should have been about the hidden algorithms that control what we see and do on social platforms.
At the outset of the hearing, Democratic Senator Chris Coons said “there’s nothing inherently wrong” with how Facebook, Twitter, and YouTube use algorithms to keep users engaged. Coons said the committee wasn’t weighing any actual legislation, and that the hearing was designed to be a listening session between legislators and the platforms. That sanguine description was at odds with the testimony of some of the experts, however, including Joan Donovan, who runs the Technology and Social Change project at Harvard’s Shorenstein Center. “The biggest problem facing our nation is misinformation-at-scale,” she told the committee, adding that “the cost of doing nothing is democracy’s end.” On Twitter, Donovan criticized those at the hearing for not going deeper. “The companies should have been answering questions about how they determine what content to distribute and what criteria is used to moderate,” she said. “We could have also gone deeper into the role that political advertising and source hacking plays on our democracy.” For its part, Facebook routinely argues that its algorithms merely give you more of what you have already indicated you want to see or interact with on the platform.
There are a number of problems with Congress trying to actually do anything about the algorithms that power Facebook, Twitter, and Google. The first is that very little is known about how they are designed, how they function, and how and why they are tweaked. Everyone knows that they are used to target ads, but no one outside those companies knows much about how. Facebook routinely talks about changes to its algorithm and describes in very general terms what it is trying to do (highlight more personal content, for example), but the specifics are a secret. Twitter rarely says anything about the algorithms it uses, and Google never does. The second problem is that Facebook and the other major platforms are protected by both the First Amendment and Section 230 of the Communications Decency Act. The first protects the right of these companies to curate their content in whatever way they wish (within certain limits), and the second not only reinforces that right to remove any content they wish, but also protects them from liability for content they host that is created by their users.
There are ongoing attempts to limit some of Section 230’s protections, including two pieces of proposed legislation that were brought up during the hearing on Tuesday. One is a bill that was recently reintroduced, called the “Protecting Americans from Dangerous Algorithms Act.” It would remove Section 230 liability protection from any platform if its algorithms are used to “amplify or recommend content directly relevant to a case involving interference with civil rights… or in cases involving acts of international terrorism.” The problem with taking this approach, according to critics like Mike Masnick of Techdirt, is that if the platforms are exposed to liability in such cases, “they will cleanse [their platforms] of potentially extreme, though First Amendment protected, speech. This amounts to legislative censorship by fiat.” In addition, Masnick and others argue, much of the radicalization and disinformation that Congress is concerned about occurs in private groups and messaging services, which the proposed legislation would not address at all. These kinds of laws are not only likely to fail to achieve their goals, but would also make everyone’s experience on social platforms like Facebook much less safe, says the Electronic Frontier Foundation.
Here’s more on the platforms:
Failure to prevent: BuzzFeed News revealed last Thursday that an internal Facebook report criticized the company for failing to prevent the “Stop the Steal” movement from using its platform to incite the January 6 attack on the US Capitol. The report discusses how Facebook missed critical warning signs about the growth and influence of the movement, and concludes “the company was unprepared to stop people from spreading hate and incitement to violence.” The report’s authors published the document to Facebook’s internal message board, making it available to company employees, but it was later removed.
Not reconcilable: Justin Hendrix, co-founder, chief executive and editor of Tech Policy Press, writes that Republican Senator Ben Sasse made the most perceptive point at the hearing when he argued that the answers the platforms were providing about the way their algorithms function were simply “not reconcilable” with the positions of their critics. This, Hendrix argues, puts the focus back on lawmakers, “whose duty it is to reconcile the interests of society and democracy with the business interests of the platforms.”
Undue influence: Members of Congress are reportedly looking into whether Google tried to influence a critic’s testimony at a hearing last week about the future of app stores. Senators Klobuchar and Lee have asked for the details of an alleged phone call between a Google employee and a Match Group employee prior to a hearing before the Senate Judiciary subcommittee, during which Match and other Google critics accused the company of using its monopoly power to curb competition. According to one report, a Google executive called Match after the testimony to ask why the company’s comments didn’t jibe with previous comments it had made about Google and its app store.
Other notable stories:
Project Veritas has filed a defamation lawsuit against CNN for saying during a broadcast in February that the group’s account was suspended from Twitter as part of a crackdown by the social network on users who spread misinformation, according to a report by the Hollywood Reporter. The complaint says the Project Veritas account was actually suspended because it included the personal information of other users without their consent, something the group says CNN should have known.
USA Today is experimenting with a paywall for some of its news stories, the Poynter Institute reports. Earlier this month, the report says, the flagship of the Gannett chain started putting some of its stories behind a paywall, asking readers to sign up for a digital-only subscription at $4.99 a month. The paper published a short note that appeared along with the request, but Gannett has not announced the experiment publicly, Poynter says. A spokesperson for the chain confirmed that it is testing such a service.
Emily Bell, director of the Tow Center for Digital Journalism at Columbia University, writes about how legislation like Australia’s new bargaining code for social platforms risks putting too much power in the hands of companies like Facebook and Google. “The disappearance of advertising support and the consequent collapse of local journalism is one of the most effective tools being used to leverage more regulatory oversight against the platforms,” she writes. “But the scramble to cross-subsidize leaves unanswered the uncomfortable question of whether this close relationship of corporate power and supposedly accountable journalism is something that needs dissolving rather than encouraging.”
A tweet from an Oracle executive that included the Signal and email account info of a female Intercept reporter was found to have violated Twitter’s policies, according to a report by Gizmodo. Ken Glueck, a vice-president with the software company, was forced to take down the tweet, and his account was locked in read-only mode for 12 hours. The reporter, Mara Hvistendahl, recently published a story on how reseller networks in China enable the government to acquire Oracle’s technology.
A new survey of news consumers found that most preferred “solutions journalism” stories to traditional news reports. Respondents said they found stories containing proposed solutions deeper and more engaging than traditional news stories, according to the survey firm, SmithGeiger. The survey, which was commissioned by the Solutions Journalism Network, found that these results were consistent across all ages and political persuasions, the firm reported, and while the study looked at consumers of TV news content, SmithGeiger said it believes that the results would hold for other platforms as well.
The Trans Journalists Association released a statement saying newsrooms should allow trans journalists to retroactively change their bylines once they have come out, to replace their “dead name” with the name that they have chosen to use. “We charge that it’s inappropriate not to retroactively change a trans journalist’s byline when that is what the journalist in question requests,” the statement says. “We strongly urge newsrooms and media organizations to change a trans journalist’s byline to reflect their lived experience without issuing a correction or a disclaimer.” The New York Times is reportedly fighting with its union over a request to allow bylines to be retroactively changed.
The technology news magazine Protocol wrote about News Break, an app that is run almost entirely by algorithms and artificial intelligence. According to the report, it is the most downloaded news app in the world — more than the New York Times, BBC and even Google News. Publishers are ecstatic about the amount of traffic it drives to their sites, Protocol says, and the content it carries is entirely local. The app was founded by a former Yahoo executive and a former Baidu executive with experience in China, and it is similar to other news apps that have become popular in that country, such as Toutiao.
Last week, Time magazine started accepting Bitcoin and 31 other types of cryptocurrencies from paid subscribers, through a partnership with Crypto.com, according to a report by Digiday. The magazine has also started letting sponsors pay in Bitcoin for their advertising campaigns, and a crypto asset manager named Grayscale reportedly signed the first deal two weeks ago (the terms of the partnership with Crypto.com and the sponsorship deal were not disclosed, Digiday says). Time was bought in 2018 by Marc Benioff, the billionaire chief executive of software company Salesforce.