Meta let researchers study whether it stokes polarization. The results were polarizing

For much of the last decade, academic researchers have been trying to persuade Meta, the company formerly known as Facebook, to share internal data about the behavior of users on its platforms, so that they might understand how—if at all—the sites’ algorithms influence people’s political views and behavior. The company suggested that it might offer such access; back in 2018, it even launched a project designed to share data. But the amount of usable information it ended up offering to researchers was minuscule, and in some cases, significantly flawed. As I reported for CJR two years ago this month, Meta also thwarted attempts by social scientists to collect their own data through scraping, and even disabled the accounts of some researchers. All this left the impression that the company had no interest in facilitating academic scrutiny.

It was more than a little surprising, then, when social scientists last week published not one but four new studies based on user data that Meta had shared with them, part of a research project that the company launched in 2020 to analyze users’ behavior both during and immediately after that year’s presidential election. Meta provided twenty million dollars in funding (the company did not pay the researchers involved directly), and the project was coordinated by the University of Chicago’s National Opinion Research Center, a nonpartisan organization that also helped to collect and distribute some of the data. The research was initially scheduled to be released in the summer of 2021, but was delayed a number of times; the lead researchers said that the job of sorting and analyzing all the data was “significantly more time-consuming” than they had expected. The January 6 riot at the Capitol also extended the project’s timeline. 

According to several of the researchers involved and an independent observer of the process—Michael W. Wagner, a professor of journalism and communication at the University of Wisconsin-Madison—Meta provided virtually all the data that they requested, and did not restrict or try to influence the research. A number of Meta staffers are named as co-authors of the papers. And the project isn’t done yet—another twelve research projects are set to drop soon.

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

In three of the four new studies, which were published in the journals Science and Nature, researchers from institutions including the University of Texas, New York University, and Princeton modified how Facebook functions in a number of different ways in an attempt to determine whether the site and its algorithms influenced users’ political beliefs or behavior. One study, which was based on data from more than twenty thousand Facebook users and a similar number of Instagram users, replaced the normal algorithm used to sort the news feed and instead showed users a reverse-chronological feed (one in which more recent posts appear first). As Fast Company notes, this was among the reforms endorsed by Frances Haugen, a former Facebook staffer turned high-profile whistleblower. At the time, the idea seemed also to appeal to a number of members of Congress.

A different paper examined whether limiting a user’s ability to re-share another user’s post could lead to changes in their political beliefs, since re-sharing often spreads viral content that is more likely to be misleading. The researchers behind another paper tried limiting the amount of content users were exposed to from friends, or from Facebook pages and groups they sympathized with, the idea being that such like-minded content can entrench beliefs and behavior in ways that exposure to differing political views might not. In the fourth paper, researchers analyzed which news stories made it into the feeds of Facebook users in the US and correlated this with how liberal or conservative those users were.

So, did all this research show that Facebook’s algorithms changed people’s political behavior or beliefs? According to Meta, it emphatically did not. Nick Clegg, the company’s president of global affairs, wrote in a blog post that while questions about the impact of social media on political attitudes and behavior are not settled, the studies add to what he described as a “growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization.” 

As The Atlantic’s Kaitlyn Tiffany noted, however, while the research comes with “legitimate vetting,” Clegg’s conclusion is fraught: “an immensely powerful company that has long been criticized for pulling at the seams of American democracy—and for shutting out external researchers—is now backing research that suggests, Hey, maybe social media’s effects are not so bad.” And, according to a report in the Wall Street Journal, some of the researchers involved in the project disagreed strenuously with Clegg’s characterization, as did Wagner, the impartial observer; along with officials from the journal Science, they stated (in the Journal’s words) that Meta was “overstating or mischaracterizing some of the findings.” Science headlined its print package about the research “Wired To Split.” Meta reportedly took issue with this, asking that a question mark be added to imply that the question was not settled, but Science told the company (also per the Journal) that it considered “its presentation of the research to be fair.” 

This disagreement was fueled in part by the complexity of the results that the research produced. In the study that replaced an algorithm-powered feed with a chronological one, for example, users spent less time on Facebook. As a result, they were exposed to less content that reinforced their existing beliefs, which could be seen as a positive, and less polarizing, experience. At the same time, these users also saw a substantially higher number of posts from untrustworthy sources, which is a somewhat less desirable outcome. And the researchers found that neither of these changes had a perceptible impact on polarization, political awareness, or political participation.

The study in which users’ ability to re-share content was limited showed that those users saw a dramatically smaller number of posts from untrustworthy sources—but it also reduced the amount of political news that they saw, which led to lower levels of political knowledge, an outcome that might be seen as negative. As in the chronological-feed study, limiting re-sharing seemed to have no perceptible impact on polarization. And the study that reduced the amount of content that users saw from like-minded accounts also showed no effect on polarization or the extremity of people’s views. As Fast Company noted, when users did see posts from like-minded sources, they were even more likely to engage with them, “as if being deprived of that content made them even more hungry for it.”

In addition to this complexity, the studies have been criticized on methodological and other grounds. Various critics of Meta noted that the findings only apply to a limited time period around the 2020 election, and that Meta’s content policies have since evolved. David Garcia, a professor at the University of Konstanz in Germany, wrote in Nature that, as significant and broad-reaching as the studies may have been from a research point of view, they do not foreclose the possibility that Facebook’s algorithms do contribute to political polarization; Garcia told Tiffany that the experiments were conducted at the individual level, whereas polarization is “a collective phenomenon.” To prove that algorithms do not play a role would be much harder, Garcia said—if it’s even possible at all. And Casey Newton wrote, in his Platformer newsletter, that the studies are consistent with “the idea that Facebook represents only one facet of the broader media ecosystem.” 

For me, this is the most compelling takeaway from the studies: it’s difficult, if not impossible, to show that Facebook did or didn’t change users’ political attitudes because it is impossible to separate what happens on Facebook from what happens beyond it. As Newton points out, Facebook may have removed election and other strands of disinformation in 2020, but “election lies still ran rampant on Fox News, Newsmax, and other sources.” In the end, as Newton writes, “the rot in our democracy runs much deeper than what you find on Facebook.” 
