Facebook and the dilemma of coordinated inauthentic behavior

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Yesterday, Facebook released a report on what it calls “influence operations” on its platform, which it defines as “coordinated efforts to manipulate or corrupt public debate for a strategic goal.” By this, the company seems to mean primarily the kinds of activity that Americans heard about during the 2016 election, from entities like the Russian “troll farm” known as the Internet Research Agency, which used fake accounts to spread disinformation about the election and to just generally cause chaos. Facebook says in this “threat report” that it has uncovered evidence of disinformation campaigns in more than 50 countries since 2017, and it breaks down some of the details of 150 of these operations over that period. In addition to noting that Russia is still the leading player in this kind of campaign (at least among the ones that Facebook knows about), the company describes how dealing with these kinds of threats has become much more complex since 2016.

One of the big challenges is defining what qualifies as “coordinated inauthentic behavior.” Although Facebook doesn’t really deal with this in its report, much of what happens on the service (and other similar platforms) would fit that description, including much of the advertising that is the company’s bread and butter. In private groups devoted to everything from politics to fitness and beauty products, there are likely plenty of posts that could be described as “coordinated efforts to manipulate public debate for a strategic goal,” albeit not the kind that rise to the level of a Russian troll farm.

Influence operations can take a number of forms, Facebook says, “from covert campaigns that rely on fake identities to overt, state-controlled media efforts that use authentic and influential voices to promote messages that may or may not be false.” For example, during the 2016 election, the Internet Research Agency created groups devoted to Black Lives Matter and other topics that were filled with authentic posts from real users who were committed to the cause. In one case mentioned in Facebook’s threat report, a US marketing firm working for clients such as the PAC Turning Point USA “recruited a staff of teenagers to pose as unaffiliated voters” and comment on various pages and accounts. As researchers like Shannon McGregor of UNC note, “enlisting supporters in coordinated social media efforts is actually a routine campaign practice.”

According to Facebook’s report, this is why “content itself isn’t a reliable signal for determining whether a given account or a Page is part of an influence operation.” The company says its internal security staff have seen deceptive campaigns reusing real posts to build an audience, and real people who “unwittingly post memes originally created by IO actors.” That’s why the company says its definition of coordinated inauthentic behavior requires the use of fake accounts to mislead users. Many deceptive efforts, it says, “don’t cross the coordinated inauthentic behavior threshold,” such as the use of political topics to drive people to websites filled with ads.

Facebook also describes how the tactics being used by malicious actors are changing. One change involves what Facebook calls “a shift from wholesale to retail influence operations,” meaning a move away from broad deceptive campaigns designed to reach everyone on Facebook and toward smaller, more targeted operations. Facebook also describes what it calls “perception hacking,” in which malicious actors try to capitalize on the fear of foreign influence, and convince users that this kind of activity has been more effective than it actually has been. The company also says it has seen the rise of “influence operations as a service,” in which commercial entities offer their services to government actors and others, providing them with a smokescreen for their identities.

When it comes to dealing with this problem, Facebook says it uses a number of tactics, including combining automated detection of inauthentic behavior with “expert investigations” by staff, although it says the latter are hard to scale. Other tools include what the company calls “adversarial design,” making the tactics that malicious actors use — such as fake accounts — less effective or harder to implement, and the use of independent researchers, law enforcement and journalists to identify the sources of such campaigns. Journalists seem less enamored of being used by Facebook in this way, however, and often complain that the company doesn’t take action until the media writes about something.

While the Facebook report tries to give the impression of a company doing its best to keep the information ecosystem clean using all of its advanced technology, other internal documents paint a different picture: for example, a leaked report by Facebook staff on the Stop the Steal campaign that led up to the attack on Congress on January 6 argued that the company failed to take action against people and groups loyal to Donald Trump, including the Patriot Party, and that these groups played a key role in the events of January 6. Ironically, given Facebook’s focus in its latest public threat report, the internal document said the company’s emphasis on rooting out fake accounts kept it from taking action against real people who were plotting an insurrection.

Here’s more on Facebook and inauthentic behavior:

The Russians: The company’s latest report makes it clear that Russian actors are involved in a broad variety of inauthentic behavior campaigns, but a previous report in 2017 — the company’s first — caused a significant amount of controversy for allegedly downplaying the impact of Russian activity during the 2016 election. Alex Stamos, the company’s head of security at the time, wound up leaving as a result, although he has said the Russian aspect of the report was just one part of why he left. Stamos, who CJR has interviewed on its Galley discussion platform in the past, now runs the Stanford Internet Observatory, which tracks malicious activity on social platforms and the web.

The Russians? In 2018, people were seeing Russian activity everywhere, in part because of the focus on activity on Facebook during the election, and the fact that Robert Mueller indicted a number of Russian agents and corporations for trying to influence the election. It got to the point where almost every issue or misinformation campaign was blamed on Russians, including the resignation of Senator Al Franken over allegations of sexual harassment. But some were skeptical: even the Internet Research Agency, according to Atlantic writer Alexis Madrigal, “wasn’t that sophisticated,” and if it was a Silicon Valley startup “probably would not be picking up a fresh round of venture capital.”

Disinfodex: Disinformation researchers have created a public database of influence campaigns that have been disclosed by platforms such as Facebook and Twitter or reported on by independent investigators, a project called Disinfodex. Developed as part of the Disinformation 2020 Fellowship, the database is supported by the Berkman Klein Center at Harvard University and the Ethics and Governance of Artificial Intelligence Fund at The Miami Foundation. The project is also affiliated with the Carnegie Endowment’s Partnership for Countering Influence Operations.

Other notable stories:

Israeli police are targeting Palestinian journalists at the Al Aqsa mosque in Jerusalem, the Intercept reports, including by denying access, delivering beatings, and firing on reporters with rubber-coated bullets. “We have witnessed a worrying increase in the number and frequency of violent attacks against the press, both by security forces and citizens,” the Union of Journalists in Israel said in a statement. “Journalists and photographers who are sent by their newsrooms to cover events are finding themselves to be a direct target of violence, often to the point of physical attacks.”

Tribune Publishing is planning to seek voluntary buyouts across all of the chain’s newspapers, according to multiple reports on Wednesday. Hedge fund Alden Global Capital recently succeeded in acquiring the chain, which owns the Chicago Tribune and a number of other smaller daily and weekly newspapers. NPR media correspondent David Folkenflik also reported that Alden had already added $278 million in debt to Tribune Publishing’s balance sheet as part of the $630-million purchase of the chain.

Russia is pressuring Google, Twitter and Facebook to fall in line with Kremlin internet crackdown orders or risk restrictions inside the country, the New York Times reports. Russia’s internet regulator, known as Roskomnadzor, has been making increasingly strident demands for the social platforms to remove content that the authorities believe to be illegal, or to restore pro-Kremlin content that has been blocked. Warnings have come every week, if not more often, ever since Facebook, Twitter and Google were used by protesters organizing anti-Kremlin demonstrations in January, the Times reports.

Axios says that it plans to expand its local newsletter project, which already has local operations in six cities, to a total of 14 cities by the end of the year, and expects to have revenue of at least $5 million this year, according to a report from AdWeek magazine. After four months, Axios says its local program has more than 350,000 subscribers, and expects to triple its revenue in 2022. Its Charlotte newsletter alone is expected to net more than $2 million in revenue this year, the company said.

Washington Post media writer Erik Wemple interviewed Emily Wilder, the former Associated Press reporter who was let go by the organization for an unspecified breach of the company’s social-media policy. According to Wemple, “Wilder expressed admiration for the AP’s journalism and thrill at having joined the organization. Had her managers laid out their concerns about any tweets, she says, she would have been ‘receptive.'” Janine Zacharia, who taught Wilder at Stanford, wrote in Politico that she was targeted by a disinformation campaign “and rather than recognizing it as such, the organization essentially caved to it.”

Facebook’s Oversight Board overturned the company’s decision to remove a comment for violating its community standards, and said the comment must be reinstated. The comment was posted by a supporter of imprisoned Russian opposition leader Alexei Navalny, and referred to another user as a “cowardly bot.” Although the removal was in line with Facebook’s standards against bullying and harassment, the board said that removing this kind of comment is “an unnecessary and disproportionate restriction on free expression under international human rights standards.”

Staffers at video-game review site IGN say morale at the company is “at an all-time low” following a controversy over the removal of a post referring to humanitarian aid for Palestine, according to a report from Fanbyte. After the removal, several dozen IGN staffers signed a public statement registering their disapproval of the move, which many saw as a breach of editorial independence by IGN’s corporate owner J2 Global. According to Fanbyte, Peer Schneider, the co-founder and chief content officer for the site, initially said that management was sympathetic, but has since “fallen on his sword” and claimed that the decision was solely his, which has caused morale to plummet.

The European Commission says it will open a formal probe into Facebook’s alleged anti-competitive practices, including its behavior towards rivals in classified advertising. Officials for the EU have already sent official questions to Facebook related to its Marketplace classified ad service, and are talking with rivals as well, the Financial Times reports. The EU has previously launched antitrust investigations into Microsoft, Amazon, Apple and Google, but has not held a formal inquiry into Facebook’s practices yet.
