Fake news is just part of a much bigger problem: Automated propaganda

So-called “fake news” has become a hot-button topic of late, thanks to the repeated use of that term by Donald Trump and his followers, who use it to describe any story they disagree with. After initially dismissing the problem, Facebook has promised to crack down on disinformation, and so has Google. But experts say the problem of what they call “computational propaganda” doesn’t just piggyback on social platforms — it is arguably baked into the DNA and the business model of companies like Facebook, Google and Twitter. And it’s going to take more than a few algorithm tweaks to get rid of it.

Dipayan Ghosh is a computer scientist who helped provide technical advice to the Obama administration while working on his PhD, and then wound up working at Facebook as part of the privacy and public policy team. In 2016, he says, he and others started to notice a deluge of “fake news” and other disinformation, one that appeared to be driven by the News Feed algorithm. When Donald Trump was elected, Ghosh says he had a kind of crisis of conscience, because he believed that politically motivated misinformation had helped Trump win.

“I was sitting on the floor at the Javits Center watching and I was shaken to the core,” Ghosh says. “It was just such a shocker. I couldn’t understand it given [Clinton’s] rise in the popular vote, and I thought there might be something else going on, a pro-active campaign going on under the table that was manifesting itself in the election.” Facebook later admitted before Congress that Russian trolls had promoted fake news and taken advantage of the platform in order to reach more than 125 million people.

After his election-night disillusionment, Ghosh joined the New America Foundation and Harvard’s Shorenstein Center on Media, Politics and Public Policy, where he started researching the impact of digital propaganda distributed by social platforms. In January, he published a report called “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet” with Ben Scott, a former innovation adviser at the US State Department.

While most of the attention focused on content from the Internet Research Agency, a Russian “troll factory,” the New America report notes that this is just the tip of a very large digital iceberg. “These platform companies are at the center of a vast ecosystem of services that enable highly targeted political communications that reach millions of people with customized messages that are invisible to the broader public,” Ghosh and Scott wrote.

In effect, they say, Russian trolls and others take advantage of how social platforms and ad networks are constructed in order to turn them to their own purposes. “Disinformation campaigns are functionally little different from any other advertising campaign, and the leading internet platforms are equipped with world class technology to help advertisers reach and influence audiences,” the report says.

What that means is that “there’s a fundamental alignment between the goals of the Internet platform and the goals of the disinformation operator,” Ghosh said in an interview. “That fundamental goal is to get the user to stay there as long as possible. Their motivations are different — for [the] platform it is to maximize ad space, to collect more information about the individual and to rake in more dollars, and for the disinformation operator the motive is the political persuasion of the individual to make a certain decision. But until we change that alignment, we are not going to solve the problem of disinformation on these platforms.”

After Robert Mueller indicted 13 Russian nationals and three Russian companies for their attempts to influence the US election, sociologist Zeynep Tufekci noted on Twitter that the indictment “shows RU used social media just like any other advertiser/influencer. They used the platforms as they were designed to be used.”

Facebook and Google, says Ghosh, “have not necessarily encouraged the environment of disinformation but have enabled it through the mass collection of individual data, with as much granularity as possible within legal limits,” something Tufekci has described as “surveillance capitalism.” This kind of structure allows advertisers to target users based on a wide range of interests, but it also allows political parties and much more nefarious groups to do the same, and to fine-tune their propaganda to have as much effect as possible.

“It’s a very hard problem — how to distinguish between disinformation and authentic political speech,” Ghosh says. “Those that are clearly foreign agents can be blocked, but with domestic operators there’s an obvious tension there between preventing harm and impacting on free speech, and I don’t think there’s a clear solution yet. But we are definitely going to see more domestic actors in 2018 and that is frightening.”

Although Facebook has gotten the lion’s share of the attention for the way it was manipulated by Russian trolls, it is not alone in facing this problem. Guillaume Chaslot is a former Google engineer who helped develop the algorithms that determine which videos to recommend to YouTube viewers, and he says the platform has a very real issue with promoting fake news and disinformation.

Chaslot says that while studying how the recommendation algorithm worked, he noticed that in many cases the videos the software was promoting were of questionable quality — factually inaccurate reports from dodgy websites pushing conspiracy theories and hoaxes. He tried to come up with ways to improve the quality of the recommendations, but says his superiors at YouTube weren’t interested. All they wanted, he says, was for the team to find ways of getting people to spend more time on the platform.

“Total watch time was what we went for — there was very little effort put into quality,” Chaslot said in an interview with CJR. “All the things I proposed about ways to recommend quality were rejected.”

In a blog post in early 2017 entitled “How YouTube’s A.I. boosts alternative facts,” Chaslot described an experiment in which software he built pretended to view YouTube videos and then catalogued the automated recommendations. In a number of cases, the most-recommended videos promoted conspiracy theories: that the earth is flat, that the Pope is an agent of evil, that Michelle Obama is a man, and so on.
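
The basic method behind that kind of audit is simple to sketch. The snippet below is a hypothetical, stripped-down version in Python: it fetches the public watch page for a few seed videos as a logged-out viewer and counts which recommended video IDs keep recurring. The `WATCH_URL` pattern and the `"videoId"` regex are assumptions about how YouTube’s pages happen to embed related videos, not a documented interface, so treat this as an illustration of the approach rather than Chaslot’s actual code.

```python
# A toy version of the audit described above: crawl the public watch pages for
# a few seed videos as a logged-out viewer and count which recommended video
# IDs keep showing up. The regex assumes related videos appear in the page's
# inline JSON as "videoId":"<11-character id>" pairs -- an assumption about
# current markup, not a stable interface.
import re
from collections import Counter

import requests

WATCH_URL = "https://www.youtube.com/watch?v={}"
VIDEO_ID_RE = re.compile(r'"videoId":"([0-9A-Za-z_-]{11})"')


def recommended_ids(seed_id):
    """Return the set of video IDs embedded in one seed video's watch page."""
    html = requests.get(WATCH_URL.format(seed_id), timeout=10).text
    return set(VIDEO_ID_RE.findall(html)) - {seed_id}


def catalogue_recommendations(seed_ids):
    """Count how often each video ID surfaces across all the seed pages."""
    counts = Counter()
    for seed in seed_ids:
        counts.update(recommended_ids(seed))
    return counts


if __name__ == "__main__":
    seeds = ["dQw4w9WgXcQ"]  # replace with the seed videos you want to audit
    for video_id, hits in catalogue_recommendations(seeds).most_common(10):
        print(hits, WATCH_URL.format(video_id))
```

Run against a large enough set of seeds, the videos that float to the top of such a tally are the ones the recommendation engine is pushing hardest.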

“I came to the conclusion that the powerful algorithm I helped build plays an active role in the propagation of false information,” Chaslot wrote. And it does so because YouTube wants to keep people using the service, and salacious or bizarre hoaxes and conspiracy theories keep people engaged.

In addition, as Chaslot describes, “once a conspiracy video is favored by the A.I., it gives an incentive to content creators to upload additional videos corroborating the conspiracy. In turn, those videos increase the retention statistics of the conspiracy. Next, the conspiracy gets recommended further. Eventually, the large amount of videos favoring a conspiracy makes it appear more credible.” As a result, the problem snowballs.
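
A toy simulation makes the snowball easier to see. In the sketch below, every number is invented purely for illustration: a ranker weights videos by how long they hold attention, creators upload more of whatever just got recommended, and the conspiracy share of recommendations creeps upward round after round. It is a cartoon of the dynamic Chaslot describes, not a model of YouTube’s real system.

```python
# A deliberately crude model of the feedback loop: videos that hold attention
# longer get recommended more than their share of the catalogue, and creators
# upload more of whatever gets recommended. All numbers are made up.

WATCH_WEIGHT = {"conspiracy": 1.4, "factual": 1.0}  # assumed relative watch time


def recommended_share(catalogue):
    """Fraction of recommendations that go to conspiracy videos when the
    ranker weights each video by expected watch time."""
    weighted = {kind: count * WATCH_WEIGHT[kind] for kind, count in catalogue.items()}
    return weighted["conspiracy"] / sum(weighted.values())


def run(rounds=10, uploads_per_round=500):
    catalogue = {"conspiracy": 100, "factual": 900}  # assumed starting mix
    for step in range(rounds):
        share = recommended_share(catalogue)
        # Creators chase recommendations: new uploads mirror the recommended mix.
        new_conspiracy = int(uploads_per_round * share)
        catalogue["conspiracy"] += new_conspiracy
        catalogue["factual"] += uploads_per_round - new_conspiracy
        print(f"round {step}: {share:.1%} of recommendations point at conspiracy videos")


if __name__ == "__main__":
    run()
```

Even with a modest watch-time advantage, the conspiracy share of recommendations only ever moves in one direction, which is the point Chaslot is making.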

It’s not just fake news or hoaxes that are involved in these organized propaganda campaigns, Tow Center researcher Jonathan Albright notes. He looked at more than 200,000 tweets that were connected to Russian troll accounts — tweets that were provided to NBC by Twitter insiders before they were deleted — and analyzed them based on the content they were linking to. Many of them distributed real news stories from traditional sources, but in a way that was designed to promote a specific pro-Trump agenda.
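
The mechanics of that kind of analysis are straightforward: pull every link out of the tweets, reduce each one to its domain, and see which outlets dominate. The sketch below assumes a hypothetical tweets.csv export with a text column, which is not the actual NBC dataset; in practice most of the links would also be t.co redirects that have to be resolved before the counts mean much.

```python
# Minimal sketch of link-domain analysis over a tweet dump. Assumes a
# hypothetical tweets.csv with a "text" column; real Twitter links are usually
# t.co redirects that would need resolving first.
import csv
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")


def domain_counts(path):
    """Count how many tweets link to each domain."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            for url in URL_RE.findall(row["text"]):
                domain = urlparse(url).netloc.lower()
                if domain.startswith("www."):
                    domain = domain[len("www."):]
                if domain:
                    counts[domain] += 1
    return counts


if __name__ == "__main__":
    for domain, n in domain_counts("tweets.csv").most_common(20):
        print(f"{n:6d}  {domain}")
```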

When The Guardian wrote about his research, Chaslot says, representatives from Google and YouTube criticized his methodology and tried to convince the paper not to run the story, promising to publish a blog post refuting his claims. No such post was ever published. The company said it “strongly disagreed” with the research — but after Senator Mark Warner raised concerns about YouTube promoting what he called “outrageous, salacious and often fraudulent content,” Google thanked the paper for doing the story.

After The Wall Street Journal reproduced some of Chaslot’s findings, the head of YouTube’s recommendations team said, “We recognize that this is our responsibility, and we have more to do.” The search giant has come under fire for a number of similar problems in the past, including an incident in which a fake news story was one of the top recommended links related to the mass shooting in Las Vegas. Google says it is trying to surface “more authoritative” content when people look for hoaxes or conspiracy theories.

“They have made some changes to the search algorithm so it recommends more high-quality content,” says Chaslot, “but if you look at what is recommended, it is still very divisive politically.” In the US this might not be a problem because of the country’s strong democracy and a culture of respect for the First Amendment, he says, “but in some countries where you don’t have that culture it could be a much worse problem. There is the same issue in France, where recommendations quickly get into conspiracy theories.”

Platforms like YouTube and Facebook “seem very democratic, because anyone can click the like button and have a vote on the content,” Chaslot says. “But if you know how the system works, if you’re a Russian troll or someone like that, you can figure out how to have a lot more impact, because you know how to organize your content, when to publish, and a lot of other things that increase the probability of your video being seen.”

Google and Facebook often say that they don’t want to get into the business of deciding what is true and what isn’t, but Chaslot describes this argument as “total bullshit.” Both platforms could easily create the kinds of tools or processes used on a site like Wikipedia, he says, where a group of moderators decides what information to keep and what not to keep. “There are lots of tools they could try, but they don’t really have any interest in doing it,” Chaslot says. “They have the money to do it, and there are people working there who want to do it, but they don’t bother to try and do it because there is no incentive to do so.”

Lisa-Maria Neudert is part of a team of researchers who work on the Oxford Internet Institute’s computational propaganda project. In a recent report, the Institute looked at how and where “fake news” stories and related content were shared on Twitter and Facebook, and found that those who shared such posts tended to be Trump supporters or from the conservative end of the political spectrum.

Propaganda isn’t new, says Neudert. But what is new is the ease with which it can be created and distributed, and the speed with which such campaigns can be generated — and the fact that they can be targeted to specific individuals or groups, thanks to Facebook and Google’s ad technologies.

“This ability to have mass distribution at extremely low cost enables propaganda at an entirely different scale, one we’ve never seen before,” she says. “And it uses all of the information that we as users are consciously and unconsciously providing, to produce individualized propaganda.”

In a sense, just as Facebook and Google and Twitter have democratized social communication and media, they have also democratized propaganda. “Social media has shifted the capability of designing propaganda to regular users,” says Neudert. “So it’s no longer something that is created by big companies or governments — now the everyday lay person can make a propaganda campaign or a disinformation site or create a bot army.”

For example, critics say Twitter has made it easy for groups or even individuals to mount what some call “astroturfing” campaigns, which are designed to give the impression that there is widespread support for certain views. The service allows users to create and distribute sponsored posts for entirely fictitious organizations, without even having a Twitter account or a website to point to.

A non-profit group called the Alliance for Securing Democracy, which is funded by the German Marshall Fund, runs a site called Hamilton68 that tracks the behavior of Russian troll accounts. The site has shown that those accounts organize around specific news hashtags, including ones used prior to the release of the Nunes memo and hashtags used following the Parkland shootings.

The social platforms have been slow to realize just how integral a role they play in this new form of disinformation, Neudert argues.

“I think [Facebook] has had a rude awakening, that the way they structure their platforms has contributed to this problem, but it has been a slow awakening,” she says. “It was only after months and months of pressure that we saw some of the data being shared, and they still haven’t shared even a small part of the massive amounts of data they have. If they shared more, I think maybe we could come up with better solutions.”

Some of Facebook’s proposed news-feed changes could actually make the disinformation problem worse, Neudert says.

“The content that is the most misleading or conspiratorial, that’s what’s generating the most discussion and the most engagement, and that’s what the algorithm is designed to respond to,” she says. “So it promotes these kinds of issues even more by exploiting the way that human attention works. The environment maximizes for outrage. They say they want more meaningful conversation, but it’s not clear how they are going to define that.”

 
