Spotlight on fake news and disinformation turns toward YouTube

So far, Facebook has taken most of the heat when it comes to spreading misinformation, thanks to revelations about how Russian trolls used the network in an attempt to influence the 2016 election. But now YouTube is also coming under fire for being a powerful disinformation engine.

At Congressional hearings into the problem in November, where representatives from Facebook, Google and Twitter were asked to account for their actions, Facebook took the brunt of the questions, followed closely by Twitter. Google, however, argued that since it’s not really a social network in the same sense that Facebook and Twitter are, it therefore doesn’t play as big a role in spreading fake news.

This was more than a little disingenuous. While it may not run a social network like Facebook (its attempt at doing so, known as Google+, failed to catch on), Google does own the world’s largest video platform, and YouTube has played—and continues to play—a significant role in spreading misinformation.

This becomes obvious whenever there is an important news event, especially one with a political aspect, such as the mass shooting in Las Vegas last October—where fake news showed up at the top of YouTube searches—or the recent school shooting in Parkland, Florida, where 17 people died.

After the Parkland shootings, YouTube highlighted conspiracy theories about the incident in search results and in its recommended videos. At one point, eight out of the top 10 recommended videos that appeared for a search on the name of one of the students who survived the shooting either promoted or talked about the idea that he was a so-called “crisis actor” and not a real student.

When this was mentioned by journalists and others on Twitter, the videos started disappearing one by one, until the day after the shooting there were no conspiracy theories left in the top 10 search results. But in the meantime, each of those videos got thousands or possibly tens of thousands of views it likely would not have gotten had it not been recommended.

This kind of thing isn’t just a US problem. YouTube has become hugely popular in India with the arrival of cheap data plans for smartphones, and after a famous actress died recently, the trending list on YouTube for that country was reportedly filled with fake news.

In part, the popularity of such content is driven by human nature. Conspiracy theories are often much more interesting than the real facts about such an event, in part because they hint at mysteries and secrets that only a select few know about. That increases the desire to read them, and to share them, and social platforms like Facebook, Twitter and YouTube play on this impulse.

Human nature, however, is exacerbated by the algorithms that power these platforms, creating a vicious circle. YouTube’s algorithm tracks people clicking on and watching conspiracy theory videos and assumes that this kind of content is popular and that people want to see more of it, so it moves those videos higher in the rankings. That in turn causes more people to see and click on them.

The result is that users are pushed towards more and more polarizing or controversial content, regardless of the topic, as sociologist Zeynep Tufekci described recently in The New York Times. The platform has become “an engine for radicalization,” she says.

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Guillaume Chaslot, a programmer who worked at Google for three years, told CJR recently he noticed a similar phenomenon while working on the YouTube recommendation algorithm. He says he tried to get the company interested in implementing fixes to help solve it, but was told that what mattered was that people spent lots of time watching videos, not what kind of videos they were watching.

“Total watch time was what we went for—there was very little effort put into quality,” Chaslot says. “All the things I proposed about ways to recommend quality were rejected.”

After leaving Google, Chaslot started collecting some of his research and making the results public on a website called Algotransparency.org. Using software that he created (and has made public for anyone to use), he tracked the recommendations provided for YouTube videos and found that in many cases they are filled with hoaxes, conspiracy theories, fake news and other similar content.

Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, has done his own research on YouTube, including a study in which he catalogued all of the videos the site recommended after a hypothetical user clicked on a “crisis actor” video. What he found was a network of more than 9,000 conspiracy-themed videos, all of which were recommended to users as the “next up” video after they watched one claiming the Parkland shooting was a hoax.

“I hate to take the dystopian route, but YouTube’s role in spreading this ‘crisis actor’ content and hosting thousands of false videos is akin to a parasitic relationship with the public,” Albright said in a recent blog post about his research. “This genre of videos is especially troublesome, since the content has targeted (individual) effects as well as the potential to trigger mass public reactions.”

Former YouTube head of product Hunter Walk said recently that at one point he proposed bringing in news articles from Google News or even tweets to run alongside and possibly counter fake news or conspiracy theories, rather than taking them down, but that proposal was never implemented—in part because growing Google+ became more important than fixing YouTube.

Google has taken some small steps along those lines to try and resolve the problem. This week, YouTube CEO Susan Wojcicki said at the South by Southwest conference that the service will show users links to articles on Wikipedia when they search for known hoaxes about topics such as the moon landing. But it’s not clear whether this will have any impact on users’ desire to believe the content they see.

Google has also promised to beef up the number of moderators who check flagged content, and has created what it calls an “Intelligence Desk” in order to try and find offensive content much faster. And it has said that it plans to tweak its algorithms to show more “authoritative content” around news events. One problem with that, however, is it’s not clear how the company plans to define “authoritative.”

The definition of what’s acceptable also seems to be in flux even inside the company. YouTube recently said it had no plans to remove a channel called Atomwaffen, which posts neo-Nazi content and racist videos, and that the company believed adding a warning label “strikes a good balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

After this decision was widely criticized, the site removed the channel. But similar neo-Nazi content reportedly remains available on other channels. There have been reports that Infowars, the channel run by alt-right commentator Alex Jones, has had videos removed, and that the channel is close to being removed completely. At the same time, some other controversial channels have been reinstated after YouTube said they were removed in error by moderators.

In her talk at South by Southwest, Wojcicki said that “if there’s an important news event, we want to be delivering the right information,” but then added that YouTube is “not a news organization.” Those two positions seem to be increasingly incompatible, however. Facebook and YouTube both say they don’t want to become arbiters of truth, and yet they want to be the main news source for information about the world. How much longer can they have it both ways?

YouTube wants the news without the responsibility

After coming under fire for promoting fake news, conspiracy theories and misinformation around events like the Parkland school shooting, YouTube has said it is taking a number of steps to try and fix the problem. But the Google-owned video platform still appears to be trying to have its cake and eat it too when it comes to being a media entity.

This week, for example, YouTube CEO Susan Wojcicki said at the South by Southwest conference in Texas that the service plans to show users links to related articles on Wikipedia when they search for videos on topics that are known to involve conspiracy theories or hoaxes, such as the moon landing or the belief that the earth is flat.

Given the speed with which information moves during a breaking news event, however, this might not be a great solution for situations like the Parkland shooting, since Wikipedia edits often take a while to show up. It’s also not clear whether doing this will have any impact on users’ desire to believe the content they see.

In addition to those concerns, Wikimedia said no one from Google notified the organization (which runs Wikipedia) of the YouTube plan. And some of those who work on the crowdsourced encyclopedia have expressed concern that the giant web company—which has annual revenues in the $100-billion range—is essentially taking advantage of a non-profit resource, instead of devoting its own financial resources to the problem.

Google seems to want to benefit from being a popular source for news and information without having to assume the responsibilities that come with being a media entity. In her comments at SXSW, Wojcicki said “if there’s an important news event, we want to be delivering the right information,” but then quickly added that YouTube is “not a news organization.”

This feels very similar to the argument that Facebook has made when it gets criticized for spreading fake news and misinformation—namely, that it is merely a platform, not a media entity, and that it doesn’t want to become “an arbiter of truth.”

Until recently, Facebook was the one taking most of the heat on fake news, thanks to revelations about how Russian trolls used the network in an attempt to influence the 2016 election. At Congressional hearings into the problem in November, where representatives from Facebook, Google, and Twitter were asked to account for their actions, Facebook took the brunt of the questions, followed closely by Twitter.

At the time, Google argued that since it’s not a social network in the same sense as Facebook and Twitter, it therefore doesn’t play as big a role in spreading fake news. This was more than a little disingenuous, however, since it has become increasingly obvious that YouTube has played and continues to play a significant role in spreading misinformation about major news events.

Following the mass shooting in Las Vegas last October, fake news about the gunman showed up at the top of YouTube searches, and after the Parkland incident, YouTube highlighted conspiracy theories in search results and recommended videos. At one point, eight out of the top 10 results for a search on the name of one of the students either promoted or talked about the idea that he was a so-called “crisis actor.”

When this was mentioned by journalists and others on Twitter, the videos started disappearing one by one, until the day after the shooting there were no conspiracy theories in the top 10 search results. But in the meantime, each of those videos got thousands or tens of thousands of views they might otherwise not have gotten.

Misinformation in video form isn’t just a problem in the US. YouTube has also become hugely popular in India with the arrival of cheap data plans for smartphones, and after a famous actress died recently, YouTube’s trending section for India was reportedly filled with fake news.

Public and media outrage seems to have helped push Google to take action in the most recent cases. But controversial content on YouTube has also become a hot-button issue because advertisers have raised a stink about it, and that kind of pressure has a very real impact on Google’s bottom line, not just its public image.

Last year, for example, dozens of major-league advertisers—including L’Oréal, McDonald’s and Audi—either pulled or threatened to pull their ads from YouTube because they were appearing beside videos posted by Islamic extremists and white supremacists. Google quickly apologized and promised to update its policies to prevent this from happening.

The Congressional hearings into Russian activity also seem to have sparked some changes. One of the things that got some scrutiny in both the Senate and House of Representatives hearings was the fact that Russia Today—a news organization with close links to the Russian government—was a major user of YouTube.

Google has since responded by adding warning labels to Russia Today and other state broadcasters to note that they are funded by governments. This move has caused some controversy, however: PBS complained that it got a warning label, even though it is funded primarily by donations and only secondarily by government grants.

As well-meaning as they might be, however, warning labels and Wikipedia links aren’t going to be enough to solve YouTube’s misinformation problem, because to some extent it’s built into the structure of the platform, as it is with Facebook and the News Feed.

In a broad sense, the popularity of fake news is driven by human nature. Conspiracy theories and made-up facts tend to be much more interesting than the truth about an event, in part because they hint at mysteries and secrets that only a select few know about. That increases the desire to read them, and to share them. Social services like Facebook, Twitter, and YouTube tend to promote content that plays on this impulse because they are looking to boost engagement and keep users on the platform as long as possible.

Human nature, however, is exacerbated by the algorithms that power these platforms. YouTube’s algorithm tracks people clicking and watching conspiracy theory videos and assumes that this kind of content is very popular, and that people want to see more of it, so it moves those videos higher in the rankings. That in turn causes more people to see them.

The result is that users are pushed towards more and more polarizing or controversial content, regardless of the topic, as sociologist Zeynep Tufekci described in a recent New York Times essay. The platform, she says, has become “an engine for radicalization.”

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Guillaume Chaslot, a programmer who worked at Google for three years, told CJR recently he noticed a similar phenomenon while working on the YouTube recommendation algorithm. He says he tried to get the company interested in solving it, but was told that what mattered was that people spent lots of time watching videos, not what kind of videos they were watching.

Former YouTube head of product Hunter Walk said recently that at one point he proposed bringing in news articles from Google News or even tweets to run alongside and possibly counter fake news or conspiracy theories, rather than taking them down, but that proposal was never implemented—in part because Google executives made it clear that growing Google+ was a more important goal than fixing YouTube.

In addition to adding Wikipedia links, Google has also promised to beef up the number of moderators who check flagged content, and has created what it calls an “Intelligence Desk” in order to try and find offensive content much faster. And it has said that it plans to tweak its algorithms to show more “authoritative content” around news events. One problem with that, however, is it’s not clear how the company plans to define “authoritative.”

The definition of what’s acceptable also seems to be in flux even inside the company. YouTube recently said it had no plans to remove a channel called Atomwaffen, which posts neo-Nazi content and racist videos, and that the company believed adding a warning label “strikes a good balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

After this decision was widely criticized, the site removed the channel. But similar neo-Nazi content reportedly remains available on other channels. There have been reports that Infowars, the channel run by alt-right commentator Alex Jones, has had videos removed, and that the channel is close to being removed completely, although YouTube denies this. At the same time, some other controversial channels have been reinstated after YouTube said they were removed in error by moderators.

Facebook and YouTube both say they want to be the main news source for information about the world, but they also say they don’t want to be arbiters of truth. How long can they continue to have it both ways?
