YouTube says it wants to fix things, but it feels like too little, too late

Most of the attention around “fake news” and misinformation so far has tended to focus on Facebook, in part because of its enormous size, and because of the role that Russian trolls and data-driven targeting by organizations like Cambridge Analytica may (or may not) have played in the US election. But YouTube is also a gigantic misinformation machine, as CJR has pointed out a number of times, and yet it seems to be almost incapable of dealing with the fallout from the machinery it has created.

A piece in the latest issue of BusinessWeek is titled “YouTube’s Plan to Clean Up the Mess That Made It Rich,” but there doesn’t seem to be any real plan per se, or if there is, the article doesn’t describe it in any detail. It appears to consist of hiring more moderators to police content, and/or working on artificial intelligence as a way of flagging the worst offenders—in other words, more or less the same solution that Facebook has offered when it gets criticized for similar things.

And much like Facebook, whose CEO Mark Zuckerberg admitted in an interview that for the first 10 years of its existence the company simply didn’t think about the negative aspects of the technology it was creating, YouTube would like us to believe that most of these problems came as a complete surprise. Just the growing pains of a hyperactive and rapidly expanding toddler, in other words:

In interviews at the San Bruno complex, YouTube executives often resorted to a civic metaphor: YouTube is like a small town that’s grown so large, so fast, that its municipal systems—its zoning laws, courts, and sanitation crews, if you will—have failed to keep pace. “We’ve gone from being a small village to being a city that requires proper infrastructure,” Kyncl says. “That’s what we’ve been building.”

The problem with that kind of argument, whether it comes from Facebook or Google, is that hundreds of thousands of smart people have been building this machinery for more than a decade. These are not country bumpkins in a small town somewhere. To assume that no one ever suggested or thought about the potential negative aspects of these networks defies belief. The only other explanation is that those concerns simply weren’t seen as important enough, or at least not as important as growth itself.

Former YouTube executive Hunter Walk tells BusinessWeek that resources were gradually pulled away from efforts to improve the environment on the network. And former YouTube engineer Guillaume Chaslot tells the magazine the same thing he told CJR for a piece on “computational propaganda”: that his suggestions for keeping the recommendation engine from promoting conspiracy theories and fake news were rejected in favor of a single-minded focus on growth and engagement.

Is the tide turning? Perhaps. But even as YouTube and Facebook say they are committed to solving these problems, their revenue continues to grow at eye-popping rates: analysts estimate YouTube’s revenues are in the $22-billion range, and Facebook’s revenue climbed by almost 50 percent in the latest quarter, to $12 billion. In other words, even the high-profile fallout from the Cambridge Analytica data leak doesn’t seem to be having much impact on Facebook’s bottom line. What incentive is there to attack any of these problems when the overall business is going so well?
