Google plays hardball with European news publishers

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

While the US obsessed on Wednesday over what technically constitutes impeachment for a sitting American president, some European news publishers may have been focused on something quite different: a decision by Google to play hardball with French media companies over how it links to their content in its search results. As of Wednesday, unless a French publisher specifically opts in, the search giant will no longer include short excerpts from news stories in its results. Instead, there will just be a headline. It’s not exactly clear how this will look in practice—in an earlier mockup of results with publisher text excluded, there was just a big white space where the excerpt and image were supposed to go.

Why is Google doing this? Because the French government recently passed a law that requires the search company to pay publishers if it uses even short excerpts of their content on its search pages. The French law implements a provision of the recently adopted European Union copyright directive known as Article 11, which says that publishers are entitled to compensation for the use of even small chunks of text, a payment some refer to as a “link tax.” That provision was in turn inspired by earlier attempts in individual EU countries to get Google and others to pay for excerpts: Germany tried with its Leistungsschutzrecht für Presseverleger law in 2013, and Spain tried with a similar law in 2014. In Germany, a number of publishers had their excerpts removed from Google News when Google refused to pay, but they relented after their traffic collapsed by as much as 40 percent. In Spain, Google responded by shutting down Google News in the country entirely.

Google maintains that its news excerpts send publishers a huge amount of traffic—as the company’s head of news, Richard Gingras, pointed out in a blog post on Wednesday—and that this in turn generates advertising revenue. Publishers, however, note that ad revenue is falling, in part because Google and Facebook control the lion’s share of the market. That is one reason Google also likes to highlight the Google News Initiative, through which the company funds research and development (and even the creation of entirely new local news outlets, through partnerships in both the UK and the US). The initiative has roots that go back to 2006, when Belgian newspaper publishers became the first to sue Google for using their content without consent. The two sides eventually settled: Google agreed to fund research and development for the industry, and later offered similar deals to publishers in France and other countries.


The Facebook Supreme Court will see you now

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

A year and a half ago, Mark Zuckerberg floated what seemed like a crazy idea. In an interview with Ezra Klein of Vox Media, the Facebook CEO said he was thinking about creating a kind of Supreme Court for the social network—an independent body that would adjudicate some of the hard decisions about what kinds of content should or shouldn’t be allowed on the site’s pages, decisions Facebook routinely gets criticized for. Imagine, Zuckerberg said, “some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech.” It wasn’t just a pipe dream or an offhand comment: for better or worse, Facebook has been hard at work over the past year creating just such an animal, which it is now calling an “Oversight Board.” This week, it took another in a series of steps towards that goal by publishing the charter that will govern the board’s actions, along with a document showing how it incorporated feedback from experts and interested parties around the world and a description of how the governance process will work for this third-party entity.

As part of the roadshow for this effort, Facebook held an invitation-only conference call with journalists, in which the head of the governance team took pains to describe just how much work the company put into gathering feedback on the idea. Facebook held six in-depth workshops and 22 roundtables featuring journalists, privacy experts, digital-rights activists, and constitutional scholars from 88 different countries, and it collected 1,200 written submissions—all of which sounds very impressive until you remember that Facebook has more than 35,000 employees and revenues of more than $56 billion. And what did the company come up with? The charter describes an independent body that will start with 11 members and eventually grow to as many as 40, who will hear cases in panels of five. Some cases will be referred by Facebook, while others will come from appeals launched by users whose content has been removed for a variety of reasons.

In an attempt to keep the board as independent as possible, Facebook says it will appoint two co-chairs, who will then be free to select whomever they wish to fill out the rest of the membership. And Facebook will not compensate board members directly, for fear of creating the perception of a conflict of interest—compensation will come from a trust that the company will set up (and fund), which will also be run independently. The charter states that members can’t be removed over specific decisions they make; they can only be disqualified if they breach the code of conduct set out in the charter. Most important of all, Facebook says, decisions made by the board are binding, which means they can’t be overruled by the company unless the changes required to comply would violate the law, or unless the board recommends something that is technically impossible.


Source hacking: How trolls manipulate the media

Note: This is something I originally published in the daily newsletter sent out by the Columbia Journalism Review, where I’m the chief digital writer

Most people are probably familiar by now with the idea that there are “trolls” on the Internet—thanks in part to events like GamerGate, but also to the rise of Donald Trump, the Troll-in-Chief who occupies the White House. Many trolls have an agenda of some kind, as the infamous Russian Internet Research Agency did, while others seem to just get a kick out of creating chaos. As Alfred said to Bruce Wayne in The Dark Knight, “some men just want to watch the world burn.” Regardless, there are similarities in how trolls work, and in how they capture the attention of regular Internet users—and, in some cases, professional journalists—in order to spread their disinformation far and wide. Creating a taxonomy of those tactics is the aim of a new report published by the digital think tank Data & Society, written by Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Kennedy School, and senior researcher Brian Friedberg.

The report focuses on a subset of online manipulation that Donovan calls “source hacking”: a set of techniques for hiding the sources of problematic information in order to permit its circulation in mainstream media—an indirect method of targeting journalists by planting false information in places they are likely to encounter it. The report breaks the tactics down into four categories: 1) Viral Sloganeering, repackaging reactionary talking points for social media and press amplification; 2) Leak Forgery, prompting a media spectacle by sharing forged documents; 3) Evidence Collages, documents (usually images) that compile information or misinformation from multiple sources so as to be easily shareable; and 4) Keyword Squatting, the strategic domination of keywords and “sock-puppet” accounts in order to misrepresent the behavior of specific groups or individuals.

Donovan and Friedberg use recent case studies to illustrate each category. One of the most successful examples of viral sloganeering, for instance, was the “Jobs Not Mobs” hashtag from October 2018. The slogan emerged first in Reddit threads, where users came up with visual memes to help the hashtag spread, including video clips showing decontextualized riots and migrant caravans. “Easily shareable audiovisual material, alongside the deployment of a hashtag, created opportunities for a swarm of participation, and the slogan quickly grew past its point of origin in far-right online hubs,” the report says. The slogan moved to Twitter and Facebook, where automated or bot-like accounts helped it spread even further, and finally the hashtag was used by the president of the United States in a tweet—the Mount Everest of trolling.


YouTube tries to have its cake and eat it too

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Google would really like everyone to know that its video-sharing service, YouTube, is on the job when it comes to cracking down on offensive content. To that end, the company put on a full-court press this week, announcing that it had removed more than 100,000 videos and over 17,000 channels for violating its hate-speech rules between April and June, five times as many as it removed in the previous three months. The company said it also took down over 500 million comments that included hate speech. According to a blog post about the crackdown, YouTube’s moderators removed about 30,000 videos last month alone. And how popular were those videos compared with the rest of the content on the service? The company would like you to know that they “generated just 3% of the views that knitting videos did over the same time period.”

In other words, instead of getting actual usable information, we get a comparison to something else we also haven’t been given any details on, in a way that provides an illusion of transparency. How popular are knitting videos compared to the rest of what appears on YouTube? We have no idea. But we know they are just about as popular as the 30,000 videos the company removed, which we also know nothing about other than that they breached the site’s terms and conditions. That means we know next to nothing, and that seems to be the way YouTube would like to keep it. As far as the company is concerned, getting upset about people viewing offensive content is like getting upset about knitting videos. YouTube’s Community Guidelines Enforcement Report is similar: filled with impressive-looking numbers, but little useful detail.

But even the illusion of transparency is better than what the company usually offers when it removes or reinstates accounts and videos. Just days before the announcement about the removals, for example, YouTube reinstated two controversial accounts that it had previously removed after much criticism—one belonging to white nationalist Martin Sellner, and another belonging to a British YouTube broadcaster who calls himself The Iconoclast, both of whom have ties to the white supremacist movement, including to the shooter who opened fire on two mosques in Christchurch, New Zealand (video of the shooting spread widely on YouTube). Why the sudden change of heart on these two and their use of YouTube’s platform? Are new criteria being applied? All the company would say was that while many “may find the viewpoints expressed in these channels deeply offensive,” it had decided the channels did not violate its community guidelines after all.
