Is there more than a whiff of schadenfreude to media coverage of Facebook?

Scott Rosenberg touched off a minor firestorm in the media-sphere with a post at Axios this week, in which the veteran technology writer argues that at least some of the enthusiasm with which media companies are covering Facebook’s trials and tribulations stems from their resentment over how the company has stolen their readers and advertising revenue. Here’s his argument in a nutshell:

Outrage over Facebook’s misuse of user data and failure to rein in election fraud is real. But the zeal that media outlets bring to their Facebook coverage is personal, too. It’s turbocharged because journalists, individually and collectively, blame Facebook — along with other tech giants, like Google, and the internet itself — for seducing their readers, impoverishing their employers, and killing off their jobs. This blame war is the latest phase of a decades-long grudge match between traditional media companies and new technology giants.

This theory sparked a raft of responses from journalists of all stripes, including everything from a brief “LOL—sorry, but this is BS” from USA Today reporter Jessica Guynn to a full-throated denial by Eric Levitz in New York magazine, who argued that even if Facebook and Google have crippled the media industry, that doesn’t mean coverage of their power and influence is motivated by anything other than a desire to expose that power and influence, and to question its impact on civil society (including journalism).

It would be one thing if Axios presented a litany of libelous errors that journalists had made in the course of covering Silicon Valley with a vengeance. But if this alleged resentment isn’t producing misinformation, then what is the point of insinuating that critical coverage of Facebook is rooted in personal grievance? Who is served by such unsubstantiated insinuations?

Others noted that BuzzFeed has been relentless in exposing the details of various stories involving Facebook, despite the fact that it owes more of its livelihood to the social network than just about any other media entity, and therefore might be assumed to be more favorably inclined toward it, not less. And some pointed out that in the early days of Facebook, a case could be made that coverage of the company was overly positive, making the current critical approach more of a return to normal than an outlier.

Some saw the Axios piece primarily as marketing—a way of indicating that Axios is sympathetic to the tech giants and their complaints about overly critical coverage. And a few noted that the media startup, which is run by Politico co-founder Jim VandeHei, had a partnership with Facebook that involved a series of exclusive interviews with senior executives.

One of the other implications in Rosenberg’s piece is that Facebook and Google didn’t just steal audience and revenue from publishers because they had natural advantages, but because media executives failed to adapt quickly enough to the internet, and then in a desperate attempt to catch up, handed over too much of their business to Facebook and Google. Whether anyone in the traditional media industry wants to admit it or not, that point has more than a little truth to it.

Fake news, clickbait still doing well after Facebook algorithm change

Earlier this year, Facebook announced a change to its News Feed algorithm, one designed to reduce the visibility of news — apart from outlets deemed to be high-quality or trusted sources — in favor of posts by people and pages that encouraged what the social network called “meaningful interaction.” What effect has this had on the flow of news, real and fake? According to a recent study from media-monitoring firm Newswhip, real news is still being shared widely, but so are obvious misinformation and clickbait.

In the report, entitled “Navigating the Facebook Algorithm Change,” the firm looked at the most widely shared articles and found that more than half of the top 100 were hard news stories or reporting on current events, including the death of famed astrophysicist Stephen Hawking. But clearly falsified stories also showed up fairly high in the rankings: Number 26 on the most-shared list was a report from a fake news site called Your Newswire claiming that the flu shot is causing a “disastrous flu outbreak.” That story got more than 850,000 engagements.

Although the algorithm change has resulted in a decline in traffic to some sites (apparently including conservative sites, which complain that they have been deliberately targeted), Newswhip’s analysis shows that some news outlets have actually seen an increase in traffic since the change, including NBC and Fox News. January was the strongest month for Fox since October 2017, Newswhip says, but NBC eclipsed it and took the top spot for the first time since January of last year. More niche sites have suffered:

Some Pages that our data showed a more serious decline in average engagements were UNILAD, Student Problems, 9Gag, Cosmopolitan, and Architecture & Design, though some of these are starting to show some recovery. For these publishers, it might be time to look at what role their content actually serves their followers — is it connecting them to one another, teaching them something new, making them pause… or is it just adding to a landscape of digital waste?

Newswhip also found that Facebook has been true to its word and has been favoring posts that get more engagement, particularly comments. Over the past couple of years, comments have averaged about 5 percent of the overall engagement on the top 100 posts, but since the algorithm change they have made up more than 11 percent of total engagement on the top 100 most-shared posts. Most of the posts that drew large numbers of comments were funny clickbait-style videos.
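To make the metric concrete, here is a minimal sketch of how a comment share like that might be computed. The engagement numbers below are invented for illustration; they are not Newswhip’s data.

```python
# Illustrative only: comments as a share of total engagement across a
# set of top posts. The engagement numbers are invented, not Newswhip's
# data; real counts would come from a social-monitoring tool.

posts = [
    # (reactions, comments, shares)
    (120_000, 9_500, 30_000),
    (80_000, 14_200, 22_000),
    (45_000, 3_100, 9_000),
]

total_engagement = sum(r + c + s for r, c, s in posts)
total_comments = sum(c for _, c, _ in posts)

print(f"Comments are {total_comments / total_engagement:.1%} of total engagement")
```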

Facebook struggles to get out from under its privacy debacle

Facebook CEO Mark Zuckerberg has been having a bad week, and it’s probably going to get worse before it gets better, as the company continues to take fire from all sides because of the way it allowed personal data on more than 50 million users to be misused by a firm called Cambridge Analytica. Facebook has shut down the specific method used in that case — an app that hoovered up not just the data of those who signed up for it, but also personal information shared by any of their friends — but the incident has touched off a debate over the social network’s privacy protections that has reached as far as Washington, DC and the European Union.

On Wednesday, Facebook tried to show that it is listening to its critics by updating its privacy settings to make it easier for users to find out what they are sharing and with whom, and then change those settings if necessary. Of course, as more than one long-time Facebook watcher pointed out, the company has done this on multiple occasions in the past whenever it has run afoul of privacy rules, and not much seems to change. Tim Wu, a law professor and former staffer at the Federal Trade Commission, has also noted that in the consent decree Facebook signed with the regulator in 2011, the company agreed to take better care of its users’ data.

Congress has asked Zuckerberg to appear at a hearing into the incident, and according to reports by CNN and others, the Facebook co-founder plans to show up — unlike at the last hearings the company was called to attend, when the Senate and House intelligence committees questioned Facebook, Google and Twitter about whether Russian trolls used their platforms to try to influence the 2016 election. Zuckerberg didn’t appear at those hearings; instead he sent his legal counsel, as did both Google and Twitter. This time, Congress has made it clear that it wants to hear from the man himself, not one of his deputies.

The United Kingdom may have to make do with a stand-in, however. Legislators in Britain have also asked Zuckerberg to appear before them to answer questions about Cambridge Analytica’s use of Facebook data, but sources close to the company told Reuters that the co-founder and CEO won’t be attending. Meanwhile, the turmoil caused by the Cambridge revelations continues for Facebook: It announced on Tuesday that it has delayed plans to launch a “smart assistant” device similar to the Google Home or Amazon Echo, concerned that people might not react well to a Facebook-branded, always-on listening device.

  • In addition to tough questions from Washington and the UK, Facebook is also getting grilled by legislators in India, according to a report by BuzzFeed. They want to know whether companies like Cambridge Analytica have used Facebook data to try to influence elections in India, and they have issued the social network an official notice asking how it plans to keep its platform from being exploited in that way. India has several state elections happening this year and national elections next year.
  • Although hashtags like #DeleteFacebook have been trending on some social networks since the Cambridge data leak news broke, it’s not clear how much of a backlash there is at the user level. Some corporations, however, have deleted their pages, including Tesla and SpaceX, both owned by maverick billionaire Elon Musk. And on Wednesday, Playboy magazine said that it was removing itself from Facebook because of the data leak, but also because the social network’s policies are “sexually repressive.”
  • While there may not have been a mass exodus of Facebook users so far, the same is not true for investors. Some shareholders of the company appear to be worried that the backlash could impact Facebook financially, especially if there are more regulations coming that will restrict what it can and can’t do with its users’ data (as there are in Europe). Facebook’s share price has fallen by more than $32 in the past five days, which has shrunk the company’s market cap by almost $100 billion.
  • The Trump campaign wasn’t the only outfit that used Cambridge Analytica’s data and expertise. Both The Economist and the Financial Times were reportedly also clients of the data-analysis firm, which has been accused of meddling in the Brexit vote in the UK as well as the 2016 US election. A Financial Times source told BuzzFeed UK that the paper only did some market-size research with Cambridge Analytica. It wasn’t clear whether any illicit Facebook data was part of the deal or not.

Other notable stories:

  • The New York Times released its diversity report on Wednesday and said that while it has made some progress in employing more women in its newsroom and business operations — with female leadership on both the news side and the business side at 46 percent — it hasn’t been as successful when it comes to people of color. While the number of staff who fall into that category has grown, the percentage of those in leadership roles actually fell last year compared to 2016.
  • Most newspaper companies are doing their best to get out of the print business, but in Toronto there’s a brand-new paper that exists only in print. It’s called the West End Phoenix, and it’s a neighborhood paper published by veteran rock musician Dave Bidini, who lives in the city’s West End and says he wanted to create something that would tell stories about the neighborhood and its residents.
  • Bloomberg spins a fascinating tale about Robert Mercer, who in addition to being a billionaire Trump supporter (who has also helped finance both Breitbart News and Cambridge Analytica), happens to be a volunteer police officer for the tiny town of Lake Arthur in New Mexico — which has a population of about 435 — even though he doesn’t live anywhere near the town, and doesn’t really have any personal connection to it. To find out why, you’ll have to read the story.
  • Vice Media, the alternative-media giant with a valuation in the billions, appears to be running into some headwinds in India. Two senior editors have reportedly resigned from the company over editorial interference, according to a report by The Wire, after a story involving a gay activist who worked for the youth wing of the governing Bharatiya Janata Party was killed.
  • The Tow Center at Columbia has a report in CJR that looks at the problem of disinformation by comparing two very different communities in Philadelphia. In some cases, the authors point out, “a lack of trust in media, issues of perceived relevance, and a sense of relentless negativity have led many readers to vacillate between disengaging from the news for periods of time, and seeking out alternative sources.”
  • In a bizarre incident, New York Daily News reporter Ken Lovett was arrested by state police on Wednesday for using his cellphone in the Senate chamber lobby, in breach of the chamber’s rules. After being detained, he was released by none other than Governor Andrew Cuomo, who personally went to the lockup at the state capitol and had him sprung. “I offered my services on a pro-bono basis—it just does my heart good to be able to say I freed Ken Lovett,” Cuomo said after the incident.

Affiliate ad scammers say Facebook helped them trick users

Most of the attention focused on Facebook right now is aimed at the Cambridge Analytica leak, in which a shadowy Trump-affiliated organization got hold of personal data on 50 million Facebook users and targeted them with ads and fake news during the 2016 election. But this saga is just one example of how Facebook’s targeting features can be misused. As a piece by Bloomberg points out, shady affiliate marketers have been mining the social network for dubious clicks for years, and making millions of dollars by doing so.

The piece goes inside a community of digital grifters and con men who use social networks like Facebook, Twitter and Instagram to fleece people with the promise of untold riches, self-help scams and bogus health remedies. Author Zeke Faux (whose last name seems particularly appropriate in this context) writes about a conference in Berlin last year where Facebook had a large presence. The conference was supposed to be about traditional marketing, but it was filled with shysters and click-farmers:

The Berlin conference was hosted by an online forum called Stack That Money, but a newcomer could be forgiven for wondering if it was somehow sponsored by Facebook Inc. Saleswomen from the company held court onstage, introducing speakers and moderating panel discussions. After the show, Facebook representatives flew to Ibiza on a plane rented by Stack That Money to party with some of the top affiliates… Officially, the Berlin conference was for aboveboard marketing, but the attendees I spoke to dropped that pretense after the mildest questioning. Some even walked around wearing hats that said “farmin’,” promoting a service that sells fake Facebook accounts.

Facebook has taken pains to point out that it doesn’t want this kind of business on the network, and says it has been working hard to get rid of scammers. Rob Leathern, who joined Facebook in 2017 as part of an effort to purge the network of affiliate marketers and similar low-life advertisers, tells Bloomberg that the days when people could make millions with dubious clicks are over. “We are working hard to get these people off the platform. They may get away with it for a while, but the party’s not going to last,” he says.

That could be easier said than done, however, given the head start that Facebook gave to the scammers and their supporters. Faux spends part of the piece profiling one of the men at the heart of this affiliate scam network — a Polish entrepreneur named Robert Gryn, who worked his way up through Stack That Money, then developed software that hundreds of affiliate scammers use to leverage Facebook’s ad targeting machine.

Only a few years ago, Gryn was just another user posting on Stack That Money. Now, at 31, he’s one of the wealthiest men in Poland, with a net worth estimated by Forbes at $180 million. On Instagram, he posts pictures of himself flying on private jets, spearfishing, flexing his abs, and thinking deep thoughts. Last year he posed for the cover of Puls Biznesu, a Polish financial newspaper, with his face, neck, and ears painted gold. Gryn’s prominent cheekbones, toned biceps and forearms, perfectly gelled pompadour, and practiced smile lend him a resemblance to his favorite movie character: Patrick Bateman, the murderous investment banker played by Christian Bale in American Psycho.

Tracking down ads that were placed by Russian trolls and aimed at voters during the 2016 election is complicated, but it might actually be easier than rooting out the kind of scams Bloomberg describes. When their accounts are blocked or banned, affiliate link traders simply set up new ones under other plausible-sounding names, and start again. And the same tools that allowed the Russian trolls to target voters give marketers the ability to push their ads to a vast network of gullible users for pennies per click. Read the whole piece here.

Facebook touches the third rail by mentioning accreditation of journalists

Not surprisingly, the issue of “fake news” and the role that the giant web platforms play in spreading misinformation was a big topic of conversation at the Financial Times “Future of News” conference held this week in New York. But things started to get a little heated when Campbell Brown — Facebook’s head of news partnerships — was asked by moderator Matthew Garrahan if the social network might consider “some sort of accreditation system” as part of its attempts to solve the disinformation problem.

“I think we are moving in that direction,” Brown said, at which point she was interrupted by Google’s VP of News, Richard Gingras, who was also part of the panel discussion (along with Emily Bell, director of Columbia University’s Tow Center for Digital Journalism). Gingras echoed what many journalists were probably thinking when he protested that “from a First Amendment perspective, we don’t want anyone accrediting who a journalist is.”

In tweets sent later to some journalists who made similar criticisms, Brown clarified that what she meant was not accreditation per se, but that in order to stamp out fake news, Facebook might have to verify trusted news organizations “through quality signals or other means.”

Giving what seems like approval to the idea of accreditation might be horrifying for some, since it brings up unpleasant images of countries where the government or dictator in power decides who qualifies as a journalist. But at the same time, Brown’s gaffe is somewhat understandable, because Facebook is currently trapped between a rock and a hard place when it comes to taking action on fake news and misinformation.

On the one hand, the company is being pressed by governments both in the US and elsewhere to do more to remove or de-emphasize fake news, not to mention hate speech, harassment and other negative content. But the more it does that, the more it gets accused of infringing on free speech. And every attempt to rank news outlets on vague concepts such as “quality” or “trust” looks a lot like Facebook deciding who is a journalist and who isn’t.

Until the whole Russian troll fiasco broke out into the open, Facebook could plausibly maintain the fiction that it is just a platform, and that it doesn’t play favorites when it comes to sources of news or any other content (which has never really been the case, of course). But now it is having to grapple with the realities of being a media entity and making editorial decisions about what to include and who to highlight, and that is a completely different ball game.

In their haste to curb bad speech, regulators could endanger all speech

Unless you have a specific interest in sex trafficking or proposed legislation aimed at reducing it, you might not be familiar with a bill that has been making its way through Congress, known as the Stop Enabling Sex Trafficking Act, or SESTA. The bill has already been approved by the House, and on Wednesday it was overwhelmingly approved by the Senate, which means it is on its way to President Trump, and if he signs it — which seems likely, based on his previous comments — SESTA will become law.

Why should you care? Because in the process of trying to combat sex trafficking, Congress could wind up endangering free speech online. As CJR described in a piece on the proposed law last year, when it was still going through the House, SESTA effectively weakens one of the key pillars of online speech: Section 230 of the Communications Decency Act of 1996. That’s the clause that gives platforms like Facebook and Google immunity, or “safe harbor,” for the user-generated content that appears on their platforms.

In a nutshell, Section 230 is the reason Facebook, Google and Twitter can distribute your tweets or status updates or video clips without being legally liable for everything contained in them. SESTA removes that protection, or safe harbor, in cases involving sex trafficking — which wouldn’t be a problem, except that carving out that exception weakens the entire edifice of Section 230.

This isn’t happening in isolation. Section 230’s safe-harbor provisions were already coming under fire from Congress because of the belief that they insulate platforms like Facebook or YouTube from responsibility for other kinds of speech the government doesn’t like, including Russian troll campaigns, “fake news,” sexual harassment, neo-Nazi sentiments and anything that falls into the large and growing bucket labelled “terrorism.”

The risk is that if Section 230’s protections for speech are weakened — as similar protections are being weakened in Europe to try and stamp out fake news and hate speech — it gives everyone from trolls to governments license to go after all kinds of speech they dislike or disagree with. And while Facebook and Google might have the resources to deal with that, lots of smaller publishers and online services don’t.

As Senator Ron Wyden, one of the authors of Section 230, put it on the Senate floor ahead of Wednesday’s vote: “In the absence of Section 230, the internet as we know it would shrivel. Only the platforms run by those with deep pockets, and an even deeper bench of lawyers, would be able to make it.” That would only entrench the dominance that Facebook, Google and other massive platforms already have.

Did the Times change a story because Facebook complained?

It might not have registered for most people trying to keep up with the maelstrom of news this week about the Facebook data leak — the one in which the shadowy, Trump-linked data company Cambridge Analytica got personal details on more than 50 million users — but a number of sharp-eyed New York Times critics noticed that one of the paper’s stories about the topic changed as it was edited.

So what, you might ask? After all, that kind of thing happens on news websites all the time: A short version goes up quickly and then later is replaced by a longer version as more information comes in.

Except in this case, the Times removed a line suggesting that Alex Stamos — a senior Facebook executive in charge of security — wanted to be more open about Russian involvement on the platform, and that Chief Operating Officer Sheryl Sandberg shut him down.

That sent the media conspiracy machine into overdrive. A site called Law & Crime, run by ABC News legal commentator Dan Abrams, noticed the change and wrote a story suggesting that the Times changed the story because of a complaint from Facebook.

“The New York Times apparently offers powerful third parties the ability to edit away—that is, to delete from the internet—unfavorable coverage appearing in the paper of record’s online edition,” the site wrote. The story was picked up by Glenn Greenwald, the occasionally combative journalist who runs The Intercept, who also accused the Times of watering down the story after complaints from Facebook.

https://twitter.com/ggreenwald/status/976258670932701184

Soon others joined the fray, including Kurt Walters of Demand Progress, who tweeted: “The original has multiple sources saying advocacy to disclose info about Russian activities on FB caused friction/resistance by Sandberg & other execs. The second does not.”

To their credit, the Times reporters involved in the story—including Sheera Frenkel and Nicole Perlroth—responded to these allegations at length on Twitter, describing the changes to the story as nothing more than the usual editing process. They and others pointed out that the final version of the story still suggested that Stamos and Sandberg clashed over the former’s desire to be more open about Russian activity; it just didn’t use the same specific sentence or word (“consternation”) as the original.

None of this seemed to dissuade Greenwald, however, who continues to maintain that the Times made a significant change, after receiving criticism from Facebook, and is refusing to acknowledge it:

https://twitter.com/ggreenwald/status/976536458461839366

To be fair to Greenwald and other Times critics, some of this is the paper’s fault. It routinely changes news stories—in some cases significantly—and then never discloses or explains the change. In several cases, such changes became the subject of columns by former Public Editor Margaret Sullivan (the Times no longer has a public editor, having shut down the position last year).

Web geeks have been recommending for some time that the paper—and other publishers—implement a “diffs” approach, which maintains a record of all the changes to an article over time, the way Wikipedia does with its revision-history pages (WikiTribune, the new journalism venture from Wikipedia founder Jimmy Wales, has a similar system).

There is a site called NewsDiffs that tracks changes to Times stories, which is how the latest changes were discovered. But it would be so much easier if that kind of tracking system were built into the Times website. The chances of that seem remote, however. If the Times were interested in talking openly about these kinds of things, it would probably still have a public editor. All we got in this case was a response from the Times PR department on Twitter saying the Law & Crime story was false.
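The core of such a “diffs” system is not much code. Here is a minimal sketch, purely illustrative and not how NewsDiffs or the Times actually work, that stores successive versions of an article and produces a unified diff between any two of them using Python’s standard difflib module.

```python
# Minimal sketch of a revision tracker for published articles. Purely
# illustrative; NewsDiffs and Wikipedia each have their own systems.
import difflib
from datetime import datetime, timezone

class ArticleHistory:
    def __init__(self):
        self.revisions = []  # list of (timestamp, text) tuples

    def save(self, text: str) -> None:
        """Record a new version only if the text actually changed."""
        if not self.revisions or self.revisions[-1][1] != text:
            self.revisions.append((datetime.now(timezone.utc), text))

    def diff(self, old: int, new: int) -> str:
        """Return a unified diff between two stored revisions."""
        t_old, old_text = self.revisions[old]
        t_new, new_text = self.revisions[new]
        return "\n".join(difflib.unified_diff(
            old_text.splitlines(), new_text.splitlines(),
            fromfile=f"revision {old} ({t_old:%Y-%m-%d %H:%M})",
            tofile=f"revision {new} ({t_new:%Y-%m-%d %H:%M})",
            lineterm="",
        ))

history = ArticleHistory()
history.save("Mr. Stamos pushed to disclose more, causing consternation among executives.")
history.save("Mr. Stamos pushed to disclose more, and clashed with Ms. Sandberg over it.")
print(history.diff(0, 1))
```

Publishing that kind of change log alongside each story would let readers, rather than screenshots and NewsDiffs, settle arguments like the one Greenwald started.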

Old Facebook got away with murder, New Facebook not so much

As the mushroom cloud continued to spread over the weekend from Friday evening’s nuclear blast—the news that Facebook provided personal data on more than 50 million users to a Trump-linked data company called Cambridge Analytica—one consistent theme amid all the noise and smoke was the increasingly defensive argument from senior Facebook executives that a) What happened wasn’t technically a data “breach,” and b) It happened a long time ago, before they tightened up their data usage policies, so it doesn’t relate to current events like the Trump election campaign, etc.

What’s interesting is that the response to the Cambridge Analytica incident—shock, horror, the pointing of accusatory fingers and threats of regulation—says a lot about the way that attitudes toward Facebook and what it does have shifted over time. The honeymoon isn’t just over at this point; both sides are looking at hiring expensive lawyers and taking each other to divorce court.

At the risk of appearing like a Facebook apologist, both of the points made by Facebook’s former ads executive Andrew “Boz” Bosworth and Chief Security Officer Alex Stamos have a certain amount of truth to them. The data wasn’t obtained as the result of hackers getting access to a database illegally, so it wasn’t technically a breach. Cambridge Analytica got the data because an academic researcher sold it, even though Facebook’s rules say you’re not supposed to do that, and then the firm failed to delete it.

On the second point, Facebook is right that the API access the researcher made use of—which gave his app data not just on the users who signed up for a quiz, but on their friends as well—was tightened up in 2014, after a number of privacy researchers and others pointed out it could be misused.
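Roughly, the pre-2014 pattern worked like the sketch below. The endpoint and field names here only approximate the retired Graph API v1.0 and its old friends_* permissions, and the harvest function is a hypothetical illustration rather than a working recipe (since v2.0, an app can see only friends who also installed it), but it shows why a single quiz-taker could yield data on hundreds of friends.

```python
# Rough illustration of the pre-2014 data-collection pattern. The
# endpoint and field names only approximate the retired Graph API v1.0
# and its friends_* permissions; this no longer works, since v2.0
# limits an app to friends who also installed it.
import requests

GRAPH = "https://graph.facebook.com/v1.0"

def harvest(user_token: str) -> dict:
    # Profile of the person who installed the quiz app...
    installer = requests.get(
        f"{GRAPH}/me", params={"access_token": user_token}
    ).json()
    # ...plus their whole friend list, with whatever fields the app had
    # been granted (likes, location, and so on under the old permissions).
    friends = requests.get(
        f"{GRAPH}/me/friends",
        params={"access_token": user_token, "fields": "id,name,likes,location"},
    ).json()
    return {"installer": installer, "friends": friends.get("data", [])}
```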

In one of the many responses to the Facebook/Cambridge incident, Benedict Evans, who works for Silicon Valley venture capital firm Andreessen Horowitz, defended the social network by pointing out that in the past, people complained that Facebook was doing too much censoring of the News Feed and was also too stingy with its data, and now that conversation has completely flipped:

https://twitter.com/BenedictEvans/status/975054282771722240

As a VC staffer, Evans is naturally inclined to defend a great Silicon Valley success story like Facebook (which Andreessen Horowitz invested in, and on whose board AH co-founder Marc Andreessen sits). But that’s not to say he doesn’t have a point.

Not that long ago, Facebook was criticized for removing posts too often and infringing on people’s free-speech rights, but now people seem to want it to do a lot more to remove offensive speech, fake news, and so on. And when it comes to the company’s data, one major complaint was that Facebook’s API was too locked down, not open enough, and that it should make it easier for others (including users) to get their data out. Now the criticism seems to be that it didn’t lock it down soon enough, or tight enough.

As Harvey Dent put it in The Dark Knight, you either die a hero or live long enough to see yourself become the villain, and that’s where Facebook is now: All of the things it used to do that many people celebrated as a triumph of social technology—including the ability to target individuals based on their personal data, something the Obama campaign was celebrated for doing—now look like the fruit of a poisoned tree, in part because we understand what Russian trolls and hostile governments can do with such data. Our innocence has been lost, and perhaps that’s ultimately a good thing.

 

The media today: Google promises up to $300 million for media

As some media companies are questioning their commitment to Facebook, in the wake of changes to the News Feed and what some see as lackluster revenue from the platform, Google appears to be making a concerted effort to replace the social network as the media’s best friend. On Tuesday, it announced a new venture called the Google News Initiative at an event held in New York City. The new venture involves a range of different projects the company says are designed to help support media companies and quality journalism, along with a commitment to spend $300 million over the next three years.

The new entity is similar in name to the Digital News Initiative, which Google set up in 2015 to help European media entities figure out how to become more web savvy, and included a $150 million fund that anyone could apply to access. It has funded research (including the annual Digital News Report from the Reuters Institute) but mostly gives out grants every year to journalists and media companies to try digital projects. That all now becomes part of the much broader Google News Initiative.

On the new site devoted to the project, Google says the News Initiative is aimed at “building a stronger future for journalism,” and that it wants to “work with the news industry to help journalism thrive in the digital age.” Some of the things it includes as part of that effort—such as training for newsrooms, or partnerships with organizations like First Draft and the Local Media Consortium—have been underway for some time, either as part of the Digital News Initiative or Google’s News Lab, which helps media companies do research. But some of what was announced on Tuesday was new.

In the newish category is the expansion of a pilot project called Subscribe with Google, in which Google partners with publishers to make it easier for users to sign up for and log in to news sites. As reported earlier by Bloomberg, Google will also highlight content from outlets that users pay for when they do a search, and will share data that could help publishers figure out how to boost subscription revenue. Google also announced a new tool called Outline, which will allow media companies to create VPNs (virtual private networks) for their journalists, and the web giant plans to spend $10 million on a media-literacy project through its non-profit Google.org arm, including an ad campaign involving YouTube stars.

Here’s more on Google and its expanding relationship with the media:

  • A Disinfo Lab: Google is helping launch a lab based at Harvard’s Shorenstein Center, in partnership with First Draft, where journalists will monitor disinformation in advance of and during elections around the world. And starting on April 2 (which is International Fact-Checking Day), Google says it will offer more than 20,000 students advanced training on how to distinguish misinformation online, through a partnership with the International Fact-Checking Network.
  • News Lab changes: As part of the new project, the Google News Lab is expanding its efforts, according to a post from head Steve Grove. It is adding full-time staff in Australia and Argentina to the 13 other countries where it already has employees, is hiring new Teaching Fellows and expanding its News Lab Fellowships program, which funds the hiring of journalists by newsrooms. But the News Lab’s website goes away, and gets absorbed by the broader GNI site.
  • More search fixes: In addition to all of the new funding announcements, Google’s VP of news Richard Gingras also said the company is rolling out tweaks to its search algorithm in order to “put more emphasis on authoritative results over factors like freshness or relevancy.” How exactly it defines “authoritative” is unclear, but Google is probably hoping the change will stop conspiracy theories from turning up in YouTube results after school shootings.
  • Sour grapes? Amid all the good news about the things it wants to do for media outlets, Google is still getting some criticism about its desire for control in some of the things it already does, including the AMP (Accelerated Mobile Pages) project. Although it is an open-source effort and Google says anyone can add to it, some complain that it gives the web giant too much of a say in the process.

Other notable stories:

  • Many journalists were mourning the loss of Les Payne on Tuesday. The 76-year-old Pulitzer Prize-winning former Newsday editor was a co-founder of the National Association of Black Journalists and had a journalism career that spanned almost four decades. His family said he died unexpectedly at his home in Harlem. Nikole Hannah-Jones, a writer with The New York Times Magazine, called him “a fearless trailblazer, a door opener, and a fierce champion for black & brown journalists.”
  • The fallout from the Cambridge Analytica affair continues to cause turmoil at Facebook, and could lead to sanctions against the company in addition to its falling stock price, but so far there has been radio silence from co-founder and CEO Mark Zuckerberg. According to a report from The Daily Beast, the company held an all-hands Q&A about the incident, but Zuckerberg didn’t show.
  • Speaking of Cambridge Analytica, the shadowy Trump-linked entity that got its hands on the personal data of more than 50 million Facebook users, CJR spoke with New School professor David Carroll about the lawsuit he launched in Britain recently to force the company to give him all the data it has on him. Carroll filed the claim under the UK’s Data Protection Act.
  • Karen McDougal, a former Playboy model who claims she had an affair with Donald Trump, is suing the publisher of the National Enquirer, trying to force the company to release her from a legal agreement she signed in 2016 that barred her from talking about the affair. Adult entertainment star Stephanie Clifford, also known as Stormy Daniels, is also trying to break an agreement she had to remain silent about an affair she says she had with Trump.
  • The TV news program 60 Minutes is under fire for what some see as an overly friendly segment on Mohammed bin Salman, Saudi Arabia’s new crown prince. The Intercept said the piece, which praised bin Salman for cracking down on corruption but never mentioned allegations of torture or other criticisms, was “more of an infomercial for the Saudi regime than a serious or hard-hitting interview.” CJR writer Jon Allsop wrote recently about the challenges of reporting on Saudi Arabia.

David Carroll talks about his Cambridge Analytica lawsuit

Last week, David Carroll—a professor at the Parsons School of Design at the New School in New York—filed a legal challenge in Britain asking the court to force Cambridge Analytica to disclose how it came up with the psychographic targeting profile it had on him. Later that same day, Facebook announced that it had banned Cambridge Analytica from using the social network because the company had acquired the personal information of more than 50 million Facebook users in a way that contravened the social network’s terms of use, and had failed to delete it as requested.

Subsequent reporting by The Guardian, the Channel 4 TV network, and The New York Times suggests Cambridge Analytica not only used the data to target Facebook users with misinformation campaigns during the 2016 election, but also ran sophisticated black-ops campaigns in a number of countries, including Kenya. Facebook, meanwhile, has been asked to appear before the UK’s Information Commissioner’s Office and a number of US Congressional committees, and there have been suggestions the company may have breached a consent decree on privacy that it signed with the FTC in 2011.

Although David Carroll’s filing didn’t directly trigger these developments, the issues involved in the case—which has been crowdfunded through a web-based service—implicate not just the behavior of Cambridge Analytica or Facebook, but the entire commercial advertising-technology marketplace that both are a part of, which uses massive data-collection techniques to track, identify and target users. CJR spoke with Carroll about his case, and what follows is a transcript of that conversation, edited for length and clarity.

You seem to be at the center of a hurricane right now. How does it feel?

It’s been a crazy, crazy day. I haven’t even had time to reflect on it, really. I knew this day would come, I just didn’t know how big it would end up being. A few hours after I filed, the [Facebook] suspension announcement came. I don’t necessarily think it was connected; I think there were parallel things in motion at the same time, but the way things converged was quite astonishing. I don’t know if hurricane is the right metaphor, but I’m still processing it all. I knew about the whistleblower going into this, so I knew the scale of some of it, but I did not know that the Channel 4 sting video was coming, and that kicks it up another level. I hope to get to a point where I can record it and process it, and maybe write a book about it, but right now I’m just trying to keep up with it and not lose my perspective.

Tell me about the filing. What are you asking Cambridge Analytica for?

The basic complaint is that what Cambridge gave me is not sufficient, it’s not complete, so it’s not compliant [with the law]. There are two ways of looking at it: One is that it’s not complete based on the company’s own public statements. The company’s public position is that I should have 4,000 to 5,000 data points on myself, but when I asked I only got about a dozen. The more sophisticated take on this question is actually included in the claim filing: two academic experts have both independently said, based on their own views and assessments of the data, that there’s no way this could be complete. There’s evidence of data points beyond just the demographic ones they provided, so if you were to look at the dataset and ask how you get from these demographics to ideology, it’s insufficient. All you have is zip code, gender, birth date and party registration, and that’s not granular enough to support such nuanced predictions. The experts I used were Phil Howard from the Oxford Internet Institute and David Stillwell from the Cambridge psychometrics lab, who was one of the three scientists who originally created this model.

Did you ever think that your lawsuit would help trigger this kind of storm of controversy?

I did, yes. I thought it would really shift the ground that data-driven advertising and marketing sits upon, because it’s too intertwined with the ad-tech industrial complex to be a separate issue. We haven’t seen all the reverberations yet, and I don’t know if we will, but what will be interesting is if we get disclosure beyond this, that Facebook isn’t the only source for this data—that commercial entities like Acxiom, Experian, comScore and so on are also involved. Then all of those companies, their image is going to be tarnished by affiliation with what is potentially a black-ops contractor like Cambridge. I hoped the suit would cause a wakeup call for the whole industry. The line that they like to give to privacy advocates is that it doesn’t do harm, you can’t prove harm so it shouldn’t be regulated, and I feel like that whole mentality is crumbling before our very eyes. That is the thing that the whole ad-tech house of cards is based on, the idea that we should be able to collect people’s data, as much as we want because you can’t prove it’s harmful.

And you would argue that it is harmful, obviously.

The first question to ask someone who’s a skeptic is ‘Do you feel privacy in the voting booth is sacred?’ If the answer is yes, then we can work back from there. If your likes and credit-card purchases and the TV shows you watch allow us to predict what you will do to an accuracy level of 75%, that’s good enough to take away your privacy in the voting booth. It’s not just about predicting, it’s about how you can be exploited without your knowledge or understanding. What whistleblower Christopher Wylie represents is that this operation is not a typical voting-analytics operation, it doesn’t just create traditional campaign ads for candidates, it’s a full media operation that creates all manner of content, not just to resemble traditional campaign advertising, but literally fake news sites created as a proxy for political advertising. And then it starts to resemble the practices of the Internet Research Agency. If Wylie’s claims are corroborated and verified, we will be talking about a company that literally built vast networks of psychologically targeted and modelled media to distort truth and reality and to target people based on that. We’re not talking about ad banners, we’re talking about falsified media environments, completely fabricated editorial worlds, and tracking mechanisms and re-targeting mechanisms being actively used in a very sinister manner.

Some people seem to believe that Cambridge was mostly just a marketing scam, and that their methods didn’t really achieve what they promised.

I’ve heard that argument too, and all I will say is that we don’t know enough to know. I’m seeking maximum disclosure so that we can put this to bed. Here’s another possibility, if you take that idea all the way to its full completion: Maybe there is no data, and when I requested my data I gave them my driver’s license and my Con-Ed bill and they just fabricated the Excel spreadsheet, so it’s all an illusion, everything is a con or a scheme, a fabrication. It’s conceivable that that happened. That’s why we need the auditing and the forensics. The story is less about what people think about it and whether it works and more about how can we know what really happened, and then decide. People want to dismiss it, but we don’t know enough to make an assessment, and the more we learn the more it seems like it’s not what we thought it was. To get back to what drove me to do this, when I learned of the military work of the parent company [SCL Group] there was this idea that there was no longer a boundary between civilian and military sectors of this business. The data itself is intermingled, so there’s election campaign data being used for other unknown, potentially clandestine, covert purposes, and that was very disturbing and unsettling.

How did you come to start this case and why? What were you trying to accomplish?

The short answer is that it just felt right, but the long answer is that it was a natural trajectory of my career. From a sort of big picture view, I’ve been moving towards these issues very naturally, whether it’s working in the digital marketing space, trying a startup [a visual-content aggregation service called Glossy] and seeing how the sausage is made there, then being an academic, where you’re encouraged to be a critical voice, and then taking up the cause of privacy in 2014-2015. So when the issue of privacy and the technology/privacy conundrum came up in 2016, I was looking at the campaigns and I was curious what the news practices were. I knew that [presidential candidate Ted] Cruz was using Cambridge, and they were doing really really invasive data collection, so that was on my radar. After the election, it was a feeling of I know what happened, and I think I can prove it.

How did that lead to the filing? Can you take me through the steps that led up to it?

I started finding other people who had the same mindset, like [Tow Center research director] Jonathan Albright and Paul-Olivier Dehaye [founder of PersonalData.io], and journalists and other people working on it, and we just started figuring stuff out forensically. Then when Paul-Olivier Dehaye encouraged me to do the data request from Cambridge, I thought, why not? And that set off a sort of inevitable chain of events, because when I got [the profile from Cambridge] I knew it was significant but I didn’t know exactly why. It took time to figure out why it was significant. I started talking to British lawyers and they were like, ‘This is not legal.’ It was kind of an alien concept to me as an American, the idea that they couldn’t do what they’re doing in the UK, that it would be illegal, because they’re doing things without consent, without the proper rights. That led to the idea that since they processed our data, we have the right to request it.

And the UK government said you could force Cambridge to release that data, even though you aren’t a British citizen?

Yes. After [Cambridge Analytica CEO Alexander] Nix went before Parliament, Elizabeth Denham, the Information Commissioner, was asked about my case, and she explicitly said I do have standing, because they processed my data, and nothing excludes people [from using the privacy act] by citizenship. And going back to why I decided to do this in the first place: proving that the jurisdiction is in effect is its own story, and the precedent that we could set if we succeed is really significant and important. And it’s happening at a time when all of these things are coming to an apex, along with the GDPR [General Data Protection Regulation] and its impact on the industry. The timing of this is really important; I think it will create a small but significant cataclysm in the industry, and I think it will allow for some change, and shake up the status quo. I don’t know exactly how, but it’s clear that in a year or so things will be different.

You crowdfunded your case—can you tell me a bit about how that worked?

[Guardian journalist] Carole Cadwalladr helped me out significantly by publishing a story about my suit before it was filed so that I could do the crowdfunding for it, and that was instrumental in getting the momentum going. That was in October. It was critical to being able to have the money to do what we just did. The minimum target to convert was 25,000 pounds, and we made it very organically, word of mouth; we didn’t have to do any aggressive marketing of the campaign, and we have kept it on a kind of stretch target since then, so anyone who wants to donate can. I think it was at about 28,000 pounds on Friday. The stretch target is up to 100,000 pounds, but at the legal stage we’re at, we don’t need that much money. Technically I haven’t filed a lawsuit; I’ve filed a claim for pre-action disclosure, so we’re asking the judge to force them to disclose so that we can file a lawsuit. The beauty of that is if the judge forces them to disclose, then we don’t need the lawsuit, because we’ll get what we’re after.

So you might not proceed with the actual lawsuit if you get full disclosure?

I can’t say for sure that we won’t pursue a lawsuit, because the situation is very fluid. Going into this, all I cared about was disclosure, auditable full disclosure, and so if I get all 4,000 or 5,000 data points and the Information Commissioner says it’s a legitimate audited thing, then I could say we’ve achieved our goal. But my lawyers might advise me differently based on unfolding events, so we could take a different strategy now that the government has a warrant and is raiding Cambridge’s offices. The situation is very fluid.

 

Facebook admits connecting the world isn’t always a good thing

One of the defining tenets of Facebook’s corporate philosophy has been the idea that connecting people around the world, both to each other and to issues that matter to them, is inherently a good thing. Co-founder and CEO Mark Zuckerberg has said the social network’s mission is “to give people the power to share and to make the world more open and connected.”

Lately, however, the company seems to be prepared to admit that doing this doesn’t always produce a world of sunshine and rainbows.

The United Nations recently criticized the company for its role in distributing fake news and misinformation about the persecuted Rohingya people in Myanmar, who have been driven from their homes, attacked and in some cases killed. In an interview on Slate’s If Then podcast, Adam Mosseri—the Facebook executive in charge of the News Feed—bluntly admitted that this is a serious problem.

“Connecting the world isn’t always going to be a good thing. Sometimes it’s also going to have negative consequences. The most concerning and severe negative consequences of any platform potentially would be real-world harm. So what’s happening on the ground in Myanmar is deeply concerning in a lot of different ways. It’s also challenging for us for a number of reasons.”

Mosseri went on to say that Facebook is thinking long and hard about how to solve this kind of problem. “We lose some sleep over this,” he said. Which is encouraging, because it has to be at least a little disturbing to find that the tool you created to connect the world so people could share baby photos is being used to spread conspiracy theories that encourage violence against an already persecuted minority.

For more background on how Facebook came to play this role in Myanmar, and the challenges that it faces, please see my recent piece in CJR, in which I talked to reporters who work in the region about the social network’s role in the violence there.

 

Spotlight on fake news and disinformation turns toward YouTube

So far, Facebook has taken most of the heat when it comes to spreading misinformation, thanks to revelations about how Russian trolls used the network in an attempt to influence the 2016 election. But now YouTube is also coming under fire for being a powerful disinformation engine.

At Congressional hearings into the problem in November, where representatives from Facebook, Google and Twitter were asked to account for their actions, Facebook took the brunt of the questions, followed closely by Twitter. Google, however, argued that since it’s not really a social network in the same sense that Facebook and Twitter are, it therefore doesn’t play as big a role in spreading fake news.

This was more than a little disingenuous. While it may not run a social network like Facebook (its attempt at doing so, known as Google+, failed to catch on), Google does own the world’s largest video platform, and YouTube has played—and continues to play—a significant role in spreading misinformation.

This becomes obvious whenever there is an important news event, especially one with a political aspect to it, such as the mass shooting in Las Vegas last October—where fake news showed up at the top of YouTube searches—or the recent school shooting in Parkland, Florida, where 17 people died.

After the Parkland shootings, YouTube highlighted conspiracy theories about the incident in search results and in its recommended videos. At one point, eight out of the top 10 recommended videos that appeared for a search on the name of one of the students who survived the shooting either promoted or talked about the idea that he was a so-called “crisis actor” and not a real student.

When journalists and others pointed this out on Twitter, the videos started disappearing one by one, until a day later there were no conspiracy theories left in the top 10 search results. But in the meantime, each of those videos got thousands, possibly tens of thousands, of views they might not have gotten if they hadn’t been recommended.

This kind of thing isn’t just a US problem. YouTube has become hugely popular in India with the arrival of cheap data plans for smartphones, and after a famous actress died recently, the trending list on YouTube for that country was reportedly filled with fake news.

In part, the popularity of such content is driven by human nature. Conspiracy theories are often much more interesting than the real facts about an event, because they hint at mysteries and secrets that only a select few know about. That increases the desire to read them and to share them, and social platforms like Facebook, Twitter and YouTube play on this impulse.

Those human impulses, however, are amplified by the algorithms that power these platforms, creating a vicious circle. YouTube’s algorithm sees people clicking on and watching conspiracy-theory videos, concludes that this kind of content is popular and that people want more of it, and moves those videos higher in the rankings. That in turn puts them in front of more people, who click on them, reinforcing the signal.
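As a toy illustration of that loop (this is not YouTube’s actual system, just the dynamic described above, with invented numbers): a ranker that optimizes for accumulated watch time will keep promoting whatever already holds attention, so the “stickiest” videos climb on their own.

```python
# Toy simulation of a watch-time-driven ranking loop. Not YouTube's
# actual algorithm; just the dynamic described above, with invented
# numbers. Each round, videos are ranked by accumulated watch time,
# higher-ranked videos get more impressions, and "stickier" videos
# (those that hold attention longer) pull ahead on their own.
import random

random.seed(1)

videos = {
    # name: average minutes watched per view (stickiness)
    "straight news report": 2.0,
    "conspiracy explainer": 6.0,
    "official statement": 1.5,
}
watch_time = {name: 1.0 for name in videos}  # accumulated minutes

for day in range(30):
    ranking = sorted(videos, key=lambda v: watch_time[v], reverse=True)
    for rank, name in enumerate(ranking):
        impressions = 100 // (rank + 1)  # higher rank, more viewers
        for _ in range(impressions):
            watch_time[name] += random.expovariate(1 / videos[name])

for name in sorted(videos, key=lambda v: watch_time[v], reverse=True):
    print(f"{name:>22}: {watch_time[name]:10.0f} minutes watched")
```

Run it and the “conspiracy explainer” ends up on top, not because anyone chose it, but because nothing in the objective cares what is being watched.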

The result is that users are pushed towards more and more polarizing or controversial content, regardless of the topic, as sociologist Zeynep Tufekci described recently in The New York Times. The platform has become “an engine for radicalization,” she says.

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Guillaume Chaslot, a programmer who worked at Google for three years, told CJR recently he noticed a similar phenomenon while working on the YouTube recommendation algorithm. He says he tried to get the company interested in implementing fixes to help solve it, but was told that what mattered was that people spent lots of time watching videos, not what kind of videos they were watching.

“Total watch time was what we went for—there was very little effort put into quality,” Chaslot says. “All the things I proposed about ways to recommend quality were rejected.”

After leaving Google, Chaslot started collecting some of his research and making the results public on a website called Algotransparency.org. Using software that he created (and has made public for anyone to use), he tracked the recommendations provided for YouTube videos and found that in many cases they are filled with hoaxes, conspiracy theories, fake news and other similar content.
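Chaslot has made his own software public; purely as an illustration of the general approach, a monitor like this can be thought of as a breadth-first walk over the “recommended next” graph, tallying how often each video is suggested. In the sketch below, get_recommendations is a hypothetical stand-in for whatever actually fetches the recommendations (scraped watch pages, an API, a browser automation script).

```python
# Sketch of the general approach behind recommendation-monitoring tools:
# a breadth-first walk over the "recommended next" graph, tallying how
# often each video is suggested. get_recommendations() is a hypothetical
# stand-in for whatever actually fetches recommendations.
from collections import Counter, deque
from typing import Callable, Iterable

def crawl_recommendations(
    seeds: Iterable[str],
    get_recommendations: Callable[[str], list],
    max_depth: int = 2,
) -> Counter:
    seeds = list(seeds)
    seen = set(seeds)
    queue = deque((video_id, 0) for video_id in seeds)
    suggestion_counts = Counter()

    while queue:
        video_id, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for recommended in get_recommendations(video_id):
            suggestion_counts[recommended] += 1  # how often it gets pushed
            if recommended not in seen:
                seen.add(recommended)
                queue.append((recommended, depth + 1))
    return suggestion_counts

# Example with canned data standing in for real recommendation fetches:
fake_graph = {
    "news_clip": ["conspiracy_1", "news_clip_2"],
    "conspiracy_1": ["conspiracy_2", "conspiracy_1b"],
}
counts = crawl_recommendations(["news_clip"], lambda v: fake_graph.get(v, []))
print(counts.most_common(3))
```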

Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, has done his own research on YouTube, including a study in which he catalogued all of the recommended videos the site suggested after a hypothetical user clicked on a “crisis actor” video. What he found was a network of more than 9,000 conspiracy-themed videos, all of which were recommended to users as the “next up” video after they watched one promoting the Parkland “crisis actor” hoax.

“I hate to take the dystopian route, but YouTube’s role in spreading this ‘crisis actor’ content and hosting thousands of false videos is akin to a parasitic relationship with the public,” Albright said in a recent blog post about his research. “This genre of videos is especially troublesome, since the content has targeted (individual) effects as well as the potential to trigger mass public reactions.”

Former YouTube head of product Hunter Walk said recently that at one point he proposed bringing in news articles from Google News or even tweets to run alongside and possibly counter fake news or conspiracy theories, rather than taking them down, but that proposal was never implemented—in part because growing Google+ became more important than fixing YouTube.

Google has taken some small steps along those lines to try and resolve the problem. This week, YouTube CEO Susan Wojcicki said at the South by Southwest conference that the service will show users links to articles on Wikipedia when they search for known hoaxes about topics such as the moon landing. But it’s not clear whether this will have any impact on users’ desire to believe the content they see.

Google has also promised to beef up the number of moderators who check flagged content, and has created what it calls an “Intelligence Desk” in order to try and find offensive content much faster. And it has said that it plans to tweak its algorithms to show more “authoritative content” around news events. One problem with that, however, is it’s not clear how the company plans to define “authoritative.”

The definition of what’s acceptable also seems to be in flux even inside the company. YouTube recently said it had no plans to remove a channel called Atomwaffen, which posts neo-Nazi content and racist videos, and that the company believed adding a warning label “strikes a good balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

After this decision was widely criticized, the site removed the channel. But similar neo-Nazi content reportedly still remains available on other channels. There have been reports that Infowars, the channel run by alt-right commentator Alex Jones, has had videos removed, and that the channel is close to being removed completely. But at the same time, some other controversial channels have been reinstated after YouTube said that they were removed in error by moderators.

In her talk at South by Southwest, Wojcicki said that “if there’s an important news event, we want to be delivering the right information,” but then added that YouTube is “not a news organization.” Those two positions seem increasingly incompatible. Facebook and YouTube both say they don’t want to become arbiters of truth, and yet they want to be the main source of news and information about the world. How much longer can they have it both ways?

YouTube wants the news without the responsibility

After coming under fire for promoting fake news, conspiracy theories and misinformation around events like the Parkland school shooting, YouTube has said it is taking a number of steps to try and fix the problem. But the Google-owned video platform still appears to be trying to have its cake and eat it too when it comes to being a media entity.

This week, for example, YouTube CEO Susan Wojcicki said at the South by Southwest conference in Texas that the service plans to show users links to related articles on Wikipedia when they search for videos on topics that are known to involve conspiracy theories or hoaxes, such as the moon landing or the belief that the earth is flat.

Given the speed with which information moves during a breaking news event, however, this might not be much help in situations like the Parkland shooting, since Wikipedia edits often take a while to show up. It’s also not clear whether the links will have any impact on users’ desire to believe the content they see.

In addition to those concerns, Wikimedia said no one from Google notified the organization (which runs Wikipedia) of the YouTube plan. And some of those who work on the crowdsourced encyclopedia have expressed concern that the giant web company—which has annual revenues in the $100-billion range—is essentially taking advantage of a non-profit resource, instead of devoting its own financial resources to the problem.

Google seems to want to benefit from being a popular source for news and information without having to assume the responsibilities that come with being a media entity. In her comments at SXSW, Wojcicki said “if there’s an important news event, we want to be delivering the right information,” but then quickly added that YouTube is “not a news organization.”

This feels very similar to the argument that Facebook has made when it gets criticized for spreading fake news and misinformation—namely, that it is merely a platform, not a media entity, and that it doesn’t want to become “an arbiter of truth.”

Until recently, Facebook was the one taking most of the heat on fake news, thanks to revelations about how Russian trolls used the network in an attempt to influence the 2016 election. At Congressional hearings into the problem in November, where representatives from Facebook, Google, and Twitter were asked to account for their actions, Facebook took the brunt of the questions, followed closely by Twitter.

At the time, Google argued that since it’s not a social network in the same sense as Facebook and Twitter, it therefore doesn’t play as big a role in spreading fake news. This was more than a little disingenuous, however, since it has become increasingly obvious that YouTube has played and continues to play a significant role in spreading misinformation about major news events.

Following the mass shooting in Las Vegas last October, fake news about the gunman showed up at the top of YouTube searches, and after the Parkland incident, YouTube highlighted conspiracy theories in search results and recommended videos. At one point, eight out of the top 10 results for a search on the name of one of the students either promoted or talked about the idea that he was a so-called “crisis actor.”

When this was mentioned by journalists and others on Twitter, the videos started disappearing one by one, until the day after the shooting there were no conspiracy theories in the top 10 search results. But in the meantime, each of those videos got thousands or tens of thousands of views they might otherwise not have gotten.

Misinformation in video form isn’t just a problem in the US. YouTube has also become hugely popular in India with the arrival of cheap data plans for smartphones, and after a famous actress died recently, YouTube’s trending section for India was reportedly filled with fake news.

Public and media outrage seems to have helped push Google to take action in the most recent cases. But controversial content on YouTube has also become a hot-button issue in part because advertisers have raised a stink about it, and that kind of pressure has a very real impact on Google’s bottom line, not just on its public image.

Last year, for example, dozens of major-league advertisers—including L’Oreal, McDonald’s and Audi—either pulled or threatened to pull their ads from YouTube because they were appearing beside videos posted by Islamic extremists and white supremacists. Google quickly apologized and promised to update its policies to prevent this from happening.

The Congressional hearings into Russian activity also seem to have sparked some changes. One of the things that got some scrutiny in both the Senate and House of Representatives hearings was the fact that Russia Today—a news organization with close links to the Russian government—was a major user of YouTube.

Google has since responded by adding warning labels to Russia Today and other state broadcasters to note that they are funded by governments. This move has caused some controversy, however: PBS complained that it got a warning label, even though it is funded primarily by donations and only secondarily by government grants.

As well-meaning as they might be, however, warning labels and Wikipedia links aren’t going to be enough to solve YouTube’s misinformation problem, because to some extent it’s built into the structure of the platform, as it is with Facebook and the News Feed.

In a broad sense, the popularity of fake news is driven by human nature. Conspiracy theories and made-up facts tend to be much more interesting than the real truth about an event, in part because they hint at mysteries and secrets that only a select few know about. That increases the desire to read them, and to share them. Social services like Facebook, Twitter, and YouTube tend to promote content that plays on this impulse because they are looking to boost engagement and keep users on the platform as long as possible.

Human nature, however, is exacerbated by the algorithms that power these platforms. YouTube’s algorithm tracks people clicking and watching conspiracy theory videos and assumes that this kind of content is very popular, and that people want to see more of it, so it moves those videos higher in the rankings. That in turn causes more people to see them.

The result is that users are pushed towards more and more polarizing or controversial content, regardless of the topic, as sociologist Zeynep Tufekci described in a recent New York Times essay. The platform, she says, has become “an engine for radicalization.”

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Guillaume Chaslot, a programmer who worked at Google for three years, told CJR recently he noticed a similar phenomenon while working on the YouTube recommendation algorithm. He says he tried to get the company interested in solving it, but was told that what mattered was that people spent lots of time watching videos, not what kind of videos they were watching.

Former YouTube head of product Hunter Walk said recently that at one point he proposed bringing in news articles from Google News or even tweets to run alongside and possibly counter fake news or conspiracy theories, rather than taking them down, but that proposal was never implemented—in part because Google executives made it clear that growing Google+ was a more important goal than fixing YouTube.

In addition to adding Wikipedia links, Google has also promised to beef up the number of moderators who check flagged content, and has created what it calls an “Intelligence Desk” in order to try and find offensive content much faster. And it has said that it plans to tweak its algorithms to show more “authoritative content” around news events. One problem with that, however, is it’s not clear how the company plans to define “authoritative.”

The definition of what’s acceptable also seems to be in flux even inside the company. YouTube recently said it had no plans to remove a channel called Atomwaffen, which posts neo-Nazi content and racist videos, and that the company believed adding a warning label “strikes a good balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

After this decision was widely criticized, the site removed the channel. But similar neo-Nazi content reportedly still remains available on other channels. There have been reports that Infowars, the channel run by alt-right commentator Alex Jones, has had videos removed, and that the channel is close to being removed completely, although YouTube denies this. But at the same time, some other controversial channels have been reinstated after YouTube said that they were removed in error by moderators.

Facebook and YouTube both say they want to be the main source of news and information about the world, but they also say they don’t want to be arbiters of truth. How long can they continue to have it both ways?

Anti-terrorism and hate-speech laws are catching artists and comedians instead

One of the risks whenever governments try to curb what they see as offensive speech is that other kinds of speech are often caught in the same net, and that poses a very real risk for freedom of speech and for freedom of the press. One of the most recent examples comes from Spain, where a vague anti-terrorism law has been used to charge and even imprison musicians and other artists.

In a new report on the phenomenon, entitled “Tweet… If You Dare,” Amnesty International looked at the rise in prosecutions under Article 578 of the country’s criminal code, which prohibits “glorifying terrorism” and “humiliating the victims of terrorism.” The law has been around since 2000, but was amended in 2015 and since then prosecutions and convictions have risen sharply.

Freedom of expression in Spain is under attack. The government is targeting a whole range of online speech–from politically controversial song lyrics to simple jokes–under the catch-all categories of “glorifying terrorism” and “humiliating the victims of terrorism.” Social media users, journalists, lawyers and musicians have been prosecuted [and] the result is increasing self-censorship and a broader chilling effect on freedom of expression in Spain.

Among those who have been hit by the law are a musician who tweeted a joke about sending the king a cake-bomb for his birthday and was sentenced to a year in prison, and a rapper who was sentenced to three-and-a-half years in jail for writing songs that the government said glorified terrorism and insulted the crown. A filmmaker and a journalist have also been charged under the anti-terrorism law, and a student who tweeted jokes about the assassination of the Spanish prime minister in 1973 was also sentenced to a year in prison, although her sentence was suspended after a public outcry.

Some free-speech advocates are afraid that new laws either in force or being considered in Germany, France and even the United Kingdom could accelerate this problem. In all three countries, legislators say they are concerned about hate speech, online harassment and fake news, but the definition of those problems is so vague there is a risk that other kinds of speech could also be criminalized—especially when enforcement of those rules gets outsourced to platforms like Facebook, Google and Twitter.

Google offers olive branch to newspapers, YouTube relies on Wikipedia

Google is planning to highlight content from newspapers with paywalls for users who are paying subscribers, according to a report from Bloomberg on March 14. So when users search for articles on a topic, results from sites they subscribe to will show up higher than results from regular websites. Google also plans to share data with publishers about who is most likely to sign up, Bloomberg said.

Google executives plan to disclose specific details at an event in New York on March 20, Bloomberg reported. The moves could help publishers better target potential digital subscribers and keep the ones they already have by highlighting stories from the outlets they’re paying for. The initiative marks the latest olive branch from Silicon Valley in its evolving relationship with media companies.

This is the latest in a series of moves that both Google and Facebook have been making around subscriptions. Facebook has been experimenting with adding paywall support to its mobile-friendly Instant Articles feature, and also recently set up a trial project to try and help local publishers figure out how to get more subscription revenue. The main reason why publishers are being forced to rely on subscriptions, of course, is that Google and Facebook have taken control of most of the world’s digital advertising revenue.

Google also recently changed how it treats search results from sites with subscription models. Under its “First Click Free” policy, publishers with paywalls were expected to let searchers read at least three articles free, and those who didn’t comply were ranked lower in search results. The company dropped that approach last year, and subscription publishers can now offer non-subscribers as many or as few free articles as they wish, including none at all.
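
In practice, that change reduces to a per-reader meter that each publisher sets wherever it likes, down to zero free articles. Here is a minimal sketch of that logic under assumed names; it is an illustration of the concept, not Google’s rules or any publisher’s actual paywall code.

```python
# Generic sketch of a metered paywall now that publishers can set their own
# free-article allowance, including zero. Class and parameter names are
# illustrative assumptions.
from collections import defaultdict
from datetime import datetime

class ArticleMeter:
    def __init__(self, free_articles_per_month: int = 0):
        self.limit = free_articles_per_month
        self.reads = defaultdict(int)  # (reader_id, "YYYY-MM") -> articles read

    def allow(self, reader_id: str, is_subscriber: bool) -> bool:
        """Decide whether to show the full article or the subscription prompt."""
        if is_subscriber:
            return True
        key = (reader_id, datetime.utcnow().strftime("%Y-%m"))
        if self.reads[key] < self.limit:
            self.reads[key] += 1
            return True
        return False

meter = ArticleMeter(free_articles_per_month=3)
print(meter.allow("reader-42", is_subscriber=False))  # True: first free read this month
```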

YouTube, which has been taking a considerable amount of heat for promoting hoaxes and conspiracy theories in search results, will start highlighting articles from Wikipedia when users search for topics that attract well-known hoaxes, such as the moon landing, CEO Susan Wojcicki said at the South by Southwest conference in Austin on Tuesday, March 13.

The Wikipedia links will not appear solely on conspiracy-related videos, but on any topic or event that has inspired significant debate. A YouTube spokesperson pointed to the moon landing as an example: videos on the subject would carry a Wikipedia link below them for additional context, regardless of whether the video was a documentary or a clip alleging the landing was staged.
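
Conceptually, the feature amounts to checking whether a video falls under a hoax-prone topic and, if so, attaching a link to background information. The sketch below illustrates that idea with made-up topic names and article URLs; it is not YouTube’s actual implementation or taxonomy.

```python
# Minimal sketch of an "information cue": attach a Wikipedia link when a video's
# topic matches a curated list of hoax-prone subjects. The topics and URLs here
# are illustrative assumptions, not YouTube's actual list.
from typing import Optional

KNOWN_HOAX_TOPICS = {
    "moon landing": "https://en.wikipedia.org/wiki/Apollo_11",
    "flat earth": "https://en.wikipedia.org/wiki/Flat_Earth",
}

def information_cue(video_title: str) -> Optional[str]:
    """Return a background link to display under the video, if any topic matches."""
    title = video_title.lower()
    for topic, url in KNOWN_HOAX_TOPICS.items():
        if topic in title:
            return url
    return None

print(information_cue("Proof the MOON LANDING was staged"))  # Apollo 11 article link
print(information_cue("Sourdough baking tutorial"))          # None
```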

As a number of people noted on Twitter following this announcement, it’s a little ironic that a giant company with $100 billion in revenues is relying on a donation-funded volunteer organization to do fact-checking for its videos. YouTube said Wikipedia links are just the first step in solving the problem and that it plans to do more, but it seems a little unfair to take advantage of a free resource when Google itself could be trying harder to flag or identify disinformation.

In part, this is because YouTube—like Facebook—seems to be trying to walk a very fine line with its approach to misinformation. Wojcicki said at the SXSW conference that “if there’s an important news event, we want to be delivering the right information,” but also added: “we are not a news organization.” Those two views seem to be increasingly incompatible, and at some point both of the major web platforms will have to come to grips with what that implies.

Blog posts for CJR

March 12: Apple announced that it has acquired Texture for an undisclosed sum. Often called “the Netflix of magazines,” Texture gives readers access to more than 200 popular magazines through its app and website for a single monthly fee. It was originally called Next Issue Media when it launched in 2012, and had raised $130 million in venture funding before the acquisition. Said Apple executive Eddy Cue:

“We’re excited Texture will join Apple, along with an impressive catalog of magazines from many of the world’s leading publishers. We are committed to quality journalism from trusted sources and allowing magazines to keep producing beautifully designed and engaging stories for users.”

In an interview at the South by Southwest conference following the news, Cue said that Apple would be integrating Texture into Apple News, and that the company is committed to curating the news to remove fake news. Part of the goal of Apple News and acquiring Texture, he said, is to avoid “a lot of the issues” happening in the media today, such as the social spread of inaccurate information.


March 12: The European Union released the final report from its High Level Expert Group on Fake News, entitled “A Multi-Dimensional Approach to Disinformation.” Several of the experts involved in fact-checking and tracking disinformation, including Claire Wardle of First Draft and Alexios Mantzarlis of the International Fact-Checking Network, summed up the main points of the report in a Medium post, saying its contributions include:

“Important definitional work rejecting the use of the phrase ‘fake news’; an emphasis on freedom of expression as a fundamental right; a clear rejection of any attempt to censor content; a call for efforts to counter interference in elections; a commitment by tech platforms to share data; calls for investment in media and information literacy and comprehensive evaluations of these efforts; as well as cross-border research into the scale and impact of disinformation.”

Among other things, the group argues that the laws many governments are trying to pass to stamp out fake news are not the right approach. “Many political bodies seem to believe that the solution to online disinformation is one simple ‘fake news’ law away, [but] the report clearly spells out that it is not. It urges the need for caution and is sceptical particularly of any regulation of content.”


March 11: Joshua Geltzer, executive director of Georgetown Law’s Institute for Constitutional Advocacy and Protection and former senior director for counterterrorism at the National Security Council, writes in Wired that the Russian trolls who tried to manipulate the 2016 election didn’t abuse Facebook or Twitter, they simply used those platforms in the way that they were designed to be used:

“For example, the type of polarizing ads that Facebook admits Russia’s Internet Research Agency purchased get rewarded by Facebook’s undisclosed algorithm for provoking user engagement. And Facebook aggressively markets the micro-targeting that Russia utilized to pit Americans against each other on divisive social and political issues. Russia didn’t abuse Facebook—it simply used Facebook.”

Geltzer says the major web platforms need to do a much better job of removing or blocking malicious actors who try to use their systems for nefarious purposes, and he also says that Facebook, Google and Twitter need to be much more transparent about their algorithms and how they operate. That kind of openness, he says, “could yield crowd-sourced solutions rather than leaving remedies to a tiny set of engineers, lawyers, and policy officials employed by the companies themselves.”


March 10: In an essay published in The New York Times, sociologist Zeynep Tufekci described experiments she performed on YouTube during the 2016 election, in which she noticed that no matter what kind of political content she searched for, the recommended videos were always more extreme and inflammatory, whether politically or socially. This is a vicious circle, she writes:

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Tufekci mentions research done by former YouTube engineer Guillaume Chaslot, who worked on the video platform’s recommendation algorithm and spoke to CJR recently about his conclusions. Like Tufekci, he found that the videos being recommended on the site were overwhelmingly contentious and inflammatory, including many that promoted conspiracy theories, because that kind of content makes people click and spend more time on the site, and that serves Google’s business interests.


March 9: NewsWhip, an analytics company that measures social-media activity, looked at its data and came up with a list of the news reporters who got the most engagement on Facebook in February. Number one was Ryan Shattuck of the satirical news site The Onion, and number two was Jonah Urich, who works for Truth Examiner, a left-wing site known for posting sensationalized political news. Daily Wire, another hyper-partisan political news site, also took several spots in the top 10. As NewsWhip described it:

Beyond the Onion, the top authors were primarily from hyper-partisan sources like the Daily Wire, Truth Examiner, Breitbart, Washington Press, and several small but politically-charged sites. Horrifyingly enough, two authors from fake news sites featured. An author from the fake news site Your Newswire was towards the top of our list, ranking in at #12. Baxter Dmitry wrote 81 articles in February, driving more than 1.7 million Facebook interactions.

Facebook has said it plans to change its algorithm so that more “high quality” news shows up in the News Feed, but that could be easier said than done. The company said it would rank news sources based in part on whether they drive engagement and discussion, and what NewsWhip’s data reinforces is that the most engaging content is often fake, or at least highly sensationalized.


March 9: Most of the attention around fake news has focused on Facebook and YouTube, but other apps and services can also play a role in spreading misinformation, as Wired points out in a March 9 piece on the use of Facebook-owned messaging app WhatsApp in Brazil. Use of the app is apparently complicating the country’s attempts to deal with an outbreak of yellow fever, because of false reports about vaccinations:

In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and proliferated fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform.

According to Wired, the conspiracy theories circulating about the vaccination program include an audio message from a woman claiming to be a doctor who warns that the vaccine is dangerous, and a fake-news story connecting the death of a university student to the vaccine. As similar reports about Facebook’s impact in countries like Myanmar have shown, social-media-driven conspiracy theories may be merely annoying in the US, but in other parts of the world they can endanger people’s lives.


March 8: Renee DiResta, a researcher with New Knowledge and a Mozilla fellow specializing in misinformation, argues that by using Facebook to spread fake news during the 2016 election, the “Russian troll factory” known as the Internet Research Agency was duplicating a strategy initially developed by ISIS, which used digital platforms and social-media methods to spread its message.

The online battle against ISIS was the first skirmish in the Information War, and the earliest indication that the tools for growing and reaching an audience could be gamed to manufacture a crowd. Starting in 2014, ISIS systematically leveraged technology, operating much like a top-tier digital marketing team. Vanity Fair called them “The World’s Deadliest Tech Startup,” cataloging the way that they used almost every social app imaginable to communicate and share propaganda.

Most of the major platforms made half-hearted attempts to get rid of this kind of content, but they were largely unsuccessful. What this showed, DiResta writes, was that the social platforms could be gamed in order to spread political messages, and that the same kinds of targeting techniques that worked for advertising could be turned to political use. And among those who were also learning this lesson, it seems, were some disinformation architects on a troll farm in Russia.