Is there more than a whiff of schadenfreude to media coverage of Facebook?

Scott Rosenberg touched off a minor firestorm in the media-sphere with a post at Axios this week, in which the veteran technology writer argues that at least some of the enthusiasm with which media companies are covering Facebook’s trials and tribulations stems from their resentment over how the company has stolen their readers and advertising revenue. Here’s his argument in a nutshell:

Outrage over Facebook’s misuse of user data and failure to rein in election fraud is real. But the zeal that media outlets bring to their Facebook coverage is personal, too. It’s turbocharged because journalists, individually and collectively, blame Facebook — along with other tech giants, like Google, and the internet itself — for seducing their readers, impoverishing their employers, and killing off their jobs. This blame war is the latest phase of a decades-long grudge match between traditional media companies and new technology giants.

This theory sparked a raft of responses from journalists of all stripes, including everything from a brief “LOL—sorry, but this is BS” from USA Today reporter Jessica Guynn to a full-throated denial by Eric Levitz in New York magazine, who argued that even if Facebook and Google have crippled the media industry, that doesn’t mean coverage of their power and influence is motivated by anything other than a desire to expose that power and influence, and to question its impact on civil society (including journalism).

It would be one thing if Axios presented a litany of libelous errors that journalists had made in the course of covering Silicon Valley with a vengeance. But if this alleged resentment isn’t producing misinformation, then what is the point of insinuating that critical coverage of Facebook is rooted in personal grievance? Who is served by such unsubstantiated insinuations?

Others noted that BuzzFeed has been relentless in exposing the details of various stories involving Facebook, despite the fact that it owes more of its livelihood to the social network than just about any other media entity, and therefore might be assumed to be more favorably inclined towards it rather than less. And some pointed out that in the early days of Facebook, a case could be made that coverage of the company was overly positive, making the current critical approach more of a return to normal than an outlier.

Some saw the Axios piece as primarily a piece of marketing—a way of indicating that Axios is sympathetic to the tech giants and their complaints about overly critical coverage. And a few noted that the media startup, which is run by Politico co-founder Jim VandeHei, had a partnership with Facebook that involved a series of exclusive interviews with senior executives.

One of the other implications in Rosenberg’s piece is that Facebook and Google didn’t just steal audience and revenue from publishers because they had natural advantages, but because media executives failed to adapt quickly enough to the internet, and then in a desperate attempt to catch up, handed over too much of their business to Facebook and Google. Whether anyone in the traditional media industry wants to admit it or not, that point has more than a little truth to it.

Fake news, clickbait still doing well after Facebook algorithm change

Earlier this year, Facebook announced a change to its News Feed algorithm, one designed to reduce the visibility of news — apart from those outlets deemed to be high quality or trusted sources — in favor of posts by people and pages that encouraged what the social network called “meaningful interaction.” And what effect has this had on the flow of fake and/or real news? According to a recent study from media-monitoring firm Newswhip, real news is still being shared a lot, but so are obvious misinformation and clickbait.

In the report, entitled “Navigating the Facebook Algorithm Change,” the firm looked at the most widely-shared articles and found that more than half of the top 100 were hard news stories or reporting on current events, including the death of famed astrophysicist Stephen Hawking. But clearly falsified stories also show up fairly high in the rankings: Number 26 on the most-shared list is a report from a fake news site called Your Newswire that says the flu shot is causing a “disastrous flu outbreak.” That got more than 850,000 engagements.

Although the algorithm change has resulted in a decline in traffic to some sites (apparently including conservative sites, some of which complain they have been deliberately targeted), Newswhip’s analysis shows that some news outlets have actually seen an increase in traffic since the change, including NBC and Fox News. January was the strongest month for Fox since October of 2017, the company says, but NBC actually eclipsed it and took the top spot for the first time since January of last year. More niche sites have suffered:

Some Pages that our data showed a more serious decline in average engagements were UNILAD, Student Problems, 9Gag, Cosmopolitan, and Architecture & Design, though some of these are starting to show some recovery. For these publishers, it might be time to look at what role their content actually serves their followers — is it connecting them to one another, teaching them something new, making them pause… or is it just adding to a landscape of digital waste?

Newswhip also found that Facebook is true to its word, and has been favoring posts that get more engagement, including comments. Over the past couple of years, the number of comments as a proportion of overall engagement on the top 100 posts has averaged about 5 percent, but since the algorithm change, comments make up more than 11 percent of total engagement on the top 100 most-shared posts. Most of the ones that got a large number of comments were funny clickbait-style videos.

Facebook struggles to get out from under its privacy debacle

Facebook CEO Mark Zuckerberg has been having a bad week, and it’s probably going to get worse before it gets better, as the company continues to take fire from all sides because of the way it allowed personal data on more than 50 million users to be misused by a firm called Cambridge Analytica. Facebook has shut down the specific method used in that case — an app that hoovered up not just the data of those who signed up for it, but also personal information shared by any of their friends — but the incident has touched off a debate over the social network’s privacy protections that has reached as far as Washington, DC and the European Union.

On Wednesday, Facebook tried to show that it is listening to its critics by updating its privacy settings to make it easier for users to find out what they are sharing and with whom, and then change those settings if necessary. Of course, as more than one long-time Facebook watcher pointed out, the company has done this on multiple occasions in the past whenever it has run afoul of privacy rules, and not much seems to change. Tim Wu, a law professor and former staffer at the Federal Trade Commission, has also noted that in the consent decree Facebook signed with the regulator in 2011, the company agreed to take better care of its users’ data.

Congress has asked Zuckerberg to appear before a hearing into the incident, and according to reports by CNN and others, the Facebook co-founder plans to show up — unlike the last hearings Facebook was called to attend, when the Senate and House intelligence committees questioned Facebook, Google and Twitter about whether Russian trolls used their platforms to try and influence the 2016 election. Zuckerberg didn’t appear at those hearings; instead he sent his legal counsel, as did both Google and Twitter. This time, Congress has made it clear that it wants to hear from the man himself, not one of his deputies.

The United Kingdom may have to make do with a stand-in, however. Legislators in Britain have also asked Zuckerberg to appear before them to answer questions about Cambridge Analytica’s usage of Facebook data, but sources close to the company told Reuters that the co-founder and CEO won’t be attending. Meanwhile, the turmoil caused by the Cambridge revelations continues for Facebook: It announced on Tuesday that it has delayed plans to launch a “smart assistant” style device similar to the Google Home or Amazon Echo, concerned that people might not react well to a Facebook-branded always-on listening device.

  • In addition to tough questions from Washington and the UK, Facebook is also getting grilled by legislators in India, according to a report by BuzzFeed. They want to know whether companies like Cambridge Analytica have used Facebook data to try and influence elections in India, and they’ve issued the social network an official notice asking how it plans to keep its platform from being exploited in that way. India has several state elections happening this year and national elections next year.
  • Although hashtags like #DeleteFacebook have been trending on some social networks since the Cambridge data leak news broke, it’s not clear how much of a backlash there is at the user level. Some corporations, however, have deleted their pages, including Tesla and SpaceX, both owned by maverick billionaire Elon Musk. And on Wednesday, Playboy magazine said that it was removing itself from Facebook because of the data leak, but also because the social network’s policies are “sexually repressive.”
  • While there may not have been a mass exodus of Facebook users so far, the same is not true for investors. Some shareholders of the company appear to be worried that the backlash could impact Facebook financially, especially if there are more regulations coming that will restrict what it can and can’t do with its users’ data (as there are in Europe). Facebook’s share price has fallen by more than $32 in the past five days, which has shrunk the company’s market cap by almost $100 billion.
  • The Trump campaign wasn’t the only outfit that used Cambridge Analytica’s data and expertise. Both The Economist and the Financial Times were reportedly also clients of the data-analysis firm, which has been accused of meddling in the Brexit vote in the UK as well as the 2016 US election. A Financial Times source told BuzzFeed UK that the paper only did some market-size research with Cambridge Analytica. It wasn’t clear whether any illicit Facebook data was part of the deal or not.

Other notable stories:

  • The New York Times released its diversity report on Wednesday and said that while it has made some progress in employing more women in its newsroom and business operations — with female leadership on both the news side and the business side at 46 percent — it hasn’t been as successful when it comes to people of color. While the number of staff who fall into that category has grown, the percentage of those in leadership roles actually fell last year compared to 2016.
  • Most newspaper companies are doing their best to get out of the print business, but in Toronto there’s a brand new paper that is only in print. It’s called the West End Phoenix, and it’s a neighborhood paper published by veteran rock musician Dave Bidini, who lives in the city’s West End and says he wanted to create something that would tell stories about the neighborhood and its residents.
  • Bloomberg spins a fascinating tale about Robert Mercer, who in addition to being a billionaire Trump supporter (who has also helped finance both Breitbart News and Cambridge Analytica), happens to be a volunteer police officer for the tiny town of Lake Arthur in New Mexico — which has a population of about 435 — even though he doesn’t live anywhere near the town, and doesn’t really have any personal connection to it. To find out why, you’ll have to read the story.
  • Vice Media, the alternative giant with a valuation in the billions, appears to be running into some headwinds in India. Two senior editors have reportedly resigned their positions with the company due to editorial interference, according to a report by The Wire, after a story was killed that involved a gay activist who worked for the youth wing of the governing Bharatiya Janata Party.
  • The Tow Center at Columbia has a report in CJR that looks at the problem of disinformation by comparing two very different communities in Philadelphia. In some cases, the authors point out, “a lack of trust in media, issues of perceived relevance, and a sense of relentless negativity have led many readers to vacillate between disengaging from the news for periods of time, and seeking out alternative sources.”
  • In a bizarre incident, New York Daily News reporter Ken Lovett was arrested by state police on Wednesday for using his cellphone in the Senate chamber lobby, in breach of the chamber’s rules. After being detained, he was released by none other than Governor Andrew Cuomo, who personally went to the lockup at the state capitol and had him sprung. “I offered my services on a pro-bono basis—it just does my heart good to be able to say I freed Ken Lovett,” Cuomo said after the incident.

Affiliate ad scammers say Facebook helped them trick users

Most of the attention focused on Facebook right now is aimed at the Cambridge Analytica leak, where a shadowy Trump-affiliated organization got hold of personal data on 50 million Facebook users and targeted them with ads and fake news during the 2016 election. But this saga is just one example of how Facebook’s targeting features could be misused. As a piece by Bloomberg points out, shady affiliate marketers have been mining the social network for dubious clicks for years, and making millions of dollars by doing so.

The piece goes inside a community of digital grifters and con-men, who use social networks like Facebook, Twitter and Instagram to con people out of their money with the offer of untold riches, self-help scams and bogus health remedies. Author Zeke Faux (whose last name seems particularly appropriate in this context) writes about a conference in Berlin last year where Facebook had a large presence. The conference was supposed to be about traditional marketing, but was filled with shysters and click-farmers:

The Berlin conference was hosted by an online forum called Stack That Money, but a newcomer could be forgiven for wondering if it was somehow sponsored by Facebook Inc. Saleswomen from the company held court onstage, introducing speakers and moderating panel discussions. After the show, Facebook representatives flew to Ibiza on a plane rented by Stack That Money to party with some of the top affiliates… Officially, the Berlin conference was for aboveboard marketing, but the attendees I spoke to dropped that pretense after the mildest questioning. Some even walked around wearing hats that said “farmin’,” promoting a service that sells fake Facebook accounts.

Facebook has taken pains to point out that it doesn’t want this kind of business on the network, and says it has been working hard to get rid of scammers. Rob Leathern, who joined Facebook in 2017 as part of an effort to purge the network of affiliate marketers and similar low-life advertisers, tells Bloomberg that the days when affiliate scammers could make millions with dubious clicks are over. “We are working hard to get these people off the platform. They may get away with it for a while, but the party’s not going to last,” he says.

That could be easier said than done, however, given the head start that Facebook gave to the scammers and their supporters. Faux spends part of the piece profiling one of the men at the heart of this affiliate scam network — a Polish entrepreneur named Robert Gryn, who worked his way up through Stack That Money, then developed software that hundreds of affiliate scammers use to leverage Facebook’s ad targeting machine.

Only a few years ago, Gryn was just another user posting on Stack That Money. Now, at 31, he’s one of the wealthiest men in Poland, with a net worth estimated by Forbes at $180 million. On Instagram, he posts pictures of himself flying on private jets, spearfishing, flexing his abs, and thinking deep thoughts. Last year he posed for the cover of Puls Biznesu, a Polish financial newspaper, with his face, neck, and ears painted gold. Gryn’s prominent cheekbones, toned biceps and forearms, perfectly gelled pompadour, and practiced smile lend him a resemblance to his favorite movie character: Patrick Bateman, the murderous investment banker played by Christian Bale in American Psycho.

Tracking down ads that were placed by Russian trolls and aimed at voters during the 2016 election is complicated, but it might actually be easier than rooting out the kind of scams Bloomberg describes. When their accounts are blocked or banned, affiliate link traders simply set up new ones under other plausible-sounding names, and start again. And the same tools that allowed the Russian trolls to target voters give marketers the ability to push their ads to a vast network of gullible users for pennies per click. Read the whole piece here.

Facebook touches the third rail by mentioning accreditation of journalists

Not surprisingly, the issue of “fake news” and the role that the giant web platforms play in spreading misinformation was a big topic of conversation at the Financial Times “Future of News” conference held this week in New York. But things started to get a little heated when Campbell Brown — Facebook’s head of news partnerships — was asked by moderator Matthew Garrahan if the social network might consider “some sort of accreditation system” as part of its attempts to solve the disinformation problem.

“I think we are moving in that direction,” Brown said, at which point she was interrupted by Google’s VP of News, Richard Gingras, who was also part of the panel discussion (along with Emily Bell, director of Columbia University’s Tow Center for Digital Journalism). Gingras echoed what many journalists were probably thinking when he protested that “from a First Amendment perspective, we don’t want anyone accrediting who a journalist is.”

In tweets sent later to some journalists who made similar criticisms, Brown clarified that what she meant was not accreditation per se, but that in order to stamp out fake news, Facebook might have to verify trusted news organizations “through quality signals or other means.”

Giving what seems like approval to the idea of accreditation might be horrifying for some, since it brings up unpleasant images of countries where the government or dictator in power decides who qualifies as a journalist. But at the same time, Brown’s gaffe is somewhat understandable, because Facebook is currently trapped between a rock and a hard place when it comes to taking action on fake news and misinformation.

On the one hand, the company is being pressed by governments both in the US and elsewhere to do more to remove or de-emphasize fake news, not to mention hate speech, harassment and other negative content. But the more it does that, the more it gets accused of infringing on free speech. And every attempt to rank news outlets on vague concepts such as “quality” or “trust” looks a lot like Facebook deciding who is a journalist and who isn’t.

Until the whole Russian troll fiasco broke out into the open, Facebook could plausibly maintain the fiction that it is just a platform, and that it doesn’t play favorites when it comes to sources of news or any other content (which has never really been the case, of course). But now it is having to grapple with the realities of being a media entity and making editorial decisions about what to include and who to highlight, and that is a completely different ball game.

In their haste to curb bad speech, regulators could endanger all speech

Unless you have a specific interest in sex trafficking or proposed legislation aimed at reducing it, you might not be familiar with a bill that has been making its way through Congress, known as the Stop Enabling Sex Traffickers Act, or SESTA. The bill has already been approved by the House, and on Wednesday was overwhelmingly approved by the Senate, which means it is on its way to President Trump for approval, and if he signs it — which seems likely, based on his previous comments — SESTA will become law.

Why should you care? Because in the process of trying to combat sex trafficking, Congress could wind up endangering free speech online. As CJR described in a piece on the proposed law last year, when it was still going through the House, SESTA effectively weakens one of the key pillars of online speech: namely, Section 230 of the Communications Decency Act of 1996. That’s the clause that gives platforms like Facebook and Google immunity or “safe harbor” for the user-generated content that appears on their platforms.

In a nutshell, Section 230 is the reason Facebook, Google and Twitter can distribute your tweets or status updates or video clips without being legally liable for everything contained in them. SESTA removes that protection or safe harbor in the case of anything involving sex trafficking — which wouldn’t be a problem, except for the fact that by removing that protection, it weakens the entire edifice that is Section 230.

This isn’t happening in isolation. Section 230’s safe-harbor provisions were already coming under fire from Congress because of the belief that they insulate platforms like Facebook or YouTube from responsibility for other kinds of speech the government doesn’t like, including Russian troll campaigns, “fake news,” sexual harassment, neo-Nazi sentiments and anything that falls into the large and growing bucket labelled “terrorism.”

The risk is that if Section 230’s protections for speech are weakened — as similar protections are being weakened in Europe to try and stamp out fake news and hate speech — it gives everyone from trolls to governments license to go after all kinds of speech they dislike or disagree with. And while Facebook and Google might have the resources to deal with that, lots of smaller publishers and online services don’t.

As Senator Ron Wyden, one of the authors of Section 230, put it on the Senate floor ahead of the vote Wednesday: “In the absence of Section 230, the internet as we know it would shrivel. Only the platforms run by those with deep pockets, and an even deeper bench of lawyers, would be able to make it.” That would only entrench the dominance that Facebook and Google and other massive platforms have.

Did the Times change a story because Facebook complained?

It might not have registered for most people trying to keep up with the maelstrom of news this week about the Facebook data leak — the one in which the shadowy, Trump-linked data company Cambridge Analytica got personal details on more than 50 million users — but a number of sharp-eyed New York Times critics noticed that one of the paper’s stories about the topic changed as it was edited.

So what, you might ask? After all, that kind of thing happens on news websites all the time: A short version goes up quickly and then later is replaced by a longer version as more information comes in.

Except in this case, the Times removed a line suggesting that Alex Stamos — a senior Facebook executive in charge of security — wanted to be more open about Russian involvement on the platform, and Chief Operating Officer Sheryl Sandberg shut him down.

That sent the media conspiracy machine into overdrive. A site called Law & Crime, run by ABC News legal commentator Dan Abrams, noticed the change and wrote a story suggesting that the Times changed the story because of a complaint from Facebook.

“The New York Times apparently offers powerful third parties the ability to edit away—that is, to delete from the internet—unfavorable coverage appearing in the paper of record’s online edition,” the site wrote. The story was picked up by Glenn Greenwald, the occasionally combative journalist who runs The Intercept, who also accused the Times of watering down the story after complaints from Facebook.

Soon others joined the fray, including Kurt Walters of Demand Progress, who tweeted: “The original has multiple sources saying advocacy to disclose info about Russian activities on FB caused friction/resistance by Sandberg & other execs. The second does not.”

To their credit, Times reporters involved in the story—including Sheera Frenkel and Nicole Perlroth—responded to these allegations at length on Twitter, describing the changes to the story as nothing more than the usual editing process. They and others pointed out that the final version of the story still suggested Stamos and Sandberg clashed over the former’s desire to be more open about Russian activity; it just didn’t use the same specific sentence or word (“consternation”) as the original.

None of this seemed to dissuade Greenwald, however, who continues to maintain that the Times made a significant change after receiving criticism from Facebook, and is refusing to acknowledge it.

To be fair to Greenwald and other Times critics, some of this is the paper’s fault. It routinely changes news stories—in some cases significantly—and then never discloses or explains the change. In several cases, the changes have become the subject of columns by former Public Editor Margaret Sullivan (the Times no longer has a public editor, after shutting down the position last year).

Web geeks have been recommending for some time that the paper—and other publishers—implement a “diffs” approach, which maintains a record of all the changes in an article over time, the way Wikipedia does with its “talk” pages (WikiTribune, the new journalism venture from Wikipedia founder Jimmy Wales, has a similar system).

There is a site called NewsDiffs that tracks changes to Times stories, which is how the latest changes were discovered. But it would be so much easier if that kind of tracking system were built into the Times website. The chances of that seem vanishingly small, however. If the Times were interested in talking openly about those kinds of things, it would probably still have a public editor. All we got in this case was a response from the Times PR department on Twitter saying the Law & Crime story was false.
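For the curious, the kind of “diffs” record described above is simple to produce in principle. Here is a minimal sketch using Python’s standard difflib module; the two “revisions” below are invented, illustrative text standing in for real article versions, not actual Times copy:

```python
import difflib

# Two hypothetical revisions of the same passage (illustrative text only).
v1 = [
    "Mr. Stamos wanted to be more open about Russian activity,",
    "which caused consternation among other executives.",
]
v2 = [
    "Mr. Stamos wanted to be more open about Russian activity,",
    "which caused friction among other executives.",
]

# A unified diff is the kind of record a "diffs" page would store per edit:
# unchanged lines keep a leading space, removals get "-", additions get "+".
diff = list(difflib.unified_diff(v1, v2, fromfile="v1", tofile="v2", lineterm=""))
for line in diff:
    print(line)
```

A site like NewsDiffs effectively does this at scale: it periodically fetches each article, stores every version, and renders the diffs between them, which is also roughly how Wikipedia’s revision history works.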

Old Facebook got away with murder, New Facebook not so much

As the mushroom cloud continued to spread over the weekend from Friday evening’s nuclear blast—the news that Facebook provided personal data on more than 50 million users to a Trump-linked data company called Cambridge Analytica—one consistent theme amid all the noise and smoke was the increasingly defensive argument from senior Facebook executives that a) What happened wasn’t technically a data “breach,” and b) It happened a long time ago, before they tightened up their data usage policies, so it doesn’t relate to current events like the Trump election campaign, etc.

What’s interesting is that the response to the Cambridge Analytica incident—shock, horror, the pointing of accusatory fingers and threats of regulation—says a lot about the way that attitudes toward Facebook and what it does have shifted over time. The honeymoon isn’t just over at this point; both sides are looking at hiring expensive lawyers and taking each other to divorce court.

At the risk of appearing like a Facebook apologist, both of the points made by Facebook’s former ad executive Andrew “Boz” Bosworth and Chief Security Officer Alex Stamos have a certain amount of truth to them. The data wasn’t obtained as the result of hackers getting access to a database illegally, so it wasn’t technically a breach. Cambridge Analytica got the data because an academic researcher sold it, even though Facebook’s rules say you’re not supposed to do that, and then the firm failed to delete it.

On the second point, Facebook is right that the API access the researcher made use of—which gave him access not just to the friend graph of users who signed up for a quiz, but to their friends’ friends as well—was tightened up in 2014, after a number of privacy researchers and others pointed out it could be misused.

In one of the many responses to the Facebook/Cambridge incident, Benedict Evans, who works for Silicon Valley venture capital firm Andreessen Horowitz, defended the social network by pointing out that in the past, people complained that Facebook was doing too much censoring of the News Feed and was also too stingy with its data, and now that conversation has completely flipped.

As a VC staffer, Evans is naturally inclined to defend a great Silicon Valley success story like Facebook (which Andreessen Horowitz invested in, and AH co-founder Marc Andreessen once sat on the board of). But that’s not to say he doesn’t have a point.

Not that long ago, Facebook was criticized for removing posts too often and infringing on people’s free-speech rights, but now people seem to want it to do a lot more to remove offensive speech, fake news, and so on. And when it comes to the company’s data, one major complaint was that Facebook’s API was too locked down, not open enough, and that it should make it easier for others (including users) to get their data out. Now the criticism seems to be that it didn’t lock it down soon enough, or tight enough.

As Two-Face said in the movie The Dark Knight, you either die a hero or live long enough to see yourself become the villain, and that’s where Facebook is now: All of the things it used to do that many people celebrated as a triumph of social technology—including the ability to target individuals based on their personal data, something the Obama campaign was celebrated for doing—are the fruit of a poisoned tree, in part because we understand what Russian trolls and other governments can do with such data. Our innocence has been lost, and perhaps that’s ultimately a good thing.


The media today: Google promises up to $300 million for media

As some media companies are questioning their commitment to Facebook, in the wake of changes to the News Feed and what some see as lackluster revenue from the platform, Google appears to be making a concerted effort to replace the social network as the media’s best friend. On Tuesday, it announced a new venture called the Google News Initiative at an event held in New York City. The new venture involves a range of different projects the company says are designed to help support media companies and quality journalism, along with a commitment to spend $300 million over the next three years.

The new entity is similar in name to the Digital News Initiative, which Google set up in 2015 to help European media entities figure out how to become more web savvy, and included a $150 million fund that anyone could apply to access. It has funded research (including the annual Digital News Report from the Reuters Institute) but mostly gives out grants every year to journalists and media companies to try digital projects. That all now becomes part of the much broader Google News Initiative.

On the new site devoted to the project, Google says the News Initiative is aimed at “building a stronger future for journalism,” and that it wants to “work with the news industry to help journalism thrive in the digital age.” Some of the things it includes as part of that effort—such as training for newsrooms, or partnerships with organizations like First Draft and the Local Media Consortium—have been underway for some time, either as part of the Digital News Initiative or Google’s News Lab, which helps media companies do research. But some of what was announced on Tuesday was new.

In the newish category is the expansion of a pilot project called Subscribe With Google, in which Google partners with publishers to make it easier for users to sign up and log in to news sites. As reported earlier by Bloomberg, Google will also highlight content from outlets that users pay for when they do a search, and will share data that could help publishers figure out how to boost subscription revenue. Google also announced a new tool called Outline, which will allow media companies to create VPNs (virtual private networks) for their journalists, and the web giant plans to spend $10 million on a media literacy project through its non-profit arm, including an ad campaign involving YouTube stars.

Here’s more on Google and its expanding relationship with the media:

  • A Disinfo Lab: Google is helping launch a lab based at Harvard’s Shorenstein Center, in partnership with First Draft, where journalists will monitor disinformation in advance of and during elections around the world. And starting on April 2 (which is International Fact-Checking Day), Google says it will offer more than 20,000 students advanced training on how to distinguish misinformation online, through a partnership with the International Fact-Checking Network.
  • News Lab changes: As part of the new project, the Google News Lab is expanding its efforts, according to a post from News Lab head Steve Grove. It is adding full-time staff in Australia and Argentina to the 13 other countries where it already has employees, hiring new Teaching Fellows, and expanding its News Lab Fellowships program, which funds the hiring of journalists by newsrooms. The News Lab’s website, however, goes away, absorbed into the broader GNI site.
  • More search fixes: In addition to all of the new announcements about funding, Google’s VP of news Richard Gingras also said the company is rolling out tweaks to its search algorithm in order to “put more emphasis on authoritative results over factors like freshness or relevancy.” How exactly it defines the term “authoritative” is unclear, but Google is probably hoping it will stop conspiracy theories from turning up in YouTube results after school shootings.
  • Sour grapes? Amid all the good news about the things it wants to do for media outlets, Google is still getting some criticism about its desire for control in some of the things it already does, including the AMP (Accelerated Mobile Pages) project. Although it is an open-source effort and Google says anyone can add to it, some complain that it gives the web giant too much of a say in the process.

Other notable stories:

  • Many journalists were mourning the loss of Les Payne on Tuesday. The 76-year-old Pulitzer Prize-winning former editor at Newsday was a founder of the National Association of Black Journalists and had a journalism career that spanned almost four decades. His family said he died unexpectedly at his home in Harlem. Nikole Hannah-Jones, a writer with The New York Times Magazine, called him “a fearless trailblazer, a door opener, and a fierce champion for black & brown journalists.”
  • The fallout from the Cambridge Analytica affair continues to cause turmoil at Facebook, and could lead to sanctions against the company in addition to its falling stock price, but so far there has been radio silence from co-founder and CEO Mark Zuckerberg. According to a report from The Daily Beast, the company held an all-hands Q&A about the incident, but Zuckerberg didn’t show.
  • Speaking of Cambridge Analytica, the shadowy Trump-linked entity that got its hands on the personal data of more than 50 million Facebook users, CJR spoke with New School professor David Carroll about the lawsuit he launched in Britain recently to force the company to give him all the data it has on him. Carroll filed the claim under the UK’s Data Protection Act.
  • Karen McDougal, a former Playboy model who claims she had an affair with Donald Trump, is suing the publisher of the National Enquirer, trying to force the company to release her from a legal agreement she signed in 2016 that barred her from talking about the affair. Adult entertainment star Stephanie Clifford, also known as Stormy Daniels, is also trying to break an agreement she had to remain silent about an affair she says she had with Trump.
  • The TV news program 60 Minutes is under fire for what some see as an overly friendly segment on Mohammed bin Salman, the new ruler of Saudi Arabia. The Intercept said the piece, which praised bin Salman for cracking down on corruption but never mentioned allegations of torture or other criticisms, was “more of an infomercial for the Saudi regime than a serious or hard-hitting interview.” CJR writer Jon Allsop wrote recently about the challenges of reporting on Saudi Arabia.

David Carroll talks about his Cambridge Analytica lawsuit

Last week, David Carroll—a professor at the Parsons School of Design at the New School in New York—filed a legal challenge in Britain asking the court to force Cambridge Analytica to disclose how it came up with the psychographic targeting profile it had on him. Later that same day, Facebook announced that it had banned Cambridge Analytica from using the social network because the company had acquired the personal information of more than 50 million Facebook users in a way that contravened the social network’s terms of use, and had failed to delete it as requested.

Subsequent reporting by The Guardian, the Channel 4 TV network, and The New York Times suggests Cambridge Analytica not only used the data to target Facebook users for misinformation campaigns during the 2016 election, but also that the firm ran sophisticated black-ops campaigns in a number of countries including Kenya. Facebook, meanwhile, has been asked to appear before the UK’s Information Commissioner’s Office and a number of US Congressional committees, and there have been suggestions the company may have breached a consent decree it signed with the FTC in 2011 with respect to privacy.

Although David Carroll’s filing didn’t directly trigger these developments, the issues involved in the case—which has been crowdfunded through a web-based service—implicate not just the behavior of Cambridge Analytica or Facebook, but the entire commercial advertising-technology marketplace that both are a part of, which uses massive data-collection techniques to track, identify and target users. CJR spoke with Carroll about his case, and what follows is a transcript of that conversation, edited for length and clarity.

You seem to be at the center of a hurricane right now. How does it feel?

It’s been a crazy, crazy day. I haven’t even had time to reflect on it, really. I knew this day would come, I just didn’t know how big it would end up being. A few hours after I filed, the [Facebook] suspension announcement came. I don’t necessarily think it was connected, I think there were parallel things in motion at the same time, but the way things converged was quite astonishing. I don’t know if hurricane is the right metaphor, but I’m still processing it all. I knew about the whistleblower going into this, so I knew the scale of some of it, but I did not know that the Channel 4 sting video was coming, and that kicks it up another level. I hope to get to a point where I can record it and process it, and maybe write a book about it, but right now I’m just trying to keep up with it and not lose my perspective.

Tell me about the filing. What are you asking Cambridge Analytica for?

The basic complaint is that what Cambridge gave me is not sufficient, it’s not complete, so it’s not compliant [with the law]. There are two ways of looking at it: One is that it’s not complete based on the company’s own public statements. The company’s public position is that I should have 4,000 to 5,000 data points on myself, but when I asked I only got about a dozen. The more sophisticated take on this question is actually included in the claim filing: two academic experts have both independently said, based on their own views and assessments of the data, that there’s no way this could be complete. There’s evidence of data points beyond just the demographic ones they provided, so if you were to look at the dataset and say how do you take these demographics and get to ideology, it’s insufficient. All you have is zip code, gender, birth date and party registration, and that’s not granular enough to have such nuanced predictions. The experts I used were Phil Howard from the Oxford Internet Institute and David Stillwell from the Cambridge psychometrics lab, who was one of the three scientists who originally created this model.

Did you ever think that your lawsuit would help trigger this kind of storm of controversy?

I did, yes. I thought it would really shift the ground that data-driven advertising and marketing sits upon, because it’s too intertwined with the ad-tech industrial complex to be a separate issue. We haven’t seen all the reverberations yet, and I don’t know if we will, but what will be interesting is if we get disclosure beyond this, that Facebook isn’t the only source for this data—that commercial entities like Acxiom, Experian, comScore and so on are also involved. Then all of those companies, their image is going to be tarnished by affiliation with what is potentially a black-ops contractor like Cambridge. I hoped the suit would cause a wakeup call for the whole industry. The line that they like to give to privacy advocates is that it doesn’t do harm, you can’t prove harm so it shouldn’t be regulated, and I feel like that whole mentality is crumbling before our very eyes. That is the thing that the whole ad-tech house of cards is based on, the idea that we should be able to collect as much of people’s data as we want, because you can’t prove it’s harmful.

And you would argue that it is harmful, obviously.

The first question to ask someone who’s a skeptic is ‘Do you feel privacy in the voting booth is sacred?’ If the answer is yes, then we can work back from there. If your likes and credit-card purchases and the TV shows you watch allow us to predict what you will do to an accuracy level of 75%, that’s good enough to take away your privacy in the voting booth. It’s not just about predicting, it’s about how you can be exploited without your knowledge or understanding. What whistleblower Christopher Wylie represents is that this operation is not a typical voting-analytics operation, it doesn’t just create traditional campaign ads for candidates, it’s a full media operation that creates all manner of content, not just to resemble traditional campaign advertising, but literally fake news sites created as a proxy for political advertising. And then it starts to resemble the practices of the Internet Research Agency. If Wylie’s claims are corroborated and verified, we will be talking about a company that literally built vast networks of psychologically targeted and modelled media to distort truth and reality and to target people based on that. We’re not talking about ad banners, we’re talking about falsified media environments, completely fabricated editorial worlds, and tracking mechanisms and re-targeting mechanisms being actively used in a very sinister manner.

Some people seem to believe that Cambridge was mostly just a marketing scam, and that their methods didn’t really achieve what they promised.

I’ve heard that argument too, and all I will say is that we don’t know enough to know. I’m seeking maximum disclosure so that we can put this to bed. Here’s another possibility, if you take that idea all the way to its full completion: Maybe there is no data, and when I requested my data I gave them my driver’s license and my Con-Ed bill and they just fabricated the Excel spreadsheet, so it’s all an illusion, everything is a con or a scheme, a fabrication. It’s conceivable that that happened. That’s why we need the auditing and the forensics. The story is less about what people think about it and whether it works and more about how can we know what really happened, and then decide. People want to dismiss it, but we don’t know enough to make an assessment, and the more we learn the more it seems like it’s not what we thought it was. To get back to what drove me to do this, when I learned of the military work of the parent company [SCL Group] there was this idea that there was no longer a boundary between civilian and military sectors of this business. The data itself is intermingled, so there’s election campaign data being used for other unknown, potentially clandestine, covert purposes, and that was very disturbing and unsettling.

How did you come to start this case and why? What were you trying to accomplish?

The short answer is that it just felt right, but the long answer is that it was a natural trajectory of my career. From a sort of big picture view, I’ve been moving towards these issues very naturally, whether it’s working in the digital marketing space, trying a startup [a visual-content aggregation service called Glossy] and seeing how the sausage is made there, then being an academic, where you’re encouraged to be a critical voice, and then taking up the cause of privacy in 2014-2015. So when the issue of privacy and the technology/privacy conundrum came up in 2016, I was looking at the campaigns and I was curious what the new practices were. I knew that [presidential candidate Ted] Cruz was using Cambridge, and they were doing really, really invasive data collection, so that was on my radar. After the election, it was a feeling of I know what happened, and I think I can prove it.

How did that lead to the filing? Can you take me through the steps that led up to it?

I started finding other people that had the same mindset, like [Tow Center research director] Jonathan Albright and Paul Olivier-Dehaye [founder of], and journalists and other people working on it, and we just started figuring stuff out forensically. Then when Paul Olivier-Dehaye encouraged me to do the data request from Cambridge, I thought why not? And that set off a sort of inevitable chain of events, because when I got [the profile from Cambridge] I knew it was significant but I didn’t know exactly why. It took time to figure out why it was significant, I started talking to British lawyers and they were like ‘This is not legal.’ It was kind of an alien concept to me as an American, the idea that they couldn’t do what they’re doing in the UK, it would be illegal, because they’re doing things without consent, without the proper rights. That led to the idea that since they processed our data, we have the right to request it.

And the UK government said you could force Cambridge to release that data, even though you aren’t a British citizen?

Yes, after [Cambridge Analytica CEO Alexander] Nix went before Parliament, then Elizabeth Denham, the Information Commissioner, was asked about my case, and she explicitly said I do have standing, because they processed my data, and nothing excludes people [from using the privacy act] by citizenship. And going back to why I decided to do this in the first place, proving that the jurisdiction is in effect is its own story, the precedent that we could set if we succeed is really significant and important. And it’s happening at a time when all of these things are coming to an apex, as well as the GDPR [General Data Protection Regulation] and its impact on the industry. The timing of this is really important, I think it will create a small but significant cataclysm in the industry, and I think it will allow for some change, and to shake up the status quo. I don’t know exactly how, but it’s clear that in a year or so things will be different.

You crowdfunded your case—can you tell me a bit about how that worked?

[Guardian journalist] Carole Cadwalladr helped me out significantly by publishing a story about my suit before it was filed so that I could do the crowdfunding for it, and that was instrumental in getting the momentum going. That was in October. It was critical to being able to have the money to do what we just did. The minimum target to convert was 25,000 pounds and we made it very organically, word of mouth, we didn’t have to do any aggressive marketing of the campaign, and we have kept it on a kind of stretch target since then, so anyone who wants to donate can. I think it was at about 28,000 pounds on Friday. The stretch target is up to 100,000 pounds, but at the legal stage we’re at, we don’t need that much money. Technically I haven’t filed a lawsuit, I’ve filed a claim for pre-action disclosure, so we’re asking the judge to force them to disclose so that we can file a lawsuit. The beauty of that is if the judge forces them to disclose then we don’t need the lawsuit because we’ll get what we’re after.

So you might not proceed with the actual lawsuit if you get full disclosure?

I can’t say for sure that we won’t pursue a lawsuit, because the situation is very fluid. Going into this, all I cared about was disclosure, auditable full disclosure, and so if I get all 4,000 or 5,000 data points and the Information Commissioner says it’s a legitimate audited thing, then I could say we’ve achieved our goal. But my lawyers might advise me differently based on unfolding events, so we could take a different strategy now that the government has a warrant and is raiding Cambridge’s offices.