Google bows to pressure, says it will pay publishers for news

Note: I originally wrote this for the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer

News publishers have been begging and/or threatening Google for the better part of a decade now, trying to convince the company to compensate them for the news it carries in its search products. And for all of those years, Google has adamantly refused to do so — until now, that is. On Thursday, the search giant announced that it is launching a new product later this year focused on “high quality” news, and as part of that, it will pay a select group of publishers in several countries for access to their content, including access to news stories that sit behind paywalls. According to a number of news reports, the company has already signed deals with leading media outlets in Germany, Australia, and Brazil, including Schwartz Media, Diarios Associados, and Der Spiegel. Google hasn’t said whether it is negotiating with, or has signed agreements with, any US-based publishers.

Google’s blog post on the announcement, and tweets by chief executive Sundar Pichai, tried to make it sound as though this is just another in a long line of helping hands the company has offered to the media industry. Google News vice-president Brad Bender said it would “build on the value we already provide through search and our ongoing efforts with the Google News Initiative,” and Pichai said “for decades we have worked with publishers to grow audiences and build value [and] we continue that progress today.” And it’s true that the Google News Initiative has given journalists, media companies, and industry groups tens of millions of dollars. But it’s also true that Google has never paid publishers directly for news, and has vowed repeatedly it would never do so (its argument has always been that it sends publishers traffic, and that this should be as good as money). When Spain tried to force it to pay, Google removed the country from its News service, and during the European Union’s discussion of new copyright legislation, hinted it might do the same for the EU.

As CJR pointed out in a feature on the more than $500 million in funding that Google and Facebook have provided to the media industry, what eventually became the Google News Initiative started as a way of placating news publishers in Belgium and France who were upset at having their content “stolen” by the search company. After being targeted by lawsuits and regulatory pressure, Google promised to help publishers figure out how to use the internet to monetize their content through ads and subscriptions, as opposed to having Google pay them, and it created a fund in both countries to help finance such efforts via grants and fellowship programs. The funding was rolled into the Digital News Initiative in 2015, and then it and the Google News Lab were combined and renamed the Google News Initiative in 2018.


Objectivity isn’t a magic wand

Note: I originally wrote this for the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer

The protests over the death of George Floyd, and the way they have been covered (or not covered) by newsrooms around the country, have widened existing stress fractures in journalism around the topic of race. One of the things being called into question is the concept of objectivity. Wesley Lowery, a reporter with 60 Minutes, put some of this into words with a recent essay in the New York Times entitled “A Reckoning Over Objectivity, Led by Black Journalists.” Whatever the ideals behind objectivity might be, Lowery wrote, in practice it translates into an industry in which “the mainstream has allowed what it considers objective truth to be decided almost exclusively by white reporters and their mostly white bosses.” And it’s important to note that this not only leaves Black journalists—and other journalists of color—on the outside looking in, but also makes for worse journalism, if by journalism we mean representing the truth about the world as accurately as possible.

What qualifies as objective journalism, Lowery says, “is constructed atop a pyramid of subjective decision-making: which stories to cover, how intensely to cover those stories, which sources to seek out and include, which pieces of information are highlighted and which are downplayed.” The piece sparked a conversation on Twitter, including a response from Tom Rosenstiel, a veteran journalist, executive director of the American Press Institute, and co-author of a classic journalism textbook called The Elements of Journalism (a book Lowery cites approvingly in his essay). In a multi-tweet thread, Rosenstiel tried to clarify some of the history of how objectivity became an industry-standard principle. Objectivity began as a way of injecting more scientific rigor into the practice of journalism, he says, but instead it has turned into a devotion to false balance and other elements of what journalism professor Jay Rosen calls “the view from nowhere.”

Rosenstiel is quite right that objectivity started as an attempt to make journalism more rigorous by applying the scientific method, a structure and process designed to arrive at an objective truth. But the industry probably shouldn’t congratulate itself too much on the purity of the intentions behind this change: it wasn’t just that journalists or publishers suddenly decided that objectivity would be a good thing. It was also seen as a way to make journalism more palatable to advertisers, as the consumer-focused ad industry was becoming more national in scope. Over the next 50 years or so, objectivity came to be seen as a bedrock principle of journalism, to the point where some newspaper journalists—and journalism teachers—still argue that dismantling it will kill journalism. But as Lowery points out, what qualifies as objectivity is in the eye of the beholder, and that eye is still predominantly male and white.


Are digital giants like Facebook destructive by design?

Note: I originally wrote this for the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer

The most benign view of Google, Facebook, and Amazon is that any social or political disruption and turmoil these behemoths have caused is a side effect of the beneficial services they provide, and that any oversized market power they have is the result of good old-fashioned hard work or an accident of economics and technology. But what if that’s not the case? Dipayan Ghosh, a former Facebook staffer and former policy advisor to the Obama White House who now runs the Digital Platforms and Democracy Project at Harvard, is the author of a new book called Terms of Disservice: How Silicon Valley is Destructive by Design. Ghosh argues that these companies are monopolists, and that they engage in a wide variety of disturbing conduct — much of it involving the data of their users — not accidentally but very deliberately. “I believe that Facebook, Google, and Amazon should be seen as out-and-out monopolists that have harmed the American economy in various ways, and have the potential to do much greater harm should their implicit power go uncurbed,” Ghosh writes.

All this week, we’ve been discussing some of the themes in Ghosh’s book — including privacy, competition, algorithmic accountability, and the idea of a new social contract — in a series of roundtables hosted on Galley, CJR’s discussion platform. The Tuesday roundtable, for example, started with a one-on-one conversation about privacy with Ghosh, followed by a day-long open discussion that included Ed Felten, a professor of computer science and public affairs at Princeton and a former Deputy Chief Technology Officer with the White House; Jennifer King, the director of privacy at Stanford Law School’s Center for Internet and Society; Olivier Sylvain, a professor of law at Fordham University and director of the McGannon Center for Information Research; and Jules Polonetsky, who is CEO of the Future of Privacy Forum. The question before the panel was: “Is online privacy broken, and if so what should we do about it?”

Ghosh argued that not only is online privacy broken, but the digital giants have played a key role in breaking it to their advantage, with personal data at the heart of their business model. “These firms increasingly and perpetually violate consumer privacy to serve this consistent business model by collecting personal information in an uninhibited manner,” Ghosh says. “And relatively little of that activity is properly scrutinized, resulting in the radical corporate violation of individual privacy.” One question that came up in the roundtable was why, after two decades of this digital platform model, there still isn’t a federal privacy law. Ghosh says this is a result of what he calls the “privacy paradox.” Most users don’t see the privacy harm when they sign up for a free service — they get immediate gratification from connecting with friends, and only much later do they see the downsides, in the form of data breaches and the like.


No 1800s country estate was complete without its own garden hermit

“It turns out that the garden gnomes we now use to ornament our gardens were once real-life garden hermits. Yes, a real person who lived in a real hermitage, in a real garden. From the 15th to the 18th century, wealthy estate owners were not content with just having lavish and perfectly landscaped grounds that looked natural, with all the follies, rustic-looking trees, and lakes – there had to be a garden hermit who actually lived there.

Garden hermits, also known as ornamental hermits, were people hired by rich landowners to live on their estates, where the owners had purposely built a hermitage, with follies, grottoes, or rockeries to complete its overall look. They were expected to live on-site permanently, shun public life, and basically live in solitude. These ‘hermits’ were encouraged to dress like druids, too. Some would go as far as not bathing or trimming their hair and nails.”

(via)

Jules Verne predicted cars, fax machines and the internet

In a novel he wrote in 1863, entitled “Paris in the Twentieth Century,” science-fiction pioneer Jules Verne described a city with a network of electric lights, gas-powered automobiles, wind power, machines that transmit text and photos across long distances, high-speed trains that run on magnetic levitation, and computerized weapon systems. The central character of the story is Michel Dufrénoy, who graduates in 1960 with a major in literature and the classics, but finds they have been forgotten in a futuristic world where only business and technology are valued (he eventually dies after a famine and a nuclear-winter-style disaster wipe out most of France). Verne’s publisher refused to publish the book because he thought it was too dystopian and would ruin the author’s career, so Verne locked it in a safe, where it was forgotten until one of his great-grandsons found it. It was eventually published in 1994.