Nick Carr’s Retreat From the Internet Continues

I’ll admit it — I’ve kind of missed Nick Carr and his dyspeptic blog Rough Type. After he started on his latest book, he went on a blogging hiatus, and I found myself missing his fulminations on a variety of things, most of which I instinctively disagreed with. I think he may have spent too long away from the blogosphere, however, encased in that 16th-century form of blogging known as “books.” Either that, or the topic of his new book, which appears to be how the Internet is dumbing us down (Carr and Andrew Keen are kind of a matched set), has taken hold of him and he now believes the Internet is a kind of pernicious force in people’s lives.

His latest column is about how he has come to believe — or is close to believing — that links are bad. To be fair, his argument is a little more nuanced than that. He says that links are cognitive overhead, in the sense that they distract readers, even if they don’t follow them:

Sometimes, they’re big distractions – we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head. Even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not.

But you don’t have to take my word for it — you can go and read Nick’s argument yourself, because I have helpfully provided a link to it. You don’t have to click it if you don’t want to (possibly because you trust me to give you a fair representation of it), and you can click and open it in a tab to read later if you like, which I often do as I read things. The important thing is that I linked to it. I can also link to other things that might help you interpret it, like Marshall Kirkpatrick’s piece in response to Nick.

I could also link to a piece by Fred Wilson, a web native if there ever was one, about the “power of passed links,” in which he argues that links are the currency of the web. As with the links Nick criticizes, currency can get in the way of our lives as well: it not only makes our pockets heavy with change, but it warps people’s minds in all sorts of ways. And yet, we couldn’t very well do without it. But links aren’t just useful to readers — I think adding them is also an exercise in intellectual discipline for the writer.

As I mentioned to a number of other people who were discussing Nick’s piece, including Chris Anderson and Vadim Lavrusik, I think not including links (which a surprising number of web writers still don’t) is in many cases a sign of intellectual cowardice. What it says is that the writer is unprepared to have his or her ideas tested by comparing them to anyone else’s, and is hoping that no one will notice. In other cases, it’s a sign of intellectual arrogance — a sign that the writer believes these ideas sprang fully formed from his or her brain, like Athena from Zeus’s forehead, and have no link to anything that another person might have thought or written. Either way, getting rid of links is a failure on the writer’s part.

As I said in a comment on Nick’s post, I fully expect his next move will be to remove links of any kind — and then to ban comments as well, as “thinkers” such as Seth Godin have, since they just get in the way of all that pure thought. And then, perhaps, Nick will finally decide that the internet itself is rather over-rated, and will retreat to his books, where no one can argue with him. And that would be a shame, because arguing with him is such fun.

The Agenda on privacy, taped live at mesh10

There were too many highlights from mesh2010 for me to pick a single one, but among the top moments on any list was the taping of a live version of TVO’s The Agenda with the always excellent Steve Paikin. TVO producer Mike Miner and I started talking about the idea last year, because we had always wanted to have Steve come and interview someone but it never seemed to work out — so Mike suggested taping a whole show there, and after much working out of details that’s exactly what happened. It was a fantastic show, with Ontario Privacy Commissioner Ann Cavoukian, consultant Alan Sawyer, the wonderful Joseph Menn (who did one of the keynotes at mesh), David Fewer of CIPPIC and yours truly. Thanks again to Mike and Steve and the rest of the TVO team for being such a pleasure to work with and for helping us make this a reality.


What We Can Learn From the Guardian’s New Open Platform

British national paper The Guardian isn’t the kind of tech-savvy enterprise one would normally look to for guidance on digital issues or Internet-related topics. For one thing, it’s not a startup — it’s a 190-year-old newspaper. And it’s not based in Palo Alto or SoMa, but in London, England. The newspaper company, however, is doing something fairly revolutionary. In a nutshell, The Guardian has completely rethought the fundamental nature of its business — something it has effectively been forced to do, as many media entities have, by the nature of the Internet — and, as a result, has altered the way it thinks about value creation and where that value comes from.

Enter The Guardian’s “Open Platform,” which launched last week and involves an open API (application programming interface) that developers can use to integrate Guardian content into services and applications. The newspaper company has been running a beta version of the platform for a little over a year now, but took the experimental label off the project on Thursday and announced that it is “open for business.” By that The Guardian means it is looking for partners who want to use its content in return for either licensing fees or a revenue-sharing agreement of some kind related to advertising.
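To give a rough sense of what that looks like in practice, here is a minimal Python sketch of how a developer might pull stories through the Content API that underpins the Open Platform. The endpoint and parameter names (content.guardianapis.com/search, q, show-fields, api-key) are assumptions based on the public Content API as it exists today rather than on anything in the announcement, and the API key is a placeholder you would get by registering as a developer or partner.

```python
# Minimal sketch of querying the Guardian Content API behind the Open Platform.
# Endpoint and parameter names are assumptions based on the current public API;
# the API key is a placeholder issued on registration.
import json
import urllib.parse
import urllib.request

API_KEY = "your-api-key-here"  # placeholder credential

def search_guardian(query, page_size=5):
    """Search Guardian content and return a list of article results."""
    params = urllib.parse.urlencode({
        "q": query,
        "page-size": page_size,
        "show-fields": "body",  # ask for the full article text
        "format": "json",
        "api-key": API_KEY,
    })
    url = "https://content.guardianapis.com/search?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data["response"]["results"]

if __name__ == "__main__":
    for article in search_guardian("open platform"):
        print(article["webTitle"], "->", article["webUrl"])
```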

To take just one example, The Guardian writes a lot of stories about soccer, but it can’t really target advertising to specific readers very well, since it is a mass-market newspaper. In other words, says Guardian developer Chris Thorpe, the newspaper fails to appeal to an Arsenal fan like himself because it can’t identify and target him effectively, and therefore runs standard low-cost banner ads. By providing the same content to a website designed for Arsenal fans, however, those stories can be surrounded by much more effectively targeted ads, and thus be monetized at a much higher rate — a rate the newspaper then gets to share in.
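Staying with that example, a fan site could ask the API only for stories carrying an Arsenal tag and drop them into its own pages, where the ads around them can be targeted far more precisely. The tag filter and the football/arsenal tag id in this sketch are my assumptions about how such a query might look, not details Thorpe confirmed.

```python
# Hypothetical sketch: fetch recent Guardian stories tagged with Arsenal so a
# fan site could republish them next to better-targeted ads. The "tag" filter
# and the "football/arsenal" tag id are assumptions, not confirmed details.
import json
import urllib.parse
import urllib.request

API_KEY = "your-api-key-here"  # placeholder credential

def arsenal_stories(page_size=10):
    """Return recent Guardian stories tagged with Arsenal, newest first."""
    params = urllib.parse.urlencode({
        "tag": "football/arsenal",   # assumed tag id for Arsenal coverage
        "show-fields": "trailText",  # short standfirst to show in a feed
        "order-by": "newest",
        "page-size": page_size,
        "api-key": API_KEY,
    })
    url = "https://content.guardianapis.com/search?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data["response"]["results"]

if __name__ == "__main__":
    for story in arsenal_stories(5):
        print(story["webTitle"], "->", story["webUrl"])
```

The specific calls matter less than the shape of the arrangement: the fan site does the targeting, The Guardian supplies the content, and the two share the resulting ad revenue.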

Open APIs and open platforms aren’t all that new. Google is probably the largest and best-known user of the open API as a tool to extend the reach of its search business and other services, such as its mapping and photo services. Most social web services, such as Facebook and YouTube, also offer APIs for the same reason, though not all of them are as open as Google’s.

The Guardian, however, is the first newspaper to offer a fully open API (the New York Times has an API, but it doesn’t provide the full text of stories, and it can’t be used in commercial applications). It’s worth looking at why the paper chose to go this route, and what that might suggest for other companies contemplating a similar move — and not just content-related companies, but anyone with a product or service that can be delivered digitally.

For a content company like a newspaper, producing and distributing its content is the core of the business. Whether it’s in paper form or online, advertising usually pays the freight for that content, although subscription charges help in some cases, both for print papers and for online versions like the Wall Street Journal and the Economist. Many newspapers have come to regret their decision to provide content online for free, since online advertising isn’t nearly as lucrative as print advertising (primarily because there are far more web pages to advertise on than there are newspaper pages, so supply far outstrips demand).

So why would a newspaper like The Guardian choose to provide access to its content via an open API, and not just some of its content, but everything? And why would it allow companies and developers to use that content in commercial applications? For one simple reason: There is more potential value to be generated by providing that content to someone else than the newspaper itself can produce by controlling the content within its own web site or service. You may be the smartest company on the planet, but you are almost never going to be able to maximize all the potential applications of your content or service, no matter how much money you throw at it.

As Thorpe described in a recent interview, the newspaper sees the benefits of an open platform as far outweighing the disadvantages of giving away content. By allowing developers to use the company’s content in virtually any way they see fit — and not just some of it, but the entire text of articles and databases the newspaper has put together — it can build partnerships with companies and monetize that content far more easily than it could ever do on its own.

This is effectively the opposite of the approach that newspapers such as the Journal take, which is to put up paywalls and charge users for every page they view, or to charge them after a certain number of views (as the Financial Times does and as the New York Times is planning to do). It’s also the opposite of the approach that companies like Apple take to their business — although Apple doesn’t produce content, it exclusively licenses and tightly controls the content it does handle (such as the music in iTunes), and it applies the same type of controls to its software and hardware.

Partnerships of the kind The Guardian is working on make a lot more sense for most companies that have lost the ability to control what happens to their content — a loss the Internet has imposed on virtually anyone whose product can be digitized and turned into bits, and one that has been particularly acute for content companies. By allowing others to make use of that content for their own purposes, and sharing in the revenue that comes from it, The Guardian takes what would otherwise be a disadvantage — the fact that it has lost control — and turns it into an advantage by becoming a platform. It’s a lesson other companies could stand to learn as well, instead of continually trying to reassert or recreate the control they have lost.