The Rise of the “Second Internet” and What It Means

What is the thread that ties together the rapid rise of companies as different as Facebook, Zynga, Twitter, The Huffington Post and Quora? Wedbush Securities, a brokerage firm that analyzes the valuations of private companies, says they are all players in what it calls the “Second Internet.” Wedbush says there are certain attributes that allow such players to grow and thrive while more traditional players — including some of the leaders from the early days of the Internet — fail to prosper and gradually recede into history. The most important of these attributes, the firm says, is an understanding of the value of the social web.

The social nature of this new wave of Internet companies is such a major factor that Wedbush also calls it the rise of the “Social Internet” in a new report on the sector, and says successful companies are powered by similar features, including:

  • Open APIs that let outside developers build on the platform
  • A continuous and rapid pace of innovation (see Facebook)
  • A company/brand that listens to the dialogue and participates with customers
  • Customer contributions that account for a large share of the value/experience
  • A personalized experience for every customer
  • Discovery driven by social-graph connections rather than search

The report looks at the value of Facebook — comparing the growth of the company to the growth of Google — as well as the rise of other key players such as Quora, The Huffington Post and Zynga, and how each of them effectively took over from a leader of what it calls the “First Internet.”

So by the brokerage firm’s reasoning, The Huffington Post took over from CNN; Quora took over from Yahoo Answers, which in turn took over from Encyclopedia Britannica; Zynga took over from MiniClip, which took the place of former leader Electronic Arts; and Jive Software has taken over (or is taking over) from Google Docs, which took over from Microsoft Office. One of the few early Internet companies that seems to have what it takes to bridge this gap is LinkedIn, the firm says (although some might argue the opposite).

As part of the report, which also looks at the rise of players such as BranchOut — the Facebook-based business network that is trying to give LinkedIn a run for its money in that market — Wedbush also looks at Facebook’s potential market value, and comes to the conclusion that the company could one day be worth as much as $200 billion. That’s up from a recent private-market valuation of about $75 billion and would put Facebook firmly in Google territory.

According to the analysis by Lou Kerner, who also does secondary-market valuations of private companies like Facebook and Twitter for the website Second Shares, the giant social network could have even higher profit margins than originally forecast (as high as 50 percent, the report says), and could grab an even larger share of the growing market for online social advertising and marketing dollars (as high as 15 percent of the global market, Wedbush estimates).
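As a rough illustration of how estimates like these roll up into a headline valuation, here is a toy back-of-envelope calculation. The market size and earnings multiple below are invented placeholders — only the 15 percent share and 50 percent margin figures come from the report, and this is not Wedbush’s actual model:

```python
# Toy back-of-envelope valuation, NOT Wedbush's model. Only the 15%
# share and 50% margin figures come from the report; the market size
# and earnings multiple are hypothetical placeholders.

social_ad_market = 50e9   # hypothetical global social ad/marketing spend ($)
facebook_share = 0.15     # report's high-end estimate of Facebook's share
profit_margin = 0.50      # report's high-end margin estimate
earnings_multiple = 50    # hypothetical multiple for a fast-growing company

revenue = social_ad_market * facebook_share   # $7.5B
earnings = revenue * profit_margin            # $3.75B
valuation = earnings * earnings_multiple      # $187.5B

print(f"Implied valuation: ${valuation / 1e9:.0f}B")  # ~$188B, in $200B territory
```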

There are some caveats worth keeping in mind when reading the report, of course. For one thing, some of the leaders it identifies could easily be replaced by something else — Quora, for example, may well have peaked in terms of awareness and growth after a recent surge in popularity, and it’s not clear whether it can keep growing and become mainstream in any real sense. And when it comes to Facebook and Zynga, the firm is part of a hot private market for the shares of those companies, and so has an obvious interest in making them appear as desirable and highly valued as possible.

That said, however, reports like this one help put the spotlight where it should be: on companies that have been able to take advantage of the social nature of the web — what at one point was being called “Web 2.0” — and how that has allowed them to grow at a speed that hasn’t been seen since the early days of Google. Sometimes we are so close to these events and companies that it’s easy to lose sight of how big a transformation they have helped create in our online lives.

It also helps reinforce how difficult it is for even early Internet leaders to adapt to and take advantage of these changes, as Google is trying to do by bolting social features onto its services through moves like its recent +1 launch. Leading in one wave is no guarantee that one can lead in another — and in some cases may make that even less likely to happen.

Gladwell: Social Media Still Not a Big Deal For Activists

Author and New Yorker writer Malcolm Gladwell caused some controversy last year when he said that social-media tools like Twitter aren’t worth much as a tool for social activism (or at least not “real” social activism). After the uprisings in Tunisia and Egypt — both of which involved extensive use of Twitter and Facebook by demonstrators and revolutionaries — many wondered whether Gladwell would alter this stance based on some powerful evidence to the contrary, but the author made it clear in a recent interview with CNN that he is still skeptical about how much of an effect such tools have.

In the interview (transcript here), the New Yorker writer says that Twitter and Facebook may have been used during the recent uprisings in countries like Tunisia and Egypt, but it isn’t clear that they were crucial in any way to the revolutions there. Gladwell argues that other similar events have taken place in the past — including the demonstrations in East Germany that eventually led to the collapse of the Berlin Wall — and they didn’t require any such tools:

I mean, in cases where there are no tools of communication, people still get together. So I don’t see that as being… in looking at history, I don’t see the absence of efficient tools of communication as being a limiting factor on the ability of people to socially organize.

This is the same point Gladwell made in a short note about Egypt that he posted at the New Yorker site in February, in which he wrote that “people protested and brought down governments before Facebook was invented. They did it before the Internet came along.” As more than one observer has pointed out, this isn’t much of an argument — there were political uprisings before guns and tanks came along too, but no one would deny that guns and tanks changed the nature of social revolutions considerably. Sociologist Zeynep Tufekci called arguments about how revolutions occurred before X or Y was invented “intellectually lazy.”

Gladwell also argues that social media and other such tools can just as easily be used by dictators and governments to crack down on revolutions:

[Y]ou could also make the opposite argument that some of these new technologies offer dictators a – give them the potential to crackdown in ways they couldn’t crackdown before. So, my point is that for everything that looks like it’s a step forward, there’s another thing which says, well, actually, you know, there was a cost involved.

This might as well be called the Morozov principle, since it is a cornerstone of political writer Evgeny Morozov’s argument — in his book The Net Delusion and in his columns at Foreign Policy magazine — that the Internet is as much of a danger to social movements as it is a benefit, because government forces can monitor Facebook to see what demonstrators are up to, and track their movements using Twitter and other social tools.

But even this argument acknowledges that social-media tools have changed the nature of social activism in significant ways. They may not be 100-percent beneficial, as Morozov alleges some “cyber-utopians” believe, but they clearly have altered the landscape — and in many cases this appears to have tipped incipient revolutions in places such as Tunisia and Egypt over into real-world uprisings, something that you might expect would interest Gladwell, the author of the much-hyped book The Tipping Point.

For whatever reason, the New Yorker author seems determined to downplay the effect that social media has in such situations, despite the evidence to the contrary.

Can Co-Founder Jack Dorsey Help Twitter Find Its Way?

Rumors have been circulating for some time now that Twitter co-founder Jack Dorsey might be taking on more of a role at the company, and today Dorsey confirmed he is going to head up product development at Twitter as executive chairman, while also continuing in his existing role as CEO and co-founder of mobile-payment startup Square. The move to give Dorsey more authority at the company appears to be an attempt to show that Twitter is putting more emphasis back on the product, rather than just on making money — something that the social network has been catching a lot of flak about lately. But can Dorsey help steer the company back onto the right track?

The most recent dust-up for Twitter was the response to new rules around using its API for pulling data into third-party applications. In contrast to the more open approach the company took in its early years, when developers were encouraged to create apps and services that leveraged the growing social network, the new rules seemed to clamp down on many aspects of Twitter’s ecosystem. Combined with some heavy-handed responses to Twitter app providers such as UberMedia, this struck many as showing a different — and less attractive — side of the company.

There was also some sharp criticism from users about a recent update to Twitter’s official mobile clients, which introduced a new feature called the Quick Bar (quickly dubbed the “dick bar” after CEO Dick Costolo) — one that seemed designed primarily to push the network’s new advertising-related services. Some users said they felt that Twitter was closing off avenues for alternative app suppliers at the same time as its own apps were becoming less useful.

Dorsey’s return to prominence at Twitter is a reversal of fortune in some ways for the Twitter co-founder and his former boss Evan Williams. Dorsey started Twitter in 2006 as a side project within Odeo, a media startup created by Williams after selling the Blogger platform to Google. It soon became obvious that Twitter was more interesting than what Odeo was originally doing, and Williams shifted his attention to the new service — effectively forcing Dorsey out as CEO, something Dorsey compared to “being punched in the stomach” in a recent profile for Vanity Fair.

Last year, Williams stepped down as CEO to devote more time to Twitter’s product development, and was replaced by former Feedburner chief executive Costolo. So far, there has been no mention of what Williams will be doing in light of Dorsey’s expanded role in developing the product, and sources within the industry say the former CEO is no longer actively involved in Twitter (although he did interview Lady Gaga at the Twitter offices recently).

The big question for Twitter, and for Dorsey, is whether the network can push forward with its attempts to control its ecosystem and find new sources of monetization, while still maintaining the strengths that made Twitter so appealing in the first place. That’s a tough assignment for someone who already has a full-time CEO job at a different company, and the stakes for Twitter continue to rise along with its market valuation.

Post and thumbnail photos courtesy of Flickr user Luc Legay

The Book Deal May Be Dead, But Google Is Still Right

The Google book settlement — which the search giant signed with the Authors Guild and the Association of American Publishers in 2008, after a dispute over the company’s scanning of books — was recently struck down by a judge as too far-reaching, which is arguably true (although Google would undoubtedly disagree). But the fact that the arrangement has been rejected might not be such a bad thing, because it puts the spotlight back where it should be: on the fact that Google is doing nothing wrong, legally or morally, in scanning books without the permission of the authors or the publishers of those books.

Just to recap, Google started scanning books sometime in 2002, as part of its expressed desire to “index all of the world’s information.” In addition to deals with certain publishers and various university libraries — deals that are not affected by the book settlement or the legal ruling — Google also began sourcing and scanning books that were either in the public domain or were “orphaned” (a term used to refer to books that are still under copyright, but whose author or publisher can’t be found).

So far, so good. But Google also started scanning and indexing books that were under copyright, and then offered authors and publishers the ability to “opt out” of the program and have their books removed. Some felt that this was a good bargain — especially since Google was going to help promote their books (by revealing them in search and at the Google Books site) and give readers an easy way to buy them. Others, however, said that scanning and indexing their books without explicit permission was wrong, and filed the lawsuits in 2005 that led to the agreement.

The crux of this argument is that scanning a book makes a copy of that book, and that copying is not permitted unless a copyright holder specifically agrees. The authors and publishers made this argument despite the fact that Google only ever shows a small fraction of a text when it displays a book online. It’s not as though the company planned to make copies of all books freely available to anyone through some kind of Google Books version of Napster. But the plaintiffs argued that simply scanning them was bad enough.

This is a ridiculous position, and always has been. Scanning something makes a copy of it in the same way that my viewing a web page makes a copy of it in the RAM of my computer — I’m surprised that authors and publishers haven’t tried to argue that this is secondary copyright infringement as well.

The reality is that Google’s use of selected extracts from books or any other work is protected by the principle of fair use (PDF link), which allows limited use of published content of all kinds (text, images, etc.) without permission from the creator or the rights holder. It’s the same principle that allows Google to index and show search results for images, web pages and other content without having to ask every single site publisher or photographer.

Why is this important? Because without that ability, search engines as we know them couldn’t exist, and they are a positive force for society as a whole — just as having a single way to search (and buy) every published book in the world would be a positive thing. Imagine if we were setting up public libraries now: would any author or publisher agree to have copies of their books just sitting there on shelves, for free, with anyone allowed to borrow them for as long as they wanted to? Unlikely (and e-book publishers like Amazon are trying to roll back borrowing abilities for digital works as well). If I want to buy a book and rip it apart and then scan it and save a copy on my hard drive, I should be free to do so, and so should Google.

The big problem with the Google book settlement, as noted by the judge who struck it down (PDF link), is that the settlement gave the web giant the exclusive right to do whatever it wished with all scanned works, including selling orphan books, which is arguably over-reaching. But that doesn’t change the fact that Google’s initial impulse was the right one: it does have the right to scan and display extracts from books, regardless of what the Authors Guild and the AAP say, and it should be allowed to continue doing so.

The Biggest Flaw in the NYT Pay Plan: It’s Backward-Looking

I didn’t get a chance to write about the launch of the New York Times subscription plan last week for a number of reasons (okay, I was on a beach) but I’ve since read most of what others have written about it, and the general consensus seems to be that it is a) confusing, and b) a sign of desperation. But while both of these things are arguably true, my big problem with the newspaper’s money grab is that it is fundamentally backward-looking. More than anything else, it feels like a defensive move to buy some time while the paper figures out what it wants to be when it grows up.

There’s no question that the details of the plan are somewhat bewildering, as Felix Salmon of Reuters has noted: occasional readers get to see 20 “items” for free in a month — with the definition of an “item” subject to the restrictions described in the newspaper’s FAQ on the topic — and then they have to sign up for a plan. Paying $15 a month gets you access to the website and the iPhone app, but (somewhat surprisingly) not the iPad app. For $20 a month you get access to the website and the iPad app, but that doesn’t get you a subscription to the iPhone app. If you want access across all platforms, you have to pay $35 a month.
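To make those tiers a little easier to follow, here is a minimal sketch of the plan logic as the article describes it — the code and names are hypothetical, not the Times’ actual implementation:

```python
# Toy sketch of the NYT-style metered paywall described above.
# The tier structure mirrors the article; the code itself is a
# hypothetical illustration, not the Times' actual system.

FREE_ITEMS_PER_MONTH = 20

PLANS = {
    "web + smartphone": {"price": 15, "platforms": {"web", "iphone"}},
    "web + tablet":     {"price": 20, "platforms": {"web", "ipad"}},
    "all access":       {"price": 35, "platforms": {"web", "iphone", "ipad"}},
}

def cheapest_plan(platforms_needed: set[str]) -> str:
    """Return the cheapest plan covering every platform the reader wants."""
    eligible = [(p["price"], name) for name, p in PLANS.items()
                if platforms_needed <= p["platforms"]]
    return min(eligible)[1]

def hits_paywall(items_read_this_month: int) -> bool:
    """Occasional readers get 20 free items a month before the meter trips."""
    return items_read_this_month >= FREE_ITEMS_PER_MONTH

print(cheapest_plan({"web", "iphone"}))          # web + smartphone ($15)
print(cheapest_plan({"web", "ipad"}))            # web + tablet ($20)
print(cheapest_plan({"web", "iphone", "ipad"}))  # all access ($35)
print(hits_paywall(21))                          # True: the meter has tripped
```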

Once you get past these nuances, however, it becomes fairly obvious that the pay plan has little or nothing to do with promoting the iPad app or the iPhone app, or even the newspaper’s website. Instead, it seems pretty clearly designed to protect the subscription numbers for the printed version of the Times: if you subscribe to virtually any version of the paper, including the Sunday-only option, everything digital comes along with it for nothing. In other words, you can pay as little as $30 a month and get the entire contents of the newspaper in whatever form you want.

In that sense, the Times pay plan seems to be motivated by the same impulse as the paywalls at other newspapers, such as the Times of London and the Sunday Times — where News Corp. erected subscription plans last year — and at Newsday, which launched one in 2009. As I pointed out in a post at the time Rupert Murdoch was planning to launch paywalls at his British papers, the main point of these walls was to keep people in, not to keep people out. Since print continues to deliver the majority of revenue for newspapers such as the Times and the NYT, it’s crucial to keep readers from cancelling their print subscriptions and simply moving to read everything for free online.

Is this a compelling financial rationale for a pay wall or subscription plan? Perhaps. There’s no question that, as the New York Times admitted when it announced the new plan, newspapers need to find new sources of revenue to replace declining advertising income. But I’m skeptical that the pay plan is going to produce the $100 million or so in new revenue that some seem to think it will.

Even more than that, however, the Times’ subscription model seems fundamentally reactionary, and displays a disappointing lack of imagination. The newspaper seems to be saying: “What we do is valuable, and you have been getting it for free, so it’s time to pay up.” Some readers may accept that rationale, and sign up — but others are going to go elsewhere, or reduce their NYT consumption to links that arrive from blogs, Twitter and Facebook, which the newspaper has (wisely) decided will be exempt from the subscription wall. And so the paper will continue to use its new digital assets primarily to subsidize its declining print business.

The success of aggregators like The Huffington Post — which NYT executive editor Bill Keller recently ranted about — is only the most recent sign that the way many people consume their news has changed forever. It is no longer about picking a specific outlet like the Times, and then relying solely on that for news and opinion about the world. Traditional media like the NYT have lost the control they historically had over distribution and consumption, and attempts to reimpose the scarcity those things once had are ultimately futile.

The exemption for Twitter and other social media is a sign that the Times understands this on some level at least, but why not go further? Why not offer people a subscription to special Twitter direct messages or Facebook Q&A sessions with writer Nick Kristof as he reports on the uprisings in the Middle East? What about offering real-world events that involve some of its most prominent voices, where people can meet them and network with each other at the same time? The music industry seems to be waking up to the fact that individual songs are simply a loss leader that drives demand for other services, but the New York Times is still trying to charge people monthly fees for undifferentiated news content.

Will the subscription plan bring in some money? No doubt. But putting meters on its existing content is not going to save the Times. In order to really take advantage of the revolution that is underway in the content business, it needs to start thinking about what it does in a whole new way, and there are few signs of that happening any time soon.

Post and thumbnail photos courtesy of Flickr user Mark Strozier

What Twitter Needs to Learn From Digg’s Decline

Birthdays are a natural time for reflection, even if you’re only five years old, which is the age that Twitter officially turned today. It may not seem like much, but that’s about 35 in Internet years, which means the company is close to being middle-aged — and Twitter has definitely been struggling with some mid-life challenges lately. Another much-hyped web startup also just had a birthday recently: Digg, which turned six in December, has been struggling as well, after a failed redesign and the departure of founder Kevin Rose. And Digg’s decline from pioneering service to also-ran contains some lessons for its fellow social-media service.

In some ways, it’s hard to believe that Twitter has been around for five years. The service that Jack Dorsey and Biz Stone originally launched as a side project within Evan Williams’ company Odeo — which was later shut down, with Williams taking over from Dorsey as CEO of Twitter, in a move that reportedly caused some bad blood — didn’t seem like much when it first launched, even to Om. Like most users, I thought the service was fairly useless when I joined in early 2007, and I spent months wondering what I was supposed to do with it before a critical mass of friends and other interesting people joined, and it began to come to life.

Twitter’s status as a powerful real-time news platform didn’t really become clear until it was used to transmit updates about the forest fires in California in 2007 and during an earthquake in China in 2008. Gradually, people started to see it as something other than just a way of talking about what you were having for lunch — and when Janis Krums used Twitter to post a picture of a plane crash-landing in the Hudson River in 2009, the reality of Twitter as a news-publishing system started to go mainstream. Every subsequent event, from terrorist attacks in Jakarta to the earthquake in Haiti, has reinforced the idea that the service lowers the barriers to entry for publishing, as Evan Williams put it last year.

The most recent example was the use of Twitter and Facebook by dissidents in Tunisia and Egypt to co-ordinate demonstrations and uprisings against their governments, and the compelling stream of news from participants that Twitter carried out of Tahrir Square to the world, thanks in part to real-time news curators like Andy Carvin of National Public Radio, who created what was effectively a one-man wire service.

More than anything else, however, Twitter has become a platform for community — whether it’s a community of people interested in revolutions in the Middle East, or a community that is obsessed with the latest product release from Apple, or a community that wants to know what John Cusack or Steve Martin think about current events. And one of the hallmarks of social services like Twitter and Facebook is that the more people use them to connect with each other around things they are passionate about, the more they feel like they own them to some extent — and that feeling is what Twitter is currently fighting as it tries to mature as a company.

You can see that in the outraged responses to the recent Quick Bar fiasco, and to the shutting down of third-party clients like Bill Gross’s UberMedia — which has been trying to develop its own competing monetization strategy for the social network — and to the rollout of services such as Promoted Tweets and Promoted Trends. As I’ve argued before, users have grown so used to seeing Twitter as a utility that every move the company makes to add money-making layers or to control its ecosystem is seen as an affront in some sense — as if someone who invited you to a party at their house is now asking you for money, or putting up turnstiles and imposing all kinds of rules on your behavior.

Although the two services are different in many ways, Digg has also been struggling with the same kinds of issues — and some of those struggles are directly related to Twitter, since Digg’s link-sharing features, which were once a pioneering example of what some called Web 2.0, have arguably been overshadowed by the growth of Twitter. But Digg has also rolled out its own poorly-received design features: the service launched Digg v4 last August and the new design was roundly criticized as unstable and (more importantly) a breach of faith with the traditional Digg community. The site’s traffic plummeted, the new CEO rolled back most of the new features and laid off almost 40 percent of the staff, and founder Kevin Rose is moving on to start a new venture.

So what are the lessons that Digg has to teach Twitter? One is that even pioneering services, whose founders appear on the covers of leading business magazines, can be overtaken by events, and by other services that don’t even exist yet. Yes, it’s true that Twitter is supposedly worth $10 billion, and is much larger than Digg ever was — but that lesson still applies (as MySpace is well aware). And the other lesson is that the core of a social network is the community of users, and arguably in Twitter’s case the community of developers as well, or the “ecosystem.”

Alienating either or both of those groups is a very risky strategy, as Digg has discovered. It could pay off in Twitter’s case, but it could also ruin one of the key features that make the network so powerful and compelling, and that is something that would be very difficult — if not impossible — to recapture.

Post and thumbnail photos courtesy of Flickr user Will Clayton

Why Twitter Should Think Twice Before Bulldozing the Ecosystem

In another shot fired across the bow of the Twitter ecosystem — or another volley in the ongoing Twitter wars of 2011 — the company has come out with new terms on which all developers must operate, terms that make it clear Twitter plans to own the majority of the value in the system and relegate third-party apps to the periphery. As with the company’s other recent moves, including shutting down misbehaving apps, the response from many parts of the network has not been friendly. And while Twitter can probably get away with this kind of behavior, it is taking a real risk of losing much of the goodwill it has built up over the years.

Critics have accused the company of “nuking” the developers and services that helped it achieve its early growth in its drive to monetize its network, in much the same way that Hunch founder and angel investor Chris Dixon criticized the company last year for “acting like a drunk guy with an Uzi” after it acquired Tweetie. Some have given the company credit for at least laying out the rules in a clear manner with its latest API update, since much of the developer community has been unclear on what was permitted and what wasn’t, but those responses seem to be in the minority.

The point has become clear by now: anyone who is still under the impression that Twitter is the friendly, touchy-feely company that co-founder Evan Williams used to run — the one that admitted it “screwed up” relations with developers by moving too quickly — is living in a dream world. Twitter CEO Dick Costolo may have been a standup comedian at one point, but he is a businessman now, and Twitter is determined to do whatever it takes to come up with a business model to justify the huge valuations it is getting.

As MG Siegler has pointed out, what Twitter is doing is just business and not personal — but there is a reason that most businesses don’t operate the way the Mob does (other than the fact that killing people is illegal, of course). Acting that way, by routinely kneecapping people or setting their businesses on fire, is a risky proposition. Even if you *can* do it, it’s not clear that you *should* do it, especially if some of your business depends on goodwill (as opposed to fear), as Twitter’s clearly does, and especially if a large part of your success is due to that larger ecosystem.

Without the help of third-party apps like Tweetie and Tweetdeck, the company likely would not have been nearly as successful at building the network (and a ready-made client like Tweetie certainly wouldn’t have been sitting there waiting to be acquired). But the ecosystem didn’t just build demand for the network — it also helped build and distribute the behavior that now makes Twitter so valuable: the @-mentions, the direct messages, the retweets and so on, none of which were Twitter’s idea originally. That created a huge amount of goodwill, and led to the (apparently mistaken) idea of an ecosystem.

It’s all very well for Twitter to claim ownership of all those things now, since it is the company’s platform. And obviously there are businesses that can get away with being arbitrary or dictatorial — Apple is well known for such behavior, after all, and it is one of the most valuable companies on the planet. But this only works over the longer term if your product is so unique and compelling that people will put up with it. Is Twitter in that category? Perhaps. The company managed to grow at an astronomical rate even when it was suffering repeated outages, because users (including me) were so addicted to it. That may have made Twitter a little cocky about how necessary it is.

It’s also true that there isn’t really much competition when it comes to micro-blogging, or whatever we choose to call Twitter. Open-source options such as Status.net have tried to get traction, and programmer Dave Winer has been lobbying for and trying to jump-start an open Twitter alternative for some time — even before the company made it obvious that it was planning to “prune” the ecosystem. So far nothing has come along that can compete, but Twitter’s behavior could serve to boost those efforts substantially. And there would be definite benefits to an open system — not just in terms of features, but for those times when governments decide to order companies like Twitter to hand over user information as part of espionage investigations.

In the short term, Twitter seems likely to get away with throwing its weight around and dictating the terms on which developers — and users, to a large extent — can access or make use of the network. And maybe the network has grown to the point where none of that matters any more. But sometimes when you bulldoze an ecosystem, what you wind up with is a lot of weeds and a corporate mono-culture in which growth no longer flourishes, and in some cases that growth subsequently moves elsewhere. That’s a risk Twitter seems willing to take — whether it is the right one remains to be seen.

NYT Editor Says It’s Only Journalism When He Does It

If you’re a traditional journalist, or someone who works for a traditional media outlet, the easiest way to cast aspersions at a web-based or digital media company is to use the A word: that is, “aggregation.” New York Times executive editor Bill Keller stayed true to form in a piece he wrote for his newspaper Thursday, in which he categorized The Huffington Post and other unnamed online media outlets as pirates who are in the business of “counterfeiting” content rather than engaging in “real” journalism. In only a few paragraphs, the NYT editor managed to say volumes about how little he understands where media is now, or where it is going.

Keller’s piece starts out as a humble discussion of his status as the 50th most powerful person in the world (according to Forbes magazine) and how he thinks this is absurd, since he just runs a newspaper. But it quickly becomes a complaint about how members of the media — and assorted “flocks of media oxpeckers who ride the backs of pachyderms, feeding on ticks,” as well as professional pundits such as Clay Shirky and Jay Rosen — spend too much time talking about media in the abstract instead of doing it.

Then he launches into an attack on aggregators, saying the media industry has “bestowed our highest honor — market valuation — not on those who labor over the making of original journalism but on aggregation,” an obvious reference to the $315-million acquisition of the Huffington Post by AOL.

And what does the term aggregation mean? That seems to depend on who does it. The NYT editor says that aggregation can mean “smart people sharing their reading lists, plugging one another into the bounty of the information universe,” and then he admits that this “kind of describes what I do as an editor.” So aggregation is journalism then? But wait — Keller goes on to say that aggregation often amounts to:

[T]aking words written by other people, packaging them on your own Web site and harvesting revenue that might otherwise be directed to the originators of the material. In Somalia this would be called piracy. In the mediasphere, it is a respected business model.

This is where he calls out the Huffington Post, whose founder is “the queen of aggregation,” having discovered that “if you take celebrity gossip, adorable kitten videos, posts from unpaid bloggers and news reports from other publications, array them on your Web site and add a left-wing soundtrack, millions of people will come.” The NYT editor goes on to say that while AOL called the acquisition of Huffington Post a key part of its content strategy, buying an aggregator and calling it a content play is “like a company announcing plans to improve its cash position by hiring a counterfeiter.” (Update: Arianna Huffington has posted a response to Keller’s piece at her site).

Keller seems to be missing the point that all media — both online and offline — is to some extent about aggregation. Even newspapers like the New York Times aggregate content from newswires and occasionally rewrite that content to make it their own. Yes, they pay those newswires for the privilege — and the Huffington Post pays too, only in attention, which it directs back to the original source, just as Google pays with links when it aggregates content at Google News. According to a Huffington Post staffer, news websites actually beg the site to aggregate their content, since being aggregated brings them more traffic.

Aggregation is a term that covers a wide variety of behavior, some of it nefarious and much of it not. To take just one example, look at what Andy Carvin of National Public Radio has been doing by pulling in and republishing Twitter posts from hundreds of different people — both individuals and journalists, including New York Times writer Nick Kristof — as a way of covering the revolutions in Tunisia and Egypt.

Is that aggregation? Sure it is (or “curation,” as some prefer to call it). Is Carvin not taking reports from unpaid bloggers and news reports from other publications and republishing them? Of course he is. But he’s also engaged in a very real form of 21st-century journalism. And maybe if Bill Keller spent a little more time trying to understand how aggregation works instead of railing against it, the New York Times would be a little further ahead in this new media game, instead of playing catch-up with Arianna Huffington.

The Race to Build a Personalized and Social News Reader

Ever since the web first started to become mainstream, there have been attempts to build the “Daily Me,” a personalized newspaper that learns what you like or are interested in (does anyone remember PointCast?). But as I noted in a recent post on the topic, many of these efforts are lackluster at best, and irritating at worst. They either require too much fiddling to tune them, or they don’t show any intelligence at all (or both). But that doesn’t stop companies from trying — and the most promising entrants in this race so far are those that try to build their recommendations on top of the social signals coming from Twitter and other networks.

The latest to join the field is a personalized magazine app for the iPad called Zite, whose name is a play on the German word “zeitgeist,” meaning “the spirit of the times.” The company behind the app is based in British Columbia, and has been funded by angel investors and research grants from the Canadian government, and CEO Ali Davar says Zite has been working on its recommendation engine for several years. An earlier version of the project, which is based on technology developed at the University of British Columbia’s Laboratory for Computational Intelligence, involved a browser extension called Worio that suggested related results when users did a Google search.

The Zite app pulls in your Twitter account and your Google Reader feeds (if you have them), and then suggests topics based on your interests. This was the first place where it fell down for me — it said that it didn’t have enough information about me, which I thought was odd, since I have been on Twitter for about four years, have posted more than 35,000 tweets and follow over 2,000 people. I’ve used Google Reader for years as well, and am subscribed to about 600 feeds. Although Zite got some of its suggestions right, it recommended Barcelona as a topic, which was totally out of left field — in fact, I can’t recall ever mentioning the Spanish city before.

Although Robert Scoble says that Zite doesn’t feel as slick as Flipboard, I thought the app worked quite well in terms of usability — you can swipe to move through articles, click to read them in a built-in browser, and share them easily (although you can’t save them to Instapaper, which is a shame). And with each article, you get asked whether you like the content and want to see more of it, which is something that other apps and services such as Flipboard are missing. It requires some effort on a reader’s part to do this training, and many will probably not do it, but it is crucial for learning likes and dislikes.
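For a sense of what that training can do, here is a minimal sketch of per-topic preference updating driven by like/dislike taps — a naive illustration under my own assumptions, not Zite’s actual recommendation engine:

```python
# Minimal sketch of preference learning from like/dislike feedback.
# A naive illustration, not Zite's actual engine.
from collections import defaultdict

scores = defaultdict(float)  # topic -> accumulated preference score

def record_feedback(topics: list[str], liked: bool, step: float = 0.1) -> None:
    """Nudge the score of every topic attached to an article up or down."""
    for topic in topics:
        scores[topic] += step if liked else -step

def rank_articles(articles: list[dict]) -> list[dict]:
    """Order candidate articles by the reader's accumulated topic scores."""
    return sorted(articles,
                  key=lambda a: sum(scores[t] for t in a["topics"]),
                  reverse=True)

record_feedback(["media", "paywalls"], liked=True)
record_feedback(["celebrity"], liked=False)
queue = rank_articles([{"title": "NYT meter", "topics": ["media", "paywalls"]},
                       {"title": "Gossip",    "topics": ["celebrity"]}])
print([a["title"] for a in queue])  # ['NYT meter', 'Gossip']
```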

One glaring omission from Zite is the lack of Facebook integration. Davar says that Facebook tends to provide sources that are too heterogeneous (that is, too diverse) to be a source of good recommendation data, and that might be true, but it’s still a giant social network and a huge part of many people’s online news consumption, so it seems odd to leave it out — especially when the billions of “like” buttons scattered around the web could provide so much data on what people want to read (Yahoo Labs has just released an interesting survey of what that data shows for some of the major news sites).

There’s one nagging question that keeps jumping out at me as I look at all of these apps and services, however, and that is: where is Google? The combination of smart aggregation and algorithm-driven personalization seems like something the search engine should be all over. Google News has added some personalization aspects, but they are anemic at best, and one of the original customized news-readers — Google Reader — hasn’t really capitalized on that opportunity much at all, although it does provide some recommendations.

The reality is that the RSS reader has been eclipsed (for the small proportion of the population who even used one) by Twitter and Facebook and other social news sources, or smart aggregators such as Techmeme and Mediagazer. Google has more or less failed to take advantage of that transition at all when it comes to news reading, although it is trying to add social signals to search. Why not take FastFlip and try to make it a Flipboard or Zite or News360 competitor?

And apart from the Washington Post’s new Trove project and the News.me spinoff from the New York Times that Betaworks is close to launching, newspapers — who should know a thing or two about filtering and recommending the news to people — are virtually nowhere in this game.

If there’s one thing that web users need more than ever, it’s smart filters to help them navigate the vast tsunami of information that comes at them every day (it’s not information overload, says Clay Shirky, it’s “filter failure”). Someone is going to solve that problem, and if they do it properly they could capture a significant share of the online news-reading market.

Newspapers Hope Readers Will Throw Money Over the Wall

As the financial screws continue to tighten on traditional media companies, more and more are choosing to put their eggs into the basket labelled “paywall,” despite a conspicuous lack of evidence that erecting barriers to non-paying readers — or turnstiles that charge them after they have read a certain number of articles — has any beneficial effect. The latest to go this route is the Dallas Morning News, which put up its wall this morning, and the New York Times is also said to be close to launching its metered-access plan. But in the long run, these walls are really just sandbags against a rising tide.

The Dallas Morning News paywall, which the paper has been working on since the middle of last year, does have some holes in it that are designed to mitigate the extent to which it shuts out readers: non-subscribers to the paper will be able to read headlines, blogs, obituaries, classifieds and any syndicated content for free, but local news will be blocked. And the news doesn’t come cheap: a subscription to the print newspaper and all of the Dallas publisher’s digital content (which includes an iPad app) is $33.95 a month, and an online-only subscription is $16.95 a month. By comparison, Rupert Murdoch’s new iPad app The Daily costs $4 a month or $39.99 for a year.

Last month, Dallas Morning News publisher Jim Moroney admitted that he was unsure whether the paywall would work or not, telling the Nieman Journalism Lab that “This is a big risk — I’m not confident we’re going to succeed. But we’ve got to try something. We’ve got to try different things.” Moroney was similarly blunt in a memo to his newsroom staff about the launch of the wall:

So why, beginning tomorrow, are we going to require a subscription to access much of the content we originate and distribute digitally? The reason is straightforward: Online advertising rates are insufficient at the scale of traffic generated by metro newspaper websites to support the businesses they operate. We need to find additional and meaningful sources of revenue to sustain our profitability.

The bet being made by papers like the Morning News — and Gannett, which is experimenting with paywalls at a number of its papers, and says it plans to roll the strategy out to other publications — is that a paywall can do two things: one is to keep existing print readers from cancelling their subscriptions so they can read for free online, and the second is to generate more revenue, not just from subscriptions but by convincing advertisers that readers who pay for their content are more desirable as targets of advertising.

This is the argument being made by News Corp., which launched paywalls at two of its British newspapers late last year and saw its online readership plummet by more than 90 percent. The company has said that it isn’t concerned about the decline, and that advertisers are proving to be receptive to its claims that the remaining readers are more engaged and therefore worth more. What impact that will have on the company’s actual finances remains to be seen, however.

The New York Times, meanwhile, is expected to launch its “metered access” plan soon, which is based on a similar model used by the Financial Times that provides a certain number of free articles per month before readers hit a wall. The NYT has experimented with a paywall before — in 2005 it launched TimesSelect, which put the paper’s columnists behind a wall, but the service (which former Guardian digital head Emily Bell credits with helping to jump-start The Huffington Post) was shut down in 2007. And some financial analysts are skeptical that the new wall will be any better in terms of helping the paper’s business: William Bird of Lazard Capital recently rated the stock a “sell,” saying it was like buying “a declining annuity,” and that the paywall was unlikely to help.

The reality is that the biggest problem for traditional newspaper companies — a combination of high costs and falling ad revenues — isn’t something a paywall is going to help solve. At best, it is a stop-gap measure that might slow their decline, and an ultimately futile attempt to reimpose scarcity on their content in an age when the supply of free content is virtually unlimited.

Why Facebook Is Not the Cure For Bad Comments

There’s been a lot of discussion recently about Facebook-powered comments, which have been implemented at a number of major blogs and publishers (including here at GigaOM) over the past couple of weeks. Supporters argue that using Facebook comments cuts down on “trolling” and other forms of bad behavior, because it forces people to use their real names instead of hiding behind a pseudonym, while critics say it gives the social network too much power. But the reality is that when it comes to improving blog comments, anonymity really isn’t the issue — the biggest single factor that determines whether they are any good is whether the authors of a blog take part in them.

According to TechCrunch’s MG Siegler, the addition of Facebook comments seems to have improved the quality of the comments that the blog receives, but has reduced the overall number of them, which he says may or may not be a good thing — since some people may be declining to comment via Facebook as a result of concerns about their privacy, etc. A bigger issue, says entrepreneur Steve Cheney, is that using Facebook as an identity system for things like blog comments forces users to homogenize their identity to some extent, and thus removes some of the authenticity of online communication.

Although Cheney’s argument caused Robert Scoble to go ballistic about the virtues of real names online, Harry McCracken at Technologizer had similar concerns about the impact that Facebook comments might have, saying it could result in comments that are “more hospitable, but also less interesting.” And social-business consultant Stowe Boyd is also worried that implementing Facebook’s comments is a continuation of the “strip-malling of the web,” and that

Facebook personalizes in the most trivial of ways, like the Starbucks baristas writing your name on the cup, but they totally miss the deeper strata of our sociality. But they don’t care: they are selling us, not helping us.

There’s no question that for some people, having to put their real name on everything they do online simply isn’t going to work, because they feel uncomfortable blending their personal lives with their professional lives, or vice versa. Those people will likely never use Facebook comments, and that is a real drawback to hitching your wagon entirely to Facebook.

But the biggest reason not to rest all of your hopes on Facebook comments is that Facebook logins are not a cure for bad comments, real names or no real names. The only cure is something that takes a lot more effort than implementing a plugin, and that is being active in those comments — in other words, actually becoming part of an ongoing conversation with your readers, even if what they say happens to be negative in some cases. This is a point that Matt Thompson of National Public Radio made in a blog post, in which he talked about the ways to improve the quality of comments:

Whether online or offline, people act out the most when they don’t see anyone in charge. Next time you see dreck being slung in the bowels of a news story comment thread, see if you can detect whether anyone from the news organization is jumping in and setting the tone.

As Thompson notes, the standard defense for not doing this is a lack of time, and responding to reader comments definitely takes time. But it’s something that we feel strongly about here at GigaOM, and it’s something that we are determined to do, to the best of our ability — regardless of whether it is through our regular comment feature, or through the Facebook plugin. In the end, it’s not the tool that matters, it’s the connection that it allows.

Hyper-Local News: It’s About the Community or It Fails

According to multiple news reports this morning, AOL has agreed to acquire hyper-local news aggregator Outside.in for a sum that is reported to be less than $10 million, substantially below the $14.4 million that the company has raised from venture funds and other sources. After four years of trying, the service has more or less failed to become much more than a local aggregator, pulling in automated feeds of news, blogs and keyword searches based on location.

There is a business in doing this, but not a very big one — and that’s because simply aggregating data isn’t going to produce enough traffic or engagement to get advertisers interested. As Marshall Kirkpatrick notes, the field is littered with hyper-local experiments that have not really succeeded. Why? I think it’s because many of these, including Outside.in, focus too much on the how of hyper-local — the automated feeds and the aggregation of news sources, which sites like Everyblock (which was bought by MSNBC in 2009) and Topix do with algorithms based on location — rather than the why. And the why is simple: to serve a community. Unless a site or service can do that, it will almost certainly fail.
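To make the distinction concrete, here is a minimal sketch of that “how” — feeds filtered by location keywords — under my own assumptions; it is not how Outside.in, Everyblock or Topix actually work:

```python
# Minimal sketch of location-based news aggregation (feeds plus
# keyword matching by place). Illustrative only; not the actual
# implementation of Outside.in, Everyblock or Topix.

NEIGHBORHOOD_KEYWORDS = {"park slope", "5th avenue", "prospect park"}

def is_local(story: dict) -> bool:
    """Keep a story if its text mentions the target neighborhood."""
    text = (story["title"] + " " + story["body"]).lower()
    return any(kw in text for kw in NEIGHBORHOOD_KEYWORDS)

feed = [
    {"title": "New bike lane on 5th Avenue", "body": "City council voted..."},
    {"title": "National election roundup", "body": "Across the country..."},
]
local_stories = [s for s in feed if is_local(s)]
print([s["title"] for s in local_stories])  # only the bike-lane story
```

Automated filtering like this can fill a page with plausibly local items, which is exactly the point of the paragraph above: the mechanics are easy, while the community part is hard.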

So how do you do that? The most successful community news operations — like a startup called Sacramento Press, which continues to grow rapidly despite the presence of a traditional newspaper competitor in the McClatchy paper The Sacramento Bee, or a Danish newspaper project called JydskeVestkysten, which has thousands of community-based correspondents who submit content for a series of hyper-local sites — come from the communities that they serve. They aren’t data aggregators imposed on those towns and regions by some external source; they come from within them.

The easiest way to see whether a hyper-local site is working or not is to look at the comments. Are there heated discussions going on in the comments on stories? If not, then the site is likely to be a ghost town. History is filled with local news experiments like Backfence — which was founded by former Washington Post staffer Mark Potts and shut down in 2007 — and Dan Gillmor’s Bayosphere, which never really managed to connect with the communities they were supposed to be serving, despite all the best intentions. Among the startups trying to take a community-first approach is OpenFile, a kind of pro-am local journalism startup based in Toronto.

In the comments at Read/Write Web, the founder of Everyblock, programmer and entrepreneur Adrian Holovaty, said that his service is trying to add more community to its sites by focusing on comments and discussion around the issues — and that’s a good thing, because without it, there is nothing but a collection of automated data, and no one is going to form a strong relationship with that.

Topix, which says it is one of the largest local news services on the web, started out doing the news aggregation thing just like Outside.in and Everyblock, co-founder Chris Tolles said recently in an interview with me, and then almost accidentally started to become a community hub for lots of small towns and regions that didn’t have anywhere else to talk about the issues. Topix has focused on expanding those kinds of discussions, by targeting local hubs with features such as election-based polls during the recent mid-term elections, in order to spark more debate and engagement.

This is the central challenge for AOL and its Patch.com effort, which has already spent over $50 million launching hyper-local news operations in almost a thousand cities across the United States. The sites are designed to be one-man or one-woman units, with a local journalist (in many cases, one who came from a traditional media outlet) as the core of the operation, writing local news but also pulling in other local content from blogs, government sources and elsewhere. And most of the sites highlight the comments from readers prominently, which is smart.

But can this massive, manufacturing-style effort from a web behemoth manage to connect with enough towns on a grassroots level and really become a core part of those communities? Because without that, AOL is pouring money into a bottomless pit.

Newspapers Need to Be Of the Web, Not Just On the Web

The secret to online success for newspapers doesn’t depend on the choice of technology, or decisions about content, or even specific kinds of knowledge about the web, says Emily Bell — the director of the Tow Center for Digital Journalism at Columbia University, and the former head of digital for The Guardian. All it requires, she says, is a firm commitment to be “of the web, not just on the web.” Speaking at a journalism event in Toronto last night, Bell said the biggest single factor in the success that The Guardian had online was the determination to be part of the web, and to embrace even the controversial aspects of the online content game — including user-generated content and the use of tools to track readers and traffic. “It’s useful to have the digital skills,” she said, “but more important to have a digital mindset.”

One of the most controversial things The Guardian did early on, according to Bell, was to launch the Huffington Post-style Comment Is Free platform in 2006, which allowed anyone to submit opinion or commentary pieces and have their blog posts run alongside the traditional columnists employed by the paper.

It was this last part of the project that really caused a furor within The Guardian, said Bell, because the traditional columnists didn’t want their pearls of wisdom to be appearing alongside the rantings of non-journalists, and they expressed their displeasure in no uncertain terms to Guardian editor-in-chief Alan Rusbridger. To his credit, Bell says the editor stood firm.

Bell also noted that one of the big factors in the rise of The Huffington Post was the New York Times’ decision to put all of its columnists behind a pay wall, which it did in 2005. The wall was dismantled in 2007, but while it was in effect it locked the NYT’s opinion leaders away from the web, and effectively removed them from the discussion stream — which created a perfect opportunity for Arianna Huffington, and helped her build a business that AOL just acquired for $315 million. It remains to be seen what kind of impact the NYT’s new “metered” pay wall will have once it launches, which is expected to happen soon.

Bell said one of the mistakes most newspapers made was to not pay close enough attention to the technology side of the online content business, and to ignore the obvious impact of social networks such as Twitter and Facebook. Bell said she met with Google executives in 2004, and they warned that the traditional media industry was out of touch with what readers and advertisers wanted. But newspaper executives thought “that was just about search, and that wasn’t our business — but the more I thought about it, the more I thought it was our business.” The same thing happened with the rise of social media, she says: “People thought, oh that’s not our business — but it was.”

The former Guardian executive said that using tools to track what readers click on doesn’t mean that “we will all just write about Britney Spears without her clothes on,” but simply means that journalists can keep an eye on what people are interested in reading about. The idea that paying attention to such metrics is somehow undercutting journalism is “just plain wrong,” she said. Bell also noted that newspapers have seen the digital side of their business as the risky part, when the reality is that the legacy print operations are actually more risky. “Even if you don’t know what is going to happen in your legacy business, you know what is happening now — you are losing money,” she said.

When asked during the Q&A session about how newspapers should blend their traditional newsrooms with their new digital operations, Bell said that “the jury is still out” on whether merging newsrooms is a good idea. But she said one thing was clear: that having traditional print editors telling digital staff what to do was “a recipe for disaster.” A number of newspapers that have merged their newsrooms — including the Washington Post, which used to have its print and online operations in two completely separate buildings, with separate management — have suffered after the merger because, as journalism professor Jay Rosen and others have pointed out, the “print guys won.”

Bell’s views on who should be driving the innovation at newspapers echo those of publisher John Paton, CEO of the Journal Register Co., which owns a chain of regional daily and weekly papers in New Jersey and Connecticut. In a digital manifesto he wrote for the company last year, Paton said that newspapers need to “be digital first,” and that the best way to do that is to “put the digital guys in charge of everything.”

Book Publishers Need to Wake Up And Smell the Disruption

The writing has been on the wall for some time in the book publishing business: platforms like Amazon’s Kindle and the iPad have caused an explosion of e-book publishing that is continuing to disrupt the industry on a number of levels, as Om has written about in the past. And evidence continues to accumulate that e-books are not just something established authors with an existing brand can make use of, but are also becoming a real alternative to traditional book contracts for emerging authors as well — and that should serve as a massive wake-up call for publishers.

The latest piece of evidence is the story of independent author Amanda Hocking, a 26-year-old who lives in Minnesota and writes fantasy-themed fiction for younger readers. Unlike some established authors such as J.A. Konrath, who have done well with traditional publishing deals before moving into self-publishing their own e-books, Hocking has never had a traditional publishing deal — and yet she has sold almost one million copies of the nine e-books she has written, and her latest book appears to be selling at the rate of 100,000 copies a month.

It’s true that the prices Hocking charges for these books are small — in some cases only 99 cents, depending on the book — but the key part of the deal is that she (and any other author or publisher who works with Amazon or Apple) gets to keep 70 percent of the revenue from those sales. That’s a dramatic contrast to traditional book-publishing deals, in which the publisher keeps the majority of the money and the author typically gets 20 percent or even less. If you sell a million copies of your books and you keep 70 percent of that revenue, that is still a significant chunk of change, even if each book sells for 99 cents.

(Update: As a number of commenters have noted, only books priced at $2.99 or higher are eligible for Amazon’s 70-percent royalty rate; books priced below that get a 35-percent rate.)
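To make the arithmetic concrete, here is a quick worked example of those two tiers — a simplified sketch that ignores further conditions in Amazon’s actual terms, such as delivery fees:

```python
# Worked example of the Kindle royalty tiers mentioned in the update.
# Simplified sketch: Amazon's real terms include further conditions
# (delivery fees, price caps) that are ignored here.

def author_revenue(price: float, copies_sold: int) -> float:
    """Apply the 70% tier at $2.99 and up, 35% below that."""
    rate = 0.70 if price >= 2.99 else 0.35
    return price * copies_sold * rate

# A 99-cent book falls into the 35% tier:
print(author_revenue(0.99, 1_000_000))   # 346,500.0  (~$347K)
# The same million copies at $2.99 earn the 70% rate:
print(author_revenue(2.99, 1_000_000))   # 2,093,000.0  (~$2.1M)
```

Even at the lower tier, the math still compares favorably with the 20-percent-or-less author share described above, which is the point the paragraph that follows makes.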

The overwhelming appeal of that kind of mathematics has other authors moving away from traditional publishing deals as well, including Terrill Lee Lankford, who wrote recently about how he turned down a deal with a major publisher in the middle of negotiations over a new book because the publisher wanted him to agree to a deal for a future e-book that would have given the publishing house 75 percent of the revenue — and tried to entice him with a hefty advance for the original book. But the author said no to both deals, saying:

I see it as a permanent 75% tax on a piece of work that generates income with almost no expense after the initial development and setup charges.

Just as the music industry did, many book publishers seem to be clinging to their traditional business models, despite mounting evidence that the entire structure of the industry is being dismantled, and the playing field is being leveled between authors and publishers. And it’s not just individual authors who are taking advantage of this growing trend — author and marketing consultant Seth Godin has created something called The Domino Project in partnership with Amazon, which he sees as a new kind of publishing middleman that can help authors take advantage of the e-book wave. More traditional publishers should be paying attention, or they will find their lunch is being eaten.