Google: Why not make the cloud free?

Kevin Kelleher has a post up at GigaOm with an interesting proposition: he says Google should duplicate the kinds of cloud-based services that Amazon has — the S3 storage business, the EC2 virtual server business and so on — except do it all for free. He rightly argues that this would be an easy way to eat Amazon’s lunch (the idea stems from a post by my old friend Dave Winer, who wrote about a pig coming up to his car and talking to him, which may or may not be some kind of metaphor).

I’m not saying Google should make its cloud services free just because I want free storage space, a 100-gigabyte virtual disk drive and free blog hosting that doesn’t go down for an hour or two every few days, the way my current host does — well, okay, that’s part of the reason I think Google should do it. But I also think it makes perfect sense for the company. Offering things like Gmail and Google Docs and Google Calendar for free is in its DNA. Why not put the spare capacity on those 500,000 servers to maximum use? It’s a slam-dunk.

Look ma, my docs are in the cloud!

Nice to see that Google has finally launched offline access for Google Documents — or at least for text documents (apparently presentations and spreadsheets are coming later). I guess we should be grateful, although I still have to wonder why Zoho has had offline capability for its document-sharing service — built on Google Gears — since way back in November. I thought having that kind of inside knowledge helped companies triumph over their competitors — or is that the kind of thing that only works for Microsoft?

Late to the party or not, I still think Google is the one to beat. Zoho’s services are great, and I use Zoho Show in particular a fair bit, but when it comes to trusting a company with my data I would have to come down on the side of Google. Does being a multibillion-dollar company mean that they won’t be vulnerable to outages that take down the cloud? Hardly. But I expect them to have some pretty mean backups and redundancies, thanks to those 600,000 servers they have in warehouse farms around the world (or however many they are up to now).

Some — including Frederic of The Last Podcast — say the sharing part of Google Docs doesn’t interest them much, and that they need features that only an offline or desktop version of a word processor can offer. I have to say I don’t need the latter, and I think the former is a critical feature, especially as companies try to make it easier for their employees to collaborate and become more creative. (Note: Rafe Needleman at Webware points out that Mozilla is planning to build this feature into its browser).

Virgin volunteers to be Big Brother

According to a piece in The Telegraph this morning, Virgin Media — the Internet service provider run by Richard Branson’s Virgin conglomerate — has volunteered to play copyright cop and yank the Internet account of users who share infringing material. Virgin and the British Phonographic Industry are apparently working out the details, which will likely involve the “three strikes and you’re out” approach.

Under this system — which has been proposed by several copyright enforcement bodies as either a voluntary process or one that could be legislated — Internet users would get a letter from their ISP after the first “offence,” then their account would be suspended (no word on for how long), and after a third infraction they would be disconnected completely. It’s not clear whether Virgin is going to play ball with cutting people off, but the story says that “remains an option” (although Torrentfreak says there could be a silver lining to the Virgin move).

As more than one person (including me) has pointed out, this approach sounds like a great idea right up until you try to imagine how it’s going to work. Would users be cut off for a single shared file — and if not, then how many? Would they be cut off for days, or weeks? What if the account holder isn’t the one sharing the files? How is the BPI going to track activity? How will the money be shared? Determined pirates won’t be the ones caught by this plan — only the unwitting or stupid.

As I mentioned in my previous post on this topic, not only would turning ISPs into Internet police open up a giant can of worms — especially since Virgin would be voluntarily turning over the names and addresses of users suspected of engaging in illicit behaviour — but criminalizing copyright infringement on such a massive scale is out of all proportion to the damage that is allegedly being inflicted on the music industry. And yet, we seem to be facing a choice between the ISP as cop and the ISP as extortionist.

The more I think about it, the more it looks like this could be the beginning of Act Two of the music industry’s ongoing self-immolation, with the lawsuits by the RIAA as Act One.

Publish2 gets Series A financing

Congratulations to Scott Karp and his team at Publish2, who just announced that they have closed a $2.75-million round of Series A funding from Ross Levinsohn’s Velocity Interactive Group. Scott was one of the most perceptive writers about the future of digital media while he was with Atlantic Media (which publishes The Atlantic magazine), and last year he left that job to pursue his vision of how traditional media can use social tools — such as the Publish2 social bookmarking platform — to improve and extend their reporting onto the Web.

I’ve been trying out the beta for a while and reading what Scott has written about the trials he has done with U.S. papers, and I think he is definitely on to something. I’m looking forward to finding out what else he has up his sleeve. There are some more details at VentureBeat, and Jeff Jarvis (who is on Publish2’s board) has a post as well. Steven Hodson at Winextra thinks that by focusing on journalists, Scott has turned his back on the blogosphere (but see Scott’s comment on Steven’s post for clarification).

Update:

Om Malik (a former journalist himself) says he thinks Scott is a smart guy, but he doesn’t see the business potential in Publish2. Mike Arrington also seems skeptical — or maybe it’s just because he didn’t get an invite to the beta 🙂

The blogosphere as high school, part XVII

My friend Mark Evans has a post about the lack of original thought in the blogosphere — or at least the pressures that tend to keep original thought from appearing — and as the closest thing to what MG Siegler calls a “bitchmeme” this weekend, it has grabbed a bunch of links. Dave Winer sees this as a sign that the end is near, and says he’s heading for the hills (we should all be so lucky), and lots of others have chimed in that Ed Bott was right and Techmeme is an “echo chamber” with no value whatsoever.

I know the “conversation” metaphor has kind of been beaten to death, and I apologize in advance for trotting it out again, but I think it’s the best one we have. To some, the clusters of “me-too” posts are a sign that there is no value in Techmeme.com — to which I would respond that value is where you find it. Yes, there are a lot of people posting things that just repeat what someone else said. But at the same time, there are also new bloggers coming along all the time who do add value.

In that sense, Techmeme (and the blogosphere in general) is a lot like a party or a crowd gathered at a bar. Sometimes there are people who are boring, or have nothing of real value to say, or who are drunk and disorderly, or curmudgeons who sit off in a corner muttering to themselves and shouting from time to time. Does that mean you leave the party? Maybe. But you could be missing out on some great conversations, or on meeting some interesting people.

Guys like Dave talk about how it’s all the insiders and the rest are hangers-on, and all that reminds me of is kids in high school, complaining about how they’re not in this or that clique, or how so-and-so always hangs out with the jocks or the geeks instead of them. The blogosphere is the closest thing I can think of to a meritocracy, and I would argue that for the most part Techmeme is as well — yes, there are cliques, but if you write a good post, it can hit the top and get links just like anyone else’s can. No one cares whether you’re tall or thin or pretty or athletic.

As I’ve mentioned before, in the clusters of me-too posts and bitchmemes and so on at Techmeme, I have found great bloggers like Frederic from The Last Podcast, MG Siegler from ParisLemon (and now of VentureBeat), Steven Hodson of Winextra.com, Jason Kaneshiro of Webomatica and others. Did I have to do some digging through useless echo-chamber posts? Yes. But that’s what some conversations are like. I’m not ready to give up on the whole party, that’s for sure.

I want my blog to be the aggregator

Loic LeMeur of Seesmic has a blog post that echoes something I’ve been saying for a while: having cool services like FriendFeed or Twitter or Flickr or even Facebook is great, and they all serve a specific purpose and have a certain value — but it’s hard to keep track of what is where, and which conversations are going on with whom. A number of people (including me) wrote about this idea of fragmentation with respect to FriendFeed not long ago, but it applies to lots of other services as well.

That’s why I agree with Loic — and with Mike — that the best solution of all is to have a single portal to everything that matters about you, whether it’s your photos (Flickr) or your work history (LinkedIn) or your chats with friends (Twitter). For some people (like me) that portal is always going to be the blog, because that’s where we live most of the time and create most of our content. For others, it might be a Netvibes page or an iGoogle page with widgets for various services, or even their Facebook page — although Facebook isn’t customizable and adaptable enough, I don’t think.

I don’t think FriendFeed.com can be that portal, but it could be a component of it. I’m always looking for services that provide widgets and plugins that allow what they offer to be embedded somewhere else, which is why I like Google’s GTalk chat widget. If there were a Twitter widget that offered the same kind of functionality as Twhirl, I would definitely embed it here. And I’m hoping there will be a FriendFeed one soon as well, since Paul Buchheit and Bret Taylor seem to be moving at hyperspeed when it comes to new features. But it has to be a two-way widget, with data flowing in both directions.
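For what it’s worth, here is a minimal sketch of what I mean by a two-way widget: a host page and an embedded widget passing data back and forth using the browser’s postMessage mechanism. The widget URL, the element ID and the message formats are all made up for illustration; this isn’t the actual embed code for Twitter or FriendFeed, just a way of showing data flowing in both directions.

```typescript
// Host-page side of a hypothetical two-way widget. The origin, the
// "sidebar" element and the message shapes are invented for this sketch.
const WIDGET_ORIGIN = "https://widget.example.com";

// Embed the widget in an iframe.
const frame = document.createElement("iframe");
frame.src = `${WIDGET_ORIGIN}/embed`;
document.getElementById("sidebar")?.appendChild(frame);

// Outbound: push a status update from the blog into the widget.
function postStatus(text: string): void {
  frame.contentWindow?.postMessage({ type: "status.update", text }, WIDGET_ORIGIN);
}
postStatus("Hello from the blog");

// Inbound: listen for new items the widget sends back to the page.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.origin !== WIDGET_ORIGIN) return; // only trust messages from the widget
  if (event.data?.type === "feed.item") {
    console.log("New item from the widget:", event.data.text);
  }
});
```

The point is simply that messages move in both directions, rather than the widget being a read-only window into someone else’s service.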

I’m hoping that the Data Portability efforts that are going on, and Chris “Factory Joe” Messina’s DiSo project, can help make that kind of personal, customizable, widgetized portal a reality.

NAA to newspapers: advertise this

We’re long past the writing-on-the-wall stage for newspapers and advertising, it seems — the recent report from the Newspaper Association of America is more like a billboard, with one of those huge searchlight things they use for movie premieres and the opening of new car dealerships. And what it says is (pardon my French): You guys are totally screwed. Advertising has been declining for the past few years or so, but now the NAA is talking about the biggest decline since the association started keeping data — bigger than after September 11, 2001.

Some of that (full data here) is undoubtedly a result of the U.S. economic situation, which has everyone from banks to car dealers pulling back the reins and spending less. But the uncomfortable reality is that advertising in newspapers is declining for a bunch of other reasons as well, including the fact that newspapers appeal primarily to an aging population. At a recent meeting at one newspaper, an editor said that she felt a piece on hip-replacement surgery should be played more prominently because “that’s our core demographic.” Sad, but true.

But an even more important reason why paper ads are declining is that their cost-to-value ratio is way out of whack with what advertisers can get elsewhere, particularly the Internet. And it’s not just Craigslist.org decimating the classified business. Even traditional newspaper ads are difficult (if not impossible) to measure. Online ads can not only be targeted more specifically, they can also be tracked a dozen ways, and it quickly becomes obvious which ones are working — plus they are an order of magnitude cheaper than the paper version.

The NAA’s press release, of course, focuses on the much more positive news that online advertising for newspapers continues to grow at double-digit rates — but it still only accounted for revenue of $3.2-billion, compared with overall print revenue of more than $42-billion. It’s going to have to start growing a heck of a lot faster than that before it even starts to make a dent in the decline of print advertising. Update: Chris “Long Tail” Anderson doesn’t think it’s all that bad if you look at it properly.
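To see how big that gap really is, here is a quick back-of-the-envelope sketch using the NAA figures above. The five-per-cent print decline I plug in is a purely hypothetical assumption, just to show how much heavy lifting the online side would have to do.

```typescript
// Back-of-the-envelope math using the NAA revenue figures cited above.
// The assumed print decline is hypothetical, for illustration only.
const onlineRevenue = 3.2; // billions of dollars (NAA figure)
const printRevenue = 42.0; // billions of dollars (NAA figure)

const assumedPrintDecline = 0.05; // hypothetical 5% annual drop in print

const onlineShare = onlineRevenue / (onlineRevenue + printRevenue); // roughly 7%
const printDollarsLost = printRevenue * assumedPrintDecline;        // roughly $2.1B
const onlineGrowthNeeded = printDollarsLost / onlineRevenue;        // roughly 66%

console.log(`Online share of total revenue: ${(onlineShare * 100).toFixed(1)}%`);
console.log(
  `Online growth needed to offset a ${assumedPrintDecline * 100}% print decline: ` +
  `${(onlineGrowthNeeded * 100).toFixed(0)}%`
);
```

In other words, with online bringing in roughly seven cents of every revenue dollar, even a modest slide in print would take something like 60-plus-per-cent online growth to cancel out, which is the point I’m making above.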

Is PaidContent really a “blog” at all?

In a piece posted at the New York Times’ Bits blog, Saul Hansell pits Mike Arrington’s “vision of blogging’s future” against that of PaidContent founder Rafat Ali. One is personal and filled with lots of emotion (guess which one) and the other is more analytical and has more traditional journalistic integrity, at least according to PaidContent’s new CEO. Rafat quite freely admits that his model has very little to do with the rough-and-tumble of the blogosphere and more to do with the large trade publishers like Reed Elsevier and Informa, which cover industries like a blanket but don’t get into pissing matches about personalities.

I have nothing but respect for what Rafat and his team have built. He and Staci and the rest have doggedly pursued their model, and they have covered the media business within an inch of its life, and they should be congratulated for that. I think Rafat is totally right when he says he is going after the big trade publishers, and I have no doubt that one or the other of them will eventually come to their senses and just buy the operation outright, or Rafat might just buy them.

But I also find that PaidContent.org isn’t that… well, interesting. If I were a mid-level media executive, trying to figure out where the next layoffs were coming from, or who was rising up the ranks of whichever entity, then I might read it for information purposes. But it doesn’t have much in the way of colour to it — and to be fair, Rafat has never made any secret of the fact that colour isn’t what he’s after. I also notice that while PaidContent is set up like a blog, with comments and everything, there aren’t a whole lot of comments on the stories I read.

To me, however, part of the power that blogs have is that they are personal and direct, that they give you a connection of some kind to a person (or people, in the case of a blog like Gawker), and that they have a voice that either interests or amuses or enrages you. PaidContent doesn’t have that for me — it is pure information. That’s why it commands triple-digit CPM rates for its ads, no doubt. But while I wish Rafat and the rest of the team all the best, and I think they are doing a heck of a job, I hope that not all blogs are going to become just trade press in another form.

Musical interlude: virtual mixtapes

Maybe it was all the posts about the ISP music tax, but I started thinking about how one of the most important things about music is that we enjoy listening to it and want to share it with others — and that the Web is one of the best ways of doing that. Whether it’s emailing a friend an mp3 file, sending one through Pownce, or creating an mp3 blog and getting crawled by Hypemachine.com, there are lots of ways to do it. How do artists get compensated? I have to admit I don’t know. But having people share your music has to be good.

A couple of the newer music-sharing services I’ve come across are Muxtape.com and Mixwit — and I am indebted to Fred Wilson, the music-loving VC, for both of them, since I found out about them by reading his blog. As Fred has described, Muxtape is incredibly easy: fill in a few fields and upload some songs, and that’s it. The interface is also really stripped down, which is great (although I don’t understand why the typeface has to be so gigantic). Is it legal? Who really knows. But it’s a great way to share music.

Mixwit.com is a little more complicated, but not much, and you can add an image of an old cassette, which is kind of cool for those of us who (like John Cusack in High Fidelity) remember when that was the primary means of music sharing. Plus, you can do one thing with Mixwit that you can’t with Muxtape (at least not yet) and that is embed it in a blog. To me that is a killer feature. My friend David Gratton of Project Opus has a Facebook app that is somewhat similar called Mixxmaker.

Is a music “tax” paid to ISPs the answer?

This is a big issue, with lots of sides to it, and I’m not going to try and get into them all right now, but it’s worth noting that Warner Music — the label run by Edgar Bronfman Jr. (who blew a few billion dollars’ worth of his family’s booze money on an ill-fated merger with Vivendi way back when) — has hired music-industry veteran Jim Griffin to create an ambitious, and possibly wrongheaded, digital music licensing entity that would see consumers pay their Internet service providers a monthly fee in return for the right to access music online.

Griffin outlines the idea a little in an interview with Portfolio magazine, and notes that it isn’t his idea but has been around a long time — it’s known as a “compulsory license,” and it was what helped the radio industry get out of the trouble it was in when it first became popular as an entertainment medium. Record labels argued that if people could listen to whatever they wanted for free on the radio, no one would buy albums and go to live shows. Sound familiar? Of course, radio play has sold billions of records and made the industry billions of dollars, but there you go.

In any case, Griffin wants to apply the same idea to downloading — and he’s not the only one. The Songwriters Association of Canada has proposed a similar thing, and so have other groups (the EFF has been proposing something similar since 2004). And ISPs are hopping on board this particular train in many cases, in part because they are being threatened with legislation in France and elsewhere that would hold them liable for policing their networks for copyright infringement. But does that make it a good idea? I don’t think so. And however Jim and others describe it, to me it sounds a lot like a tax. Mike Arrington goes further and calls it “protection money.”

Griffin says that “eventually” advertising might cover the charges, and those who wanted to surf without ads would have the choice to pay the fee. But it sounds like in the beginning the fee would be mandatory — even for those who don’t do any downloading at all. Does that sound fair? No. We have mandatory fees for things like education and road-building, but I don’t think music licensing falls into the same category. What about people who pay for songs legally through iTunes — do they get a free pass, or do they have to pay twice? Maybe Warner sees this as a way to put Apple out of business.

And what if such a fee is instituted — what about the movie companies, and other media companies? What about photographers? And what about the billions of dollars in software that is pirated online? After you add all the fees for those content creators, we’ll all be paying $100 for our Internet access (which of course the ISPs have started filtering and shaping because of all the downloading). And then there’s the goat rodeo that would be involved in figuring out who gets the money collected. Or maybe we could just let the ISPs and the music labels work all that out — I’m sure they would do it fairly, right?

Russell Smith: Web-bashing 101

I don’t like to pick on a colleague from the Globe and Mail, but in Russell Smith’s case I’m willing to make an exception. I like Russell, and I know he enjoys playing the curmudgeon — in fact, I think he would make a pretty good blogger. But in his latest column I think he goes for the facile, blog-bashing argument because, well… it’s easy. In the piece, entitled “Way more news sites, way less news,” he looks at the recent report from the Project for Excellence in Journalism on the state of the news media in the U.S., which compared the number of unique news stories in print and online in various forms, including blogs. One of the comments from the study is:

“News consumers may have had more choices than ever for where to find news in 2007, but that does not mean they had more news to choose from. The news agenda for the year was, in fact, quite narrow, dominated by a few major general topic areas.”

Russell uses this as a stick with which to beat the Web and particularly the blogosphere, saying blogs and websites focus on only a few stories and blow them out of proportion, and also that sites such as Digg (which the report barely mentions) accelerate this process. He says the report showed that “more than a quarter of the news stories on television and online last year in the United States were about the Iraq war and the presidential campaign” and says that

“this kind of concentration of attention runs against what was expected of the kind of information universe the Web would provide. What we expected, 10 years ago, was a wild diversity, a babble of voices bringing light to the stories that the supposedly stodgy, politics-and-economics-obsessed newspaper newsrooms were not connected to.”

I’m not sure who expected that (other than maybe Russell). In any case, is he saying that TV and news websites shouldn’t have focused on the Iraq war and the presidential campaign? Surely those were a couple of pretty important topics. Russell goes on to say that instead of the wonderful diversity that we expected from the Web, “what we’ve ended up with is a million sources reporting the same story.”

Two things about that: 1) Lots of the blogs and websites writing about those topics aren’t reporting them at all — they’re analyzing and commenting on them (people might take issue with that, but it’s a separate argument from the one Russell is advancing); and 2) What do plenty of newspapers do? Run the same set of a dozen or so newswire stories or press releases to fill out their pages — and often get them wrong, as Tim Burden notes in his post. How is that any different? Most of the report’s criticisms seem to apply primarily to cable television rather than online, but Russell has his axe and he’s apparently determined to grind it.

“If the news is important, it will find me”

Brian Stelter has a great piece in the New York Times that I urge anyone interested in the media business to go and read right now — I’ll wait — and that includes reporters, editors and (most of all) managers, and probably IT departments and designers as well. The context of the piece is political reporting and political news, but I think the points Brian is making are relevant to the industry as a whole.

It’s not that there is anything earth-shatteringly new in the piece, mind you. But I think it does a great job of describing how digital “word of mouth” — in other words, social networking of all kinds including Twitter, IM, Facebook and so on — has become a dominant means of news delivery for young people in a way that I’m not sure old geezers like myself quite grasp, no matter how often people describe it (and Stelter knows whereof he speaks, since he was still in university when the NYT hired him away from TV Newser). As Brian describes it in the story:

In essence, they are replacing the professional filter — reading The Washington Post, clicking on CNN.com — with a social one.

And then Stelter mentions Jane Buckingham of the Intelligence Group, a market research company, and says that during a focus group, one of the subjects — a college student — said to her:

“If the news is that important, it will find me.”

Think about that for a second — or longer, if necessary. I think that sums up, in ten simple words, what has happened to the way that many people (and not just young people, but those who use RSS readers and blogs and social networks as well) consume the news (Mark Cuban seems to think so too). Not only is there just so much of it out there that it’s virtually impossible to consume it all, but the very fact that someone you know — or trust — has passed on or blogged or Twittered or posted a link makes it more likely that you will read it.

Are most websites designed with this kind of principle in mind? Not really. Most of them are still designed as though people read the news the same way they do in the paper — starting at the front and moving page by page towards the back (of course, many people don’t read the newspaper this way either, but that’s another story). In reality, people come from every conceivable angle, dropping into stories and then disappearing, finding them through links and posts and Digg and elsewhere.

If the news is that important, it will find me.

Crack deal on Google Streetview?

It has yet to make its way to Canada (although the cars have been spotted, and there has already been controversy over privacy laws), but in many U.S. cities Google’s “Streetview” service provides high-resolution photos of the street when you select a location on Google Maps. A whole subculture of people trading and commenting on the photos — people sunbathing nude, words carved into cornfields and so on — has emerged on the Web since the service went live.

This week, a series of photographs showed up on several sites that appeared to show two men engaging in a drug deal on the streets of Chicago. Although the photos have since been removed from Google’s database, there are still plenty of versions of them available on the Web. But do they actually show a drug deal? There’s a black man in a baseball cap, large white T-shirt and baggy jeans bending over into the window of a car, with what appears to be cash or a small package in his hand — but does that mean it’s a drug deal?

Some commenters at Gawker and elsewhere argued that to assume it was a drug deal was an overtly racist response. Others, however — including some Chicago residents — were more than happy to chime in that the area was a well-known drug neighbourhood, and in one case a commenter said that she had bought drugs at that exact location before. Another said that he was “robbed at gunpoint while trying to buy pot” at the same spot. And Gawker editor Nick Denton pointed to a local site with crime statistics for the area that seemed to back up the drug-deal explanation.

Just one troubling point, as more than one commenter noted: the Google Streetview photos are taken by a car — in most cases a Volkswagen Beetle — with a 360-degree camera mounted on a tripod on the roof. If the other men near the car were (as some argued) keeping an eye out for cops, how could they not notice a Beetle driving by with a gigantic camera strapped to its roof? Whoever the dealer is, he needs some new lookouts.

CBC follows Norway’s BitTorrent lead

My friend Steve O’Hear alerted me to a post on the Last100 blog (part of the excellent Read/Write Web network) written by Guinevere Orvis, an interactive producer with the CBC — that’s Canada’s national broadcaster, for any of my non-Canadian readers — about how the network came to distribute one of its shows using the BitTorrent peer-to-peer network. Guinevere says that the idea started with a post on BoingBoing about Norway’s state broadcaster doing the same thing with a show.

While it might have been nice to hear that the CBC got the idea from reading about it on my Globe blog (sorry, I couldn’t resist), it’s still nice to know that our national broadcaster is open to new ideas. And from the sounds of Norway’s experience, it’s an experiment the CBC should consider repeating. According to Eirik Solheim, who works for the Norwegian broadcaster, the show has been downloaded more than 90,000 times and the network has been “saving huge on bandwidth cost.”

That last part is important to note: BitTorrent may be known for piracy, but it is fundamentally a distribution method, plain and simple (ISPs argue that it is cheap because it piggybacks on their networks and sticks them with the bill, but that’s a topic for another day). Here’s hoping that the CBC decides to continue this experiment, and congratulations to Guinevere for helping them come to grips with the issues involved and spurring them on. Her full post on all the details is well worth a read.

Further reading:

Michael Geist has written about the CBC’s move, and so have TorrentFreak and CNET. Mike Masnick at Techdirt has taken note of it as well.

mesh 2008: more details on meshU

As my fellow mesh 2008 organizer Mark Evans notes in his blog post on the topic, one of the things that we got asked to do after the last mesh conference was to come up with ways of providing more in-depth content for developers, programmers, designers and other hard-core Web and product types. We thought about trying to build more of that kind of content into the main mesh conference, but it felt as though it needed its own thing: hence, the creation of meshU.

meshU — which takes place on May 20th, the day before the main mesh conference — is designed to be a one-day, intensive series of workshops and discussions about issues that developers, designers and even managers (yes, even managers) need to know about, whether it’s using Amazon’s S3 storage service or interface design or AJAX tools. And we’ve got some megawatt speakers to lead some of those discussions, including Dabble DB co-founder Avi Bryant, Pownce co-founder Leah Culver, John Resig of jQuery and Carsonified’s Ryan Carson.

We also want to hear from you about what kinds of workshops you’re interested in, and whether you think you’re the one to present something or lead a tutorial or discussion. Just fill out this form and let us know a bit more about what you have in mind. You can read more about meshU here, and if you want to book a ticket you can do that for just $239.