Apple and the future of music

Here’s a column I’m working on for globeandmail.com:

Once an also-ran in the computer and personal electronics game, Apple Computer is now one of the superstars in that universe, thanks to the magic combination of a sexy and user-friendly music player — the iPod — and a profit-spinning online store called iTunes. The company’s handhelds have more than 80 per cent of the market for digital music players, and Apple’s financial performance has also been supernova-like: in the fourth quarter, the company’s profit more than quadrupled, and it had sales for the year of $14-billion (U.S.).

The market is looking for even better things in the future — all eyes are trained on Macworld this week, where Apple CEO Steve Jobs is expected to announce a number of new products. Among other things, the rumour mills have been churning out talk about a Mac Mini-style PVR, or personal video recorder, or even a kind of digital-media hub for the living room. Some have even speculated that Apple could launch a high-definition TV with a built-in computer.

Along more prosaic lines, the company is expected to announce Intel-powered laptops, an upgrade to the iPod Shuffle (perhaps adding a screen) and a new video player. Some or all of these products might — and no doubt will — find a ready market. But could Apple be sowing the seeds of its own failure, by pinning its success on a proprietary product, much as it did in the past with the Macintosh?

That’s the controversial argument being made by Clayton Christensen, the author of a well-received book called The Innovator’s Dilemma. Mr. Christensen told BusinessWeek magazine recently that he’s afraid Apple might be making some of the same mistakes it did with the Mac, by not opening up its products and software. Apple fans will no doubt scoff — after all, this isn’t the Mac we’re talking about, but a product with 80-per-cent market share. Still, it’s an argument worth considering.

Please read the rest of this column at globeandmail.com

Search engines aren’t leeches

Jakob Nielsen is highly regarded as a web designer and usability expert — although I think his website at useit.com could use a few more splashes of colour (that’s a joke, Jakob) — but I think his recent post about search engines being “leeches” of the Internet is way off base. His own summary of the post is as follows: “Search engines extract too much of the Web’s value, leaving too little for the websites that actually create the content. Liberation from search dependency is a strategic imperative for both websites and software vendors.”

No disrespect to Jakob, but this — as the philosopher Jeremy Bentham once said — is “nonsense on stilts.” It’s an issue that has come up before, and no doubt will again: Do search engines and aggregators “steal” content from the websites they index, and by selling ads based on that content, “steal” money from those sites? You might as well argue that the Yellow Pages steals from the companies that are listed in its pages, or that newspapers “steal” money from companies that advertise in their classified listings.

In his discussion of this “theft,” Jakob describes a website that becomes more profitable by increasing its usability, but then watches as all its competitors do likewise; because they are also more profitable, these competitors can then bid more for search-based ads, which drives up the price for the original website, thus robbing it of all those benefits. In reality, all Jakob has described is the normal functioning of a market — in this case, for search-based ads. Search engines drive traffic to a site, which helps increase its profitability. How is that wrong?

Jason Calacanis calls Jakob’s post “the stupidest thing I’ve read in a long time,” and he’s not far wrong. Danny Sullivan of SearchEngineWatch has a more balanced view, but even after giving Jakob points for a couple of aspects of his post, he still can’t agree with the central argument. And that’s because it’s nonsensical. Jakob may know a lot about usability, but he doesn’t know a darn thing about economics — Internet or otherwise.

The secret Starbucks coffee hack

Who says the Internet isn’t useful? Courtesy of Tim Harford at Slate magazine, I learned that if you ask a server at Starbucks, they will serve you something that doesn’t appear on the menu: namely, a “short” cappuccino — eight ounces, as opposed to the 12 ounces in a “tall.” Even though it is smaller, however, it has the same amount of espresso as a “tall,” and therefore (according to coffee aficionados) tastes better.

Tim goes into a long discussion of why Starbucks would have such a thing available but not put it on their menu — and it has something to do with why third-class railway carriages used to be roofless, and market power and things like that. It’s a bit of a Freakonomics kind of thing. Of course, it’s not that surprising that Starbucks would charge less for the short, considering it’s smaller, but that’s a technicality.

Anyway, all I really care about is that I can get a short cappuccino that tastes better just by asking for it. Along the same lines — and just for Canadians — I have it on reliable authority that if you go into a Coffee Time donut shop and order a “dark roast,” you will get a much better coffee, even though nothing called “dark roast” appears on the menu. Just a little tip for the caffeine addicts out there.

Newspapers: Dead, or just evolving?

Michael Kinsley — who gave up a prestigious print job to run the online magazine Slate way back during the first Internet bubble and has since gone back to the print world — has a nice column in Slate and the Washington Post about the death of newspapers, entitled “Black, White and Dead All Over.”

The piece — which Jeff Jarvis of Buzzmachine calls “cute” (although I’m not sure that’s a compliment) — does a nice job of describing how absurd the newspaper business seems now, cutting down trees and mashing them into pulp and printing stuff on them, then shipping them to people’s doorsteps in little plastic bags, all so they can throw 80 per cent of it in the garbage.

Kinsley’s essay is a little short on solutions, although he does mention that newspapers “have got the content.” Jeff does a better job of putting his finger on the light at the end of the tunnel in his post, in which he points out that newspapers have a chance to remain relevant provided they realize that “this is about control, about finding, packaging, editing, judging sources on our own.”

It’s interesting to note that while newspaper readership is declining, online news readership continues to grow. That growth still isn’t making up for the decline, and online readers still aren’t worth as much as print readers, but they are a growing audience. And newspapers had better get them while they’re young.

Update:

My friend Stuart asked me whether I thought newspapers are dying, and here’s what I told him: I don’t think newspapers are dying, any more than radio is dead. That said, however, radio isn’t exactly a thriving medium, and neither are newspapers. I think the Internet has just increased the pressures that were already weighing on the newspaper business from television and other factors competing for people’s attention — and in a way I think the Internet offers a way out of the cul-de-sac papers are in.

I think there will always be people who read the newspaper, and want to read the newspaper — but there are likely to be fewer of them (just as there are fewer people who sit and listen to the radio every chance they get). But if anything there’s an even greater appetite for information and relevance and context, and that’s what journalism is designed to provide. Whether it’s done on paper or on the Internet isn’t really the point, it seems to me. But if newspapers don’t get around to doing it, then someone else will. And I think that’s Jeff’s point as well.

Google tiptoes into the homepage game

Kudos to Philipp Lenssen of Google Blogoscoped – and a reader who emailed him – for pointing out that Dell is shipping PCs with a branded home page powered by Google’s customizable portal, an Ajax-driven feature not unlike Microsoft’s live.com or (my personal favourite) netvibes.com. The page has a toolbar at the top that features links to Dell services, as well as boxes of Dell content, but they can be moved around and other things can be added.

And the Dell page isn’t the only one out there: Someone commenting on the Blogoscoped post pointed out that Current Communications, which provides broadband over power lines, also has a custom page powered by Google, which owns a stake in Current. The next question, of course, is so what?

Paul Kedrosky might be right when he says that the home page venture could be as significant as the Google software pack, but as one poster on Paul’s blog noted, trying to take control of the home page is so 1990s. Does it even matter any more? Perhaps. In any case, it’s an interesting move – and Steve Rubel is right, someone should be writing about this (other than me, that is).

Does Google rhyme with “bubble?”

It’s always fun to see brokerage analysts one-up each other with price targets – all the while maintaining the illusion that the latest outrageous figure is ‘based on fundamentals,’ or represents a ‘pretty attractive multiple’ based on projected profits 10 years down the road. Mark Stahlman of Caris & Co. is the latest to play this game, with his $2,000 target for Google.

Of course, Mr. Stahlman – who is described in his bio as the man who helped bring America Online public – says $2,000 isn’t actually a target per se. He says it’s merely an extrapolation based on current revenue patterns, and involves applying a Microsoft-style multiple to the $100-billion worth of revenue that he says Google could have in the future.

In case you’re wondering, $100-billion would be more than twice the annual revenues that Microsoft currently has, and would put Google in the same league as IBM and General Electric – while applying a multiple befitting a company like Microsoft, which has profit margins of about 80 per cent. Does that sound realistic?
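For what it’s worth, the mechanics of that kind of extrapolation are easy to sketch. All of the inputs below (the net margin, the earnings multiple and the share count) are illustrative assumptions of mine, not Mr. Stahlman’s actual figures; the point is only to show how quickly generous inputs compound into a four-digit price target.

```python
# Back-of-envelope version of a revenue-extrapolation price target.
# Every input here is an illustrative assumption, not the analyst's model.

def implied_share_price(revenue, net_margin, pe_multiple, shares_outstanding):
    """Implied share price: projected earnings times a multiple, per share."""
    earnings = revenue * net_margin
    return earnings * pe_multiple / shares_outstanding

# Hypothetical inputs: $100-billion in future revenue, a generous 30 per cent
# net margin, a Microsoft-style earnings multiple of 20, and roughly
# 300 million shares outstanding.
price = implied_share_price(100e9, 0.30, 20, 300e6)
print(round(price))  # 2000
```

Swap in more sober numbers for any one of those inputs and the target drops by hundreds of dollars, which is rather the point.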

In the end, it doesn’t matter whether it’s realistic or not. It’s the same game that Henry Blodget played in 1998 with Amazon, when his $400 price target got the attention of Wall Street. As Paul Kedrosky notes, it’s all about making an extreme forecast – not about whether it makes any sense or not. Mark Evans says it’s nice to see analysts thinking outside of the box, but I think some could use a little more time inside the box.

Update:

It’s somehow fitting, in a karmic sort of way, that Henry Blodget has one of the most nuanced posts about Google and its share price. He makes 10 solid points about how to arrive at a fair value for a stock – most of which boil down to “no one really knows” – and of course, ends with his standard (and legally obligated) disclaimer that none of his comments should be perceived as investment advice. Too bad more analysts don’t have the same disclaimer. Andrew Ross Sorkin has also written something for the Times about it.

Is DRM evil — and does that make Google evil?

Do regular users care about DRM — digital rights management — or is it just open-source fans, libertarians and other geeks? It will be interesting to see what kind of reaction Google gets to the super-duper, Google-rific DRM built into the search company’s new video store.

As more than one person has pointed out, the last thing we really need is another form of DRM, what with Sony installing rootkits and Apple handcuffing you three different ways when you shop at iTunes.com. Google founders Larry Page and Sergey Brin are famous for their mantra “Don’t be evil” — and yet, for many, DRM is synonymous with evil (some interesting comments on this Digg post).

If it is evil, is it a necessary evil? Can Google manage to convince everyone that its DRM is somehow the lesser of several evils? Sure, many of us — like Fred Wilson — are crying a little on the inside. But do most people just care about having the ability to download NBA games or that great Star Trek episode with the green dancing alien girl, at the right price, without giving a rat’s behind about the DRM?

TagCloud a good idea that needs help

In my constant quest for new plug-ins and gizmos for my blog, I try out just about everything I come across, including the Quimble survey creator, which I quite like. I came across one called TagCloud that has been around for a while and decided to try it, in part because I think tag clouds are a handy way of seeing patterns in large amounts of information. Del.icio.us has them, and someone just set one up for Google News that is kind of cool, called Newzingo.
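For the curious, the core idea behind a tag cloud is nothing fancy: count how often each term appears, then scale each term’s font size to its frequency. A minimal sketch (the function name and size range are my own, not TagCloud’s actual implementation):

```python
# Minimal tag-cloud sketch: map term frequencies to font sizes.
from collections import Counter

def tag_cloud(text, min_size=10, max_size=36):
    """Return {word: font_size}, scaling sizes linearly with frequency."""
    words = [w.lower().strip(".,!?") for w in text.split()]
    counts = Counter(w for w in words if len(w) > 3)  # skip short stopwords
    if not counts:
        return {}
    lo, hi = min(counts.values()), max(counts.values())
    span = max(hi - lo, 1)  # avoid division by zero when all counts match
    return {
        word: min_size + (count - lo) * (max_size - min_size) // span
        for word, count in counts.items()
    }

sizes = tag_cloud("google news google search apple news google")
# "google" appears most often, so it gets the largest font size
```

A real service like TagCloud does this over thousands of RSS items at a time, which is presumably where the server pain described below comes from.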

Mike Arrington wrote about TagCloud when it first came out, back in June, and said he liked the idea, but was concerned about how long it took to generate the tag clouds. His concern was prescient, because TagCloud is now taking days to update a cloud. I set one up almost a week ago and there is still no data in it.

When I sent an email to TagCloud, I got a response saying the company was having server issues, and pointing me to this comment by founder John Herren at a Yahoo group related to TagCloud, in which he says that “the size of our userbase has grown to the point that we have a backlog of feeds to analyze,” and that updating can take several days. He says the company is changing hosting providers and hopes things will be resolved soon.

All of this is fine, and understandable — and not surprising, given the issues that companies such as Typepad and Bloglines and even del.icio.us have suffered from in recent months — except for the fact that the TagCloud website still says it will take a few minutes to update a cloud, and there is no mention even in the news section about it taking days, or any of the server-related issues Mr. Herren mentions. I would suggest that that’s not a great way to get your new users, or potential new users, to cut you some slack.

Update:

As you can see if you read the comments, John has responded to my comments within a couple of hours of my posting them, and admitted that TagCloud has been remiss in not keeping people more informed about the problems they’re having, even though it is a beta service, which he said he has been working on as a side project. Thanks for the quick response, John, and best of luck at getting TagCloud up and running again.

Update 2:

As of December 11, still no data in the clouds I created, and no info in the “news” section of the website.

The telecom payola gang strikes again

They’re at it again. As Om Malik reports, a story in the Wall Street Journal (which is now behind the pay wall), says the big U.S. telecom players are continuing their campaign for a multi-tiered Internet in which Google and Yahoo and Microsoft pay for their bits to get better treatment than someone else’s bits. Best quote: “During the hurricanes, Google didn’t pay to have the DSL restored,” said BellSouth spokesman Jeff Battcher. “We’re paying all that money.”

What are the big telecom companies smoking? They charge people $40 a month or so for high-speed Internet service, then put caps and download limits on them, or use “traffic shaping” to give some services priority over others — or even prevent some online applications from working at all — and then argue that Google and other companies should pay extra. Russ Shaw calls it a “shakedown.”

As John Battelle points out, this is all something that Internet users are already paying for, something Vonage CEO Jeffrey Citron also mentions in the WSJ article. Former Wall Street brokerage analyst Henry Blodget says he wonders what all the fuss is about, but to me it is clear: the telcos want protection money from the big Net companies. I think Jeff Jarvis is right to call them “robber barons,” and of course the inimitable Doc Searls has written a treatise on the subject as well. Fred calls it a simple matter of jealousy.

Update:

Larry Page of Google and the chairman of the FCC both comment on the disturbing trend towards a tiered Internet in this Register story. And I also came across an excellent (and long) discussion of the issue by Mitch Shapiro over at IP Democracy.

Google Pack — colour me confused

I don’t like to think of myself as being a stupid guy, and the billions of dollars that Larry Page and Sergey Brin have would indicate that they aren’t stupid either, but I have to admit that I share Paul Kedrosky’s puzzlement about the rumoured Google Pack that Larry is supposed to be announcing at CES — at least according to the Wall Street Journal.

What the heck is the point of bundling all that software and branding it as the Google Pack? Sure, Firefox is great — I use it all the time, even though it still has a memory leak problem that drives me nuts. Trillian is another favourite of mine, and I recommend Ad-Aware to everyone I know. The pack will also have Google Earth, Google Talk, Desktop etc.

But why Adobe’s PDF Reader? It’s a nice tool, but one that many people will likely never need, unless Google has some other plans I don’t know about. And Real Player from Real Networks is a bloated piece of cling-ware that loads so much crap that I wouldn’t install it if Larry and Sergey paid me to. As for Norton Anti-Virus, it used to be a great tool but has become an intrusive irritant for many people I know.

I’m at a loss to explain what Google hopes to gain. The idea that this bundle is somehow a competitive blow against Microsoft is almost laughable (InsideGoogle is also bemused). If all you looked at was Google’s RSS Reader, Orkut, Froogle and even Google Talk (although it’s still early), you would be right to wonder — as Paul does in his poll — whether the search giant has “jumped the shark.”