What happens when the OS doesn’t matter?

What happens when the operating system you use doesn’t really matter any more? It started with dual-booting Windows and Linux, and using tools like CrossOver Office to run Windows apps under Linux (which is balky at best), then things like Virtual PC for the Mac, and now we have Apples with Intel chips that can dual-boot Windows and Mac OS X with Boot Camp. But dual-booting is a pain, because you have to close everything and restart your computer.

Virtualization is where it’s at — running two operating systems side by side, so you can flip back and forth. I’ve never used it, but Parallels looks like a truly amazing experience: Windows XP and Mac OS X running right next to each other, and the latest upgrade allows you to move Windows apps outside the Parallels window and drag and copy things from one OS to the other. Very cool. Michael Verdi has a screencast here.


There has been talk that Apple would include some form of virtualization in Leopard, the next upgrade to the Mac OS, but Apple executives recently quashed that speculation, saying the company is happy with Boot Camp and that Parallels involves “performance degradation.” By which they mean it causes your system to run a lot slower. Some Parallels users have said the same, but others have said for most normal computing tasks it runs fine (in other words, no video games or other graphics-hogging apps).

If you can run Mac OS and Windows on the same machine and use whichever program you want, and drag data back and forth at will between the two, what does an operating system mean? In a sense, it just becomes a visual preference rather than a system or standards choice. And if you spend most of your time using Web apps, the operating system means even less. We’re not quite there yet, of course, but would such a world help Apple or Microsoft more?

Can Web 2.0 make spies smarter?

My friend Clive Thompson has a great piece in the latest New York Times magazine that looks at whether Web 2.0-type tools such as blogs, wikis and other “social media” can help the U.S. intelligence community get better at their jobs. As the article notes early on, most of the research that has been done on the attacks of September 11, 2001 has shown that many of the pieces were there to indicate that something serious was in the works, but no one put them all together.

The story begins with an anecdote about a young geek who shows up at his new job with the Defense Intelligence Agency expecting to find all kinds of great technology for tracking the bad guys, and instead finds an ancient computer network, poor or non-existent connections between the various intelligence agencies, and incompatible instant-messaging systems. What better way to get people sharing information than with wikis and blogs?


I can almost hear Web 2.0 skeptics like Nick “The Prophet of Doom” Carr and Andrew “Web 2.0 is Communism 2.0” Keen snorting with derision at this idea. Blogs and wikis for spies? What will they think of next. But Clive’s story deals with the downside of social media as well — including the issue of getting people to actually use the tools when they are available. Spies are notoriously secretive, even when dealing with other spooks. How do you get them to share?

When it gets right down to it, however, Clive’s story makes the point that intelligence is about information, and if you don’t have fast access to the right information then you are lost, as the 9/11 report made clear. And what is the Web but a tool for aggregating and finding information? Better still, because intelligence agencies are using Web 2.0-type tools on secure networks with restricted access, the signal-to-noise ratio should (theoretically) be higher.

Although they are fighting the inherent bureaucracy within the U.S. intelligence community — which is arguably larger and more entrenched than virtually any corporate bureaucracy — it’s nice to know that there are those who are fighting to use Web-style tools to help break down some of those walls. Andrew McAfee, the Harvard Business School professor who is a champion of “Enterprise 2.0,” seems to feel the same.

Is the Web bubble back? Ask Hitwise

From the London Telegraph comes a rumour that Hitwise — one of the half a dozen web-traffic measurement companies whose stats show up in press releases, and are used as fuel for takeover rumours — is itself the subject of takeover talks, with the price tag reportedly an eye-popping 180 million pounds or about $350-million (U.S.). Joe Duck says this sounds about right if Hitwise charges its 1,200 or so clients an average of $2,500 a month for access to its data.

I’m not sure where Joe gets those numbers from, but let’s assume he’s right. That works out to annual revenue of about $36-million, which makes the rumoured takeover price between 9 and 10 times revenue. Joe says that’s “not outrageous” for an established and growing Internet company, which leads me to believe one thing — no, not that Joe is on crack, but that he has a very high threshold for outrage.
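Joe’s arithmetic is easy to check. A quick back-of-the-envelope sketch, keeping in mind that the client count and the average monthly fee are his estimates rather than confirmed figures:

```python
# Back-of-the-envelope check of the rumoured Hitwise valuation,
# using the (assumed) figures from Joe Duck's post.
clients = 1200            # estimated number of Hitwise clients
monthly_fee = 2500        # assumed average fee per client, in USD per month
price_usd = 350_000_000   # rumoured takeover price, roughly 180 million pounds

annual_revenue = clients * monthly_fee * 12
multiple = price_usd / annual_revenue

print(f"Annual revenue: ${annual_revenue:,}")  # Annual revenue: $36,000,000
print(f"Revenue multiple: {multiple:.1f}x")    # Revenue multiple: 9.7x
```

Which is where the “between 9 and 10 times revenue” figure comes from.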


I think between 9 and 10 times revenue is bubble-type math. And yes, I know that Google sells for 15 times revenue; in fact, that actually helps my case. Obviously, traffic measurement is a hot area right now, primarily because advertisers are desperate to find a way of deciding where to put their money, and websites are desperate to find a way of proving they are the right place to put it.

Using page views as a metric, as Steve Rubel notes, is broken. But then, the different standards used by Hitwise and comScore and Nielsen and Alexa aren’t much better. As Matt Marshall pointed out, website measurement as a whole is a train wreck. Alexa only measures users who install a browser plugin and is biased towards the U.S.; comScore uses a piece of software that has been accused of being spyware; Nielsen phones people and asks them what they do; and Hitwise uses ISP log files.

What you typically wind up with is half a dozen measurements that all say something different — in some cases, one firm will show a website falling in popularity or flat, while another shows its traffic zooming. Is Hitwise any better than its competitors? Who knows. But any way you slice it, 9 or 10 times revenue is a boatload of cash.

The new Pixelotto — a tax on the stupid

Remember the “Million-Dollar Homepage”? A 21-year-old from Wiltshire, England named Alex Tew came up with an insanely brilliant and at the same time ridiculously stupid idea: auction off individual pixels on a webpage to companies as advertising space, and then use the money to pay for university. As Homer Simpson once put it, Alex was stupid like a fox. Companies paid, in part because people wrote about the site, and last January the site sold its last pixel.

The total haul? $1.04-million. Alex paid for his first year of university two weeks after he opened the site, and raised more than $150,000 within just two months. He went to university but dropped out because he was too busy with all the interview requests and related opportunities. So what has Alex decided to do? As Natali explains at TechCrunch, he’s doing pretty much the same thing, but without the university tuition pitch, and for $2 a pixel instead of $1 — with a lottery to see who wins half of the $2-million purse.
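The math behind the sequel is simple enough to sketch. The pixel count below assumes Pixelotto uses the same million-pixel grid as the original site; the prices and the half-the-purse prize are from the post:

```python
# Rough comparison of the two schemes, assuming a million-pixel grid
# like the original Million-Dollar Homepage.
pixels = 1000 * 1000          # one million pixels

homepage_take = pixels * 1    # original site: $1 a pixel
pixelotto_gross = pixels * 2  # Pixelotto: $2 a pixel
prize = pixelotto_gross // 2  # half the purse goes to the lottery winner

print(homepage_take, pixelotto_gross, prize)  # 1000000 2000000 1000000
```

So if the new site sells out, Alex keeps about as much as the original made, and a lucky clicker takes the other million.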


Natali figures people will be too smart this time, because it’s already been done, and because it’s obvious that the new site Pixelotto is just a marketing ploy, and not some innocent student trying to pay for his calculus books. I’m not so sure though.

As more than one commenter has pointed out over at TechCrunch, the secret is publicity — if it gets written about, companies will want to ride on that wagon no matter how stupid it might seem. And there’s no question it is unbelievably stupid. The original site looks like a bad acid trip, with tiny banners crammed up against each other, completely unreadable. Is that good marketing?

In the end, of course, Alex laughed all the way to the bank — and likely will again. Anyone who comes up with a new definition of the “tax on the stupid” (as I like to call lotteries) deserves everything he gets.

The WSJ is a bit muddled on copyright

Having written a few editorials, I know that it is a difficult art. The best editorials have a strong point of view — a way of gracefully cutting to the point of an issue — but enough nuance to make it clear the writer knows what he or she is talking about. The worst are all bombast, displaying an ignorance of the facts that undercuts the editorial’s argument. Unfortunately, it seems as though a recent editorial about Google in the Wall Street Journal fell towards the latter end of the spectrum.

The point of the editorial (which is behind a pay wall) is stated up high:

The firm’s practice of downloading and reproducing books, articles, photographs and other creative materials without approval of the copyright owners is legally ambiguous… Google pits the rights of intellectual property owners against the Web’s ability to “democratize” information for everyone.

There’s nothing wrong with that point — what Google is doing is legally ambiguous, and there is an inherent tension there between Google wanting to index information and content owners wanting it not to.


So far, so good. But then a little further on, things get confused, as Tim Lee at the Technology Liberation Front describes it. The WSJ editorial writer cites a paper that Tim wrote for the free-enterprise Cato Institute, but mostly gets it wrong. It says his paper argues that “transformative” technologies like search engines should be exempt from many copyright lawsuits — but as Tim points out, all he was doing in his paper was summarizing what the U.S. courts have said.

He also notes that the editorial says Google “claims ‘a legal safe harbor’ from copyright infringement under the 1998 Digital Millennium Copyright Act, which allows Internet firms to provide a thumbnail of copyrighted material.” But while there is a “safe harbor” clause in the DMCA (which leads to the so-called “notice and takedown” rule), it has nothing to do with thumbnails.

Then the WSJ talks about how Google is claiming

A right to reproduce and distribute intellectual property without permission as long as it promptly stops the trespass if the copyright owner objects. That’s like saying you have a legal right to hop over your neighbor’s fence and swim in their pool — unless they complain.

The first problem is that Google isn’t asserting any such thing; the DMCA explicitly confers that right. It also isn’t anything like climbing over a neighbour’s fence, since that has to do with property rights and not copyright, and they are very different (for what should be obvious reasons).

So an editorial that has a good point gets all confused in the middle, loses track of the facts, and then employs a dubious metaphor. I would give it a B minus, or possibly even a C, given how important the topic is. Mike Masnick over at Techdirt is similarly unimpressed.

Why newspapers are like CDs


Update: Scott Karp at Publishing 2.0 has a long and thoughtful post on the subject of the content business and aggregation, and so does Tim O’Reilly.

Original post

Jack Shafer, who like me is an old-media geezer, has a great piece in Slate about the newspaper business, the primary point of which is that the industry has been confronting an oncoming freight train — and having endless meetings and focus groups about it — for more than 30 years now. To that extent, the apocryphal frog in the pot of water (who never notices as it heats up) is probably a better metaphor, since it has been coming so slowly that it’s easy to ignore.

There are some nice bits in Jack’s piece, including a description of the Newspaper Readership Project from 1976 and a Los Angeles Times story written about it entitled “Newspapers Challenged as Never Before,” and an amazing statistic: “The number of U.S. households and the combined circulation of all daily newspapers was almost at par — about 70 million households versus 60 million in circulation. Today, the number of U.S. households exceeds 100 million, but daily circulation is flat or down a couple million from the 1970s.”


But one of the things that really struck me was Jack’s comparison of newspapers to other forms of media, including TV and music (an extension of the argument made by William Bulkeley in a recent WSJ piece). Just as people grew frustrated with compact discs when technology came along that allowed them to sample or download just the songs they wanted, so newspapers are under pressure because people don’t necessarily want to sit down and read all the stuff in their newspaper, and now they have an alternative.

In that sense, the Internet is far more threatening than either radio or TV. Yes, both of those also put pressure on the news-gathering part of being a newspaper, but they were also very similar forms of media — you had to listen or watch at a specific time, and that had limitations. The Internet is always on, and there is as much or as little information as you could possibly want. Information is being atomized and distributed, and that is very difficult to compete with using traditional tools.