The courier, the driver and the Internet

Anyone outside Toronto might not have heard of this little story – unless they frequent the boingboing.net website – but a week or so ago there was an altercation downtown between a bicycle courier named Leah and a young male driver whose name remains unknown (probably for his own protection). A local photographer happened to be there and took some shots of the driver assaulting the courier, stomping on her bike and generally being a complete asshole. He was restrained (and pummeled, apparently) by some bystanders.

The photos produced an avalanche of comments on the citynoise.org forum, and that no doubt picked up after boingboing linked to it. After spreading through the blogosphere, the story made it into one of the Toronto newspapers, the National Post, and then into the Toronto Star and the Globe and Mail. Since then there have been a number of stories about the larger picture surrounding the incident – including the fact that the courier threw garbage back into the driver’s car after he tossed it out the window, and that she keyed his car (she has apologized, and is not pressing charges).

One of the most interesting elements from my point of view, however, is how this event would never have made it into the media at all if it were not for the blogosphere – and to that extent boingboing.net, citynoise.org and other such sites act as a kind of proto-journalism, an early-warning system for the “old” media. My colleague at the Globe and Mail, columnist and author Russell Smith, put it well in something he wrote, which I’m going to quote here because it will soon be behind our “pay wall.” He describes how the comments at citynoise.org start off with misconceptions, then come the flames, then the misconceptions are corrected, then Leah comments, and then (apparently) the brother of the driver weighs in.

So there is a sort of fact-checking at work here: Multiple posters will correct each other, and at some point, a witness will step forward. The reporting, and its verification, happened about as fast as any mainstream news network could do it. The Internet is a parallel news network, spreading news much faster than we in the media can with all our technology and organization. The pictures were posted some time last Thursday; by Monday morning, the discussion about them had involved thousands of people from all over the world. By the time a newspaper ran the story the following day, it was old news on the Net.

This is an interesting point. There were plenty of trolls and flames and so on at citynoise.org, but the “true” story came out eventually, and there was plenty of commentary from both sides (pro-courier and pro-driver). Russell goes on to say:

And why did these pictures not make it to the newspapers and the TV stations right away? They – we – would have loved to have them. I think, first of all, because it didn’t occur to the photographer to go there with them. His first instinct was to post on-line. Not only is it easier to do this – no phone call, just a mouse click – but you can control how your story appears and how you get credit for it. And he knew, too, that his story’s dissemination would be just as quick and just as effective.

Interestingly enough, considering that last point about controlling the story and credit, a note at citynoise.org says the photos were taken from the site and printed in the Toronto Star without the photographer’s permission – in fact, against his specific wishes. Whether he later gave permission isn’t clear (I saw the photos in other papers as well).

Blogs — it’s all about the conversation

This may or may not be part of the “secret sauce” in Gabe’s memeorandum.com, but I think Stowe Boyd is onto something. In a post about what makes blogs work — i.e., what makes them vibrant and helps them grow, as opposed to stagnating or becoming echo chambers — he says that he thinks it has something to do with the ratio of posts to comments and trackbacks.

Being a geek (and I mean that in a good way), Stowe comes up with a “conversational index” that quantifies that ratio, and figures that if it is one or more — that is, if there are at least as many comments and trackbacks as there are posts — then the blog will flourish. Don Dodge has come to a similar conclusion, and so has Zoli Erdos.
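For the record, here is a rough sketch of how that kind of ratio might be calculated; the function and the numbers are mine, not Stowe’s, and are only meant to illustrate the idea:

```python
def conversational_index(posts, comments, trackbacks):
    """Reader responses per post over the same period.

    By the rough rule of thumb described above, a value of 1.0 or more
    (at least as many comments and trackbacks as posts) suggests a blog
    that is flourishing rather than stagnating.
    """
    if posts == 0:
        return 0.0
    return (comments + trackbacks) / posts


# A hypothetical month: 40 posts drawing 95 comments and 12 trackbacks
print(conversational_index(40, 95, 12))  # 2.675, comfortably above 1
```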

I don’t know if the ratio needs to be one, or close to one, or whether you can even put a number on it, but I think this hits the nail on the head — what makes most blogs interesting isn’t so much the great things that the writer puts on there (as much as I like to hear the sound of my own voice), but what kind of response it gets, and how that develops, and who carries it on elsewhere on their own blog. And I agree that it would be nice if someone like technorati.com or memeorandum.com could track that kind of thing and make it part of what brings blogs to the top.

I like to see what people are talking about — not just what a blogger has to say, but what others have to say about what they say. That’s why I also agree with Steve Rubel that it would be nice to have a way of tracking comments, other than by subscribing to a feed of comments, or bookmarking posts you’ve commented on with del.icio.us or some other tool.

Update:

Stowe Boyd has more on the “conversation” conversation, as it were, here. And as far as tracking comments goes, no sooner did I mention it than CoComment.com came out with exactly that. I’m sure that’s a coincidence though 🙂

Memeorandum is a black box

There’s no question that Gabe of Memeorandum.com has created a tremendous resource (there’s an interview with him at Don Dodge’s blog) but I must admit it baffles me sometimes. I considered not writing this post at all because it will probably sound like I’m just whining about not being on the top of tech.memeorandum.com with the A-listers, but I’ve followed the site for quite a while now, and the reason some posts rise or fall in the ranking of topics — and some stay longer while others disappear — eludes me. And it kind of bugs me a little bit. And no, I’m not writing this post just to try and get to the top by mentioning Gabe 🙂

I know that the algorithm behind the site is top secret, so there’s not much point in asking about it. But today is a good example of how mysterious the system is — I’ve been on memeorandum.com many times, either linked under other posts or sometimes as a major topic. I was even at the top of the site briefly one day (although it was a weekend, so that might have increased my chances). Today, though, I wrote a post about IE7, commenting on some of the criticisms and joining in the conversation, and that post never appeared anywhere on tech.memeorandum.com — nor did one I wrote the day before about network neutrality.

Neither one appeared, despite the fact that I wrote them around the same time as several other people whose posts were linked to or formed major memeorandum.com topic headings, including Scott Karp of Publishing 2.0 and my friends Mark Evans and Rob Hyndman. Is there something I’m doing wrong, Gabe? WordPress automatically pings Technorati and a bunch of other sites. Is it that I’m linking too much to different people, or not linking enough? I have to know. Not that I care about that kind of thing, of course. It’s just bugging me. (Dave Taylor doesn’t like memeorandum because he says it adds “an amplifier to the echo chamber” of the blogosphere.)
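For anyone curious, the ping WordPress sends is just a tiny XML-RPC call. Here is a minimal, illustrative version in Python, assuming the standard weblogUpdates.ping method and Technorati’s public ping endpoint (the blog name and URL are placeholders, not mine):

```python
import xmlrpc.client

# Illustrative only: WordPress sends this sort of call to each service
# in its ping list whenever a post is published.
server = xmlrpc.client.ServerProxy("http://rpc.technorati.com/rpc/ping")

# weblogUpdates.ping takes the blog's name and URL; the service replies
# with a struct containing 'flerror' (False on success) and a 'message'.
response = server.weblogUpdates.ping("Example Blog", "http://example.com/")
print(response.get("flerror"), response.get("message"))
```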

Update:

This post is now near the top of the section about Gabe, and showed up only a few minutes after I posted it, which actually makes me more confused instead of less.

Update 2:

Gabe emailed me and said that both of the posts I mentioned had actually been linked to on the site at different times, and sent me links to cached versions of the pages. I guess they fell off the radar quickly and I simply missed them while they were up. The strange thing is that some posts (like the one above) show up right away, while the ones Gabe checked on didn’t appear for hours. Maybe I’m trying too hard to figure this whole thing out — I should probably just go read a book, or alphabetize my CDs or something useful 🙂

Hey bloggers — MSFT doesn’t care about you

Many of the reviews and comments about the new beta of Microsoft’s Internet Exploder (er, Explorer), IE7, have focused on the RSS implementation. Adam Green at Darwinianweb.com got everybody’s attention when he said that he thought the browser would kill a lot of aggregators, and later amended this to say that while IE7’s handling of RSS wasn’t that great, it was probably good enough. As he put it — in a phrase I wish I had come up with:

“Microsoft long ago mastered the trick of calculating exactly the minimal feature set needed to suck the air out of a market it wants to enter.”

That is exactly right. It’s not that IE7’s version of the RSS reader is that great — in fact, it is pretty much “just like favourites,” as Scott Karp at Publishing 2.0 puts it — it’s that it’s probably just good enough for most people. Dave Winer might be right when he says that the “river of news” is a better model for an aggregator, but IE7 doesn’t really have a dog in that race. It just wants something simple that people can use without too much trouble.

Is the way they have done it good enough? That remains to be seen. RSS is still not easy enough, as my friend Paul Kedrosky keeps pointing out, and people are (in general) lazy. Not everyone wants to see if they can break Robert Scoble’s record for most RSS feeds subscribed to. Kent Newsome asks why he should care about IE7, and the answer is that he probably shouldn’t.

We are all “edge cases,” as someone has pointed out, and I would have to go along with Jeff Nolan – IE7 wasn’t designed for us. Simple as that. We can keep on using Firefox and Performancing and Greasemonkey and all those great things, but the fact is IE still has 80 per cent of the browser market, and it got that way by not being on the edge.

Telecoms and the toll-road gambit

I wasn’t sure whether to write anything about the “network neutrality” issue, in part because my friend Rob Hyndman has done such a good job of covering the subject – particularly an overview of the current state of affairs in his latest post – but as usual I couldn’t resist 🙂 Verizon has reportedly filed documents with the Federal Communications Commission that say it plans to use as much as 80 per cent of its network for its own purposes. Everything else would get shoehorned into the remainder (although Cynthia at IPDemocracy says it might not be as bad as it sounds, and it looks like Om Malik agrees).

This, of course, is just the latest step in a campaign by the major telcos to strong-arm (sorry, convince) Internet companies such as Google and Yahoo to pay extra for delivery of their broadband content to consumers, a campaign that got its start with comments from Ed “pay up for those pipes” Whitacre of AT&T (formerly SBC) and Bill Smith of BellSouth. Why should they have to carry all that content on their networks, the telcos complain – why should Google make money from broadband and not share some of it with the carriers whose pipes it uses?

As Mike at Techdirt notes, part of the problem is that the phone companies haven’t spent the money necessary to do all the things they want to do on their networks. The telcos made all kinds of promises about upgrades they planned to make – in return for which they got various concessions from the U.S. government – and then they never followed through, as telecom analyst Bruce Kushnick writes in a new book called The $200-Billion Broadband Scandal.

The big question is: Will the U.S. government allow the telcos to get away with this move, or will it step in to enforce some form of network neutrality? There used to be a concept called the “common carrier” principle, under which telcos were required to carry any and all voice traffic — that idea seems to have gone out the window.

Newspapers need to get a clue – quickly

The Paris-based World Newspaper Association, a body that appears to be almost pathologically clueless when it comes to the Internet, is blustering and grumbling about how search engines such as Google News are “stealing” their content and should be made to either stop or pay for it. Although the group hasn’t said what it has in mind, it is muttering darkly about challenging the “exploitation of content” that its members feel is going on. In a magnanimous gesture, it admitted that search engines help drive traffic to its members’ sites, but said this didn’t justify the fact that Google and others have built their businesses on “taking content for free.”

This issue has come up before, when a representative of the European Publishers’ Council accused Google and other Web search companies of being “parasites” living off the content of others. Gavin O’Reilly of the WNA has been quoted as saying that the Web companies are engaging in “kleptomania.” Here’s what he told the Financial Times:

Mr O’Reilly likened the initiative to the conflict between the music industry and illegal file-sharing websites and said it was not a sign that publishers had failed to create a competitive online business model of their own. “I think newspapers have developed very compelling web portals and news channels but the fact here is that we’re dealing with basic theft,” he said [snip]. Services such as Google News link to original news stories on the home pages of newspapers and magazines and display only the headline and one paragraph of the story [but] “That’s often enough” for readers browsing the top stories, Mr O’Reilly said.

I must admit that I thought the WNA was out of its mind to even bring this subject up in the first place, but the comparison to the RIAA and its war against file-sharing took the association’s case well past stupidity and into the realm of farce (ironically, as Rafat at PaidContent points out, the WNA has a great blog called Editors Weblog). How exactly is linking the headline and first paragraph of a story to a newspaper’s website the same as people downloading an entire song from a P2P application? The answer: It isn’t.

As for Mr. O’Reilly’s argument that readers are often satisfied with the headline and one paragraph, whose fault is that? Maybe the WNA should try suing every user of Google News in court, the same way the RIAA has — that’ll show them. Or they could block all search engines, and get no traffic whatsoever. As James Robertson notes, this appears to be more about a cash grab than it is about the way that search engines work. Techdirt asks whether newspapers can really be that clueless, and the short answer is: Yes.

An exposé on telecom bait-and-switch

I don’t know telecom analyst Bruce Kushnick, but I’m definitely interested in the subject of a new book he has written (and is selling on his own over the Internet). In a nutshell, the topic of his book is a scam that the major U.S. telecoms pulled on the American government — and the American people — by effectively promising high-speed, fibre-optic Internet in return for concessions on licensing requirements and other regulations set by U.S. telecom regulators. Then they reneged on their end of the bargain.

Steve Stroh, who has been covering the telecom and networking industry as an independent consultant for some time, has written about Kushnick’s book on his blog, as have veteran telecom consultant Gordon Cook, Richard Stastny of the VOIP and Enum blog, and David Isenberg, a fellow at Harvard’s Berkman Center for Internet and Society.

Given that kind of support, I’m prepared to believe Kushnick’s version of events has some truth to it, since several of the people mentioned above have said that he has documentation backing up his claims. Beyond that, it certainly sounds like something the telecom companies would do — they may even have believed it when they said it. But the U.S. certainly doesn’t have anything like the 45-megabit-per-second connections that the telcos promised.

And it definitely sheds a different kind of light on their repeated claims that Internet content companies should be paying more for access to their pipes (something my friend Rob Hyndman has written about many times). It sounds to me like U.S. consumers have already paid for it several times over.

Google misses – but will it matter?

Google may be working on a version of the Ubuntu Linux OS, as reported by The Register, but maybe it should be spending a bit more time on a good accounting app — it just missed Wall Street’s estimates for both sales and profit for the latest quarter. The stock dropped by as much as 19 per cent in after-hours trading.

Does that matter to the company’s long-term future? Probably not. But it will likely take some of the shine off for the momentum traders, of whom there are no doubt plenty. And there were some troubling signs in the numbers at first glance — even if you assume that the analysts’ estimates were inflated (which they probably were). For one thing, the company’s tax rate was substantially higher than expected – 41 per cent instead of about 26 per cent – and costs were also higher than anticipated. Too much money being spent on projects like the lame addition of bookmarks to the Google toolbar, perhaps?
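To give a sense of how much damage a tax-rate jump like that can do, here is a quick back-of-the-envelope calculation; the pretax figure is made up purely for illustration and is not Google’s actual number:

```python
# Hypothetical pretax profit, in millions -- purely illustrative
pretax_income = 1000.0

expected_net = pretax_income * (1 - 0.26)  # the roughly 26% rate analysts assumed
reported_net = pretax_income * (1 - 0.41)  # the roughly 41% rate actually reported

shortfall = expected_net - reported_net
print(expected_net, reported_net, shortfall)
# 740.0 590.0 150.0 -- about a fifth of the expected net income wiped
# out by the higher rate alone, before any of the extra costs
```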

One caveat: Even assuming that a majority of analysts are craven weasels (just kidding, guys), it is difficult to analyze a company that is not only growing at an incredible rate but also refuses to provide any guidance on future results, or any details on current operations. That’s going to make future surprises even more likely.

Update:

As usual, some of the hysteria that is common with after-hours trading (when there is less volume and therefore more volatility) dissipated on Wednesday, but Google’s stock was still down almost 10 per cent at one point in the morning. Not surprising, given the number of momentum traders that are riding this particular horse. Although UBS has downgraded the stock to “neutral” – in other words, closing the door after the horse has left the stable – Google’s explanation that the higher tax rate accounts for the bulk of the miss seems plausible. And the company has said it will provide more details on that kind of thing.

In the end, there’s no real smoking gun in these results for the Google bears – although my friend Paul Kedrosky notes that it’s worth asking why the tax rate was so much higher than expected. And whatever the answer to that question, Google’s “miss” serves as a healthy warning to investors. As the old saying goes, bulls make money and bears make money, but pigs often get slaughtered.

Cisco to buy TiVo? Dream on, TiVo fans

CNet.com has a piece up that talks about how networking equipment giant Cisco Systems might be looking to acquire TiVo, the digital-video recording pioneer. The article, which is labelled “news analysis” — which in the journalism business is code for “speculation” — starts off with Cisco’s recently announced $6.9-billion acquisition of Scientific-Atlanta, one of the largest makers of set-top boxes in the world next to Motorola, and then asks the question “Who’s next?”

One response might be “Why should anyone be next?” The purchase of SA is one of the largest acquisitions Cisco has ever made. The idea that it’s going to rush out and buy something else right away is more than a little wacky. But a better response might be “Why TiVo?” As much as everyone seems to want to see TiVo get snapped up by Yahoo, Google or Microsoft, I’m not sure that’s as likely as TiVo fans might want it to be — and I think a purchase by Cisco is probably even less likely (The Stalwart isn’t convinced either).

Why? Because — as Rafat Ali also points out at PaidContent.org — TiVo doesn’t really bring anything to the table that Cisco doesn’t already have with Scientific-Atlanta. Yes, it’s true that TiVo (and ReplayTV) pioneered the DVR business, and the company has a small legion of devoted fans who love the extra features it provides. But when it comes right down to it, DVRs are a commodity, and SA already makes them — including ones that do high-definition and have interactive features for integration with the Internet (or the ability to add them) — so there is little or no reason to pay the $500-million or whatever it would take to buy TiVo. For what it’s worth, I think the idea of Cisco buying Nintendo makes even less sense, but maybe that’s just me.

More Google-bashing — this time on Picasa

I don’t want to get into a big Google-bashing rant, after knocking their lame bookmark offering, but Phil Sim of Squash makes a good point in a post today about another Google service: Picasa, the photo-organizing software the company bought way back when. His question — and I think it’s a darn good one — is this: Why is there no online sharing component?

It’s not like certain services haven’t already shown that people really get a charge out of sharing their photos with others, and that this can make a viable business for companies such as, say, Yahoo. So why hasn’t Google, which has warehouses full of servers it could host terabytes’ worth of photos on with no trouble at all, added an online component to Picasa? One reason could be that Google also owns an instant-messaging/photo-sharing app called Hello, which interfaces with Blogger.com, and it would probably rather people used those tools. But why not have Picasa do it too?

Sometimes the things Google does or doesn’t do make perfect sense. And sometimes they make me wonder what the heck is going on over there in Mountain View at the Googleplex. Get off the Segways, guys, and get with the program.