FON sounds great, but will it work?

It’s nice to hear that FON, the share-your-Wi-Fi network founded by entrepreneur Martin Varsavsky, has gotten an investment from Google, along with Skype founders Niklas Zennström and Janus Friis – but while that is a huge vote of confidence, it doesn’t remove some of the uncertainties surrounding the FON business model. For one thing, as more than one person has mentioned (including in the comments on Scoble’s post), almost every major ISP specifies in its contracts that this kind of wide-open sharing isn’t allowed.

According to comments Martin sent to Om, the company is trying to bring ISPs onside, but has so far only managed to strike a deal with Speakeasy (Update: According to Om, Speakeasy says it has no arrangement with FON). Alec Saunders of Iotum says that most ISPs don’t enforce these agreements, and that’s true – but they might change their minds if they run into widespread sharing of the kind FON has in mind.

Glenn Fleishman of Wi-Fi Networking News, who has been a major skeptic on FON, says the investment by Google and the Skype gang (as well as Index Ventures, which made a bundle on its investment in Skype) makes him a little less skeptical, but he still has concerns – including the difficulty of getting ISPs onside, and the challenge of building a wireless network robust enough to make what the company has in mind actually feasible.

Not only that, but how many people are going to share the security concerns raised by the commenter on Scoble’s post? FON has a response here, but it might not satisfy enough people to get them to open up their networks – especially after years of being told to lock those networks down so that no one piggybacks on them. FON has a response to the ISP question too, but it amounts to trying to convince the ISPs that FON will share revenue with them (assuming there is any). Like my friend Rob Hyndman, I think many providers (particularly in Canada) would be skeptical.

At last, a way to track blog comments

If you’re like me (and I know I am), you travel around the blogosphere reading different posts on blogs and adding your thoughtful comments here and there – and again, if you’re like me, you often forget what you said or where. As a result, you miss the responses to your comments, which in most cases (okay, some cases) have valuable information in them, or make a point that corrects your initial impressions. Like others, including Steve Rubel of Micropersuasion.com, I’ve been looking for an easy way to track this kind of activity.

At last, it looks as though someone has come up with it: Robert Scoble and TechCrunch are both talking about a new beta service called CoComment.com, which allows you to track your comments wherever they are made, to be notified when someone responds to your comment, and to see all your comments in one place (and publish them on your blog, if you wish to do so).

I was unable to use one of the demo codes that the CoComment guys left in the comments (how fitting) on Scoble’s post, but I’m eager to try the service out. I think it is exactly the kind of juice we need to keep the conversation going.

Update:

Someone at CoComment.com was kind enough to send me an activation code, so I am now signed up with the service, and have installed a “comment box” in my sidebar, which will track comments I’ve made, as well as responses to those comments. You can also subscribe to an RSS feed of that comment stream, which I’ve done – and in the future, the site says blog publishers will be able to add code to their comment sections so that the service will index comments left there even if the commenters themselves haven’t signed up for the service.

Stowe Boyd has more here, and so does Solution Watch – including a Greasemonkey script that avoids the need for a bookmarklet. Ben Metcalfe has some thoughts as well.

The disruptiveness of doing what you love

Anne Zelenka, whose excellent blog I have only recently discovered, has a great post about how doing what you love can lead in unexpected directions – in which she uses the example of Mary Hodder, who started a Web 2.0 video-sharing service called Dabble about six months ago and is almost ready to launch (which is part of a much larger story about how easy it is to start companies now… but I digress).

Mary wrote something about how she wanted to stop doing things she didn’t like and start doing something she loved, and how great it was to do that, and she mentions the insightful (if long) piece by uber-geek Paul Graham called How To Do What You Love, which is worth a read. Paul writes: “The test of whether people love what they do is whether they’d do it even if they weren’t paid for it – even if they had to work at another job to make a living.” And when you combine that with Web 2.0, you wind up with something quite powerful. Even usually gruff blogger and Kurt Cobain lookalike Ben Barren gets a little misty-eyed at the idea.

Anne says:

One thing that must scare the wigs off of media moguls is that many writers and other content creators will work for free, because it’s so intrinsically enjoyable. In fact, they’ll pay to be able to create and publish content like essays, software, videos, and photographs. I’m a great example. Not only do I pay for TypePad for my momblog and Haloscan for the comments here, I am foregoing a six-figure income in software development for the opportunity to write and think and develop what I want. I am effectively paying more than $100,000 annualized in order to do what I love.

That is a pretty incredible statement. And yes, it must scare the wigs off of many media moguls, not to mention people in lots of other businesses. How can you compete with something that allows people to do what they love and start a business all at the same time? Just think of Mary and Dabble, or Josh and del.icio.us, or Kevin and digg.com, or Gabe and memeorandum.com. A recent interview with Gabe mentioned that his email responses arrived at 3 a.m. – would he be doing that if he worked at any company other than one he started and runs for the love of it?

The courier, the driver and the Internet

Anyone outside Toronto might not have heard of this little story – unless they frequent the boingboing.net website – but a week or so ago there was an altercation downtown between a bicycle courier named Leah and a young male driver whose name remains unknown (probably for his own protection). A local photographer happened to be there and took some shots of the driver assaulting the courier, stomping on her bike and generally being a complete asshole. He was restrained (and pummeled, apparently) by some bystanders.

The photos produced an avalanche of comments on the citynoise.org forum, which no doubt picked up after boingboing linked to it. After the story spread through the blogosphere, it made it into one of the Toronto newspapers, the National Post, and then into the Toronto Star and the Globe and Mail. Since then there have been a number of stories about the larger picture surrounding the incident – including the fact that the courier threw garbage back into the driver’s car after he tossed it out the window, and that she keyed his car (she has apologized, and is not pressing charges).

One of the most interesting elements from my point of view, however, is how this event would never have made it into the media at all if it were not for the blogosphere – and to that extent boingboing.net and citynoise.org and sites like them act as a kind of proto-journalism, an early-warning system for the “old” media. My colleague at the Globe and Mail, columnist and author Russell Smith, put it well in something he wrote, which I’m going to quote here because it will soon be behind our “pay wall.” He describes how the comments at citynoise.org start off with misconceptions, then there are flames, then the misconceptions are corrected, then Leah comments, then the driver’s brother weighs in (apparently).

So there is a sort of fact-checking at work here: Multiple posters will correct each other, and at some point, a witness will step forward. The reporting, and its verification, happened about as fast as any mainstream news network could do it. The Internet is a parallel news network, spreading news much faster than we in the media can with all our technology and organization. The pictures were posted some time last Thursday; by Monday morning, the discussion about them had involved thousands of people from all over the world. By the time a newspaper ran the story the following day, it was old news on the Net.

This is an interesting point. There were plenty of trolls and flames on citynoise.org, but the “true” story came out eventually, and there was plenty of commentary from both sides (pro-courier and pro-driver). Russell goes on to say:

And why did these pictures not make it to the newspapers and the TV stations right away? They – we – would have loved to have them. I think, first of all, because it didn’t occur to the photographer to go there with them. His first instinct was to post on-line. Not only is it easier to do this – no phone call, just a mouse click – but you can control how your story appears and how you get credit for it. And he knew, too, that his story’s dissemination would be just as quick and just as effective.

Interestingly enough, considering that last point about controlling the story and credit, according to a note at citynoise.org the photos there were taken and printed in the Toronto Star without the photographer’s permission – in fact, against his specific wishes. Whether he later gave permission (because I saw them in other papers) isn’t clear.

Blogs — it’s all about the conversation

This may or may not be part of the “secret sauce” in Gabe’s memeorandum.com, but I think Stowe Boyd is onto something. In a post about what makes blogs work — i.e., what makes them vibrant and helps them grow, as opposed to stagnating or becoming echo chambers — he says that he thinks it has something to do with the ratio of posts to comments and trackbacks.

Being a geek (and I mean that in a good way), Stowe comes up with a “conversational index” that quantifies that ratio, and figures that if it is one or more — that is, if there are as many or more comments and trackbacks as there are posts — then the blog will flourish. Don Dodge has come to a similar conclusion, and so has Zoli Erdos.
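
For what it’s worth, here is a minimal sketch of that arithmetic in Python, as I read Stowe’s description (the threshold of one is his; the function name and the sample numbers are mine):

```python
def conversational_index(posts: int, comments: int, trackbacks: int) -> float:
    """Ratio of reader responses to posts, per Stowe Boyd's description."""
    if posts == 0:
        raise ValueError("need at least one post to compute the index")
    return (comments + trackbacks) / posts

# A hypothetical blog with 50 posts, 40 comments and 15 trackbacks:
ci = conversational_index(posts=50, comments=40, trackbacks=15)
print(f"conversational index: {ci:.2f}")  # 1.10 -- at or above 1, so it should flourish
```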

I don’t know if the ratio needs to be one, or close to one, or whether you can even put a number on it, but I think this hits the nail on the head — what makes most blogs interesting isn’t so much the great things that the writer puts on there (as much as I like to hear the sound of my own voice), but what kind of response it gets, and how that develops, and who carries it on elsewhere on their own blog. And I agree that it would be nice if someone like technorati.com or memeorandum.com could track that kind of thing and make it part of what brings blogs to the top.

I like to see what people are talking about — not just what a blogger has to say, but what others have to say about what they say. That’s why I also agree with Steve Rubel that it would be nice to have a way of tracking comments, other than by subscribing to a feed of comments, or bookmarking posts you’ve commented on with del.icio.us or some other tool.

Update:

Stowe Boyd has more on the “conversation” conversation, as it were, here. And as far as tracking comments goes, no sooner did I mention it than CoComment.com came out with exactly that. I’m sure that’s a coincidence though 🙂

Memeorandum is a black box

There’s no question that Gabe of Memeorandum.com has created a tremendous resource (there’s an interview with him at Don Dodge’s blog), but I must admit it baffles me sometimes. I considered not writing this post at all, because it will probably sound like I’m just whining about not being at the top of tech.memeorandum.com with the A-listers, but I’ve followed the site for quite a while now, and the reason some posts rise or fall in the ranking of topics — and some stay longer while others disappear — eludes me. And it kind of bugs me a little bit. And no, I’m not writing this post just to try and get to the top by mentioning Gabe 🙂

I know that the algorithm behind the site is top secret, so there’s not much point in asking about it. But today is a good example of how mysterious the system is — I’ve appeared on memeorandum.com many times, sometimes linked under other posts and sometimes as a major topic. I was even at the top of the site briefly one day (although it was a weekend, so that might have increased my chances). But today I wrote a post about IE7, commenting on some of the criticisms and joining in the conversation, and that post never appeared anywhere on tech.memeorandum.com — nor did one that I wrote the day before about network neutrality.

Neither one appeared, despite the fact that I wrote them around the same time as several other people whose posts were linked to or formed major memeorandum.com topic headings, including Scott Karp of Publishing 2.0 and my friends Mark Evans and Rob Hyndman. Is there something I’m doing wrong, Gabe? WordPress automatically pings Technorati and a bunch of other sites. Is it that I’m linking too much to different people, or not linking enough? I have to know. Not that I care about that kind of thing, of course. It’s just bugging me. (Dave Taylor doesn’t like memeorandum because he says it adds “an amplifier to the echo chamber” of the blogosphere.)
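
For anyone curious about what that pinging actually involves, it is just a tiny XML-RPC call. Here is a rough sketch in Python of the standard weblogUpdates ping; the endpoint, blog name and URL below are placeholders, and sending a ping is no guarantee of showing up anywhere:

```python
import xmlrpc.client

# A sketch of the weblogUpdates.ping call that blog software such as
# WordPress sends out when you publish a post. The blog name and URL
# are placeholders, not real values.
server = xmlrpc.client.ServerProxy("http://rpc.pingomatic.com/")
result = server.weblogUpdates.ping("My Blog", "http://example.com/blog")
print(result)  # typically something like {'flerror': False, 'message': '...'}
```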

Update:

This post is now near the top of the section about Gabe, and showed up only a few minutes after I posted it, which actually makes me more confused instead of less.

Update 2:

Gabe emailed me and said that both of the posts I mentioned had actually been linked to on the site at different times, and sent me links to cached versions of the pages. I guess they were on the site only briefly, fell off the radar quickly, and I never saw them. The strange thing is that some posts (like this one) show up right away, while the ones Gabe checked on didn’t show up for hours. Maybe I’m trying too hard to figure this whole thing out — I should probably just go read a book, or alphabetize my CDs or something useful 🙂

Hey bloggers — MSFT doesn’t care about you

Many of the reviews and comments about the new beta of Microsoft’s Internet Exploder (er, Explorer), IE7, have focused on the RSS implementation. Adam Green at Darwinianweb.com got everybody’s attention when he said that he thought the browser would kill a lot of aggregators, and later amended this to say that while IE7’s handling of RSS wasn’t that great, it was probably good enough. As he put it — in a phrase I wish I had come up with:

“Microsoft long ago mastered the trick of calculating exactly the minimal feature set needed to suck the air out of a market it wants to enter.”

That is exactly right. It’s not that IE7’s RSS reader is so great — in fact, it is pretty much “just like favourites,” as Scott Karp at Publishing 2.0 puts it — it’s that it’s probably just good enough for most people. Dave Winer might be right when he says that the “river of news” is a better model for an aggregator, but Microsoft doesn’t really have a dog in that race. It just wants something simple that people can use without too much trouble.

Is the way they have done it good enough? That remains to be seen. RSS is still not easy enough, as my friend Paul Kedrosky keeps pointing out, and people are (in general) lazy. Not everyone wants to see if they can break Robert Scoble’s record for most RSS feeds subscribed to. Kent Newsome asks why he should care about IE7, and the answer is that he probably shouldn’t.

We are all “edge cases,” as someone has pointed out, and I would have to go along with Jeff Nolan – IE7 wasn’t designed for us. Simple as that. We can keep on using Firefox and Performancing and Greasemonkey and all those great things, but the fact is IE still has 80 per cent of the browser market, and it got that way by not being on the edge.

Telecoms and the toll-road gambit

I wasn’t sure whether to write anything about the “network neutrality” issue, in part because my friend Rob Hyndman has done such a good job of covering the subject – particularly an overview of the current state of affairs in his latest post – but as usual I couldn’t resist 🙂 Verizon has reportedly filed documents with the Federal Communications Commission that say it plans to use as much as 80 per cent of its network for its own purposes. Everything else would get shoe-horned into the remainder (although Cynthia at IPDemocracy says it might not be as bad as it sounds, and it looks like Om Malik agrees).

This, of course, is just the latest step in a campaign by the major telcos to strong-arm (sorry, convince) Internet companies such as Google and Yahoo to pay extra for delivery of their broadband content to consumers, a campaign that got its start with comments from Ed “pay up for those pipes” Whitacre of AT&T (formerly SBC) and Bill Smith of BellSouth. Why should they have to carry all that content on their networks, the telcos complain – and why should Google make money from broadband without sharing some of it with the carriers whose pipes it uses?

As Mike at Techdirt notes, part of the problem is that the phone companies haven’t spent the money necessary to do all the things they want to do on their networks. The telcos made all kinds of promises about upgrades they planned to make – in return for which they got various concessions from the U.S. government – and then they never followed through, as telecom analyst Bruce Kushnick writes in a new book called The $200-Billion Broadband Scandal.

The big question is: Will the U.S. government allow the telcos to get away with this move, or will it step in to enforce some form of network neutrality? There used to be a concept called the “common carrier” principle, under which telcos were required to carry any and all voice traffic — that idea seems to have gone out the window.

Newspapers need to get a clue – quickly

The Paris-based World Association of Newspapers, a body that appears to be almost pathologically clueless when it comes to the Internet, is blustering and grumbling about how search engines such as Google News are “stealing” its members’ content and should be made either to stop or to pay for it. Although the group hasn’t said what it has in mind, it is muttering darkly about challenging the “exploitation of content” that its members feel is going on. In a magnanimous gesture, the group admitted that search engines help drive traffic to newspaper sites, but said this didn’t justify the fact that Google and others have built their businesses on “taking content for free.”

This issue has come up before, when a representative of the European Publishers Council accused Google and other Web search companies of being “parasites” living off the content of others. Gavin O’Reilly of the WAN has been quoted as saying that the Web companies are engaging in “kleptomania.” Here’s what he told the Financial Times:

Mr O’Reilly likened the initiative to the conflict between the music industry and illegal file-sharing websites and said it was not a sign that publishers had failed to create a competitive online business model of their own. “I think newspapers have developed very compelling web portals and news channels but the fact here is that we’re dealing with basic theft,” he said [snip]. Services such as Google News link to original news stories on the home pages of newspapers and magazines and display only the headline and one paragraph of the story [but] “That’s often enough” for readers browsing the top stories, Mr O’Reilly said.

I must admit that I thought the WAN was out of its mind to even bring this subject up in the first place, but the comparison to the RIAA and its war against file-sharing takes the association’s case well past stupidity and into the realm of farce (ironically, as Rafat at PaidContent points out, the WAN has a great blog called Editors Weblog). How exactly is displaying the headline and first paragraph of a story, linked back to the newspaper’s website, the same as downloading an entire song from a P2P application? The answer: It isn’t.

As for Mr. O’Reilly’s argument that readers are often satisfied with the headline and one paragraph, whose fault is that? Maybe the WAN should try suing every user of Google News in court, the same way the RIAA has — that’ll show them. Or newspapers could block all search engines, and get no traffic whatsoever. As James Robertson notes, this appears to be more about a cash grab than about the way search engines work. Techdirt asks whether newspapers can really be that clueless, and the short answer is: Yes.
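
Blocking the crawlers, by the way, is trivial: any paper that really believed the “theft” line could do it with two lines of robots.txt. Here is a quick sketch using Python’s standard-library robots.txt parser to show the effect (the newspaper URL is made up):

```python
from urllib.robotparser import RobotFileParser

# The two lines of robots.txt that would shut every search engine out:
#   User-agent: *
#   Disallow: /
rules = RobotFileParser()
rules.parse(["User-agent: *", "Disallow: /"])

# No crawler that respects robots.txt would index the site again.
print(rules.can_fetch("Googlebot", "http://example-newspaper.com/story"))  # False
```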

An exposé on telecom bait-and-switch

I don’t know telecom analyst Bruce Kushnick, but I’m definitely interested in the subject of a new book he has written (and is selling himself over the Internet). In a nutshell, the topic of his book is a scam that the major U.S. telecom companies pulled on the American government — and the American people — by promising high-speed, fibre-optic Internet in return for concessions on licensing requirements and other rules set by U.S. telecom regulators. Then they reneged on their end of the bargain.

Steve Stroh, who has covered the telecom and networking industry as an independent consultant for some time, has written about Kushnick’s book on his blog, as have veteran telecom consultant Gordon Cook, Richard Stastny of the VoIP and ENUM blog, and David Isenberg, a fellow at Harvard’s Berkman Center for Internet and Society.

Given that kind of support, I’m prepared to believe Kushnick’s version of events has some truth to it, since several of the people mentioned above have said that he has documentation backing up his claims. Beyond that, it certainly sounds like something the telecom companies would do — they may even have believed it when they said it. But the U.S. certainly doesn’t have anything like the 45-megabit-per-second connections that the telcos promised.

And it definitely sheds a different kind of light on their repeated claims that Internet content companies should be paying more for access to their pipes (something my friend Rob Hyndman has written about many times). It sounds to me like U.S. consumers have already paid for it several times over.

Google misses – but will it matter?

Google may be working on a version of the Ubuntu Linux OS, as reported by The Register, but maybe it should be spending a bit more time on a good accounting app — it just missed Wall Street’s estimates for both sales and profit for the latest quarter. The stock dropped by as much as 19 per cent in after-hours trading.

Does that matter to the company’s long-term future? Probably not. But it will likely take some of the shine off for the momentum traders, of whom there are likely many. And there were some troubling signs in the numbers at first glance — even if you assume that the analysts’ estimates were inflated (which they likely were). For one thing, the company’s tax rate was substantially higher than expected – 41 per cent instead of about 26 per cent – and costs were also higher than anticipated. Too much money being spent on projects like the lame addition of bookmarks to the Google toolbar, perhaps?
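
To see why the tax rate alone could account for most of a miss, here is a rough back-of-envelope calculation in Python; the pre-tax figure is a hypothetical round number, not Google’s actual result:

```python
pretax_income = 1_000_000_000  # hypothetical round figure, not Google's actual number

expected = pretax_income * (1 - 0.26)  # what analysts modelled at a ~26% tax rate
actual = pretax_income * (1 - 0.41)    # what a 41% tax rate actually delivers

shortfall = (expected - actual) / expected
print(f"net income comes in {shortfall:.0%} below the model")  # about 20%
```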

One caveat: Even assuming that a majority of analysts are craven weasels (just kidding, guys), it is difficult to analyze a company that is not only growing at an incredible rate, but also refuses to provide any guidance on future results, or any details on current operations. That’s going to make future surprises even more likely.

Update:

As usual, some of the hysteria that is common with after-hours trading (when there is less volume and therefore more volatility) dissipated on Wednesday, but Google’s stock was still down almost 10 per cent at one point in the morning. Not surprising, given the number of momentum traders that are riding this particular horse. Although UBS has downgraded the stock to “neutral” – in other words, closing the door after the horse has left the stable – Google’s explanation that the higher tax rate accounts for the bulk of the miss seems plausible. And the company has said it will provide more details on that kind of thing.

In the end, there’s no real smoking gun in these results for the Google bears – although my friend Paul Kedrosky notes that it’s worth asking why the tax rate was so much higher than expected. And whatever the answer to that question, Google’s “miss” serves as a healthy warning to investors. As the old saying goes, bulls make money and bears make money, but pigs often get slaughtered.

Cisco buy TiVo? Dream on, TiVo fans

CNet.com has a piece up on its website about how networking equipment giant Cisco Systems might be looking to acquire TiVo, the digital-video-recording pioneer. The article, which is labelled “news analysis” — which in the journalism business is code for “speculation” — starts off with Cisco’s recently announced $6.9-billion acquisition of Scientific-Atlanta, one of the largest makers of set-top boxes in the world next to Motorola, and then asks the question “Who’s next?”

One response might be “Why should anyone be next?” The purchase of Scientific-Atlanta is one of the largest acquisitions Cisco has ever made. The idea that it’s going to rush out and buy something else right away is more than a little wacky. But a better response might be “Why TiVo?” As much as everyone seems to want to see TiVo get snapped up by Yahoo, Google or Microsoft, I’m not sure that’s as likely as TiVo fans might want it to be — and I think a purchase by Cisco is probably even less likely (The Stalwart isn’t convinced either).

Why? Because — as Rafat Ali also points out at PaidContent.org — TiVo doesn’t really bring anything to the table that Cisco doesn’t already have with Scientific-Atlanta. Yes, it’s true that TiVo (and ReplayTV) pioneered the DVR business, and the company has a small legion of devoted fans who love the extra features it provides. But when it comes right down to it, DVRs are a commodity, and SA already makes them — including ones that handle high-definition and have interactive features for integration with the Internet (or the ability to add them) — so there is little or no reason to pay the $500-million or whatever it would take to buy TiVo. For what it’s worth, I think the idea of Cisco buying Nintendo makes even less sense, but maybe that’s just me.

More Google-bashing — this time on Picasa

I don’t want to get into a big Google-bashing rant, after knocking the company’s lame bookmark offering, but Phil Sim of Squash makes a good point in a post today about another Google service: Picasa, the photo-organizing software the company bought way back when. His question — and I think it’s a darn good one — is: Why is there no online sharing component?

It’s not as though services such as Flickr haven’t already shown that people really get a charge out of sharing their photos with others, and that this can make a viable business for companies such as, say, Yahoo. So why hasn’t Google, which has warehouses full of servers that could host terabytes of photos with no trouble at all, added an online component to Picasa? One reason could be that Google also owns an instant-messaging/photo-sharing app called Hello, which interfaces with Blogger.com, and it would probably rather people used those tools. But why not have Picasa do it too?

Sometimes the things Google does or doesn’t do make perfect sense. And sometimes they make me wonder what the heck is going on over there in Mountain View at the Googleplex. Get off the Segways, guys, and get with the program.

Google bookmarks — is that the best they can do?

Okay, it’s not as bad as the Google China thing, but I have to say the bookmark feature that Google just released has to be one of the lamest things to come down the Web 2.0 pike since Froogle. I mean, come on. Saving your bookmarks with a toolbar? How 1990s. Sure, you can keep them in one place so you can get to them from anywhere — Yahoo’s only had that for about two years.

Not only that, but I have to say that Google’s implementation sucks, from a whole bunch of different perspectives. One, it relies primarily on a toolbar, which I hate. I don’t need or want another toolbar offering to install itself, and I don’t care how useful it pretends to be. Whatever happened to bookmarklets and plug-ins? I thought that was the wave of the future. Of course, Google isn’t even supporting Firefox with this one yet, so there’s another strike against it. And when you go to the Google site — which you can do if you don’t want to use the toolbar — there’s no way to import bookmarks from a browser or file, or to sort them.

Then there’s the fact that there’s nothing even remotely different about what Google is doing — no digg.com-style ratings, no del.icio.us-style sharing, no integration with any other part of the Google-verse even. Kind of like the way the company’s blog search is nowhere to be found when you’re searching Google News, which you would think would be a natural fit (Yahoo seems to think it is, since its search blends both). In other words, a completely ho-hum product. Why even bother?

Venture capital didn’t create the bubble

Dave Winer is a smart guy, and when it comes to Web 2.0 he’s been smart a lot longer than I have — but when it comes to investing and the stock market and venture capital, I think he might be a little out of his depth. I wouldn’t tell Dave how to put together an OPML editor, and by the same token I’m not sure anyone should listen to his ideas about how to “reform” the venture capital business.

Like a good friend, Robert Scoble is being kind when he says Dave’s post contains “great insight.” I would tend to agree with Paul Kedrosky that his proposed solution is more than a little on the wacky side. Even his description leaves me shaking my head. Here’s part of it:

That’s how Netscape and the dotcommers that followed went through the roof of the stock market. People who traded could see the raw power of the Internet and knew, one way or the other, that this was going to change how everything was done, from business to romance, travel, gambling, everything. So the users of the Internet bid the stock of the Internet up. And up. And up. And so on. So what did the middlemen do exactly? They invested in all kinds of idiotic things.

The point of this seems to be that “people who traded” were the ones who knew what to invest in, while the “middlemen” or VCs threw money at idiotic things like pets.com and boo.com, so we should get the middlemen out of the way and let users run things and decide what to invest in. I’m not sure which bubble Dave was watching, but I remember plenty of supposedly smart “investors” who bought stocks like theglobe.com and others all the way up into the stratosphere. Was that the fault of stupid or venal VCs? Hardly. They were just supplying what the market had already shown that it wanted: Internet-based anything, and right now.

As Nick Carr has pointed out, bubbles are born on the demand side, not the supply side. And yes, it’s true that there are problems with much of what goes on in the venture capital business, as Canadian VC Rick Segal and others have described. I’m interested to see what Rick has in mind — but with all due respect, I hope to God that he doesn’t take Dave’s advice on this one.

For more on the same topic, I think Fraser Kelton has some worthwhile points, and as usual Fred Wilson summarizes things well:

I would suggest one rule and only one. Be the entrepreneur’s partner. Help him or her. Be there for them. Support them. Counsel them. Share the risk with them. Have fun with them. Laugh and cry with them. And make boatloads of money with them. It’s a time tested formula and it will work forever.

Meanwhile, Rick Segal has obviously been thinking about all this as well — as he hinted in an earlier post — and does what I think is a great job of distilling what the “new” startup landscape looks like, and asking the question (my paraphrase): “If you don’t need much money, and you don’t need a lot of hardware or software, and the Web gives you lots of points of contact, what do you need VCs for?” Go read his post for the answer.

Update:

Dave Winer doesn’t think much of my comments, not surprisingly. Fair enough. To answer Dave’s questions (since he doesn’t allow comments on his scripting.com blog), I am not a VC, and whatever investing I do is through mutual funds, so my track record is effectively a blank slate. But I have been writing about investing and the stock market for about 15 years now. I wasn’t saying that I’m more experienced than Dave, just that his argument for reform in venture capital is logically flawed.

Anne Zelenka, who posted some comments here, has written an excellent post on her blog that breaks down — from an Econ 101 standpoint — the elements of Web 2.0 that make it different from Web 1.0, and why the venture capital business just keeps getting harder.