Is Facebook Moving in on Groupon’s Turf?

Facebook is testing the addition of Groupon-style discount offers to its Facebook Places location feature, according to a report from All Facebook, which based its conclusions on an email that an unnamed company said it received from the social network approving such a deal. Under the terms described by this anonymous source, a retailer would offer a free product to a user if three of their friends checked in — or were tagged — at a specific location via Facebook Places. If true, such offers would bring the social network into direct competition with Groupon and other group-discount services.

Although there have been some questions raised about the authenticity of the email (based in part on the use of the term “Facebook followers” instead of the term “fans”), the idea that Facebook would tie discounts to its Places feature makes a lot of sense. As the social network tries to come up with more offerings for advertisers and merchants that leverage its 500-million-strong userbase, combining its new location-based feature with a pitch to retailers seems like a natural. Some businesses have experimented with offers for users who check in via Foursquare, but so far the location-based discount is a potential marketing feature that remains relatively untapped.

If Facebook were to bring the weight of its vast userbase to bear on that market, it would definitely have an impact on both Foursquare and Groupon, as well as on other group-buying services such as LivingSocial, which I wrote about recently. While Groupon has been growing at a dramatic rate, thanks in part to the more than $160 million in venture financing it has received from a number of prominent VC funds — including Russian holding company Mail.ru (formerly known as Digital Sky Technologies) — there have also been increasing signs of competition in the space, including the launch of a similar group-discount feature from Walmart (s WMT) called CrowdSaver, which is offered through Facebook.

Groupon, meanwhile, is continuing to expand its reach: it has reportedly signed a global distribution deal with Yahoo (s yhoo), and similar agreements with eBay (s ebay) and Ning. There have also been rumors that Yahoo is interested in acquiring the company for as much as $2 billion.

Faulty Metrics Make for a Digital Media Guessing Game

It’s an irony that is keenly felt by virtually every media entity, whether it is a tiny web-native outlet or a giant traditional publisher: the Internet, one of the most easily — and frequently — measured media in the history of humanity, is also one of the most difficult when it comes to getting a straight answer about who is consuming your content and when. That conundrum is at the center of a new report from the recently launched Tow Center for Digital Journalism at Columbia University, entitled “Confusion Online: Faulty Metrics and the Future of Digital Journalism.”

The report notes that even among the leaders in web measurement such as comScore and Nielsen, there can be widespread disagreement about how many visitors a site gets, how long they spend there, and plenty of other important metrics that might help media websites do their jobs better. In May, for example, comScore said that the Washington Post’s website had 17 million unique visitors, while Nielsen said it had less than 10 million — a difference of more than 40 percent. Estimates from the two companies of Yahoo’s traffic differed by 34 million, or approximately the population of Canada.

Although the report doesn’t go into the details, these gaps stem from differences in the way that each measurement firm pulls in the data it uses. Some companies (such as comScore) have software that users download and install, while others use information that comes directly from Internet providers. Some actually still sample web users by calling them on the telephone and asking them what sites they visit regularly. And all of the numbers being used that come from the servers themselves are polluted to some extent by the effect of “click-bots” and other scams that can inflate traffic counts, in some cases by huge percentages.

And it’s not just comScore and Nielsen — there’s also Omniture, which many larger media companies use, as well as Google Analytics and other services that measure traffic directly from a site’s servers, and services like Hitwise and Compete that take data from ISPs and other sources. And then there are newer analytics tools that can give publishers an almost real-time look at the activity on their site, such as Chartbeat. While some news sites are excited by the potential of all this data, others say there is so much conflicting information that it is actually making their jobs harder rather than easier — particularly when it comes time to talk to advertisers.

Tom Heslin, senior vice president and executive editor of the Providence Journal, calls this the “irony of expectations”: neither publishers nor advertisers have been able to keep up with the flood of data. “Our biggest challenge is to simplify solutions for our clients, even for national advertisers,” he explains. “The development of metrics has far outstripped knowledge of ad buyers and sellers.”

As this comment makes clear, the biggest issue with metrics isn’t that newspapers or websites can’t figure out what their readers want to read — although that is part of the problem. The big issue, in financial terms at least, is that most publishers rely on advertising for the bulk of their revenue, and there is no simple standard for showing traffic or readership online the way there is in the dead-tree world, where NADBank and other measurements are accepted (although they also have their flaws). That leaves media outlets in a quandary, says the Tow report.

Uncertainty about audience measurement hinders online ad spending, with buyers and sellers of media favoring incompatible metrics [and] increasingly, the decisive information resides not with the publisher but in the databases of intermediaries such as ad networks or profile brokers.

So what is the future of online measurement? The report doesn’t come out and say this, but it probably looks a lot like the present — multiple competing measurement sources, each of them flawed in a slightly different way, so that most publishers have to throw as many different numbers as they can into a hat and then average them all out and hope for the best. The report’s authors say there is a chance that Nielsen and comScore might merge and become the dominant player and the de facto standard, or that publishers and advertisers will settle on Google Analytics as the main arbiter of numbers, but both of those seem relatively far-fetched.
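To make that concrete, here is a toy sketch (my own illustration, not anything the Tow report prescribes) of that hat-averaging approach, using the Washington Post estimates cited above plus a hypothetical server-log count:

```python
# Toy reconciliation of conflicting unique-visitor estimates.
# The comScore and Nielsen figures are the Washington Post numbers cited
# above; the "server logs" figure is a made-up internal count.
estimates = {
    "comScore": 17_000_000,
    "Nielsen": 10_000_000,      # reported as "less than 10 million"
    "server logs": 14_000_000,  # hypothetical
}

average = sum(estimates.values()) / len(estimates)
spread = max(estimates.values()) - min(estimates.values())

print(f"average estimate: {average:,.0f} unique visitors")
print(f"spread between highest and lowest: {spread:,.0f}")
for source, value in estimates.items():
    print(f"  {source}: {value:,} ({value / average - 1:+.0%} vs. average)")
```

The point isn’t the arithmetic; it’s that the spread between sources is large enough that the “average” is little more than a guess dressed up as a number.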

In all likelihood, digital media will have to continue muddling along as best it can, for better or worse — tormented by the fact that in a medium where measurement is so easy, measurement by itself can mean very little.

Study For an MBA While Playing FarmVille

If you’re a student, you probably find it a pain to have to sign off Facebook — or even just close your browser window — because you have to go and study for an exam. What if you could study for your MBA without even leaving Facebook? That’s the tempting offer from an outfit called the London School of Business and Finance, a private degree-granting institution in England that has launched a free-to-study MBA program as a Facebook application. You have to pay to take the exam at the end in order to get the actual degree, but the studying itself is free of charge.

The school’s introductory video (which is embedded below) notes that most traditional MBA programs cost tens of thousands of dollars just to enroll, but the LSBF says its Facebook app offers the chance to try out the content of its program without having to pay anything. The application features video lectures, interactive case studies — including videos of a “panel of business experts” doing an analysis of the suggested courses of action from the case study — and an online discussion forum where students can debate the various topics discussed. The app also has a “briefcase” section where users can store notes and videos.

Before you get your hopes up about doing a quick MBA in between playing Facebook games, however, we should note that the LSBF program is effectively a teaser for the school’s existing online MBA program and other traditional degree courses (which are accredited through a partnership between the school and the University of Wales). In order to complete the MBA and get certified, students have to sign up for one of the school’s regular online courses (which require a Bachelor of Science or BA, or five years of work experience), and pay the regular fees — which can cost as much as $20,000 for a full MBA — or pay to take an exam related to a specific course.

In a sense, all the London school is really doing is taking the idea of “distance learning” or e-learning — in effect, digital correspondence courses — and moving it online and into the world of Facebook and social networking, which raises some interesting possibilities. What about other courses as Facebook apps? We probably wouldn’t want doctors to get certified via a Facebook app, but there are plenty of other disciplines that could probably make the leap quite easily, such as legal or accounting-related degrees. Maybe someone will come up with a game that uses FarmVille or Mafia Wars-style rewards to convince students to study. Let’s just hope no one adds features like “poke your professor.”

Search Ads Are a Hammer, But Not Everything is a Nail

Search-related keyword advertising is a multibillion-dollar industry for a reason — namely, it works extremely well when it comes to converting shoppers who are searching for information into shoppers who want to buy something. But it doesn’t work for everything, and it’s worth being reminded of that sometimes. Dropbox CEO and founder Drew Houston provided another example of that in a presentation at the recent Stanford Accel Symposium, where he talked about how the cloud-based storage company grew to where it is now, with more than 4 million users. Among other things, he described how search ads didn’t really work for the company when it was starting out.

The way Houston described it, Dropbox did lots of things that it thought young startups should do to get attention — including hiring a PR firm and buying search ads for Google keywords. But none of them worked particularly well: search ads in particular were a complete bust, he said, because the company was paying hundreds of dollars in advertising just to acquire a single new user — far more than it would ever likely make from that user.

So why didn’t Dropbox get much traction from search ads? Houston says it’s because people didn’t really know what they needed, or weren’t aware of how Dropbox could solve problems in their lives, so they didn’t respond as well to the keyword ads. Houston said something very similar about the growth of Dropbox at a startup conference Liz attended earlier this year — namely, that the company got far more mileage from word-of-mouth and referrals from existing users than it ever did from search ads.

This goes right to the heart of something we have talked about a lot, which is the threat that Google faces from social media in general, and Facebook in particular. Google does well with search advertising because it involves people who have already made up their minds to buy something and are looking for where to go to do so — but it doesn’t do as well with people who are just looking to talk or network with others around shared issues. To the extent that your potential customer base is closer to the networking end of the spectrum — as Dropbox’s was, because those users likely weren’t even aware that the product could fill a need in their lives — social media is likely going to be more effective, because it is about word-of-mouth recommendations.

Where do you get word-of-mouth recommendations and related discussion about various issues from Google? The short answer is that you don’t. Maybe you get some of that from Buzz, but as a mainstream product Buzz is (no offence, Google) pretty much a bust. That’s not to say Google itself isn’t useful, of course, or that keyword ads are worthless — it’s a multibillion-dollar market for a reason. Even in Dropbox’s case, once users heard about how great it was from a social network, they would probably (as I did) go and search for the company and look for reviews, information about other related products, etc. before deciding whether to try it out or pay for it.

But Houston’s observations are just another example of why, as Om argued in a GigaOM Pro report last year (subscription required), Google needs to be afraid of social and what it implies. It’s why the company is focusing on adding what CEO Eric Schmidt called a “social layer” to its products and services — although being truly social, as I noted in a post, is not the same as simply adding a widget to your existing features, which is why Facebook has a leg up in that department. As ** noted in a blog post, Google is good at appealing to those who want to get information and move on, but not so good at appealing to those who want to engage with others around a shared goal or topic of interest.

Will Anyone Care About a Myspace Redesign?

Can a faded brand redesign its way back to popularity? That question seems to be coming up a lot lately, whether it’s Digg doing a relaunch or Yahoo redesigning its email. Now it’s Myspace’s turn — the social network owned by Rupert Murdoch’s News Corp. is rolling out a relaunch starting today, which it says is focusing on Generation Y and the music/entertainment scene. The big question is: will anyone care? Or, more broadly, will the redesign have any appreciable impact on Myspace’s also-ran status in the social-networking space? It may still have millions of registered users, but it has clearly lost a ton of momentum, and when you are a social network, momentum is everything.

Michael Jones, president of Myspace (which has given up the capital S in its name as part of the redesign), told the New York Times that the service “got very broad and lost focus of what its members were using it for.” But is that really what happened? More than anything, Myspace simply lost the hold that it had on users as Facebook grew, and the latter has gradually sucked virtually all the oxygen out of the room as far as social networking is concerned. Jones — who is the latest in a virtual revolving door of Myspace chief executives — told Bloomberg that the redesign is “a full rethink,” and that the new site is “an entirely different product.” But it clearly is not. The changes are a coat of paint and some new plumbing, but the house remains the same.

A recent chart of Myspace traffic from Quantcast shows an almost unbroken toboggan run downhill, from about 80 million monthly unique visitors in 2007 to a little over 40 million now (Compete.com shows Myspace with about 60 million). The only other major service to see that kind of decline is probably AOL, which has spent the past decade hemorrhaging users of its Internet access service. Facebook’s chart goes in the exact opposite direction — up from the 45-million range in 2007 to almost 150 million today.

Myspace clearly still has an audience, just as Yahoo and Digg both do. The social network says that it has 100 million registered users, and that the number of Generation Y users grew by more than 20 percent this year — but Facebook crossed the 500-million mark earlier this year and has been adding several million more every month. And growth (or lack of it) is the crucial question for advertisers: RBC Capital Markets analyst David Banko told Bloomberg that Myspace lost about $350 million last year. News Corp. paid $580 million for the site in 2005, in what some said was a brilliant foray into social networking.

The problem for Myspace and Yahoo and Digg — not to mention dozens of other faded superstars of the web — is that once you are on that downward slide in terms of users and traffic, it is virtually impossible to recover. That’s just not how the network effect works. Myspace’s redesign might appeal to many of its existing users, and possibly encourage some to use it more, but will it bring in new users, or enough of them to make a difference? Unlikely. The same goes for Yahoo’s new email service, and the rolled-back relaunch of Digg. Those services can continue to try and maximize the revenue they get from their existing (or declining) user base, but growth is probably a thing of the past.

Sean Adams, who runs a UK music blog called Drown in Sound, had some suggestions about what the social network should do to really become a music and entertainment powerhouse, including:

1. Commissioning exclusive content, maybe even trying to be the angel funder for music (a la the Starbucks label), so that artists don’t have to put their hand out for fan-funding, and in exchange they, like a record label, have exclusive content that if they get the right A&R could be really relevant and also vital to the music eco-system. MySpace shifting the Zeitgeist, like it did for a few weeks with Test-icicles, way back when, again, would blow people’s minds.

2. Paying people whose tastes people give a shit about to “DJ”/compile content. Even if it’s doing ATP style curation, taking-overs of the site and letting Paramore run the site one week, and Phoenix the next day. Give them a budget to stream one of their favourite films or documentaries for a day.

3. Be rumoured to be buying Spotify, and then maybe whilst no-one is looking merge with Last.fm and Bandcamp, and create something simple and universal that music fans want and tools that artists need, rather than sitting on all that data for “fans” that artists can no longer access, whilst doing the same thing with a different, more data-locked face on it.


From the tour that Myspace has provided on its site (yes, I still have an account) and the screenshots it has sent to the media, the redesign makes profile pages somewhat cleaner looking — in other words, they have mostly lost that late-1990s look they used to have. But they are still quite cluttered, and it is still difficult to find things amidst all the widgets and plugins and assorted doo-dads. More than anything, the new design looks very Facebook-like, which probably isn’t surprising. And the new logo with the empty bracket is a little odd-looking, although I assume it is supposed to mean that you can fill in the blank with whatever you want Myspace to be.

Magazine Apps for the iPad: “Bloated and Unfriendly”

Khoi Vinh, former design director for the New York Times, has written a blog post giving his thoughts on magazine apps for the iPad (something he clearly gets asked about a lot). The bottom line? He hates them. With a passion. Why? Because, Vinh says, they are “bloated [and] user-unfriendly” and because they are a result of a “tired pattern of mass-media brands trying vainly to establish beachheads on new platforms, without really understanding the platforms at all.”

The New Yorker’s new app comes in for particular derision from the designer, who says it took too long to download, cost him money even though he already subscribes to the print edition, and was a walled garden without any connection to the web — a point I made in a recent post about the new Esquire magazine app. As Vinh describes it: “I couldn’t email, blog, tweet or quote from the app, to say nothing of linking away to other sources — for magazine apps like these, the world outside is just a rumor to be denied.”

It’s unfortunate that Vinh doesn’t say much about news apps like the one his former employer has for the iPad. The designer says that news-based apps “are really a beast of a different sort, and with their own unique challenges. There is a real use case for news apps (regardless of whether or not any players are executing well in this space).” Magazines, however, are in danger of losing the battle for readers in a digital age by making their apps so closed and monolithic, Vinh argues.

Even with an Apple-operated newsstand, I’m just not sure I believe these people will turn to publishers’ apps to occupy their tablet time. It’s certainly possible that a small number of these apps will succeed, but if publishers continue to pursue the print-centric strategies they’re focused on today, I’m willing to bet that most of them will fail.

Vinh also suggests that strong advertiser interest “may be more a sign of a bubble than the creation of a real market for publishers’ apps. According to Advertising Age, the initial enthusiasm for many of these apps has dwindled down to as little as one percent of print circulation in the cases of some magazines.”

Too many publishers, the designer says, are looking at media consumption in the old-fashioned way (something Om described in a recent post), rather than taking advantage of the more social forms of media available online. This makes virtually no sense at all on a digital tablet that is connected to the web, he says.

In a media world that looks increasingly like the busy downtown heart of a city — with innumerable activities, events and alternative sources of distraction around you — these apps demand that you confine yourself to a remote, suburban cul-de-sac.

Vinh doesn’t just blame publishers though — he blames Adobe as well (which recently took over production of all of Conde Nast’s magazine apps) for “doing a tremendous disservice to the publishing industry by encouraging these ineptly literal translations of print publications into iPad apps.” And who comes in for praise? It’s a short list, including one of the few apps to take a creative tack on the iPad magazine: Gourmet Live, which has turned the magazine into an interactive game of sorts. In the long run, says Vinh, traditional magazines will lose out to apps like Flipboard, which are “more of a window to the world at large than a cul-de-sac of denial.”

LivingSocial and the Future of Local Group-Buying

Groupon gets all the press when it comes to group buying, primarily because it is the largest player — it has raised more than $165 million in financing and has sales that are approaching $500 million — but LivingSocial is a strong number two in that expanding space, and in some regional markets it is a larger player than Groupon. I had a chance recently to talk with Grotech Ventures partner Don Rainey about the company and where it is going, as well as his vision of the future of group buying — Grotech is an investor in LivingSocial, and Rainey is a member of the company’s board of directors.

Echoing comments made by founder and CEO Tim O’Shaughnessy in a number of interviews, Rainey said that LivingSocial takes a somewhat different approach to the group-buying market than Groupon does. While the larger company is acquiring foreign competitors in Japan and Russia and trying to grow to national or international scale, LivingSocial is more focused on local markets, the VC says — it sees itself as partnering with local merchants, and helping them market themselves and understand how group offers work. “LivingSocial fields a local sales force in every city in which it does business,” he said.

And what about location-based players such as Foursquare? Rainey sees them filling a different role for merchants. The kinds of rewards that can be distributed through Foursquare, he says, are primarily aimed at rewarding repeat customers: that is, the people who check in regularly and become “mayor” of a spot, etc. LivingSocial’s offers, however, are aimed at attracting new customers through discounts — in other words, a model focused on customer acquisition, rather than customer retention.

When it comes to the future of group buying, Rainey said he sees a day when merchants and potential customers interact through a kind of real-time exchange — like a stock exchange, with buyers and sellers, but for local offers on meals or other goods. “I can see local retailers and consumers bidding in a real-time system for where that consumer is going to go for dinner,” says Rainey. If a merchant is having a slow night, it can put an offer into the system, and users can choose between that and multiple other offers based on location and the time they want to go out. As someone who is constantly looking for new places to eat in my local area, I think that sounds like a winner.
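Purely as an illustration of what such an exchange might look like (nothing LivingSocial has announced; every data structure and matching rule below is invented), a merchant’s slow-night offer could be posted into a pool and matched against a consumer’s neighborhood and dining time:

```python
# A toy sketch of a real-time local-offer exchange: merchants post discounted
# offers for a slow night, and a consumer asks what is available near them at
# the time they want to go out. Entirely hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Offer:
    merchant: str
    discount_pct: int
    neighborhood: str
    valid_from: datetime
    valid_until: datetime

offers = [
    Offer("Trattoria Uno", 30, "downtown",
          datetime(2010, 10, 27, 18), datetime(2010, 10, 27, 22)),
    Offer("Sushi Ko", 20, "midtown",
          datetime(2010, 10, 27, 17), datetime(2010, 10, 27, 21)),
]

def offers_for(neighborhood, when):
    """Return open offers near the consumer, best discount first."""
    matches = [o for o in offers
               if o.neighborhood == neighborhood
               and o.valid_from <= when <= o.valid_until]
    return sorted(matches, key=lambda o: o.discount_pct, reverse=True)

for offer in offers_for("downtown", datetime(2010, 10, 27, 19, 30)):
    print(f"{offer.merchant}: {offer.discount_pct}% off tonight")
```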

In terms of the competitive landscape, meanwhile, although Groupon is much larger than LivingSocial, it’s not clear that the market is a zero-sum game — LivingSocial and other competitors (some of whom Liz described in a recent piece on “Groupon Wannabees”) could carve out some local market share for themselves, particularly through partnerships like the one LivingSocial has with the Washington Post, where the newspaper uses its local reach to publicize the company’s latest deals to its readers (Groupon has similar partnerships with some media outlets).

If anything, in fact, the group buying market could continue to expand beyond Groupon and LivingSocial: in one glimpse of where it could be going, Wal-Mart recently announced that it is experimenting with a form of group buying through a Facebook offering called CrowdSaver — if enough potential shoppers click the “like” button on a proposed discount, Wal-Mart goes ahead with it. If anyone has national and international reach when it comes to shopping, it is Wal-Mart. The entry of other retailers could make the space even more competitive in the future, and put pressure on both Groupon and LivingSocial to continue innovating.

LivingSocial raised a Series C round of $14 million in April from new investor Lightspeed Venture Partners and a group including U.S. Venture Partners, Grotech and Revolution Capital. The company raised a $25 million Series B round in March and a $5 million Series A-1 round in January.

Is Digg Headed For the Deadpool?

Digg, which appeared to be stumbling after an ill-fated relaunch sparked a user revolt, now looks to be under siege. The former king of link-sharing services has seen two top-level executives — chief financial officer John Moffett and chief revenue officer Chas Edwards — leave the company in the past two days, and new CEO Matt Williams has slashed the workforce by more than a third, and says the company needs to “get back into startup mode.” But is such a thing even possible? Or is Digg on its way to the deadpool?

Not that long ago, Digg was a superstar in the world of “Web 2.0,” with its crowdsourced approach to news aggregation, a social-media twist on the Slashdot model. Websites large and small prayed for a link from the Digg homepage, and then trembled as their servers buckled under the load of millions of hits. Getting your site on Digg was a crucial step in getting popular attention even for mainstream publishers, and founder Kevin Rose wound up on the cover of BusinessWeek (with a photo he likely regrets) as the $60-million kid. Now, Digg is in retreat — cutting staff, backpedalling on its new features and watching its traffic decline.

How did it all go so wrong? Like plenty of other social networks — Friendster, Bebo and even MySpace — Digg just seemed to lose its mojo along the way somehow. Did it get complacent? Did Rose and former CEO Jay Adelson take their eye off the ball? Perhaps. Both got involved with Revision3, the Digg video spinoff, and Rose started to do some angel investing and other activities outside the company (one Hacker News commenter says the Digg founder “probably made more on the ngmoco sale than he has on Digg”). But mostly, the service has just been passed by — lapped by other competitors who have moved with the times and provided more features that users seem to want.

For many, Twitter has probably taken the place of Digg as a way of sharing interesting links. Others who wanted a community of users based on link-sharing have moved to Reddit, which some say has a more welcoming attitude. Reddit seemed to gain some significant momentum in the wake of Digg’s poorly-received redesign and new features, in part because some Digg users hijacked the site’s front page and plastered it with Reddit links. As I pointed out in a GigaOM Pro report on Digg’s relaunch (subscription required), trying to appeal to new readers — as most of the site’s changes seemed designed to do — doesn’t accomplish much if you push away your core user base in the process.

Digg may well have suffered from a certain hubris as well — the assumption that it had a comfortable lead over other services — and over-expanded. A number of observers have pointed out that despite getting as many or more unique monthly visitors as Digg does, competitor Reddit has about 10 employees, compared with Digg’s 67 before the recent cuts and 42 after the layoffs (Reddit is also part of the Conde Nast empire, however, so it’s perhaps not a fair comparison). Should Digg have accepted one of the takeover offers it reportedly got from companies like Yahoo over the years? Perhaps — although former CEO Jay Adelson says he doesn’t regret not selling the company, and suggests that there weren’t as many firm offers as outsiders seem to think there were.

As Adelson notes, Digg is not on death’s door just yet: the site still has 20 million unique visitors a month, he says, which is a fairly large number, and revenues that are reportedly in the $10-million range. New CEO Matt Williams says the company is embarking on a new strategy of “engaging with users,” and is focusing on revenue-generating ventures such as Digg Ads. But those features depend on growing traffic and users, and for now at least Digg seems to have lost a lot of its momentum. Cutting costs may bring the company’s losses down, but you can’t cut your way to popularity.

Augmented Reality or Invasion of Privacy?

Like it or not, the web is getting more and more interconnected to the “real” world via what some like to call “augmented reality” apps, whether it’s the way Google Goggles now recognizes physical objects when you point your mobile device at them, or Yelp’s ability to show you reviews of nearby restaurants hovering in the air as you hold up your phone. This is all wonderful and Star Trek-like, but what are the privacy implications of this kind of technology? Take just one recent example of the trend: an iPhone and Android app called “Sex Offender Tracker,” which shows you the location of any registered sex offenders in your area.

It sounds like a joke, or a Saturday Night Live skit — an impression that isn’t helped by the fact that BeenVerified.com, maker of the Sex Offender Tracker apps, is using Antoine Dodson as its pitchman in a YouTube video commercial for the product. Dodson is an Alabama resident who became famous earlier this year after an interview he gave to a local TV station about a sexual assault on his sister was turned into a viral YouTube hit that eventually made it to iTunes (and earned Dodson enough to buy a house). In the video, embedded below, Dodson holds up his phone and the app shows a series of red exclamation marks superimposed on the surroundings, each one denoting a registered sex offender.

Obviously, sexual offences are not a joke. And the ability to use your phone to see important information about your neighborhood or the place you happen to be is potentially hugely valuable. But do we really want apps that can pull up a person’s criminal history and other details about their lives and show it to us in real-time as we watch them walk down the street? At the moment, it’s only a sex-offender tracking app — but what’s to stop other companies or apps from pulling up anyone’s credit history, tax records or a list of criminal offences and superimposing them on your face as you shop for groceries?

As Om noted recently, services like Rapleaf and other private entities have all kinds of data about you, compiled from various public databases and from the history of your movements around the web and various social networks. The potential for real-time invasions of privacy seems to be escalating, and it is no longer confined to celebrities, who are already subject to services such as JustSpotted.com, which shows you real-time encounters with stars “in the wild.” When Gawker.com came out with its Gawker Stalker tool — a similar celebrity tracker — in 2006, there was outrage at the invasion of privacy it represented. Now such things are ho-hum, and they are becoming a fact of life for everyone, not just “stars.”

It may be true that on the Internet you have no privacy, as Sun Microsystems CEO Scott McNealy said in 1999 — and as this humorous Venn diagram of the intersection between the Internet and privacy demonstrates — but not everyone is comfortable with that bargain. The trials and tribulations of Facebook and its attempts to balance privacy and social sharing are evidence of that, as the company continues to face lawsuits and government inquiries related to its behavior. Meanwhile, Google CEO Eric Schmidt told an interviewer that if people don’t like the fact that the company is recording pictures of their homes via its Street View cars, “you can just move.” We’re assuming that’s a bad joke — or maybe not.

OpenFile Wants to Re-Energize Community Journalism

When OpenFile founder and CEO Wilf Dinnick was still working as a foreign correspondent for CNN in the Middle East, he was summoned to the network’s London office, where senior executives showed off iReport, their citizen journalism project. “They said that if the twin towers fell today, people wouldn’t be watching it on CNN, they would go to YouTube,” he recalls. The light bulb went on, and Dinnick says he started to think about the power of user-generated content and what some call “networked journalism.” The result of that brainstorm was the creation of OpenFile, which launched last month in Toronto and plans to expand to several other cities over the coming months.

OpenFile is not doing “citizen journalism,” says Dinnick. Instead, it uses trained journalists — many of whom have come from one of the mainstream media outlets in Canada, which have been shedding staff — as the core of its hyper-local news operation. So in Toronto, for example, former newspaper editor Kathy Vey acts as something like a managing editor, dealing with contributors and making sure that the stories they are working on are appropriately handled and reported. The company’s name came from the idea that any user of the site can suggest a story or post a news tip, which then “opens a file” on that topic that both readers and the journalist assigned to the story can contribute to.

The idea, Dinnick says, is to make reporting on local issues — whether it’s an abandoned building that residents feel is an eyesore, or a zoning change for a specific site — an ongoing process that the community can become a part of, rather than a one-off story that a reporter sitting in a newsroom miles away from the community files and then forgets about. Although the journalists working for OpenFile are not really bloggers, the startup’s approach seems very blog-like, with readers contributing comments and suggestions, and even uploading images and videos, which the reporter can then work into the ongoing story about that topic or issue.

When it comes to funding, Dinnick says that OpenFile approached a number of the major media entities in Canada as well as some traditional venture capital sources, and wound up getting a substantial amount of seed funding from a large financial player in Toronto that doesn’t wish to be identified — enough to fund the company’s capital requirements for at least three to four years, the former reporter for CNN and ABC says. OpenFile has also signed up a number of national advertisers for the site and is building a local sales force, and has been having discussions with some of the large media companies in Toronto about partnerships and syndication opportunities as well.

There are a number of startups and digital ventures that have been trying to make hyper-local journalism work at some kind of scale in the U.S., including sites such as ** and **, as well as aggregators like Outside.in and Topix.net — and of course the 800-pound gorilla that is Patch.com, the local journalism venture that AOL was planning to spend upwards of $50 million on this year alone. OpenFile is similar to Patch in at least one sense, in that both it and Patch are looking to cover communities by hiring a journalist who can effectively become an editorial co-ordinator for that local effort by finding freelancers, bloggers, etc.

If Voting is a Social Game, Will It Make Democracy Better?

Updated: Can social games, such as those built around location-based services like Foursquare or Facebook’s Places, help encourage people to get out and vote during elections, and also improve the way democracy functions? At least one congressional candidate is hoping they can: Clayton Trotter, who is running for Congress in Texas, has used the recently launched Facebook Places feature to create a “social election game” in which citizens get points and badges for checking in at locations such as the polling booth or a voter-registration site — and also for tagging their friends at those locations, or convincing them to check in.

The game was actually created by Fred Trotter, a programmer who is the son of the congressional candidate. In a blog post on his personal website, the younger Trotter says the game was the result of “frantically coding for the last few weeks,” and that it differs from some other attempts at applying gaming principles to elections because it isn’t aimed at getting users to check in at rallies or other events, but is targeted specifically at getting them to vote. Players get a badge for checking in at the polling station (although Trotter notes that you don’t need to be of voting age or even registered in order to play) and can also get credits for volunteering, and of course they can post to their Facebook wall to show that they have voted.

But Fred Trotter says that he doesn’t just see the game as a fun way of trying to convince voters to choose his father for Congress — he believes that such social games are “the future of politics.” The game developer says that U.S. politics has become polarized as a result of big-spending interest groups and “fanatical single-issue voters” (interestingly enough, Trotter’s father is a hyper-conservative Tea Party candidate). The younger Trotter says that he sees Facebook and the kind of engagement it allows between potential voters as an antidote to some of that polarization:

In this hopeful/hypothetical world, real-world trust relationships, enabled by virtual social networks, will become the new political currency. I want people like my father and his opponent to care much more about someone who has 1000 followers on facebook or twitter, and has shown that 730 of those followers take their endorsement seriously, than the person who can pay for a political ad for them for $100k.

Update: In an email, Fred Trotter said that his father initially asked him for his help in running his campaign website, but “what I do is code for social change (which I define as Hacktivism) and I realized that by writing him an application I could engage him with voters that he would never have been able to engage with before. That counts as social change in my book.” Trotter said that “the whole point is to change to a political engagement system that does not center on money or fanaticism, but rather relationships and trust,” and that he is hoping to open-source the code behind his election game so that others can make use of it as well.

There are those who dislike the whole trend of what some are calling “gamification,” which has seen game mechanics such as points, badges and “levelling up” applied to all kinds of non-game situations. But the Trotter campaign’s use of game rewards as motivation seems like a natural extension of the social-networking methods that were used so effectively during the Obama campaign — to get voters interested in the candidate, to help them connect with other like-minded voters, and to motivate them to actually get out and vote. Whether or not it helps Mr. Trotter get elected — and whether social games can reduce the polarization of the U.S. political scene, as his son hopes — it is certainly an interesting experiment.

Murdoch Admits He Can’t Compete With Google

News Corp. founder Rupert Murdoch has likely never acknowledged (at least not publicly) that he has failed at something, particularly when it involves a market worth billions of dollars, but he appears to have conceded defeat in his attempt to build a competitor to Google News. According to several news reports today, an ambitious project to create an aggregation service — code-named Project Alesia — has been axed, just weeks before it was supposed to launch.

The project, which was named (in typical Murdoch style) after a famous military campaign by Julius Caesar, had reportedly already sucked up about $30 million in funding and had a staff of more than 100. The venture was being led by Ian Clark, former managing director of thelondonpaper, and a News Corp. digital specialist named Johnny Kaldor, and according to one report had already booked $1.5 million worth of advertising to promote the launch — which was expected within a matter of weeks.

The project was designed to aggregate content from all of News Corp.’s various properties — including The Wall Street Journal, The Times, The Sunday Times, The Sun and News of the World — and distribute it via the web, mobile devices and the iPad. But it was also intended to include content from other publishers and broadcasters, and those partnerships appear to have been part of the problem. According to sources who spoke to The Hollywood Reporter, some of the partners were not ready technologically or administratively, while others apparently preferred to work on their own mobile and iPad strategies rather than bending to Murdoch’s will (not wanting to partner with Rupert Murdoch? Imagine that!).

A report by the industry news site MediaWeek, meanwhile, said there were also concerns about the runaway costs of the venture — which isn’t surprising, given that $30 million in a year is a rather impressive sum for a site that hadn’t even launched yet. Most of the 80 or so freelancers working on the launch are expected to be let go, and News Corp. will try to find room for the rest of the staff at some of Murdoch’s other properties. And now MySpace has another heavily subsidized News Corp. failure to keep it company, and the digital savvy of the company’s octogenarian founder (not to mention his senior executives) takes another hit.

Hey Washington Post: It’s Called Social Media

It’s been a while since we had a blow-up among traditional media entities about using Twitter and other social media, but now the Washington Post has provided yet another compelling example of how newspapers in particular aren’t really getting the whole “social” aspect of social media. This was easy to forgive a year or two ago, when Twitter was relatively new and social media was unfamiliar territory, but it’s really hard to cut the Washington Post or any other entity much slack at this point. Now it almost seems like they don’t *want* to get it.

The issue, according to a report from Washington-based news startup TBD, was a response posted by an editor using the newspaper’s Twitter account, defending the Post’s decision to run a particular column. The Post hasn’t provided any details, but TBD says the complaint came from the Gay and Lesbian Alliance Against Defamation (GLAAD), which was upset about a column written by an anti-gay activist that ran in the paper’s faith section. According to TBD, an editor posted to Twitter that the newspaper was trying to cover both sides of the issue and defended the running of the column.

This led to a memo from Post managing editor Raju Narisetti, entitled “responding to readers via social media.” In it, Narisetti said that posting the tweet was wrong, and that while the newspaper encourages everyone in the newsroom to “embrace social media and relevant tools,” the main purpose of the Post’s accounts on these various networks is to “use them as a platform to promote news, bring in user generated content and increase audience engagement with Post content.” Isn’t responding to readers a way of increasing engagement? Apparently not. The Post editor went on to say that:

No branded Post accounts should be used to answer critics and speak on behalf of the Post, just as you should follow our normal journalistic guidelines in not using your personal social media accounts to speak on behalf of the Post.

Narisetti said that while the newspaper welcomed responses from readers in the form of comments on its stories — and was prepared to “sometimes engage them in a private verbal conversation” — debating issues with readers personally through social media was not allowed. Why? Because this would be “equivalent to allowing a reader to write a letter to the editor and then publishing a rebuttal by the reporter.” The Washington Post ME didn’t provide any details on why this would be a bad thing, just that “it’s something we don’t do.” But why not? Surely criticism over the newspaper’s coverage of issues is a perfect occasion to engage with those readers, both on Twitter and elsewhere.

The fact that Narisetti was the one delivering this message is more than a little ironic, since the Post editor was involved in his own run-in with Post management over Twitter: last year, the ME came under fire from senior editors after he posted some of his thoughts about political topics on his personal Twitter account. After the paper instituted a restrictive new social-media policy, Narisetti posted a message saying that “for flagbearers of free speech, some newsroom execs have the weirdest double standards when it comes to censoring personal views,” and then subsequently deleted his account.

There’s no question that Twitter has been the source of much tension in newsrooms across the country and around the world, because of the way in which it makes journalism personal — something that many journalists see as a positive thing, but many traditional media entities see as a threat. Earlier this year, a senior editor at CNN was fired over remarks she made on Twitter, and just today the BBC reprimanded its staff for sharing what the service called “their somewhat controversial opinions on matters of public policy” via social-networking sites like Twitter. It seems that many media outlets are happy to use social media to promote their content, but when it comes to really engaging with readers they would much rather not, thank you very much.

What We Can Learn From the Guardian’s New Open Platform

British national paper The Guardian isn’t the kind of tech-savvy enterprise one would normally look to for guidance on digital issues or Internet-related topics. For one thing, it’s not a startup — it’s a 190-year-old newspaper. And it’s not based in Palo Alto or SoMa, but in London, England. The newspaper company, however, is doing something fairly revolutionary. In a nutshell, The Guardian has completely rethought the fundamental nature of its business — something it has effectively been forced to do, as many media entities have, by the nature of the Internet — and, as a result, has altered the way it thinks about value creation and where that value comes from.

Enter The Guardian’s “Open Platform,” which launched last week and involves an open API (application programming interface) that developers can use to integrate Guardian content into services and applications. The newspaper company has been running a beta version of the platform for a little over a year now, but took the experimental label off the project on Thursday and announced that it is “open for business.” By that The Guardian means it is looking for partners who want to use its content in return for either licensing fees or a revenue-sharing agreement of some kind related to advertising.

To take just one example, The Guardian writes a lot of stories about soccer, but it can’t really target advertising to specific readers very well, since it is a mass-market newspaper. In other words, says Guardian developer Chris Thorpe, the newspaper fails to appeal to an Arsenal fan like himself because it can’t identify and target him effectively, and therefore runs standard low-cost banner ads. By providing the same content to a website designed for Arsenal fans, however, those stories can be surrounded by much more effectively targeted ads, and thus be monetized at a much higher rate — a rate the newspaper then gets to share in.
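To give a rough sense of how that kind of partnership works mechanically, here is a minimal sketch of a third-party site pulling Guardian soccer stories through the Open Platform’s Content API. The endpoint, parameter names and response fields follow the Guardian’s published documentation as I understand it and may change; the API key is a placeholder you would get by registering with the Open Platform.

```python
# Minimal sketch: fetch Guardian stories about Arsenal via the Content API.
import json
import urllib.parse
import urllib.request

API_KEY = "your-open-platform-key"  # placeholder; register to get a real one
BASE_URL = "https://content.guardianapis.com/search"

params = urllib.parse.urlencode({
    "q": "arsenal",      # the topic the partner site cares about
    "format": "json",
    "api-key": API_KEY,
})

with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
    data = json.load(resp)

# Print each story's headline and URL so the partner site can decide how to
# display the content (and sell more targeted ads around it).
for item in data.get("response", {}).get("results", []):
    print(item.get("webTitle"), "-", item.get("webUrl"))
```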

Open APIs and open platforms aren’t all that new. Google is probably the largest and most well-known user of the open API as a tool to extend the reach of its search business and other services, such as its mapping and photo services. Most social networks, such as Facebook and YouTube, also offer APIs for the same reason, though not all of them are as open as Google’s.

The Guardian, however, is the first newspaper to offer a fully open API (the New York Times has an API, but it doesn’t provide the full text of stories, and it can’t be used in commercial applications). It’s worth looking at why the paper chose to go this route, and what that might suggest for other companies contemplating a similar move — and not just content-related companies, but anyone with a product or service that can be delivered digitally.

For a content company like a newspaper, producing and distributing its content is the core of the business. Whether it’s in paper form or online, advertising usually pays the freight for the content, although subscription charges help, for both print papers and online versions like the Wall Street Journal, the Economist, etc. Many newspapers have regretted their decision to provide content online for free, since online advertising isn’t nearly as lucrative as print advertising (primarily because there are far more web pages to advertise on than there are newspaper pages, and therefore the supply outweighs the demand).

So why would a newspaper like The Guardian choose to provide access to its content via an open API, and not just some of its content, but everything? And why would it allow companies and developers to use that content in commercial applications? For one simple reason: There is more potential value to be generated by providing that content to someone else than the newspaper itself can produce by controlling the content within its own web site or service. You may be the smartest company on the planet, but you are almost never going to be able to maximize all the potential applications of your content or service, no matter how much money you throw at it.

As Thorpe described in a recent interview, the newspaper sees the benefits of an open platform as far outweighing the disadvantages of giving away content. By allowing developers to use the company’s content in virtually any way they see fit — and not just some of it, but the entire text of articles and databases the newspaper has put together — it can build partnerships with companies and monetize that content far more easily than it could ever do on its own.

This is effectively the opposite approach to the one that newspapers such as the Journal take, which is to put up paywalls and charge users for every page they view, or charge them after a certain number of views (as the Financial Times does and as the New York Times is planning to do). It’s also the opposite approach to the one that companies like Apple take to their business — although Apple doesn’t produce content, it exclusively licenses and tightly controls the content it does handle (such as the music in iTunes), and it applies the same type of controls to its software and hardware.

Partnerships of the kind The Guardian is working on make a lot more sense for companies that have lost the ability to control what happens to their content — a loss the Internet has imposed on virtually anyone whose product can be digitized and turned into bits, and one that has been particularly acute for content companies. By allowing others to make use of that content for their own purposes, and by sharing in the revenue that comes from it, The Guardian takes what would otherwise be a disadvantage — the fact that it has lost control — and turns it into an advantage by becoming a platform. It’s a lesson other companies could stand to learn as well, instead of continually trying to reassert or recreate the control they have lost.

Twitter Annotations and the Future of the Semantic Web

Among the announcements at Twitter’s first “Chirp” conference for developers this past April was the launch of a new feature called Annotations. Unlike, say, “promoted tweets” or Twitter Places, Annotations aren’t so much a product launch as a substantial rethinking of the way the service functions on a fundamental level. The changes and extra dimensions it adds to Twitter could have a tremendous impact, not just on the social network and the developers and companies who make use of it, but on the way we interact with the web itself.

The new feature will be one of the first large-scale implementations of something called the “semantic web,” a term coined by World Wide Web Consortium director Sir Tim Berners-Lee. It refers to web technology with a built-in understanding of the relationships between its different elements — that is, everything from web pages to specific pieces of websites and services. Equipped with these kinds of tools, developers and companies can create applications and services that allow different pieces of the web to function together and exchange information, and therefore make services — from stocks to shopping to news — easier to use and more efficient.

An example of the semantic web used by Berners-Lee is the simple act of getting a cup of coffee with a friend. Instead of having to manage multiple different services or applications — calling or emailing the friend, checking a calendar, looking for a coffee shop nearby, checking a bus schedule — building semantic knowledge in would allow all of these different applications to talk to each other. You could simply choose a task in a specific piece of software, such as a calendar, and see dates and times that would work, as well as locations and bus routes automatically laid out for you.

While Annotations won’t make this high a level of integration possible (at least not right away), the underlying principle is the same: Additional information, attached to an action, adds meaning to the behavior of users and can be interpreted in some way by software. The feature is expected to launch sometime later this year, and will allow developers to add that additional information to a tweet. That might include a keyword, a string of text, a hyperlink, a geographic location or virtually anything else that could be related to a message on the social network. These pieces of “metadata” won’t affect the character count of the original tweet, but will be carried along with it through the network and eventually be decoded, aggregated and filtered by a variety of applications or services (or by Twitter itself).
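Since the feature hasn’t launched, there is no final wire format, but a hypothetical annotated tweet might look something like the sketch below. The namespace and field names are purely illustrative (this is not Twitter’s actual API), following the rough namespaced key/value shape the company has described:

```python
# Illustrative only: a tweet carrying namespaced key/value metadata.
import json

tweet = {
    "text": "Great pad thai tonight, highly recommended",
    "annotations": [
        {"review": {           # hypothetical namespace
            "type": "restaurant",
            "name": "Thai Basil",
            "rating": "4/5",
        }},
        {"geo": {               # hypothetical namespace
            "lat": "43.653",
            "lon": "-79.383",
        }},
    ],
}

# The metadata rides along with the message without counting against the
# 140-character limit; a downstream service can filter or aggregate on it.
restaurant_reviews = [a["review"] for a in tweet["annotations"] if "review" in a]
print(json.dumps(restaurant_reviews, indent=2))
```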

Twitter’s new feature isn’t the only large-scale experiment implementing the semantic web: Facebook is also rolling out its own version of metadata with its “open graph platform.” This involves an API as well as social plugins developers can add to web pages and services to allow users to “like” the pages they visit by clicking a button. Developers can then use the company’s open graph protocol to add metadata to this behavior, then track and filter that data in a variety of ways. For example, the site could detect that a user’s “like” occurred on a web page devoted to a movie, song or restaurant and track the most popular movies, etc. more easily.
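On the page side, the open graph protocol amounts to a handful of “og:” meta tags describing what a page is about; anything recording a “like” can then read those tags to know the page is a movie, a song or a restaurant. Here is a small sketch of reading them, with sample HTML and values that are purely illustrative:

```python
# Extract "og:" properties from a page's <meta> tags.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<html><head>
  <meta property="og:title" content="Inception" />
  <meta property="og:type" content="movie" />
  <meta property="og:url" content="http://example.com/movies/inception" />
</head><body>...</body></html>
"""

class OpenGraphParser(HTMLParser):
    """Collect og:* properties from <meta> tags."""

    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.properties[prop] = attrs.get("content", "")

parser = OpenGraphParser()
parser.feed(SAMPLE_PAGE)
print(parser.properties)
# A "like" recorded on this page could then be tallied under og:type == "movie".
```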

Although Twitter and Facebook have both provided some guidelines for what kinds of activity and metadata they see developers and web sites integrating into their services, both social networks have also said that they will allow companies a substantial amount of leeway in coming up with their own ideas about what data to track or include.

The potential implications of this kind of semantic intelligence in social networks are substantial, because it could change the way we interact with and use the web. A few examples include:

Reviews: Sites that involve restaurant, music or movie reviews could include metadata related to what a user is browsing when they post a comment to Twitter from a page, allowing other services to aggregate and filter that information to track popularity or make recommendations.

Stocks: Attaching a simple stock quote symbol to any tweet about a stock or a publicly traded company would allow services and users to track and aggregate information about those stocks, in the same way StockTwits does now.

Coupons: Companies could easily attach special offers to tweets that would be restricted to specific locations or specific times, allowing them to target users directly based on time or location.

Shopping: Metadata would allow sites to provide transaction info (if a user opted in) that would be attached to a tweet posted from a shopping site. This would also make it easy for services to rank and filter purchases, the same way Amazon does with its “people who bought X also bought Y” feature.

Music: Both users and services could track music-related tweets based on metadata involving the artist, genre, track, etc. Companies that want to target users based on a specific preference could then filter and analyze that data.

Games: Using metadata related to a specific social game, developers and companies could allow users to trade messages and play a form of reality game within a social network.

News: Any message that involved a current news story or location could have that information encoded in metadata, allowing users and services to track a developing story or event, as well as the conversation about it.

The impact of both Annotations and Facebook’s open graph protocol could turn out to be larger than either of those services individually: If services and applications that make use of one or both of these new technologies become popular with consumers, and the tools themselves become popular with developers, the semantic web envisioned by Sir Tim Berners-Lee and others could come closer to reality. That could change the way we interact with the web by making the software and services we use smarter and removing some of the friction between us and our social networks — and that will create new business opportunities not just for Twitter and Facebook, but for other smart technology companies as well.