Why Facebook Is Not the Cure For Bad Comments

There’s been a lot of discussion recently about Facebook-powered comments, which have been implemented at a number of major blogs and publishers (including here at GigaOM) over the past couple of weeks. Supporters argue that using Facebook comments cuts down on “trolling” and other forms of bad behavior, because it forces people to use their real names instead of hiding behind a pseudonym, while critics say it gives the social network too much power. But the reality is that when it comes to improving blog comments, anonymity really isn’t the issue — the biggest single factor that determines whether they are any good is whether the authors of a blog take part in them.

According to TechCrunch’s MG Siegler, the addition of Facebook comments seems to have improved the quality of the comments that the blog receives, but has reduced the overall number of them, which he says may or may not be a good thing, since some people may be declining to comment via Facebook because of concerns about their privacy. A bigger issue, says entrepreneur Steve Cheney, is that using Facebook as an identity system for things like blog comments forces users to homogenize their identity to some extent, and thus removes some of the authenticity of online communication.

Although Cheney’s argument caused Robert Scoble to go ballistic in defense of real names online, Harry McCracken at Technologizer had similar concerns about the impact that Facebook comments might have, saying it could result in comments that are “more hospitable, but also less interesting.” And social-business consultant Stowe Boyd is also worried that implementing Facebook’s comments is a continuation of the “strip-malling of the web,” and that

Facebook personalizes in the most trivial of ways, like the Starbucks baristas writing your name on the cup, but they totally miss the deeper strata of our sociality. But they don’t care: they are selling us, not helping us.

There’s no question that for some people, having to put their real name on everything they do online simply isn’t going to work, because they feel uncomfortable blending their personal lives with their professional lives, or vice versa. Those people will likely never use Facebook comments, and that is a real argument against hitching your wagon entirely to Facebook.

But the biggest reason not to rest all of your hopes on Facebook comments is that Facebook logins are not a cure for bad comments, real names or no real names. The only cure is something that takes a lot more effort than implementing a plugin, and that is being active in those comments — in other words, actually becoming part of an ongoing conversation with your readers, even if what they say happens to be negative in some cases. This is a point that Matt Thompson of National Public Radio made in a blog post, in which he talked about the ways to improve the quality of comments:

Whether online or offline, people act out the most when they don’t see anyone in charge. Next time you see dreck being slung in the bowels of a news story comment thread, see if you can detect whether anyone from the news organization is jumping in and setting the tone.

As Thompson notes, the standard defense for not doing this is a lack of time, and responding to reader comments definitely takes time. But it’s something that we feel strongly about here at GigaOM, and it’s something that we are determined to do, to the best of our ability — regardless of whether it is through our regular comment feature, or through the Facebook plugin. In the end, it’s not the tool that matters, it’s the connection that it allows.

Hyper-Local News: It’s About the Community or It Fails

According to multiple news reports this morning, AOL has agreed to acquire hyper-local news aggregator Outside.in for a sum that is reported to be less than $10 million, substantially below the $14.4 million that the company has raised from venture funds and other sources. After four years of trying, the service has more or less failed to become much more than a local aggregator, pulling in automated feeds of news, blogs and keyword searches based on location.

There is a business in doing this, but not a very big one — and that’s because simply aggregating data isn’t going to produce enough traffic or engagement to get advertisers interested. As Marshall Kirkpatrick notes, the field is littered with hyper-local experiments that have not really succeeded. Why? I think it’s because many of these, including Outside.in, focus too much on the how of hyper-local — the automated feeds and the aggregation of news sources, which sites like Everyblock (which was bought by MSNBC in 2009) and Topix do with algorithms based on location — rather than the why. And the why is simple: to serve a community. Unless a site or service can do that, it will almost certainly fail.

So how do you do that? The most successful community news operations — like a startup called Sacramento Press, which continues to grow rapidly despite the presence of a traditional newspaper competitor in the McClatchy paper The Sacramento Bee, or a Danish newspaper project called JydskeVestkysten, which has thousands of community-based correspondents who submit content for a series of hyper-local sites — come from the communities that they serve. They aren’t data aggregators that are imposed on those towns and regions by some external source, but come from within them.

The easiest way to see whether a hyper-local site is working or not is to look at the comments. Are there heated discussions going on in the comments on stories? If not, then the site is likely to be a ghost town. History is filled with local news experiments like Backfence — which was founded by former Washington Post staffer Mark Potts and shut down in 2007 — and Dan Gillmor’s Bayosphere, which never really managed to connect with the communities they were supposed to be serving, despite all the best intentions. By contrast, among those trying to take a community-first approach is OpenFile, a kind of pro-am local journalism startup based in Toronto.

In the comments at Read/Write Web, the founder of Everyblock, programmer and entrepreneur Adrian Holovaty, said that his service is trying to add more community to its sites by focusing on comments and discussion around the issues — and that’s a good thing, because without it, there is nothing but a collection of automated data, and no one is going to form a strong relationship with that.

Topix, which says it is one of the largest local news services on the web, started out doing the same kind of news aggregation as Outside.in and Everyblock, co-founder Chris Tolles said recently in an interview with me, and then almost by accident became a community hub for lots of small towns and regions that didn’t have anywhere else to talk about the issues. Topix has focused on expanding those kinds of discussions, by targeting local hubs with features such as election-based polls during the recent mid-term elections, in order to spark more debate and engagement.

This is the central challenge for AOL and its Patch.com effort, which has already spent over $50 million launching hyper-local news operations in almost a thousand cities across the United States. The sites are designed to be one-man or one-woman units, with a local journalist (in many cases, one who came from a traditional media outlet) as the core of the operation, writing local news but also pulling in other local content from blogs, government sources and elsewhere. And most of the sites highlight comments from readers prominently, which is smart.

But can this massive, manufacturing-style effort from a web behemoth manage to connect with enough towns on a grassroots level and really become a core part of those communities? Because without that, AOL is pouring money into a bottomless pit.

Newspapers Need to Be Of the Web, Not Just On the Web

The secret to online success for newspapers doesn’t depend on the choice of technology, or decisions about content, or even specific kinds of knowledge about the web, says Emily Bell — the director of the Tow Center for Digital Journalism at Columbia University, and the former head of digital for The Guardian. All it requires, she says, is a firm commitment to be “of the web, not just on the web.” Speaking at a journalism event in Toronto last night, Bell said the biggest single factor in the success that The Guardian had online was the determination to be part of the web, and to embrace even the controversial aspects of the online content game — including user-generated content and the use of tools to track readers and traffic. “It’s useful to have the digital skills,” she said, “but more important to have a digital mindset.”

One of the most controversial things The Guardian did early on, according to Bell, was to launch the Huffington Post-style Comment Is Free platform in 2006, which allowed anyone to submit opinion or commentary pieces and have their blog posts run alongside the traditional columnists employed by the paper.

It was this last part of the project that really caused a furor within The Guardian, said Bell, because the traditional columnists didn’t want their pearls of wisdom to appear alongside the rantings of non-journalists, and they expressed their displeasure in no uncertain terms to Guardian editor-in-chief Alan Rusbridger. To his credit, Bell said, the editor stood firm.

Bell also noted that one of the big factors in the rise of The Huffington Post was the New York Times’ (s nyt) decision to put all of its columnists behind a pay wall, which it did in 2005. The wall was dismantled in 2007, but while it was in effect it locked the NYT’s opinion leaders away from the web, and effectively removed them from the discussion stream — which created a perfect opportunity for Arianna Huffington, and helped her build a business that AOL just acquired for $315 million (s aol). It remains to be seen what kind of impact the NYT’s new “metered” pay wall will have once it launches, which is expected to happen soon.

Bell said one of the mistakes most newspapers made was to not pay close enough attention to the technology side of the online content business, and to ignore the obvious impact of social networks such as Twitter and Facebook. Bell said she met with Google (s goog) executives in 2004, and they warned that the traditional media industry was out of touch with what readers and advertisers wanted. But newspaper executives thought “that was just about search, and that wasn’t our business — but the more I thought about it, the more I thought it was our business.” The same thing happened with the rise of social media, she says: “People thought, oh that’s not our business — but it was.”

The former Guardian executive said that using tools to track what readers click on doesn’t mean that “we will all just write about Britney Spears without her clothes on,” but simply means that journalists can keep an eye on what people are interested in reading about. The idea that paying attention to such metrics is somehow undercutting journalism is “just plain wrong,” she said. Bell also noted that newspapers have seen the digital side of their business as the risky part, when the reality is that the legacy print operations are actually more risky. “Even if you don’t know what is going to happen in your legacy business, you know what is happening now — you are losing money,” she said.

When asked during the Q&A session about how newspapers should blend their traditional newsrooms with their new digital operations, Bell said that “the jury is still out” on whether merging newsrooms is a good idea. But she said one thing was clear: that having traditional print editors telling digital staff what to do was “a recipe for disaster.” A number of newspapers that have merged their newsrooms — including the Washington Post (s wpo), which used to have its print and online operations in two completely separate buildings, with separate management — have suffered after the merger because, as journalism professor Jay Rosen and others have pointed out, the “print guys won.”

Bell’s views on who should be driving the innovation at newspapers echo those of publisher John Paton, CEO of the Journal Register Co., which owns a chain of regional daily and weekly papers in New Jersey and Connecticut. In a digital manifesto he wrote for the company last year, Paton said that newspapers need to “be digital first,” and that the best way to do that is to “put the digital guys in charge of everything.”

Book Publishers Need to Wake Up And Smell the Disruption

The writing has been on the wall for some time in the book publishing business: platforms like Amazon’s Kindle (s amzn) and the iPad (s aapl) have caused an explosion of e-book publishing that is continuing to disrupt the industry on a whole series of levels, as Om has written about in the past. And evidence continues to accumulate that e-books are not just something established authors with an existing brand can make use of, but are also becoming a real alternative to traditional book contracts for emerging authors as well — and that should serve as a massive wake-up call for publishers.

The latest piece of evidence is the story of independent author Amanda Hocking, a 26-year-old who lives in Minnesota and writes fantasy-themed fiction for younger readers. Unlike some established authors such as J.A. Konrath, who have done well with traditional publishing deals before moving into self-publishing their own e-books, Hocking has never had a traditional publishing deal — and yet, she has sold almost one million copies of the nine e-books she has written, and her latest book appears to be selling at the rate of 100,000 copies a month.

It’s true that the prices Hocking charges for these books are small — in some cases only 99 cents, depending on the book — but the key part of the deal is that she (and any other author or publisher who works with Amazon or Apple) gets to keep 70 percent of the revenue from those sales. That’s a dramatic contrast to traditional book-publishing deals, in which the publisher keeps the majority of the money and the author typically gets 20 percent or even less. If you sell a million copies of your books and you keep 70 percent of that revenue, that is still a significant chunk of change, even if each book sells for 99 cents.

(Update: As a number of commenters have noted, only books that are priced at $2.99 or higher are eligible for Amazon’s 70-percent royalty rate; books priced cheaper than that are eligible for a 35-percent royalty rate).
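
To make that arithmetic concrete, here is a rough back-of-envelope sketch in Python of what those splits can mean at the kind of volumes Hocking is reportedly seeing. The specific prices and the 20-percent traditional royalty are illustrative assumptions drawn from the figures mentioned above, not a model of any particular contract, and they ignore taxes, delivery fees and other costs:

```python
# Back-of-envelope comparison of author earnings, using the figures above.
# Assumptions (illustrative only): 1,000,000 copies sold; Amazon pays 35%
# on a $0.99 e-book (the 70% tier only applies at $2.99 and up, per the
# update above) and 70% on a $2.99 e-book; a traditional deal is modeled
# at a hypothetical $9.99 price with a 20% author royalty.

def author_earnings(copies, price, royalty_rate):
    """Gross author revenue before taxes, fees and other costs."""
    return copies * price * royalty_rate

copies = 1_000_000

scenarios = {
    "Self-published, $0.99 at 35%": author_earnings(copies, 0.99, 0.35),
    "Self-published, $2.99 at 70%": author_earnings(copies, 2.99, 0.70),
    "Traditional deal, $9.99 at 20%": author_earnings(copies, 9.99, 0.20),
}

for label, earnings in scenarios.items():
    print(f"{label}: ${earnings:,.0f}")

# Roughly $346,500 vs. $2,093,000 vs. $1,998,000 -- even at very low
# prices, the self-publishing split can rival a traditional contract
# once the volumes get large enough.
```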

The overwhelming appeal of that kind of mathematics has other authors moving away from traditional publishing deals as well, including Terrill Lee Lankford, who wrote recently about turning down a deal with a major publisher in the middle of negotiations over a new book, because the publisher wanted him to agree to terms for a future e-book that would have given the publishing house 75 percent of the revenue — and tried to entice him with a hefty advance for the original book. The author said no to both offers, saying:

I see it as a permanent 75% tax on a piece of work that generates income with almost no expense after the initial development and setup charges.

Just as the music industry did, many book publishers seem to be clinging to their traditional business models, despite mounting evidence that the entire structure of the industry is being dismantled, and the playing field is being leveled between authors and publishers. And it’s not just individual authors who are taking advantage of this growing trend — author and marketing consultant Seth Godin has created something called The Domino Project in partnership with Amazon, which he sees as a new kind of publishing middleman that can help authors take advantage of the e-book wave. Publishers with more traditional models should be paying attention, or they will find that their lunch is being eaten.