Alan Rusbridger on the future of news

Alan Rusbridger is the editor-in-chief of The Guardian, easily one of the most prestigious newspapers in the English-speaking world, and is widely admired as a journalist’s journalist. At the same time, he has also been one of the driving forces behind making his newspaper a leader online, which has involved embracing community — including ground-breaking experiments such as Comment Is Free — as well as social-media tools such as Twitter. The video embedded here (click through if you’re reading this via RSS) is a great summary of some of his views about the future of newspapers, the wisdom of the crowd, the blurring of the line between journalists and non-journalists, the need for community and the appeal of Twitter. Highly recommended. Hat tip to Adam Tinworth for the link to the video.

The benefits of a live-blog

Like many newspapers and media outlets, the paper I work for in Toronto — the Globe and Mail — has been experimenting a lot with a great live-blogging and live-discussion tool called Cover It Live. The software comes from a company located in Toronto, but is being used by everyone from Newsweek and Yahoo to Vanity Fair and the Austin American-Statesman. We’ve hosted live discussions and news coverage involving the Obama inauguration, the NHL hockey trade deadline, federal communication hearings and even a shooting in a Toronto subway station.

One of the big benefits of the software is that it keeps almost everything within the app itself, which is embedded in a story page as a widget via JavaScript. You can post photos right in the stream, embed video clips and run instant polls — and integrated into all of that are comments from readers. You can also pull comments from Twitter, either by approving individual users or by capturing tweets that use a specific hashtag or keyword related to the topic. The editor or “producer” can see and moderate all of the comments, and the live blog can be archived and replayed.
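To make the hashtag and keyword idea concrete, here is a minimal Python sketch of the kind of screening a producer-moderated live blog might do. It is purely illustrative: the handles, hashtags and tweet fields are invented for the example, and this is not Cover It Live’s actual code or the Twitter API.

```python
# Illustrative sketch only: roughly how a live blog might screen incoming
# tweets before a producer approves them. The field names, handles and
# hashtags below are assumptions for the example, not real API output.

APPROVED_USERS = {"globesports", "globepolitics"}   # hypothetical approved accounts
TOPIC_TERMS = {"#tradedeadline", "inauguration"}    # hashtags/keywords for the event

def moderation_queue(tweets):
    """Keep tweets from approved users, or tweets mentioning a topic term."""
    selected = []
    for tweet in tweets:
        text = tweet["text"].lower()
        from_approved = tweet["user"].lower() in APPROVED_USERS
        on_topic = any(term in text for term in TOPIC_TERMS)
        if from_approved or on_topic:
            selected.append(tweet)
    return selected

sample = [
    {"user": "hockeyfan99", "text": "Any big moves today? #TradeDeadline"},
    {"user": "randomuser", "text": "Completely unrelated chatter"},
]
print(moderation_queue(sample))   # only the first tweet makes the queue
```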

For large public events such as the Obama inauguration (or the Oscars), there is a powerful desire to interact with other people who are watching the same thing, and Cover It Live makes that easy and appealing. News updates are interspersed with user comments in a natural way, and reporters and editors can respond easily. For events such as the NHL trade deadline, several readers asked specific questions of the reporters and columnists who took part, and got answers within minutes — something that simply doesn’t happen with traditional newspaper stories, even online.

(please read the rest of this post at the Nieman Journalism Lab)

It’s all about dematerialization

If climate-change experts are correct in thinking that we could be close to a “tipping point” that might accelerate the already substantial climate change we’ve seen to date, the pressure to find solutions is sure to intensify. One of the potential mitigating factors is something that economists and scientists like to call “dematerialization.” Simply put, this is the process by which products and services with greater environmental impact are gradually replaced by those with less impact. As Wikipedia defines it, the term refers to the absolute or relative reduction in the quantity of materials required to serve economic functions in society.

Theoretically, the IT industry should be a great contributor to this process, for a variety of reasons. For example, computers have become dramatically smaller, cheaper and more efficient in terms of computing power per watt of energy. As this piece from Hewlett-Packard notes, the fastest computer in 1946 performed about 5,000 operations a second, weighed more than 30 tons, and consumed almost 200 kilowatts of electricity. In contrast, an off-the-shelf laptop purchased today has thousands of times the processing power, weighs just a few pounds, and consumes less than one-thousandth the electricity.
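As a rough back-of-the-envelope illustration of that gap, the figures cited above can be turned into an operations-per-watt comparison. The modern laptop values in this sketch are assumptions derived from the “thousands of times” and “less than one-thousandth” comparisons, not measured numbers.

```python
# Rough efficiency comparison using the 1946 figures cited above.
# The laptop numbers are assumed multipliers, not measured values.

ops_1946 = 5_000              # operations per second, fastest computer in 1946
watts_1946 = 200_000          # "almost 200 kilowatts"

ops_today = ops_1946 * 10_000       # assume "thousands of times the processing power"
watts_today = watts_1946 / 1_000    # "less than one-thousandth the electricity"

eff_1946 = ops_1946 / watts_1946    # operations per watt
eff_today = ops_today / watts_today

print(f"1946: {eff_1946:.3f} ops/watt")
print(f"Today (assumed): {eff_today:,.0f} ops/watt")
print(f"Improvement: about {eff_today / eff_1946:,.0f}x more work per watt")
```

Even on these conservative assumptions, the per-unit efficiency gain works out to millions of times more computing per watt, which is exactly why the rebound effects described below matter so much.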

This trend can have “rebound” effects, however, as a number of researchers have noted. The fact that electronic devices, including computers, have become smaller and cheaper to both buy and operate means that there are also a lot more of them, as this paper points out, and that abundance can contribute to a consumer attitude of “replace rather than repair.” As the researchers note, while dematerialization “may be the case on a per-unit basis, the increasing number of units produced can cause an overall trend toward materialization with time.” And then there are all-new devices as well: according to Apple, the lifecycle carbon footprint of an iPhone amounts to 121 pounds of CO2-equivalent greenhouse gas emissions over a three-year expected lifetime of use.

While IT services might lead to improvements in a number of ways — thanks to developments such as videoconferencing, telework, and electronic commerce — it’s difficult to draw a straight line connecting these services to an overall reduction in environmental impact, according to some scientists who have looked at the issue. IT can improve the efficiency of existing operations and create new, more efficient ways of doing the same things, but “having the means to use less materials does not mean that they will be adopted, nor does it guarantee that their adoption will actually lead to dematerialization.” As an example, despite the arrival of personal computers and other electronic services such as email, the total consumption of paper has doubled since 1950.

So while dematerialization has promise as a mitigating factor in climate change — and the contributions of the IT industry on that score are many — it is far from being a panacea, since the effects of such changes in the IT business can be more than counterbalanced by contrary activity elsewhere in the global economy.

Cluetrain: Human speech, human concerns

Earlier this year, my friend and former Globe colleague Keith McArthur came up with the idea of celebrating the 10th anniversary of The Cluetrain Manifesto by having 95 people blog about the 95 theses that formed the core of the book. So he set up the Cluetrainplus10 wiki and asked people to sign up, and after looking at the available choices, I settled on number 38: Human communities are based on discourse — on human speech about human concerns. Why? I guess in part because I’ve been thinking a lot about those kinds of issues in my still relatively new role as Communities Editor at the Globe and Mail, where my job consists of trying to find new and better ways to connect readers with our writers and our content.

In many ways, the Globe isn’t really all that different from any other company. We have a product — namely, our content — and we have customers, except that we call them readers. Of course, unlike many companies, we also play a kind of public-service role, but that’s a service to readers and to the community as a whole as well. And just like other companies, we are trying to find our place in this new, more connected world, where our customers are not just looking to interact and engage with us, but are also interacting with each other, carrying on conversations that we theoretically helped to start. How can we become a part of those conversations? I think the Cluetrain message is a simple one: by being human, and by speaking the way that human beings do.

Continue reading “Cluetrain: Human speech, human concerns”

America has a new Chief Technology Officer

So America has a new “Chief Technology Officer” (although the actual name of the position is Associate Director for Technology in the Office of Science and Technology Policy). Aneesh Chopra, formerly Virginia’s Secretary of Technology, didn’t get any resounding cheers from the hardcore geeks — in part because he isn’t from the center of the geek-o-sphere, Silicon Valley — but even having the position at all is a big step up from previous administrations, so most industry observers are fairly positive about the appointment. But will the position become merely a platform for simplistic solutions, such as a broadband-for-all policy, as some fear, or will it have broader and more far-reaching effects, as some are recommending?

Those who are hoping for more depth than a simple broadband rollout strategy will likely be pinning their hopes on the newly announced President’s Council of Advisors on Science and Technology (PCAST), a non-policy-making body that President Obama named on Monday. Perhaps the most prominent name on the list is Eric Schmidt, the CEO of Google. As one observer noted, it wasn’t that long ago that the search-engine giant was seen as an outsider in Washington, with few or no ties to the government and no real influence. That has definitely changed with the arrival of Barack Obama and his technology-friendly White House.

The full PCAST roll call (the complete list is here) also includes Craig Mundie, the Chief Research and Strategy Officer at Microsoft Corporation; Shirley Ann Jackson, president of Rensselaer Polytechnic Institute and former Chair of the US Nuclear Regulatory Commission; James Gates Jr., director of the Center for String and Particle Theory at the University of Maryland; and Rosina Bierbaum, dean of the School of Natural Resources and Environment at the University of Michigan. Breadth is something that the Obama administration seems to have more or less covered, since the list includes specialists and experts in chemistry, physics, biology, geology, computer science, engineering, environmental science, astrophysics and internal medicine.

The President said that the PCAST board would be charged with advising him “about national strategies to nurture and sustain a culture of scientific innovation.” Some hope that Schmidt and Mundie will be able to sway Obama on the topic of net neutrality, something both are interested in. And those hoping for a broad mandate can also take comfort in the fact that the government has pledged to spend 3 per cent of the country’s GDP, or about $415.2-billion, on science and technology research, development and education — the largest financial commitment in America’s history (larger even than the original moon-shot space program). Among other things, Obama has proposed doubling the budgets for the National Science Foundation, the Department of Energy’s Office of Science, and the National Institute of Standards and Technology.

Could these kinds of investments be jeopardized by the shaky economy? No doubt. But the President has made it clear that investment in science and technology research and development is a priority for his government, and he arguably has an advisory group with enough intellectual heft and real-world experience to make it a reality.

Is Craigslist the victim of a witch-hunt?

In the aftermath of a horrible murder by someone who is now routinely referred to as “the Craigslist killer,” the online classified site has been coming under increasing pressure from both the government — which has been waging a prostitution-related crusade for some time now — and others who see the service as somehow complicit in these kinds of crimes. Venture capitalist and blogger Jeff Nolan, for example, says in a recent post that Craigslist “has a problem” and should find some way to deal with it, and suggests that neither founder Craig Newmark nor CEO Jim Buckmaster seems to care much or want to do anything about it.

“Instead of waiting for a community solution to a problem that will only get worse, Newmark and Buckmaster should be taking a leadership position and driving effective change to combat crime taking place on Craigslist.”

Jeff seems like a smart guy, but I couldn’t disagree more with his post. As far as I can tell, Craigslist has been doing everything it can to remove posts that are linked to criminal behaviour, whether prostitution or anything else, and they appear to have bent over backwards to co-operate with the attorneys-general from a number of states when it comes to imposing fines on wrong-doers and other strategies for limiting that kind of behaviour. What more could they possibly do — turn over their server log files to the authorities? Let Craigslist become an arm of the government?

Continue reading “Is Craigslist the victim of a witch-hunt?”

When does curation become scraping?

Curation has become a popular term in media circles, in the sense of a human editor who filters and selects content, and then packages it and delivers it to readers in some way. Many people (including me) believe that, in an era when information sources are exploding online, aggregation and curation of some kind is about the only service left that people might be willing to pay for. That’s why it’s been interesting to watch one prominent website — All Things Digital, the online blog property that is owned by the Wall Street Journal, but run as a separate entity by Kara Swisher and Walt Mossberg — wrestling with how to handle that kind of aggregation, amid criticism from some prominent bloggers that it has been doing it wrong.

As described by Andy “Waxy” Baio in a thoroughly reported roundup of the brouhaha, the fuss seemed to start with comments from Wall Street Journal editor Robert Thomson about how Google and other aggregators of news are “parasites” in the intestines of the Internet, because they republish the content of others and then make money from it. Pretty soon, some bloggers were pointing out that All Things Digital did exactly the same thing in a section called Voices — namely, it published long excerpts from a variety of prominent bloggers, displayed in the same way as the rest of the site’s content and surrounded by ads.

Josh Schachter, founder of Delicious, noted this behaviour in a Twitter message, and Metafilter founder Matt Haughey said that “apparently The Wall Street Journal’s All Things D does a reblogging thing. I sure wish they asked me first though. That’s a hell of a lot of ads on my ‘excerpt’.” Merlin Mann, who blogs at 43folders, said on Twitter that “republishing online work without consent and wrapping it in ads is often called ‘feed scraping.’ At AllThingsD, it’s called ‘a compliment.’”

(please read the rest of this post at the Nieman Journalism Lab)

Newspapers: more creativity, please

As many people probably know by now, Google came out with another of its Google Labs features on Monday: a Google News timeline view, which gives users the ability to see and scroll through headlines, photos and news excerpts by day, week, month or year. The sources of this data can also be customized to include not just traditional news outlets but also Wikipedia, sports scores, blogs and so on. It’s a fascinating way of interpreting the news — not something that is likely to replace the regular Google News headline view, but an additional way of looking at things.

One question kept nagging at me as I was looking at this latest Google effort at delivering the news, and that was: Why couldn’t a news organization have done this? Why not a newspaper, or even a collective like Associated Press (which seems to prefer threats to creativity)? Isn’t delivering the news in creative and interesting ways that appeal to readers what we are supposed to be doing? Apparently not. Even the most progressive of newspaper sites still looks very much like a traditional newspaper — not that there’s anything wrong with that, of course. But is it too much to ask for a little variety? Why not have some alternative display possibilities available? Who knows, it might even con some people into reading more.

(please read the rest of this post at the Nieman Journalism Lab blog)

Google helps newspapers, period.

As the newspaper industry has grown weaker and weaker, there has been a steady stream of articles and blog posts blaming Google for some or all of this decline. I’m not going to link to them all, because there are simply too many, and they are easy enough to find. The standard allegation is that the search engine, and similar engines such as Yahoo and MSN, hijack readers by aggregating content and then monetize those eyeballs by posting ads alongside it. Newspapers get traffic, but Google critics argue that this traffic is essentially worthless — or at least can’t make up for the value that Google has siphoned off.

One of the most recent articles to take this tack appeared in the Guardian and quoted Sly Bailey, the chief executive officer of newspaper publisher Trinity Mirror. Among other things, Ms. Bailey said that:

“By creating gargantuan national newspaper websites designed to harness users by the tens of millions, by performing well on search engines like Google, we have eroded the value of news. News has become ubiquitous. Completely commoditised. Without value to anyone.”

This argument is almost too absurd to be taken seriously. In a nutshell, Ms. Bailey is claiming that by expanding their readership and making it easier for people to find their content, newspapers have shot themselves in the foot, and should do their best to avoid being found by new readers. It’s particularly ironic that the Mirror CEO is making these comments in a story in The Guardian, which has built up an impressive readership outside the UK thanks to its excellent content.

(read the rest of this post at the Nieman Journalism Lab)

Ashton Kutcher and the evolution of media

The standard response from many people on Twitter this week to the news that Ashton Kutcher wanted to get a million followers was thinly veiled (or not-so-thinly veiled) disgust. Long-time Twitter fans were outraged that anyone — let alone a two-bit TV actor — would be so blatantly egotistical, and trivialize such a great social-media tool in that way, just so he could get on the Oprah show. Shane Richmond said that it wasn’t clear who was the bigger “Twitter tool,” Ashton or Oprah. All of these comments, of course, ignored the fact that Kutcher was using his campaign to raise money for malaria relief efforts, and has in fact raised a total of almost $1-million, according to a recent tweet.

So Ashton is more or less using Twitter as the 21st-century version of Jerry Lewis’s telethon for muscular dystrophy. But that isn’t the interesting thing about his use of the social network, at least as far as I’m concerned. Far from being just an egotist who wants to take advantage of a medium to promote himself — although there could well be an aspect of grandstanding to it, as there is for many people — it seems clear that the actor has thought fairly seriously about the implications of Twitter from a media-industry standpoint (my friend Andrew Cherwenka seems to agree). And as a celebrity who is in the public eye almost all the time, he also has an unusual vantage point on the media industry and how it is being transformed.

(read the rest of this post at the Nieman Journalism Lab blog)

Steve Brill wants to charge for news… again

In the media industry, the name Steven Brill tends to bring back a lot of memories. The founder of CourtTV and Brill’s Content, he went on to create a new media entity called Inside, which was staffed with writers from Fortune and other leading publications. But the venture eventually folded. As more and more content moved online, Brill later tried to create a venture known as Contentville, which he envisioned as a sort of one-stop shop for content of all kinds — text, photos, video, audio — which publishers and distributors could offer through his online store.

Sound familiar? It should, because Brill is trying to revive the idea through a new project called Journalism Online, which he and his new partners announced this week. But the new venture has an unusual twist that Contentville — which eventually shut down due to a lack of revenue — did not.

Whatever you think of his idea, it’s clear that Brill still has some pretty high-powered contacts in media: one of his partners is Gordon Crovitz, the former publisher of the Wall Street Journal, and one of the guys who decided to charge money for the WSJ online, something virtually every other newspaper publisher dreams of doing someday. The third partner is Leo Hindery, a former telecom industry executive. Also on the board of advisors are former senior U.S. attorney David Boies and former Solicitor General Ted Olson. The news release says that Journalism Online LLC will:

“…quickly facilitate the ability of newspaper, magazine and online publishers to realize revenue from the digital distribution of the original journalism they produce.”

How will it do this? Brill promises a four-point Marshall Plan for news, including a password-protected site where publishers can put their content and users can buy “annual or monthly subscriptions, day passes, and single articles from multiple publishers.” But it’s the third point in this plan that raises some interesting questions: the release says that the venture will “negotiate wholesale licensing and royalty fees with intermediaries such as search engines and other websites that currently base much of their business models on referrals of readers to the original content on newspaper, magazine and online news websites.”

(read the rest of this post at the Nieman Journalism Lab)

Defending “rule-breaking” journalism

Gina M. Chen, a veteran journalist and editor who works at The Post-Standard in Syracuse, N.Y., writes an excellent blog called “Save The Media,” which is aimed at helping journalists get used to some of the new tools in social media. Chen’s recent post, titled “10 ‘Journalism Rules’ You Can Break on Your Blog,” caused a stir in my newsroom at The Globe and Mail. One of my colleagues, for example, suggested that the post was irresponsible and that such rule-breaking is one of the reasons there is a “credibility gap” between bloggers and mainstream journalists.

You can read Chen’s post for the full list, but among other things, she suggested that bloggers should:

  • Use partial or fake names because “there are times on a blog that what a person says as an indication of public sentiment is more important than who said it.”
  • Tell only part of the story because “the beauty of a blog is you can update immediately as more details become apparent or earlier reports are disputed.”
  • Insert an opinion because “I think readers appreciate knowing that journalists have feelings, opinions, lives that shape how they view the world.”
  • Link to the enemy because “with blogging, you can give your readers the best — even if it’s not from your staff.”
  • Get personal because “you’re creating a community; that community wants to know you’re a person, not a robot.”
  • Answer your critics because “blogging is a conversation with readers. If someone criticizes your post or raises an opposing point of view, you should respond.”
  • Fix your mistakes because “I still don’t want to make any mistakes, but if I do, I can fix it in real time, not just run a correction the next day that few may see.”

So is this list an invitation to be careless, cut corners and risk your credibility as a journalist, as my colleague suggested? Hardly. I would argue that nearly every suggestion on Chen’s list makes perfect sense. Breaking these so-called rules not only isn’t bad, it could improve the practice of online journalism.

Continue reading “Defending “rule-breaking” journalism”

Nick Carr is wrong about Google

After seeing recommendations on Twitter from Clay Shirky and others, I was expecting a tour de force from author and former Harvard Business Review editor Nick Carr, but I confess that I found his post on Google as middleman — and its effect on newspapers — disappointing. Not just because the middleman comparison is one that has been made repeatedly over the past couple of years, and therefore doesn’t really add much to the conversation, but also because I think he is wrong. Or rather, I think that his description has some merit, but the lessons he draws are flawed, and ultimately unhelpful for newspapers (I would have put these thoughts into a comment, but Nick says he has disabled comments because they are too distracting).

Is Google a “middleman made of software,” as Nick describes it? In many ways, yes. And as he points out, entities that act as middlemen in a market typically act in their own interest. But what about his third point, in which he says:

The broader the span of the middleman’s control over the exchanges that take place in a market, the greater the middleman’s power and the lesser the power of the suppliers.

I think there’s a fundamental misunderstanding here. The broader the control that Google has over the exchanges that take place in a market, the greater its power — but that power doesn’t lessen the power of Google’s suppliers. If anything, it amplifies it. Does Google indexing my website, and providing a link to it when someone searches for my name, lessen the power that I have over my content? If you think of power as control over who sees the content and where, then yes. But in reality, it provides me with far more reach than I could otherwise achieve on my own, by exposing that content to readers who would never have found it otherwise.

(read the rest of this post at the Nieman Journalism Lab blog)

Anonymity in reader comments has value

Doug Feaver, the former executive editor of the Washington Post, has a great column up about comments and the value of allowing them to be not only anonymous but also unmoderated (other than by fellow commenters). This is a case I have tried — and continue to try — to make at the Globe and Mail, where I am the communities editor.

When I took the job (and ever since), one of the first things people said was that our comments were unrelentingly bad and that we should require people to use their real names. I try to point out that while we are working on a number of ways to improve the tone of our comments, it’s virtually impossible to actually guarantee that someone has provided their real name unless we ask them for their driver’s licence, credit card or SIN — in which case we would dramatically reduce the number of people willing to comment. (I think in many cases what people want are real-*sounding* names, as opposed to obvious pseudonyms.)

But in addition to that, I think the anonymity issue is largely a red herring, and that in fact there are many virtues to offering it, some of which I tried to outline in this post. Here’s a great excerpt from the Feaver piece:

I believe that it is useful to be reminded bluntly that the dark forces are out there and that it is too easy to forget that truth by imposing rules that obscure it. As Oscar Wilde wrote in a different context, “Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth.”

Twitter: A workshop for journalists

I did a workshop about Twitter today for some of the journalists I work with at the Globe and Mail, and uploaded it to our internal wiki — and then figured I might as well post it to Slideshare so others could see it as well. I’ve embedded it in this post (click through if you’re reading via RSS) and you are free to share it or download it as you wish. I took out a couple of slides that had Globe-related traffic data in them — traffic pushed to stories by Twitter — but other than that it’s as I gave it (without my witty commentary, of course). I’m happy to say that while there was a range of knowledge in the room when it came to Twitter and social media, from general familiarity to virtually nothing at all, I detected a lot of openness to the idea of using such tools to connect with readers in different ways.

I tried to make a number of points in the workshop, among them that Twitter is extremely simple to use (so why not give it a shot); that yes, it has a silly name, but that doesn’t mean it can’t be useful or valuable (Google had a silly name at one point too); that it is a great way of a) reaching out to and connecting with users, b) promoting our stories and c) finding sources for stories (otherwise known as “real people”); and that there are a number of tools that can make it even more useful (Tweetdeck, etc.). I also noted that you really only get out of it what you are prepared to put into it, and that the experience depends a lot on whom you choose to follow. And just to drive home the point about promoting our stories, I noted that our most-read story ever racked up a lot of those views because of Twitter.