Alan Rusbridger is the editor-in-chief of The Guardian, easily one of the most prestigious newspapers in the English-speaking world, and is widely admired as a journalist’s journalist. At the same time, he has been one of the driving forces behind making his newspaper a leader online, which has involved embracing community — including ground-breaking experiments such as Comment Is Free — as well as social-media tools such as Twitter. The video embedded here (click through if you’re reading this via RSS) is a great summary of some of his views about the future of newspapers, the wisdom of the crowd, the blurring of the line between journalists and non-journalists, the need for community and the appeal of Twitter. Highly recommended. Hat tip to Adam Tinworth for the link to the video.
Like a lot of newspapers and media outlets, the paper I work for in Toronto — the Globe and Mail — has been experimenting a lot with a great live-blogging and live-discussion tool called Cover It Live. The software comes from a company located in Toronto, but is being used by everyone from Newsweek and Yahoo to Vanity Fair and the Austin American-Statesman. We’ve hosted live discussion/news stories involving the Obama inauguration, the NHL hockey trade deadline, federal communication hearings and even a shooting in a Toronto subway station.
For large public events such as the Obama inauguration (or the Oscars), there is a very powerful desire to interact with other people who are watching the same event, and Cover It Live makes that very easy and appealing. News updates are interspersed with user comments in a very natural way, and reporters and editors can respond easily. For events such as the NHL trade deadline, several readers asked specific questions of the reporters and columnists who took part, and got answers within minutes — something that simply doesn’t happen with traditional newspaper stories, even online.
(please read the rest of this post at the Nieman Journalism Lab)
If climate-change experts are correct in thinking that we could be close to a “tipping point” that might accelerate the already substantial climate change we’ve seen to date, the pressure to find solutions is sure to intensify. One of the potential mitigating factors is something that economists and scientists like to call “dematerialization.” Simply put, this is the process by which products and services with greater environmental impact are gradually replaced by those with less impact. As Wikipedia defines it, the term refers to the absolute or relative reduction in the quantity of materials required to serve economic functions in society.
Theoretically, the IT industry should be a great contributor to this effect or process, for a variety of reasons. For example, computers have gotten a lot smaller, cheaper and more efficient in terms of computing power per watt of energy. As this piece from Hewlett-Packard notes, the fastest computer in 1946 performed about 5,000 operations a second, weighed more than 30 tons, and consumed almost 200 kilowatts of electricity. In contrast, an off-the-shelf laptop purchased today has thousands of times the processing power, weighs just a few pounds, and consumes less than one-thousandth the electricity.
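To make the scale of that comparison concrete, here is a rough back-of-the-envelope calculation; the 1946 figures are the ones quoted above, but the laptop numbers (roughly 10 billion operations per second at about 60 watts) are illustrative assumptions of mine, not figures from the HP piece:

```python
# 1946 figures as quoted in the HP piece above.
eniac_ops_per_sec = 5_000
eniac_watts = 200_000  # "almost 200 kilowatts"

# Modern laptop figures: illustrative assumptions, not sourced numbers.
laptop_ops_per_sec = 10_000_000_000  # ~10 billion ops/sec, rough 2009-era estimate
laptop_watts = 60                    # typical laptop power draw, assumed

# Efficiency in operations per watt, then the improvement ratio.
eniac_eff = eniac_ops_per_sec / eniac_watts
laptop_eff = laptop_ops_per_sec / laptop_watts

print(f"1946 machine: {eniac_eff:.3f} ops/watt")
print(f"Laptop:       {laptop_eff:,.0f} ops/watt")
print(f"Improvement:  ~{laptop_eff / eniac_eff:,.0f}x")
```

Even with conservative assumptions for the laptop, the gain in computing per watt comes out to several billion-fold, which is the core of the dematerialization argument.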
This effect can have related “rebound” effects, however, as a number of researchers have noted. The fact that electronic devices such as computers have become smaller and cheaper to both buy and operate means that there are also a lot more of them, as this paper points out, which can contribute to a consumer attitude of “replace rather than repair.” As the researchers note, while dematerialization “may be the case on a per-unit basis, the increasing number of units produced can cause an overall trend toward materialization with time.” And then there are all-new devices as well: according to Apple, the lifecycle carbon footprint of an iPhone amounts to 121 pounds of CO2-equivalent greenhouse gas emissions over an expected three-year lifetime of use.
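Apple’s lifecycle figure amortizes neatly per year of ownership; a quick sketch (the pounds-to-kilograms conversion is standard, everything else is taken from the paragraph above):

```python
# Apple's quoted lifecycle footprint for an iPhone, amortized per year of use.
lifecycle_lbs_co2e = 121   # total CO2-equivalent emissions, per Apple
lifetime_years = 3         # expected lifetime of use
LBS_PER_KG = 2.20462       # standard conversion factor

per_year_lbs = lifecycle_lbs_co2e / lifetime_years
per_year_kg = per_year_lbs / LBS_PER_KG

print(f"~{per_year_lbs:.1f} lbs (~{per_year_kg:.1f} kg) of CO2e per year of use")
```

That works out to roughly 40 pounds (about 18 kilograms) of CO2-equivalent per year for a single device — small on its own, but multiplied across tens of millions of units it illustrates the per-unit-versus-total tension the researchers describe.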
While IT services might lead to improvements in a number of ways — thanks to developments such as videoconferencing, telework, and electronic commerce — it’s difficult to draw a straight line connecting these services to an overall reduction in environmental impact, according to some scientists who have looked at the issue. While IT can improve the efficiency of existing operations and create new, more efficient ways of doing the same things, “having the means to use less materials does not mean that they will be adopted, nor does it guarantee that their adoption will actually lead to dematerialization.” As an example, despite the arrival of personal computers and other electronic services such as email, the total consumption of paper has doubled since 1950.
So while dematerialization has promise as a mitigating factor in climate change — and the contributions of the IT industry on that score are many — it is far from being a panacea, since the effects of such changes in the IT business can be more than counterbalanced by contrary activity elsewhere in the global economy.
Earlier this year, my friend and former Globe colleague Keith McArthur came up with the idea of celebrating the 10th anniversary of The Cluetrain Manifesto by having 95 people blog about the 95 theses that formed the core of the book. So he set up the Cluetrainplus10 wiki and asked people to sign up, and after looking at the available choices, I settled on number 38: Human communities are based on discourse — on human speech about human concerns. Why? I guess in part because I’ve been thinking a lot about those kinds of issues in my still relatively new role as Communities Editor at the Globe and Mail, where my job consists of trying to find new and better ways to connect readers with our writers and our content.
In many ways, the Globe isn’t really all that different from any other company. We have a product — namely, our content — and we have customers, except that we call them readers. Of course, unlike many companies, we also play a kind of public-service role, but that’s a service to readers and to the community as a whole as well. And just like other companies, we are trying to find our place in this new, more connected world, where our customers are not just looking to interact and engage with us, but are also interacting with each other, carrying on conversations that we theoretically helped to start. How can we become a part of those conversations? I think the Cluetrain message is a simple one: by being human, and by speaking the way that human beings do.
So America has a new “Chief Technology Officer” (although the actual name of the position is Associate Director for Technology in the Office of Science and Technology Policy). Aneesh Chopra, formerly Virginia’s Secretary of Technology, didn’t get any resounding cheers from the hardcore geeks — in part because he isn’t from the center of the geek-o-sphere, Silicon Valley — but even having the position at all is a big step up from previous administrations, so most industry observers are fairly positive about the appointment. But will the position become simply a platform for simplistic solutions, such as a broadband-for-all policy, as some fear, or will it have broader and more far-reaching effects, as some are recommending?
Those who are hoping for more depth than a simple broadband rollout strategy will likely be pinning their hopes on the newly announced President’s Council of Advisors on Science and Technology (PCAST), a non-policy-making body that President Obama named on Monday. Perhaps the most prominent name on the list is Eric Schmidt, the CEO of Google. As one observer noted, it wasn’t that long ago that the search-engine giant was seen as an outsider in Washington, with little or no ties to the government and no real influence. That has definitely changed with the arrival of Barack Obama and his technology-friendly White House.
The full PCAST roll call (the complete list is here) also includes Craig Mundie, the Chief Research and Strategy Officer at Microsoft Corporation; Shirley Ann Jackson, president of Rensselaer Polytechnic Institute and former Chair of the US Nuclear Regulatory Commission; James Gates Jr., director of the Center for String and Particle Theory at the University of Maryland; and Rosina Bierbaum, dean of the School of Natural Resources and Environment at the University of Michigan. Breadth is something that the Obama administration seems to have more or less covered, since the list includes specialists and experts in chemistry, physics, biology, geology, computer science, engineering, environmental science, astrophysics and internal medicine.
The President said that the PCAST board would be charged with advising him “about national strategies to nurture and sustain a culture of scientific innovation.” Some hope that Schmidt and Mundie will be able to sway Obama on the topic of net neutrality, something both are interested in. And those hoping for a broad mandate can also take comfort in the fact that the government has pledged to spend 3 per cent of the country’s GDP, or about $415.2-billion, on science and technology research, development and education — the largest financial commitment in America’s history (larger even than the original moon-shot space program). Among other things, Obama has proposed doubling the budgets for the National Science Foundation, the Department of Energy’s Office of Science, and the National Institute of Standards and Technology.
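As a quick sanity check on those numbers, the $415.2-billion pledge and the 3-per-cent share imply a GDP of roughly $13.8-trillion, which is broadly consistent with the size of the US economy at the time:

```python
# Sanity-check: dividing the pledged amount by the GDP share
# should recover the implied size of the US economy.
pledged_dollars = 415.2e9  # $415.2-billion, as quoted
gdp_share = 0.03           # 3 per cent of GDP

implied_gdp = pledged_dollars / gdp_share
print(f"Implied GDP: ${implied_gdp / 1e12:.2f} trillion")
```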
Could these kinds of investments be jeopardized by the shaky economy? No doubt. But the President has made it clear that investment in science and technology research and development is a priority for his government, and he arguably has an advisory group with enough intellectual heft and real-world experience to make it a reality.
In the aftermath of a horrible murder by someone who is now routinely referred to as “the Craigslist killer,” the online classified site has been coming under increasing pressure from both the government — which has been waging a prostitution-related crusade for some time now — and others who see the service as somehow complicit in these kinds of crimes. Venture capitalist and blogger Jeff Nolan, for example, says in a recent post that Craigslist “has a problem” and should find some way to deal with it, and suggests that both founder Craig Newmark and CEO Jim Buckmaster don’t seem to care much, or want to do anything about it.
“Instead of waiting for a community solution to a problem that will only get worse, Newmark and Buckmaster should be taking a leadership position and driving effective change to combat crime taking place on Craigslist.”
Jeff seems like a smart guy, but I couldn’t disagree more with his post. As far as I can tell, Craigslist has been doing everything it can to remove posts that are linked to criminal behaviour, whether prostitution or anything else, and they appear to have bent over backwards to co-operate with the attorneys-general from a number of states when it comes to imposing fines on wrongdoers and pursuing other strategies for limiting that kind of behaviour. What more could they possibly do — turn over their server log files to the authorities? Let Craigslist become an arm of the government?
Curation has become a popular term in media circles, in the sense of a human editor who filters and selects content, and then packages it and delivers it to readers in some way. Many people (including me) believe that, in an era when information sources are exploding online, aggregation and curation of some kind is about the only service left that people might be willing to pay for. That’s why it’s been interesting to watch one prominent website — All Things Digital, the online blog property that is owned by the Wall Street Journal, but run as a separate entity by Kara Swisher and Walt Mossberg — wrestling with how to handle that kind of aggregation, amid criticism from some prominent bloggers that it has been doing it wrong.
As described by Andy “Waxy” Baio in an excellently reported roundup of the brouhaha, the fuss seemed to start with comments from Wall Street Journal editor Robert Thomson about how Google and other aggregators of news are “parasites” in the intestines of the Internet, because they republish the content of others and then make money from it. Pretty soon, some bloggers were pointing out that All Things Digital did exactly the same thing in a section called Voices — namely, published long excerpts from a variety of prominent bloggers, displayed in exactly the same way as the rest of the site’s content, and surrounded by ads.
Josh Schachter, founder of Delicious, noted this behaviour in a Twitter message, and Metafilter founder Matt Haughey said that “apparently The Wall Street Journal’s All Things D does a reblogging thing. I sure wish they asked me first though. That’s a hell of a lot of ads on my ‘excerpt’.” Merlin Mann, who blogs at 43 Folders, said on Twitter that “republishing online work without consent and wrapping it in ads is often called ‘feed scraping.’ At AllThingsD, it’s called ‘a compliment.’”
(please read the rest of this post at the Nieman Journalism Lab)
As many people probably know by now, Google came out with another of its Google Labs features on Monday: a Google News timeline view, which gives users the ability to see and scroll through headlines, photos and news excerpts by day/week/month/year. The sources of this data can also be customized to include not just traditional news sources but also Wikipedia, sports scores, blogs, etc. It’s a fascinating way of interpreting the news — not something that is likely going to replace a regular old Google News headline view, but an additional way of looking at things.
One question kept nagging at me as I was looking at this latest Google effort at delivering the news, and that was: Why couldn’t a news organization have done this? Why not a newspaper, or even a collective like Associated Press (which seems to prefer threats to creativity)? Isn’t delivering the news in creative and interesting ways that appeal to readers what we are supposed to be doing? Apparently not. Even the most progressive of newspaper sites still looks very much like a traditional newspaper — not that there’s anything wrong with that, of course. But is it too much to ask for a little variety? Why not have some alternative display possibilities available? Who knows, it might even con some people into reading more.
(please read the rest of this post at the Nieman Journalism Lab blog)
As the newspaper industry has grown weaker and weaker, there has been a steady stream of articles and blog posts blaming Google for some or all of this decline. I’m not going to link to them all, because there are simply too many, and they are easy enough to find. The standard allegation is that the search engine, and other similar engines such as Yahoo and MSN, hijack readers by aggregating content, and then monetize those eyeballs by posting ads near the content. Newspapers get traffic, but Google critics argue that this traffic is essentially worthless — or at least can’t make up for the value that Google has siphoned off.
One of the most recent articles to take this tack appeared in the Guardian and quoted Sly Bailey, the chief executive officer of newspaper publisher Trinity Mirror. Among other things, Ms. Bailey said that:
“By creating gargantuan national newspaper websites designed to harness users by the tens of millions, by performing well on search engines like Google, we have eroded the value of news. News has become ubiquitous. Completely commoditised. Without value to anyone.”
This argument is almost too absurd to be taken seriously. In a nutshell, Ms. Bailey is claiming that by expanding their readership and making it easier for people to find their content, newspapers have shot themselves in the foot, and should do their best to avoid being found by new readers. It’s particularly ironic that the Mirror CEO is making these comments in a story in The Guardian, which has built up an impressive readership outside the UK thanks to its excellent content.
(read the rest of this post at the Nieman Journalism Lab)
The standard response from many people on Twitter this week to the news that Ashton Kutcher wanted to get a million followers was thinly veiled (or not-so-thinly veiled) disgust. Long-time Twitter fans were outraged that anyone — let alone a two-bit TV actor — would be so blatantly egotistical, and trivialize such a great social-media tool in that way, just so he could get on the Oprah show. Shane Richmond said that it wasn’t clear who was the bigger “Twitter tool,” Ashton or Oprah. All of these comments, of course, ignored the fact that Kutcher was using his campaign to raise money for malaria relief efforts, and has in fact raised a total of almost $1-million, according to a recent tweet.
So Ashton is more or less using Twitter as the 21st-century version of Jerry Lewis’s telethon for muscular dystrophy. That isn’t the interesting thing about his use of the social network, at least as far as I’m concerned. Far from being just an egotist who wants to take advantage of a medium to promote himself — although there could well be an aspect of grandstanding to it, as there is for many people — it seems clear that the actor has thought fairly seriously about the implications of Twitter from a media-industry standpoint (my friend Andrew Cherwenka seems to agree). And as a celebrity who is in the public eye almost all the time, he also has a distinctive take on the media industry and how it is being transformed.
(read the rest of this post at the Nieman Journalism Lab blog)