Back in September, I was honoured to be asked to be one of the featured presenters at the first TEDx Toronto, a kind of mini-version of the famous TED conference (whose videos are highly recommended). The video of that presentation, which was entitled “Five Ways New Media Can Save Old Media,” is now available on YouTube, and I have embedded a version of it in this post, or you can watch it in all its high-def glory here. Frankly, I wish that I had gotten to the point a little faster — and I’m sure the organizers wish I had as well, since I went over my time by quite a bit (no one seemed to mind though, which was quite nice of them). Thanks to Ryan Merkley and Becca Pace, and to the other speakers like Peter MacLeod and Gavin Sheppard: it was a pleasure to share the stage with you. Videos of all the speakers can be found on the TEDx YouTube page.
The Globe and Facebook
I did a workshop/presentation for Globe and Mail reporters about Facebook this week, and I’ve embedded a version of the PowerPoint here (it’s on Slideshare too if you want to view it there). It’s not very long, nor does it go into a lot of depth about the various issues that can arise when you use — or misuse — Facebook. It was really just an introduction to the topic, and an attempt to explain how we can use this massive social network for two broad purposes: 1) to find information, and to reach out to people who might be involved in stories we are writing about, and 2) to let fans of our content share — and thus help promote — our news stories. (The embedded presentation is actually even shorter than the one I gave at the workshop, because I removed a few slides that had proprietary numbers related to Globe traffic, Facebook metrics and so on.)
The basic impression I wanted to give reporters on the first point was that Facebook is a huge network filled with actual human beings, some of whom may want to help us with our reporting on a story, and/or talk to us about their experiences — which can improve our journalism, and help us fulfill our goal of making contact with real people, not just ones who work for advocacy groups or happen to live next door to a reporter. I tried to emphasize that it’s important to be polite when approaching people about a news event — in other words, to be human — rather than barging in with a microphone in hand, hassling people for a quote. I also tried to make the point that simply becoming a member of a group doesn’t mean a person is deeply committed to a particular cause, since joining takes just a click.
On the second point, I talked about how we are using our newly created fan page (which is here if you aren’t already a fan), and how the act of clicking “share” or “comment” or “like” effectively distributes that item — or a reference to it — into the user’s feed, where it can be seen by all of their friends, who might be exposed to a story that they wouldn’t otherwise read. And I also talked about how we are looking at integrating Facebook Connect so that users can connect their activity on the Globe and Mail website to their profile in Facebook, and so that theoretically we might be able to offer some of the same features that Huffington Post does, where readers can see what their Facebook friends have been reading.
Online collaboration tools like Mendeley are growing
The idea that the Internet might be used for scientific collaboration shouldn’t come as much of a surprise, since the network’s predecessor, the ARPANET, was originally created as a way to connect researchers at different institutions so they could solve problems together. That said, however, collaboration has accelerated over the past several years, thanks in part to the increasing popularity of “social media” or Web 2.0 tools, which have collectively lowered the barriers to online interaction.
A number of social networks and services devoted specifically to scientific research have sprung up and are growing quickly, including one called Mendeley. An online collaboration tool, it allows scientists and researchers to upload research papers, which the software combs through looking for bibliographic data (author, title, etc.), which is then matched with any other research that already exists in the database.
“You can just drag and drop your collection of PDFs into the software and it’ll automatically extract all the bibliographic data – all of the stuff that you’d usually have to type in manually,” co-founder Victor Henning told the BBC. “What Mendeley is designed to do is give you recommendations which complement your existing library.”
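To make the extract-then-match step concrete: Mendeley’s real software does full PDF layout analysis, but the basic idea can be sketched in a few lines of Python. This is a toy illustration, not Mendeley’s actual code — the function names and the first-line-is-the-title heuristic are my own assumptions.

```python
import re

def extract_metadata(first_page_text):
    """Heuristically pull a title and author from a paper's first page.

    Assumes the title is the first non-empty line and the author line
    follows it -- a crude stand-in for real PDF layout analysis.
    """
    lines = [ln.strip() for ln in first_page_text.splitlines() if ln.strip()]
    title = lines[0] if lines else ""
    author = lines[1] if len(lines) > 1 else ""
    return {"title": title, "author": author}

def match_against_library(metadata, library):
    """Find existing records whose normalized title matches the upload."""
    def normalize(s):
        # Lowercase and strip punctuation so minor formatting
        # differences don't prevent a match.
        return re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
    wanted = normalize(metadata["title"])
    return [rec for rec in library if normalize(rec["title"]) == wanted]

# A stand-in for the shared research database.
library = [
    {"title": "A Relational Model of Data for Large Shared Data Banks",
     "author": "E. F. Codd"},
]

page = ("A Relational Model of Data for Large Shared Data Banks\n"
        "E. F. Codd\n"
        "Abstract: Future users of large data banks...")
meta = extract_metadata(page)
matches = match_against_library(meta, library)
```

Matching on a normalized title is, of course, the easy case; the hard part of a real system is coping with scanned PDFs, multi-column layouts and slightly different citations of the same paper.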
The software has become popular with some scientists at highly ranked research institutions such as Stanford, Harvard and Cambridge, and Henning says the service has about 70,000 users and is growing at a rate of 40 per cent every month.
Many scientists from different disciplines have also adopted the “open source” model favoured by the Linux free software movement and supporters of Wikipedia, the open-source encyclopedia. Project Polymath, for example, uses blogs and wikis to allow people to collaborate on solving complex mathematical problems.
In less than two months, Polymath participants “had worked out an elementary proof, and a manuscript describing the proof is currently being written,” Walter Jessen, a bioinformatician and cancer biologist at Cincinnati Children’s Hospital, told LinuxInsider. “The project demonstrated that many people could work together to solve difficult mathematical problems.”
Another open-source science project is Bizarro’s Bioinformatics Organization, which started in 1998 and uses wiki software to let researchers post models, questions, experiments and discoveries related to biology and informatics. Scientists were “looking for a central location for their open source projects,” founder Jeff Bizarro told LinuxInsider. Today, the organization has 27,000 members from all around the world.
If Bizarro is like Facebook or Wikipedia, a collaborative network called ResearchGate has aspects that are similar to LinkedIn, the corporate social network. The service allows scientists to connect research done by others to their own work, in order to see patterns or relationships that are worth following, and it also lets them create profiles and find other researchers in similar or related disciplines.
ResearchGate, which has 180,000 members, says it wants to create something called “Science 2.0” using social media tools. In this environment, “communication between scientists will accelerate the distribution of new knowledge. Without anonymous review processes, the concept of open-access journals will assure research quality. Science is collaboration, so scientific social networks will facilitate and improve the way scientists collaborate.”
Some scientists are using even newer tools to collaborate — including Google Wave, the new tool launched by the search giant that some describe as a combination of email, instant messaging and a wiki.
“Google Wave offers two specific things,” Cameron Neylon, senior scientist at Britain’s Science and Technology Facilities Council, told the BBC. “What it looks like is this cross of e-mail and instant-messaging, which is great fun. Where it really wins for science is that actually these documents or ‘Waves’ can be made automated so we can connect up documents and ideas with each other.” The power lies in allowing scientists to share a range of objects, he says, from pictures and text to raw data.
Will these new social tools help produce any penicillin or DNA-type breakthroughs? Scientists and researchers who use them say it’s just a matter of time.
McSweeney’s: Has Bell Invented the “Telegraph Killer?”
“We had difficulty reaching other users on the Bell apparatus, which Alexander Graham admits will have limited utility until they build a second Telephone. In comparison, the Telegraph network already has fifteen machines connecting backwaters like Los Angeles to metropolises like Cincinnati, a support gap that should only widen in the coming months. Leaked reports from Morse reveal plans to suspend a line between New York and London using kites by January, a scheme insiders predict to be a terrific success.”
“While the technology behind the Telephone is new, the design is reassuringly old-fashioned, reminiscent of a phrenologist’s horn or ear-candle in form. We found the experience far more comfortable than the one we had with the Telegraph, though fatigue from magnetic waves is inevitable in the use of each. This is a minor complaint, however, as we could scarcely imagine using such a device for more than a few minutes a day.”
Union Station
– Posted using MobyPicture.com
Has the WaPo chosen paper over web?
The recent cuts at the Washington Post — as reported by Politico and the Washington City Paper — have once again brought to the surface a culture clash that has been going on in mainstream newsrooms for most of the last decade, and one that shows no sign of ending any time soon. If anything, the economic upheaval and advertising-revenue tsunami that has hit the media industry over the past year or so has amplified it. It’s the clash between print-heads and Web-heads, or “real” journalists (as some choose to call them) and the “web-first” crowd, and the fear expressed by some — including former WaPo online staffer Derek Willis and former online executive editor Jim Brady — is that the printies are gaining the upper hand.
You can see the fault lines of this snaking through the comments on the City Paper piece, where one commenter talks about how the website “was doing nothing more than posting the print articles, and hosting some online chats,” while the “much-despised MSM reporters and editors were crammed together into an old, crappy space while actually doing the business of obtaining information and writing it.” Another talks about how “All this bla bla bla about presentation, aggregation and innovation will be all that’s left once there are no more reporters churning out actual stories.”
Toward the end of the exchange, former WaPo online staffer Robert MacMillan (@bobbymacReuters) says: “I worked there and did reporting just like it’s done at any other news outlet. Saying otherwise reveals gross ignorance and demeans what I and the good people there have been doing for years” (MacMillan reported on the layoffs here). And in his post at True/Slant, former WaPo online executive editor Brady says “It’s the attitude of Stone Age commenters like these that still pervades far too many print newsrooms. Instead of attempting to adapt to what is clearly a digital future, they complain about the world collapsing around them, yet demean anyone who tries to do anything differently. And they wonder why so many people have stopped listening to them.”
This kind of us-vs-them animosity has likely been exacerbated at the Washington Post by the fact that until recently, the online operation was a completely separate entity from the paper, with its own management, executives and building — across the river from the newspaper itself. Many people both inside and outside the Post saw this structure as a positive thing, because it allowed each to focus on its core business. Others, however, saw it as prolonging the inevitable — the time when the two would have to function as one, which is exactly what the Washington Post is trying to engineer right now. And some, like Steve Yelvington, are afraid that this will wind up with the “printies” on top.
It may have been amplified at the Post by the company’s physical and corporate structure (and there has been speculation that Web staff were let go because otherwise they would have had to be unionized), but you can bet this same battle is going on at virtually every major newspaper in North America. Why? Because they are caught between two worlds. The reality is that the print side continues to provide the bulk of the revenue (although it is falling), and it also consumes the majority of resources — which means a lot of senior managers are involved, and to be blunt, many of them have empires to protect. Others have simply been slow to grasp the magnitude of the changes going on around them. And on the other side is the Web, which is growing quickly but is still a far smaller — and less profitable — operation.
How best to join these two things together? The fear about the Washington Post is that creative online and multimedia journalists have been cut loose in favour of newspaper loyalists who may have little or no clue about what working online really involves. Is it possible for print journalists to understand and adapt to the Web? Of course it is. I’d like to think that I and other former print journalists are proof of that. But you can’t just dump all the responsibilities of understanding digital media on someone who has spent their life making the newspaper work. That is a recipe for disaster.
Comment behaviour: How far is too far?
Updated:
Kurt Greenbaum has apologized for overreacting in his original response to this incident, although he doesn’t explicitly say that he is sorry for calling the school and indirectly causing someone to lose their job.
As someone whose job involves thinking about our social-media policies and our approach to comment behaviour, I’m always looking at what other newspapers and media outlets are doing, and today I came across a case that crossed a line — for me, at least — in terms of how to deal with problem commenters. It involved a vulgar comment made by a user at the St. Louis Post-Dispatch’s website, and the response by the site’s director of social media, Kurt Greenbaum.
According to Greenbaum’s blog post (which was mirrored on his personal blog), someone posted a comment on a story in which they used a colloquial or slang term for female genitalia. It was deleted, but then was reposted. Greenbaum says he noticed that the comment alert from WordPress showed that it came from a nearby school. So Greenbaum called the school, and they asked him to send them the email with the comment, which he apparently did. About six hours later, he says, the school called and said that an employee had been confronted and that he had resigned.
Am I the only one who thinks that doing this goes way beyond the normal course of editorial behaviour?
Is Rupert Murdoch stupid like a fox?
There’s been plenty of recent discussion about Rupert Murdoch and his “I’m taking my sites out of Google” campaign (which I mentioned in this post), and much of the debate centres on whether he is serious or just blustering. Jack Shafer at Slate seems to lean towards the latter, saying:
Murdoch is simply jawboning. Three months ago he promised that News Corp. would start charging for its newspapers by June 2010. Now he doubts that the company will hit that mark. In typical Murdochian fashion, he’s sowing confusion and harvesting bewilderment.
and
If it were in News Corp.’s economic interests to dig an Internet moat around its newspaper properties, Murdoch would have already done it rather than talk about it. Instead, he’s shouting about it to signal to his competitors 1) where he’d like to take News Corp. and 2) his desperate desire for them to follow.
Mark Cuban is convinced that it’s worth it for Murdoch to at least try to do without Google, since there’s the chance that it might actually pay off, and if it doesn’t then he can just re-enter the index and things will go back to normal (I’m not sure that’s the case, but then I’m not a media mogul like Mark). But Mike Arrington at TechCrunch does the best job of laying out what might be at the core of Rupert’s strategy (assuming he isn’t just blustering).
In a nutshell, the idea is that Rupert cuts a deal with either Microsoft or Yahoo to index his sites (similar to the deal he cut with Google to index MySpace), and hopes that this encourages other major media outlets to do the same. If he can get enough to jump on board — and it sounds like Associated Press is halfway there already — the thinking is he could put pressure on Google to pay up as well. Mike Butcher at TechCrunch Europe has some more ammunition for this view, with reports of secret negotiations between Microsoft and some of the major publishers.
Erick Schonfeld has compared this “Come on, boys — let’s give Google what for!” strategy to the final scene in the movie Gallipoli, and to a military strategy from Blackadder (I’ve chosen General George Armstrong Custer). But whether it’s Custer’s Last Stand at Little Big Horn or Gallipoli or Don Quixote tilting at windmills, the underlying point is that Murdoch’s approach seems futile. Will other media outlets join his crusade? Perhaps — but I doubt enough of them to make a difference.
Will people switch search engines in order to get specific content from specific media outlets? I highly doubt it. Of course, all Rupert has to do is convince Microsoft or Yahoo that they will do so, and then get them to pay him. Even in failing, the old bugger could still wind up winning.
Update:
Jeff Jarvis explains why there is approximately zero chance of anyone important joining Murdoch’s anti-Google crusade.
When a blog beats a NYT story
It may have gotten lost amid the back-and-forth in the comments on her piece at the Columbia Journalism Review — many of which take her to task for criticizing “crowdfunding” startup Spot.us and its role in the Garbage Patch story the New York Times published recently — but I thought Megan Garber made an excellent point in her critique: namely, that freelance reporter Lindsey Hoshaw’s personal blog was a far better presentation of the trip and the fascinating story behind it than the New York Times story was.
Whose fault is that? Probably the Times’, for forcing the story into the standard format rather than trying something different, but assigning blame is hardly the point. And in any case, the NYT should be given all kinds of credit for experimenting with the Spot.us partnership, and for being so flexible that Spot.us founder and all-around smart guy David “Digidave” Cohn — whom I respect and admire — said the Old Grey Lady “interfaced with Spot.Us as if they were a lean and mean startup.” High praise indeed.
But to get back to my main point, if you look at the NYT story you see (or at least I saw) exactly what Megan describes in her post at CJR: a story that repeats a lot of known information about the Great Garbage Patch, with very little of the human side of Lindsey’s story. I found her personal blog far more interesting, and I bet I’m not the only one. She talks about — and shows photos of — the mahi-mahi the crew ate so much of, the cramped quarters that the crew inhabited, the gourmet meals whipped up by the ship’s cook, and the garbage the ship came across along the way.
Obviously, not every news story deserves the blog treatment, but I think this one certainly did. I got far more out of it, was far more engaged with it, cared more about it and identified more with the reporter at the centre of it. A great job by Lindsey, and despite the criticisms of the outcome, a great effort by Spot.us as well. Dave Cohn describes the genesis of the project and the process it went through, as well as some of the lessons learned.
Your readers are paying you — with attention
Rupert Murdoch, that sly old rascal, caused a minor Twitter-storm recently, with an interview in which he suggested that News Corp. might remove its websites from Google, which he has described in the past as a “thief” that takes content without asking (Google, for its part, said that it would be more than happy to oblige Rupert’s whims in this regard). As Mike Masnick at Techdirt also noted, Murdoch even went so far as to argue that “fair use” principles were likely illegal, and would eventually be proven so. You have to give the guy credit for knowing a soundbite when he sees one.
Mark Cuban, another crusty old billionaire (although just a pup compared to Rupe), used these remarks as a jumping-off point for his own flight of rhetorical fancy, in which he argued that social-recommendation networks such as Twitter and Facebook were far more important than Google, and that therefore Rupert was right and all the “information-must-be-free bigots” who criticized him must be wrong. But as Steve Rhodes (@tigerbeat) pointed out on Twitter after I posted a link to Cuban’s rant, all the social recommendations in the world aren’t going to help Rupert if he insists on putting his content behind pay walls.
David Santori made a similar point in a comment on one of my paywall-related posts at the Nieman Journalism Lab. As he put it:
“overlooked in all this is the social aspect: any web item that interests or amuses or intrigues me, I want to share. And if I can’t share it promptly and easily — in an email link or on my blog or Facebook “wall” or in a tweet — I will be frustrated and irked just in proportion to the degree of interest I felt in the item.”
and
“The NYT registration barrier was in fact a micropayment system, one in which the payment was extracted in the form of the reader’s time and keystrokes to log in whenever they got a link to a useful story.”
I think both David and Steve make an excellent point, one which publishers ignore at their peril. Readers online may not pay you directly with currency, but they pay you with their time and attention (the foundation of the so-called “attention economy”) and it’s in your interest to make things as easy for them as possible — which is just one strike amongst many against pay walls. And if Mark Cuban is right (which I think he is) about social recommendations becoming increasingly important as a way to find valuable content, what happens when someone shares a link to your pay-walled content?
What happens is a potential reader runs headfirst into that wall, or has to jump through all sorts of hoops to read it (checking for a Google News loophole, for example), and that is a significant disincentive to a) read anything further, or b) share any links themselves. It’s the classic cutting-off-your-nose-to-spite-your-face problem: you try to generate incremental revenue through restricted access, but by doing so you deprive your content of even more valuable re-distribution through recommendation networks, which in the long run reduces your traffic and thus your revenue.
Citizen journalism: I’ll take it, flaws and all
Paul Carr, who started writing for TechCrunch not long ago, is an entertaining writer, and he often puts his finger on issues that others tend to avoid in their headlong rush towards whatever is shiny and new, which is why I’m glad Mike Arrington hired him. But I think his latest rant against “citizen journalism” is misplaced. In the piece, which is entitled “After Fort Hood, another example of how ‘citizen journalists’ can’t handle the truth,” Carr talks about how a soldier on the base where the shootings occurred last week was posting to Twitter throughout the ordeal.
Tearah Moore, who recently returned from Iraq, posted a number of comments about what was happening, including the fact that stretchers were being brought in, that one person had allegedly been shot in the testicles, and that the shooter had died. Among other things, Carr notes that Moore’s tweet about the shooter being dead was wrong (although she didn’t claim first-hand knowledge; she was passing along what she had heard). But his main complaint seems to be that her tweets about someone being shot in the testicles, etc. had no redeeming value and were therefore “entertainment or tragi-porn.”
As he puts it, her behaviour had nothing to do with getting the word out but was a case of “look at me looking at this.” He then goes on to say that the tweeting of events during protests in Iran did nothing to actually change events in that country, and that all of this so-called “citizen journalism” is merely selfish and egotistical. And finally, he argues that this applies to the shocking video footage of Neda Agha Soltan’s death in Iran — that the person shooting the video didn’t try to help, but simply engaged in a cruel and unfeeling act of voyeurism.
The question of whether bystanders or observers should intervene in emergency situations is a worthwhile debate to have, but I don’t think Carr’s examples meet the test.
Continue reading “Citizen journalism: I’ll take it, flaws and all”