One of the things that Clay Shirky mentioned in the panel with Andrew Keen that I moderated at Ryerson University recently (my post with video here, tweet-stream here and live-blog here) was an idea that he has also written about before on his blog: namely, that one of the principal functions of a newspaper was to aggregate completely unrelated things. It had to, primarily because the newspaper company (and its advertisers) needed to appeal to the widest possible group of potential readers, and couldn’t possibly know in advance which parts of the paper they would be most interested in. As Clay described it in a recent talk he gave at Harvard:
“The idea that someone who is doing a crossword puzzle may also want news about the coup in Honduras or how the Lakers are doing — it doesn’t make any sense. It’s never made any sense, in terms of what the user wants. It’s what print is capable of as a bundle.”
In my desperate attempt to justify the continued existence of newspapers, I asked Clay whether that aggregation didn’t serve some kind of purpose, but he argued that it did not — that it was simply a holdover from the industrial process by which papers were created and distributed. But is it? I know that we increasingly believe that “if the news is important, it will find me” (I’m actually the number one result in Google for that phrase) and that aggregation of whatever kind we require can be performed by our friends, by services like Techmeme and Tweetmeme, by RSS feed readers, by Twitter, and so on. Heck, I use all of those things and have come to rely on them.
But are they enough? Is there a purpose in aggregating the horoscope and the weather and the news about the coup in Tegucigalpa? I think there is, and I think newspapers do a pretty good job of it.
It’s not just because they have to — although that’s part of it. Maybe I’ve just been trained as a newspaper reader for my whole life, but I like the serendipity of tripping over fascinating articles about things I would never have known even existed were it not for a newspaper. To take the Saturday Globe and Mail as an example, I read a story about an up-and-coming Muslim hockey player, a profile of Paul Shaffer, a review of the punk band Gossip, an article about contentious city council politics in Aurora and a great feature on retirees and their vanishing pensions.
Could links to those stories show up in my RSS reader? Possibly – but I doubt it. The mix is just too eclectic. And I would never have sought out the article about the Muslim hockey player; I don’t particularly care about hockey, so I would likely never have come across it any other way. Would the retirement piece ever make it to Techmeme or some similar aggregator? I doubt it. But it was still worth reading. And so were the half-dozen or so articles I can’t recall right now, which I tripped across as I read the paper. I would never have deliberately sought them out either.
This is what has come to be known as the “serendipity defence” for newspapers, which others have written about both positively and negatively (including at Ethan Zuckerman’s blog and in Shane Richmond’s column, which refers to a great piece by Steven Berlin Johnson on the topic, which I highly recommend). I realize that there is far more content — from a vast diversity of sources — available on the web than there is in a newspaper. But who will filter and condense and aggregate it for me the way a newspaper does? I still haven’t found something that does the job quite as well. Perhaps someday I will, but until then I will keep reading newspapers.