It’s become popular to make fun not just of the “bros” who run a lot of startups — the ones that Businessweek magazine chose to parody on the cover of its latest issue — but of the whole idea of having technology startups in the first place, since so many come up with useless things like Yo, an app that exists solely to send the single word “Yo” to other users. But Y Combinator head Sam Altman argues that out of silliness and irrelevance, sometimes great things are made — and anyone who has followed even the recent history of technology would have a hard time disagreeing.
I confess that I’ve had my own share of fun ridiculing the idea behind Yo, as well as some recent startups such as ReservationHop, which was designed to corner the market in restaurant reservations by mass-booking them under assumed names and then selling them to the highest bidder. But what Altman said in a blog post he wrote in response to the Businessweek story still rings true:
“People often accuse people in Silicon Valley of working on things that don’t matter. Often they’re right. But many very important things start out looking as if they don’t matter, and so it’s a very bad mistake to dismiss everything that looks trivial…. Facebook, Twitter, Reddit, the Internet itself, the iPhone, and on and on and on — most people dismissed these things as incremental or trivial when they first came out.”
Sometimes toys grow up into services
I’ve made the same point before about Twitter, and how it seemed so inconsequential when it first appeared on the scene that I and many others (including our founder Om) ridiculed it as a massive waste of time. What possible purpose could there be in sending 140-character messages to people? It made no sense. After I got finished making fun of Yo, that’s the first thing that occurred to me: I totally failed to see any potential in Twitter — and not just when it launched, but for at least a year after that. Who am I to judge what is worthy?
Chris Dixon, an entrepreneur who is now a partner at Andreessen Horowitz, pointed out in a blog post in 2010 that “the next big thing always starts out looking like a toy,” which is a kind of one-sentence paraphrase of disruption guru Clay Christensen’s theory from The Innovator’s Dilemma. Everything from Japanese cars to cheap disk drives started out looking like something no one in their right mind would take seriously — which is why it was so hard for their competitors to see them coming even when it should have been obvious.
Even the phone looked like a toy
Altman pulled his list of toy-turned-big-deal examples from the fairly recent past, presumably because he knew they would resonate with more people (and perhaps because he is under 30). But there are plenty of others, including the telephone — which many believed was an irritating plaything with little or no business application, a view the telegraph industry was happy to promote — and the television, both of which were seen primarily as entertainment devices rather than things that would ultimately transform the world. As Dixon noted:
“Disruptive technologies are dismissed as toys because when they are first launched they ‘undershoot’ user needs. The first telephone could only carry voices a mile or two. The leading telco of the time, Western Union, passed on acquiring the phone because they didn’t see how it could possibly be useful to businesses and railroads – their primary customers. What they failed to anticipate was how rapidly telephone technology and infrastructure would improve.”
Is Yo going to be listed in that kind of pantheon of global success stories? I’m going to go out on a limb and say probably not. But most people thought Mark Zuckerberg’s idea of a site where university students could post photos and personal details about themselves was a waste of time too, and Facebook recently passed IBM in market capitalization with a value of $190 billion and more than a billion users worldwide. Not bad for a toy.
Sometimes I try to remember what it was like to be bored — not the boredom of a less-than-thrilling job assignment or a forced conversation with someone dull, but the mind-numbing, interminable boredom I remember from before the web. The hours spent in a car or bus with nothing to do, standing in line at the bank, sleep-walking through a university class, or killing time waiting for a friend. Strange as it may sound, these kinds of moments seem almost exotic to me now.
I was talking to a friend recently who doesn’t have a smartphone, and they asked me what was so great about it. That’s easy, I said — you’ll never be bored again. And it’s true, of course. As smartphone users, we have an almost infinite array of time-wasting apps to help us fill those moments: we can read Twitter, look at Instagram or Facebook, play 2048 or Candy Crush, or do dozens of other things.
In effect, boredom has been more or less eradicated, like smallpox or scurvy. If I’m standing in line, waiting for a friend, or just not particularly interested in the person I’m sitting with or the TV show I’m watching, I can flick open one of a hundred different apps and be transported somewhere else. Every spare moment can be filled with activity, from the time I open my eyes in the morning until I close them at night.
“Neither humanities nor science offers courses in boredom. At best, they may acquaint you with the sensation by incurring it. But what is a casual contact to an incurable malaise? The worst monotonous drone coming from a lectern or the eye-splitting textbook in turgid English is nothing in comparison to the psychological Sahara that starts right in your bedroom and spurns the horizon.” — Joseph Brodsky, 1995
Finding value in doing nothing
Of course, this is a hugely positive thing in many ways. Who wants to be bored? It feels so wasteful. Much better to feel as though we’re accomplishing something, even if it’s just pushing a virtual rock up a metaphorical hill in some video game. But now and then I feel like I am missing something — namely, the opportunity to let my thoughts wander, with no particular goal in mind. Artists in particular often talk about the benefits of “lateral thinking,” the kind that only comes when we are busy thinking about something else. And when I do get the chance to spend some time without a phone, I’m reminded of how liberating it can be to just daydream.
I’ve written before about struggling to deal with an overload of notifications and alerts on my phone, and how I solved it in part by switching to Android from the iPhone, which at the time had relatively poor notification management. That helped me get the notification problem under control, but it didn’t help with an even larger problem: namely, how to stop picking up my phone even when there isn’t a notification. That turns out to be a lot harder to do.
But more and more, I’m starting to think that those tiny empty moments I fill by checking Twitter or browsing Instagram are a lot more important than they might appear at first. Even if spending that time staring off into space makes it feel like I’m not accomplishing anything worthwhile, I think I probably am — and there’s research that suggests I’m right: boredom has a lot of positive qualities.
Losing the fear of missing out
Don’t get me wrong, I’m not agreeing with sociologist Sherry Turkle, who believes that technology is making us gadget-addled hermits with no social skills. I don’t want to suddenly get rid of all my devices, or do what Verge writer Paul Miller did and go without the internet for a year. I don’t have any grand ambitions — I just want to try and find a better balance between being on my phone all the time and having some time to think, or maybe even interact with others face-to-face.
“Lately I’ve started worrying that I’m not getting enough boredom in my life. If I’m watching TV, I can fast-forward through commercials. If I’m standing in line at the store, I can check email or play “Angry Birds.” When I run on the treadmill, I listen to my iPod while reading the closed captions on the TV. I’ve eliminated boredom from my life.” — cartoonist Scott Adams
The biggest hurdle is that there’s just so much interesting content out there — and I don’t mean BuzzFeed cat GIFs or Reddit threads. I’m talking about the links that get shared by the thousands of people I follow on Twitter, or the conversations and debates that are occurring around topics I’m interested in. I have no problem putting away 2048 or Reddit, but Twitter is more difficult because I feel like I’m missing out on something potentially fascinating. Why would I choose to be bored instead of reading about something that interests me?
What I’m trying to do a bit more is to remind myself that this isn’t actually the choice that confronts me when I think about checking my phone for the fourteenth time. The choice is between spending a few moments reading through a stream or checking out someone’s photos vs. using those moments to recharge my brain and maybe even stimulate the creative process a bit. Even if it somehow seems less fulfilling, in the long run I think it is probably a better choice.
After a couple of quarters that had analysts and investors concerned about its growth potential, Twitter managed to turn in a fairly strong performance in the most recent quarter — with more than 120-percent growth in revenue. Some power Twitter users, however, were more interested in something Twitter CEO Dick Costolo mentioned during the conference call: namely, the idea that the company might introduce an algorithmically-filtered feed like Facebook’s.
What Costolo actually said was he “isn’t ruling out” an algorithmic approach — and he also said the company is considering ways of “surfacing the kinds of great conversations that pop up in peoples’ timelines.” That doesn’t mean Twitter is suddenly going to convert its stream into a Facebook-style curated feed, but it was enough to make some users nervous, especially those who have come to dislike the Facebook experience because the social network keeps tweaking its algorithm.
Facebook has managed its newsfeed this way from the beginning, but it seems to have gotten more irritating for some, especially since the changes seem to be designed to appeal to advertisers rather than actual users — and because some say they have lost much of the reach they used to have (a problem Facebook is happy to solve if you pay to promote your content). Is that the kind of future that Twitter has in mind? And will it ruin the experience?
When I asked the question (on Twitter, naturally) after the company’s earnings report, a number of users said they would either quit the service altogether or dramatically scale back their usage if Twitter implemented something like the Facebook newsfeed, with a black-box algorithm determining what they saw or didn’t see. Several said that a big part of the appeal of Twitter was that it showed them everything their friends and social connections posted — even if the volume of those posts was sometimes overwhelming.
Just because it implements some kind of algorithmic curation or filtering doesn’t mean Twitter is going to turn into Facebook overnight, of course. The company might confine that kind of approach to an updated or improved version of the “Discover” tab — which is designed to appeal to new users and increase engagement, but so far doesn’t seem to have had much impact. Or it might use algorithms in order to create beginner streams for new users, as a way of helping with “on-boarding,” while allowing existing users to remain unaffected.
The impetus for using algorithms is fairly obvious: while its user-growth and engagement numbers may have assuaged investors’ concerns for the most recent quarter, Twitter is still behind some of the targets that Costolo has reportedly set in the past — including the one where he said the network would have 400 million users by the end of last year (it has about 250 million now). And if it is ever going to reach those levels, it’s going to have to make the service a lot more intuitive and a lot less work. Algorithms are one way of doing that, because they do the heavy lifting, instead of forcing users to spend time pruning their streams.
As Facebook has shown, however, the algorithm is a double-edged sword: for every new user it appeals to, it is going to irritate — and potentially drive away — some indeterminate number of existing users. And as Twitter itself has acknowledged, those users are the ones who create and post the majority of the content that spurs engagement by the rest of the network. Pissing them off could leave Twitter with nothing but a resting place next to MySpace in the social networking Hall of Shame.
The private vs. public divide used to be relatively straightforward: things remained private unless you disclosed them to someone, either deliberately or accidentally — but even in the case of accidental disclosure, there was no way for your information to reach the entire planet unless it appeared on the evening news. Now, a tweet or a photo or a status update could suddenly appear on a news website, or be retweeted thousands of times, or be used as evidence of some pernicious social phenomenon you may never even have heard of before.
But you posted those things, so they must be public, right? And because they are public, any use of them is permitted, right?
A universe filled with nuance and slippery ethical slopes is contained in those questions. And while many of us have gotten used to the back-and-forth with Facebook (s fb) over what is private and what is public — a line that has remained fluid throughout the company’s history, and still continues to shift — it’s more than just Facebook. If this were a war, the entire web would be the battleground.
In a recent post on Medium, blogging veteran and ThinkUp co-founder Anil Dash did a good job of describing the shifting terrain around what’s private and what’s public. Although we may be convinced that we appreciate the difference between those two, and that there is some kind of hard dividing line, Dash notes: “In reality, the vast majority of what we do exists on a continuum that is neither clearly public nor strictly private.” And that makes it much harder to decide how to treat it:
“Ultimately, we rely on a set of unspoken social agreements to make it possible to live in public and semi-public spaces. If we vent about our bosses to a friend at a coffee shop, we’re trusting that no one will run in with a camera crew and put that conversation on national TV.”
Twitter: Private, public, or in between?
We’ve seen ample evidence of this tension in recent months with a number of Twitter-related debates. In March, a Twitter discussion got started among women who had suffered sexual abuse, and they used the hashtag #yesallwomen to share their stories. A number of sites, including BuzzFeed, collected these tweets and embedded them in a news story about the topic, something that has become fairly standard behavior — but some of those who participated in the discussion were outraged that this was done without their permission.
“Debate tonight about what qualifies as being a public figure today in the eyes of the media. Simple: If you use social *media* you opted in.”
Should the authors of those articles have had to get permission from the users whose tweets they embedded? After all, Twitter is a public network by default — as Gawker writer Hamilton Nolan pointed out — and so those messages were designed to be publicly available. From a legal standpoint, posting things to networks such as Twitter and Facebook without using the various privacy features built into those networks makes them public. But some of the participants in the #yesallwomen discussion seemed to see their tweets as being more like a conversation with friends in a public place, not something designed to be broadcast.
“The things you write on Twitter are public. They are published on the world wide web. They can be read almost instantly by anyone with an internet connection on the planet Earth. This is not a bug in Twitter; it is a feature. Twitter is a thing that allows you to publish things, quickly, to the public.” — Hamilton Nolan
In another case, high-school students who posted racist comments on Twitter after President Barack Obama was re-elected in 2012 were singled out and identified by Gawker in a news article that included their tweets, as well as their full names and what schools they attended. Was that an appropriate response to messages that were clearly designed for a small group of friends, as unpleasant as they might be, or was it a form of bullying? What about the response to a single tweet from Justine Sacco that many took to be racist?
Blurring the line between personal and public
As sociologist danah boyd has pointed out during the endless debates about Facebook and privacy, we all have different facets of ourselves that we present in different contexts online — a work identity, a personal identity we display to our friends and family, and so on. The problem is that so many apps and services like Twitter and Facebook encourage us to blur the lines between those different personas (and benefit financially from us doing so, as Dash points out). And so information and behavior that belongs in one sphere slides into another.
The response from Gawker and others to the #yesallwomen incident was to argue that the participants in that discussion simply don’t understand how Twitter works, or were being deliberately naive about how public their comments were — the same kind of response that users get when their embarrassing Facebook posts become more public than they intended. “If you don’t want people to see it, don’t put it on the internet” is the usual refrain. But as Dash points out, there is a whole spectrum of behavior that exists in the nether world between private and public:
“What if the public speech on Facebook and Twitter is more akin to a conversation happening between two people at a restaurant? Or two people speaking quietly at home, albeit near a window that happens to be open to the street? And if more than a billion people are active on various social networking applications each week, are we saying that there are now a billion public figures?”
The right to remain obscure
In some ways, this debate is similar to the one around search engines and the so-called “right to be forgotten,” a right that is in the process of being enshrined in legislation in the European Union. While advocates of free speech and freedom of information are upset that such legislation will allow certain kinds of data to be removed from view (as Google has now done with some news articles involving public figures), supporters of the law say ordinary individuals shouldn’t be forever tarred by comments or behavior that were intended to be ephemeral, but are now preserved for eternity for everyone to see.
In a piece they wrote for The Atlantic last year, Evan Selinger and Woodrow Hartzog argued that instead of privacy or a right to be forgotten, what we are really talking about is obscurity: so certain information may technically be public — gun-registry data, for example — but is usually difficult to find. Search engines like Google have removed the barriers to that kind of obscurity, and that’s great when the information is of significant public interest. But what about when it’s just high-level gossip or digital rubbernecking at the scene of a social accident? To what extent do we have a right to keep certain content obscure?
As Dash points out in his post, media companies and technology platforms like Facebook have a vested interest in keeping the definition of “public” as broad as possible, and our laws are woefully behind when it comes to protecting users. At the same time, however, some attempts to bridge that gap — including the right to be forgotten, and restrictions on free speech and freedom of information in places such as Britain and Germany — arguably go too far in the other direction.
In many ways, what we’re talking about are things that are difficult (perhaps even impossible) to enshrine in law properly, in the same way we don’t look for the law to codify whether we should be allowed to use our cellphones at the dinner table. Some kinds of behavior may benefit from being defined as illegal — posting revealing photos of people without their knowledge, for example, or audio/video recordings they haven’t agreed to — but the rest of it is mostly a quicksand of etiquette and judgment where laws won’t help, and can actually make things worse. We are going to have to figure out the boundaries of behavior ourselves.
We’ve been writing for a long time at Gigaom about the ways in which the web and social media have changed the practice of journalism, so it’s nice to see the New York Times recognizing some of that. In a recent piece, media writer David Carr notes that real-time social tools like Twitter (s twtr) and YouTube (s goog) have altered the way many of us experience events like the civil war in Ukraine or the violence in Gaza. He doesn’t really address whether this is positive or negative, but it’s easy to make the case that we are much better off now.
If Israeli rockets had hit Gaza or Ukrainian rebels had shot down a commercial airliner before the arrival of the social web, most of us would have been forced to rely on reports from traditional journalists working for a handful of mainstream media sources — some of whom would have been parachuted into the region with little to no advance warning, and in some cases with just a sketchy grasp of the context behind the latest incident — and the news would be filtered through the lens of a CNN anchor or NYT editor. But as Carr points out:
“In the current news ecosystem, we don’t have to wait for the stentorian anchor to arrive and set up shop. Even as some traditional media organizations have pulled back, new players like Vice and BuzzFeed have stepped in to sometimes remarkable effect. Citizen reports from the scene are quickly augmented by journalists. And those journalists on the ground begin writing about what they see, often via Twitter, before consulting with headquarters.”
More personal, and more chaotic
There are downsides to this approach, obviously: In some cases, journalists say things in the heat of the moment that draw negative attention from readers and viewers — or managers and owners of the media outlets they work for — and there are repercussions, as there were for NBC reporter Ayman Mohyeldin and CNN journalist Diana Magnay after they both made comments about the attacks in Gaza. Two years ago, the Jerusalem bureau chief for the New York Times was called on the carpet for remarks she made on Twitter and for a time was assigned a social-media editor to check her tweets before they were published.
Although Carr doesn’t get into it, the other downside that some have mentioned is that the news environment has become much more chaotic, now that everyone with a smartphone can upload photos and report on what is happening around them — including the terrorist groups and armies that are involved in the conflict that is being reported on, and the ultimate victims of their behavior. Hoaxes and misinformation fly just as quickly as the news does, and in some cases are harder to detect, and those mistakes can have real repercussions.
The democratization of news is good
At the same time, however, there are some fairly obvious benefits to the kind of reporting we get now, and I would argue that they outweigh the disadvantages. For one thing, as Carr notes, we get journalism that is much more personal — and while that personal aspect can cause trouble for reporters like Mohyeldin and Magnay when they stray over editorial lines, in the end we get something that is much more moving than mainstream news has typically been. As Carr says:
“It has made for a more visceral, more emotional approach to reporting. War correspondents arriving in a hot zone now provide an on-the-spot moral and physical inventory that seems different from times past. That emotional content, so noticeable when Anderson Cooper was reporting from the Gulf Coast during Hurricane Katrina in 2005, has now become routine, part of the real-time picture all over the web.”
The other major benefit of having so many sources of news is that the process of reporting has become much more democratized, and that has allowed a whole new ecosystem of journalism to evolve — one that includes British blogger Brown Moses, who has become the poster child for crowdsourced journalism about Syria, as well as Storyful’s Open Newsroom and efforts like Grasswire and Checkdesk (I collected some other resources in a recent post about fact-checking).
In the end, things have definitely become much more confusing — and not just for news consumers but for journalists as well — with the explosion of pro and amateur sources and the sheer speed with which reports flow by in our various social streams. But I would argue that the fact we no longer have to rely on a handful of mainstream outlets for our news and analysis is ultimately a good thing.
Media theorist Clay Shirky isn’t the only one telling newspaper companies and print-oriented journalists that they need to wake up and pay attention to the decline of their industry before they run out of time. Former Seattle Times editor David Boardman — who also happens to be president of the American Society of News Editors — wrote in a recent essay that the newspaper business spends too much of its time sugar-coating the reality of what’s happening.
Boardman described listening to a presentation that the president of the Newspaper Association of America gave at the World Newspaper Congress in Turin, Italy. In her speech, Caroline Little painted an uplifting picture of the state of affairs in her industry, a picture that Boardman called “a fiction where papers could invent a new future while holding on tightly to the past” — something similar to what Shirky called “newspaper nostalgia,” in a piece he wrote recently.
In his post, Boardman took each statement made by Little and presented the opposite viewpoint, or at least put each in a little more context: for example, the NAA president noted that total revenue for the U.S. newspaper industry was about $38 billion in 2013 — but what she didn’t mention is that this is about $12 billion, or roughly 24 percent, lower than it was just seven years ago:
“What she said: The printed newspaper continues to reach more than half of the U.S. adult population. What she didn’t say: But the percentage of Americans who routinely read a printed paper daily continues its dramatic decline, and is somewhere down around 25 percent. ‘Reaching’ in Little’s reference can mean those people read one issue in the past week; it doesn’t mean they are regular daily readers of the printed paper.”
Should newspapers stop printing?
In a separate post, Alan Mutter — also a longtime newspaper editor, who writes the blog Reflections of a Newsosaur — collected some of the depressing statistics about the decline of print, most of which were also apparently never mentioned by Little, including the fact that combined print and digital revenues have fallen by more than 55 percent in the past decade, and the industry’s share of the digital advertising market has been cut in half over the same period.
What’s Boardman’s solution? It’s not one that most newspapers will like: He suggests that most should consider giving up their weekday print editions altogether at some point over the next few years, and focus all of their efforts on a single print version on Saturday or Sunday, while pouring all of their resources into digital and mobile. Weekend papers account for a large proportion — in some cases a majority — of the advertising revenue that newspapers bring in, so giving up everything but the Saturday paper wouldn’t be as much of a loss, he argues.
In a recent piece at the Columbia Journalism Review about the New York Times, writer Ryan Chittum argued that the newspaper can’t afford to simply stop printing because the physical version brings in so much revenue. But could it stop printing everything but the Sunday paper? Chittum thinks it might be able to, and so does long-time online journalism watcher Steve Outing. Perhaps new digital-strategy head Arthur Gregg Sulzberger — a co-author of the paper’s much-publicized “innovation report” — is already crunching those numbers for a presentation to his father, the publisher, whose family controls the company’s stock.
Social networks and platforms like Facebook (s fb), Twitter (s twtr) and YouTube (s goog) have given everyone a megaphone they can use to share their views with the world, but what happens — or what should happen — when their views are violent, racist and/or offensive? This is a dilemma that is only growing more intense, especially as militant and terrorist groups in places like Iraq use these platforms to spread messages of hate, including graphic imagery and calls to violence against specific groups of people. How much free speech is too much?
That debate flared up again following an opinion piece that appeared in the Washington Post, written by Ronan Farrow, an MSNBC host and former State Department staffer. In it, Farrow called on social networks like Twitter and Facebook to “do more to stop terrorists from inciting violence,” and argued that if these platforms screen for things like child porn, they should do the same for material that “drives ethnic conflict,” such as calls for violence from Abu Bakr al-Baghdadi, the leader of the Jihadist group known as ISIS.
“Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography. Many, including YouTube, use a similar technique to prevent copyrighted material from hitting the web. Why not, in those overt cases of beheading videos and calls for blood, employ a similar system?”
Free speech vs. hate speech — who wins?
In his piece, Farrow acknowledges that there are free-speech issues involved in what he’s suggesting, but argues that “those grey areas don’t excuse a lack of enforcement against direct calls for murder.” And he draws a direct comparison — as others have — between what ISIS and other groups are doing and what happened in Rwanda in the mid-1990s, where the massacre of hundreds of thousands of Tutsis was driven in part by radio broadcasts calling for violence.
In fact, both Twitter and Facebook already do some of what Farrow wants them to do: for example, Twitter’s terms of use specifically forbid threats of violence, and the company has removed recent tweets from ISIS and blocked accounts in what appeared to be retaliation for the posting of beheading videos and other content (Twitter has a policy of not commenting on actions that it takes related to specific accounts, so we don’t know for sure why).
The hard part, however, is drawing a line between egregious threats of violence and political rhetoric, and/or picking sides in a specific conflict. As an unnamed executive at one of the social networks told Farrow: “One person’s terrorist is another person’s freedom fighter.”
In a response to Farrow’s piece, Jillian York — the director for international freedom of expression at the Electronic Frontier Foundation — argues that making an impassioned call for some kind of action by social networks is a lot easier than trying to sort out what specific content to remove. Maybe we could agree on beheading videos, but what about other types of rhetoric? And what about the journalistic value of having these groups posting information, which has become a crucial tool for fact-checking journalists like British blogger Brown Moses?
“It seemed pretty simple for Twitter to take down Al-Shabaab’s account following the Westgate Mall massacre, because there was consistent glorification of violence… but they’ve clearly had a harder time determining whether to take down some of ISIS’ accounts, because many of them simply don’t incite violence. Like them or not… their function seems to be reporting on their land grabs, which does have a certain utility for reporters and other actors.”
Twitter and the free-speech party
As the debate over Farrow’s piece expanded on Twitter, sociologist Zeynep Tufekci — an expert in the impact of social media on conflicts such as the Arab Spring revolutions in Egypt and the more recent demonstrations in Turkey — argued that even free-speech considerations have to be tempered by the potential for inciting actual violence against identifiable groups.
It’s easy to sympathize with this viewpoint, especially after seeing some of the terrible images coming out of Iraq. But at what point does protecting a specific group from theoretical acts of violence win out over the right to free speech? It’s not clear where to draw that line. When the militant Palestinian group Hamas made threats towards Israel during an attack on the Gaza Strip in 2012, should Twitter have blocked the account or removed the tweets? What about the tweets from the official account of the Israeli military that triggered those threats?
What makes this difficult for Twitter in particular is that the company has talked a lot about how it wants to be the “free-speech wing of the free-speech party,” and has fought for the rights of its users on a number of occasions, including an attempt to resist demands that it hand over information about French users who posted homophobic and anti-Semitic comments, and another case in which it tried to resist handing over information about supporters of WikiLeaks to the State Department.
Despite this, even Twitter has been caught between a rock and a hard place, with countries like Russia and Pakistan pressuring the company to remove accounts and use its “country withheld content” tool to block access to tweets that are deemed to be illegal — in some cases merely because they involve opinions that the authorities don’t want distributed. In other words, the company already engages in censorship, although it tries hard not to do so.
Who decides what content should disappear?
Facebook, meanwhile, routinely removes content and accounts for a variety of reasons, and has been criticized by many free-speech advocates and journalists — including Brown Moses — for making crucial evidence of chemical-weapon attacks in Syria vanish by deleting accounts, and for doing so without explanation. Google also removes content, such as the infamous “Innocence of Muslims” video, which sparked a similar debate about the risks of trying to hide inflammatory content.
What Farrow and others don’t address is the question of who should be left to decide which content gets deleted in order to banish violent imagery. Should we just leave it up to unnamed executives to remove whatever they wish, and to arrive at their own definitions of what is appropriate speech and what isn’t? Handing over such an important principle to the private sector — with virtually no transparency about its decision-making, nor any court of appeal — seems unwise, to put it mildly.
What if there were tools that we could use as individuals to remove or block certain types of content ourselves, the way Chrome extensions like HerpDerp do for YouTube comments? Would that make it better or worse? To be honest, I have no idea. What happens if we use these and other similar kinds of tools to forget a genocide? What I think is pretty clear is that handing over even more of that kind of decision making to faceless executives at Twitter and Facebook is not the right way to go, no matter how troubling that content might be.
It’s been a number of months since there were any new revelations based on the massive trove of top-secret NSA surveillance documents that former security contractor Edward Snowden took with him when he left the service, but the Washington Post came out with a big one on Saturday: according to files that Snowden provided to the newspaper, NSA agents recorded and retained the private information of tens of thousands of ordinary Americans — including online chats and emails — even though they were not the target of an official investigation.
According to the Post‘s story, nine out of 10 account holders who were found in a large cache of intercepted conversations were not the actual surveillance target sought by the NSA, but in effect were electronic bystanders caught in a net that the agency had cast in an attempt to catch someone else. Many were Americans, the newspaper said, and nearly half of the files contained names, email addresses and other details. Although many had been redacted or “minimized,” almost 900 files still contained unmasked email addresses.
“Many other files, described as useless by the analysts but nonetheless retained, have a startlingly intimate, even voyeuristic quality. They tell stories of love and heartbreak, illicit sexual liaisons, mental-health crises, political and religious conversions, financial anxieties and disappointed hopes. The daily lives of more than 10,000 account holders who were not targeted are catalogued and recorded nevertheless.”
As the paper explains, the NSA is only legally allowed to target foreign nationals located overseas unless it obtains a warrant from a special surveillance court — a warrant that must be based on a reasonable belief that the target has information about a foreign government or terrorist operations. The government has admitted that American citizens are often swept up in these dragnets, but the scale with which ordinary people are included was not known until now. The NSA also appears to keep this information even though it has little strategic value and compromises the privacy of the users whose data is kept on file.
Are you an American who writes emails in a language other than English? You are a foreigner to the NSA w/o rights. http://t.co/Xl9VpoAnKZ
The Post story describes how loosely NSA agents seem to treat the theoretical restriction on collecting information about American citizens: participants in email threads and chat conversations are considered foreign if they use a language other than English, or if they appear to be using an IP address that is located outside the U.S. And there is little to no attempt to minimize the number of unrelated individuals who have their information collected:
“If a target entered an online chat room, the NSA collected the words and identities of every person who posted there, regardless of subject, as well as every person who simply ‘lurked,’ reading passively what other people wrote. In other cases, the NSA designated as its target the Internet protocol, or IP, address of a computer server used by hundreds of people. The NSA treats all content intercepted incidentally from third parties as permissible to retain, store, search and distribute to its government customers.”
The Snowden documents come from a cache of retained information that was gathered under the Foreign Intelligence Surveillance Act — despite the fact that for more than a year, government officials have stated that FISA records were beyond the reach of the rogue NSA contractor, according to the Post. The paper said it reviewed about 160,000 intercepted e-mail and instant-message conversations, some of them hundreds of pages long, and 7,900 documents taken from more than 11,000 online accounts.
Post and thumbnail images courtesy of Flickr user Thomas Leuthard
The New York Times has been gradually shutting down some of its blogs over the past year or so, including its environmentally focused Green blog, and this week the newspaper company confirmed that it plans to shut down or absorb at least half of its existing blogs, including its highly regarded breaking-news blog, The Lede. As the Times describes it, the plan is not to get rid of blogging altogether but rather to absorb and even expand blogging-related skills and approaches within the paper as a whole. But will something important be lost in the process?
Assistant managing editor Ian Fisher told Poynter’s Andrew Beaujon that the newspaper is going to continue to provide what he called “bloggy content with a more conversational tone,” but that it will appear throughout the paper’s website, rather than in specific locations called blogs. While high-profile brands like Bits and DealBook will remain, other smaller blogs will be shut down or absorbed into the sections of the paper that fit their topic — although Fisher wouldn’t say which specific blogs were destined for the boneyard.
A blog is just an “artificial container”
As far as the reasoning behind the move is concerned, Fisher mentioned a number of things in his Poynter interview, including one technical reason: namely, the fact that the Times‘ blog software doesn’t work well with the paper’s redesigned article pages — and Times staffer Derek Willis suggested there were other technical benefits in a discussion on Twitter. But Fisher also said that many of the blogs didn’t get a lot of traffic, and that not having to fill a specific “container” with content would free up writers to spend their time doing other things:
“[Some blogs] got very, very little traffic, and they required an enormous amount of resources, because a blog is an animal that is always famished… [and the] quality of our items will go up now, now that readers don’t expect us to be filling the artificial container of a blog.”
As Willis pointed out during our Twitter conversation, blogs are — from a technical perspective at least — just one specific kind of publishing format, with posts that appear in reverse chronological order. But for me at least, this is a little like saying that a sonnet is just a specific way of ordering text, featuring iambic pentameter and an offset rhyming scheme. Obviously not every blog post is a poem, but there is something inherent in the practice of blogging (if it is done well) that makes it different from a story or news article.
Blogging pioneer Dave Winer once said that the essence of a blog is “the unedited voice of a person,” and I still subscribe to that view. Blogging has grown up to the point where even something like The Huffington Post is described by some as “a blog,” which effectively stretches the meaning of the term beyond all comprehension. But it’s more than just a reverse-chronological method of publishing, or the fact that you include embedded tweets or a Storify, or even that you link to other sites — although it includes all of those things.
Absorbing can also mean weakening
When it’s done properly, as Lede writer Robert Mackey often did, it’s a combination of original reporting, curation and aggregation, synthesis and analysis, and an individual voice or tone — and all of that done quickly, and in most cases briefly. As Brian Ries of Mashable argued during a discussion of the Times‘ decision, the problem with trying to absorb the blogging ethos into the paper as a whole is that not all of those skills are going to be present in every writer.
This reminds me of when newspapers started to absorb their web units into the larger editorial structure. In the early days, the web was a separate operation — in some cases even in a different building, as it was with the Washington Post. The best part about this arrangement was that it allowed those who worked online to develop their own practices and to some extent their own ethos. When those units were absorbed, some of that was watered down or even lost completely, as editors and writers more focused on print took precedence. That arguably slowed those papers’ progress toward a more digital-first future.
In the end, I think that while the motivation behind killing off blogs might be the correct one — that is, a desire to get away from the format as a specific destination and find a way to get everyone to experiment with blog-style writing and reporting, regardless of where they work — the risk is that the latter simply won’t happen. In other words, some of the momentum that having a blog gives to the skills I mentioned above will be lost, and along with it some of the innovation that blogging has brought to the Times.
A shockwave hit the media industry in May, when an internal “innovation report” prepared for New York Times executives leaked to BuzzFeed. The report makes for fascinating reading, in part because it is a snapshot of a massive media entity that is caught in the throes of wrenching change, unsure how to proceed. But while it contained many things of value, it glossed over one of the most important factors for the paper’s success — and that is whether the content itself, the journalism that the New York Times produces, needs to change.
This question came up recently in a post by Thomas Baekdal, an author and media analyst. In it, Baekdal made the point that the “quality journalism” the innovation report continually refers to — the bedrock, foundational value of the New York Times — is never questioned. In other words, it is assumed that the journalism itself is fine as is, and all that needs to happen is that the paper has to do a better job of marketing it and engaging with readers around it. But is that true? Baekdal says:
“This is something I hear from every single newspaper that I talk with. They are saying the same thing, which is that their journalistic work is top of the line and amazing. The problem is ‘only’ with the secondary thing of how it is presented to the reader. And we have been hearing this for the past five to ten years, and yet the problem still remains. There is a complete and total blind spot in the newspaper industry that part of the problem is also the journalism itself.”
Not just what kind of journalism, but how
Baekdal’s point isn’t that the New York Times produces bad or low-quality content, but just that the paper should be questioning how it reports and writes that content, and whether it meets the needs of the market — just as it is questioning whether its current business model and/or industrialized printing process meets the needs of the market. It’s not a trivial question, but it doesn’t really appear anywhere in the innovation report, at least not in any depth.
This argument got some support this week from an interesting participant: Martin Nisenholtz, the former head of digital operations for the Times — the man who not only started the paper’s website in 1996, but later drove the acquisition of About.com and other innovative efforts on the digital side. In a blog post, Nisenholtz defended Baekdal, and also provided a fascinating glimpse into what could have been an alternate future for the New York Times.
Nisenholtz, now a consultant and journalism professor, describes an interview that Henry Blodget gave to the creators of the Digital Riptide project (a group that included Nisenholtz). The former NYT executive said that one of the things he liked the most about Blodget’s interview was how optimistic he was about the future of journalism in the digital age — in large part because there is so much more of it than ever before, and much of it is of fairly high quality:
“We are awash in news from an almost infinite number of global sources, much of it of very high quality. For this reason, news providers can no longer force their readers to “eat spinach.” Instead, they need to work hard to entice readers with relevant and interesting content, structured for easy access. In a world of almost unlimited choice, the reader is king.”
The Times is no longer alone
As Nisenholtz suggests, that reality is the primary challenge the New York Times is facing: not just that it has to de-emphasize print and adapt to digital, or do a better job of engaging with readers around its content (although it very much has to do all of those things) but that it has to somehow grapple with the fact that it is no longer one of a privileged few — a tiny number of exalted media and journalism producers with a one-way pipe directly into the homes of readers, and therefore a large share of a kind of information oligopoly.
Now, the Times is just one player in a vast and differentiated media landscape — one that makes the previous era look like the Pleistocene Age. Not only does every traditional publisher now have access to the exact same market that the NYT does, but there are a host of new and more nimble players with the same access: dedicated news apps like Circa or Yahoo’s news digest, mobile readers like Flipboard and Zite, and digital-only publishers like BuzzFeed and more recent entrants such as Vox. Many of them do journalism in a completely different way. Nisenholtz’s view from 20 years ago is even more appropriate now:
“My feeling at that time (and today) was that ‘quality’ was – in large part – a function of the user experience, and that – particularly in the dial-up world of the mid-90s – Yahoo was doing that best for exactly the reasons that Baekdal outlines. Putting a newspaper on the web seemed very limiting.”
The competing product that is good enough
Many of those who work at the New York Times (and other legacy media organizations) no doubt console themselves by thinking that while their newer, digital-only competitors may be more technologically savvy, their product — i.e., their journalism — is inferior. And that may even be true in some cases. But as any student of disruption theory knows, the most dangerous competitor isn’t the one whose product is better than yours, it’s the one whose product is good enough.
Many readers — especially those who only want to get a brief update about what is happening in the world, or who want news that is tailored to them in some way, or news that has more of a point of view — will likely look to other outlets, even if the objective “quality” of the Times‘ journalism is arguably better. This is the point I think Baekdal is making when he says that newspapers like the Times take more of a supermarket approach to journalism than their competitors. The market’s needs have changed, and it’s not clear whether the Times can change quickly enough to meet them (although apps like NYT Now and features like The Upshot are interesting experiments, and the Times deserves credit for trying them).
In addition to his thoughts on the state of digital media, Nisenholtz also describes a fascinating moment 20 years ago that could have changed the face of online media: as he describes it, when his digital team asked for financial resources to start the website, he also asked for a small sum to finance a “skunk works” research lab to experiment with the web — but his request was ultimately denied. At one point, Nisenholtz says, one member of the team even suggested that the Times should buy Yahoo (he says “we would probably have screwed it up,” but I’m not sure he could have done a worse job than a series of Yahoo CEOs have).
Imagine what might have happened if the Times had started that lab when the web was young — what innovations could it have developed? What new directions could it have found for all that high-quality journalism? And now, the paper struggles to catch up to a market for digital news that may be permanently out of reach.
Post and thumbnail images courtesy of Getty Images / Mario Tama, as well as Rani Molla and Flickr user Abysim
Twitter (s twtr) hasn’t been having a very good time of it lately: turmoil in the company’s executive ranks — including the recent departure of the chief operating officer and the head of Twitter’s media unit — has raised concerns about deeper issues and the service’s lackluster growth. But the real-time information network has other fires to put out as well, including a fear that the company’s global and financial ambitions may be stifling its previous commitment to free speech.
Twitter recently suspended the account belonging to the Islamic State in Iraq and Syria (ISIS) after the group — which claims to represent radical Sunni militants — posted photographs of its activities, including what appeared to be a mass execution in Iraq. The service has also suspended other accounts related to the group for what seem to be similar reasons, including one that live-tweeted the group’s advance into the city of Mosul.
So far, the company hasn’t commented on why it has taken these steps, but the violent imagery contained in them could well be part of the reason — that and specific threats of violence, which are a breach of Twitter’s terms of use. Others have suggested that the company might also be concerned about a U.S. law that forbids any U.S. person or entity from providing “material support or resources to” an organization that appears on the official list of terrorist groups.
It’s not as though the action against ISIS comes in a vacuum either: in recent months, Twitter has removed or “geo-censored” tweets in Turkey, Ukraine and Russia at the request of governments in those countries. Twitter obviously has to deal with the law in the countries in which it does business — but every time it takes such a step, it engages in a little more censorship, and each time it loses a little bit of the “free-speech wing of the free-speech party” goodwill it built up during the Arab Spring.
(Twitter does sometimes restore the content it blocks: on Tuesday, the service restored access to tweets and accounts in Pakistan that it blocked at the request of the government there, saying: “We have reexamined the requests and, in the absence of additional clarifying information from Pakistani authorities, have determined that restoration of the previously withheld content is warranted”).
Who decides which accounts to censor?
Part of Twitter’s problem is that it doesn’t want to be seen as a tool for terrorist groups, and yet its decision to police this kind of behavior forces it to make choices about whose speech is appropriate and whose isn’t. So the al-Shabaab account has to go, but the Taliban can continue to have an account — and Hamas (which is categorized as a terrorist organization by many groups and governments) was able to post what many saw as a specific threat of violence directed towards Israel during the attacks on the Gaza Strip last year, and Twitter didn’t appear to mind.
But the larger issue is that whether or not accounts like ISIS’s are posting troubling or disturbing — or even politically sensitive — images and other information, there’s arguably a public interest in having them continue to do so. As self-trained British journalist and weapons expert Brown Moses has pointed out a number of times, images and videos posted by such militant or even terrorist groups provide an important physical record of what is happening in these countries, and also allow journalists like Moses to verify events. Removing them, as Facebook has done with pages related to Syrian chemical-weapon attacks, makes it harder to do that.
Anthropologist Sarah Kendzior noted in a piece she wrote for Al Jazeera last year — about a similar move to suspend an account belonging to the Somali militant group al-Shabaab — that one of the other frustrating things about Twitter’s moves in these kinds of cases is that the company provides very little transparency about what it is doing or why. For the most part, the only response is a standard disclaimer about how Twitter doesn’t comment on specific accounts or users.
Twitter may be more focused on building up its user base and satisfying the desires of the financial community or the investors in its stock, but that doesn’t mean it can ignore the other elements of its business — and that includes its alleged commitment to maintaining an environment for free speech.
Fans of Silicon Valley’s version of “Game of Thrones” got a front-row seat to a shake-up in Twitter’s executive suite this week, in which the company’s chief operating officer Ali Rowghani was ousted and Chloe Sladden — head of the media unit that has been a big driver of Twitter’s success with TV networks — also left. Somewhere between the backroom intrigue and the cheerful public-facing tweets of support for those departed executives is the source of Twitter’s real challenge: Namely, what does the company want Twitter to be?
But we already know what Twitter is, you protest! It’s a lightweight, real-time information network or platform that allows users anywhere to post things of interest and reach a potential audience of millions. Within that description, however, lies a multitude of experiences — a hall of mirrors in which my version of Twitter is nothing like your version, and nothing like that of the person sitting next to you on the train or the airplane, or at the basketball game.
Is Twitter for connecting dissidents in Ukraine or Turkey with their supporters in other countries, and for speaking truth to power? Yes. Is it for people who want to live-tweet their dissatisfaction with the Oscars or House of Cards or Game of Thrones or the World Cup? Yes. Is it for celebrities who want to reach out to their fans to correct some horrible rumor? Yes. And it is many other things in between.
Who is Twitter intended to serve?
Even those descriptions fail to capture the variations of Twitter usage: some users — in fact, close to a majority of users — never tweet at all, or have tweeted only once. For them, it is a consumption mechanism, or maybe just another source of noise. A smaller group of users (many of them in the media or marketing field) create the vast majority of the content on Twitter, and use tools like Tweetdeck to manage the streams, and complain bitterly (as I have) about the lack of filters and features to help them tame the ocean of information.
Which of these markets is the one that Twitter needs to focus on or amplify? It’s not clear that anyone at Twitter even knows the answer to that question — and I can’t blame them, because it’s a difficult one. As freelance tech analyst Ben Thompson noted in a recent post at his blog Stratechery, a big part of Twitter’s problem is that it was too successful too quickly, before it even realized what it was:
“The initial concept was so good, and so perfectly fit such a large market, that they never needed to go through the process of achieving product market fit… the problem, though, was that by skipping over the wrenching process of finding a market, Twitter still has no idea what their market actually is, and how they might expand it. Twitter is the company equivalent of a lottery winner who never actually learns how to make money.”
According to a number of reports, one of the reasons Ali Rowghani was ejected (and won’t be replaced) is that CEO Dick Costolo wanted to bring control of the product under his purview, rather than the COO’s. Twitter also recently hired a new director of product, former Google Maps executive Daniel Graf, presumably to try and get some traction with users and improve the lackluster growth numbers that investors seem concerned about. Last year, Costolo projected Twitter would have 400 million users by the end of 2013, and it has about 250 million.
A revolving door of product chiefs
As Thompson and others have pointed out, one of the most crucial factors for a tech or consumer-facing company is product-market fit. Twitter has spent years now trying to get that right, and in some ways it seems to be farther from its goal than it has ever been. Co-founder and former CEO Evan Williams tried to shape the product and was ousted, then co-founder Jack Dorsey was supposed to help, then came Michael Sippey. Along the way there have been aborted features like the “Dick bar” and multiple redesigns that are supposed to appeal to new users but appear to be simply irritating the loyal and not attracting anyone.
And while Twitter’s numbers fail to impress, newer services that connect people quickly and easily and focus on short messaging — from WhatsApp and Instagram to Snapchat and Whisper — are rocketing skyward growth-wise. This is not lost on Costolo, one source told Business Insider: “When you talk to Dick about messaging, he’s like, ‘Sigh, that should have been us.’”
The media team that Chloe Sladden built up was supposed to be the savior of Twitter, because it brought in large media companies as partners for second-screen type deals like the Olympics with NBC or the Oscars. And reaching out to celebrities to get them to tweet was designed to appeal to users who just want to follow a few high-profile accounts and see what they are doing. But many of the things that were done in the name of both of those efforts — large images, auto-play videos, and so on — have made the service less appealing for others.
Stranded between many worlds
So at this point, Twitter is caught between two (or more) worlds: The catering to media entities and celebs doesn’t seem to have produced enough traction compared to other players like Facebook to make it worthwhile, and there hasn’t been enough of a focus on tools or design features for hard-core users to keep them loyal. In some ways, the company is failing to serve any of its theoretical markets very well — and that includes advertisers, at least until acquisitions like MoPub start to show that they can help solve that particular problem.
As a longtime fan of Saturday Night Live, I can’t help but think of an ancient skit in which a husband and wife are arguing over whether a new product is a floor wax or a dessert topping. “It’s both!” the cheerful salesman (played by Chevy Chase) exclaims. The joke, of course, is that if it’s a good floor wax, it’s probably not going to be a very good dessert topping, and vice versa.
In the same sense, the things that make Twitter useful to advertisers and large media companies and celebrities aren’t necessarily the things that are going to appeal to Turkish dissidents or free-speech advocates or even just fans of the kind of quiet link-sharing that Twitter used to be known for, rather than the stream of frenzied hashtag and multiple-photo blasting that it has become.
Increasing the pressure is the fact that Twitter is a public company, and it has to show the kinds of growth in both users and revenue that can justify its vast market value — something it has so far failed to do — and the public markets are not known for their patience. Not only that, but as previous social-media superstars like MySpace have shown us, the road to short-term market acceptance can also be the road to long-term irrelevance. Best of luck, Dick.
The term “citizen journalism” gets thrown around a lot, used to refer to everything from people tweeting in crisis zones to high-school students covering city-council meetings. But for me at least, one of the people who best epitomizes that term is the blogger Eliot Higgins, better known by his nom de plume Brown Moses — a man who took an aptitude for painstaking research and used it to turn himself into one of the leading sources of information about the conflict in Syria.
I’ve written about Higgins before, and described his somewhat miraculous transformation over the past couple of years from an unemployed accountant to a pioneering war blogger — one whose research is not just relied on by aid groups and government agencies dealing with Syria, but also praised by established journalists like New York Times war reporter CJ Chivers. But I was reminded again of how amazing his story is when I interviewed him on a panel at the International Journalism Festival in Perugia, Italy last week.
A case study in citizen journalism
Before we started the interview, Eliot — a fairly unassuming-looking man of 35 who lives in Leicester, England — described how he started blogging about Libya and Syria when violent attacks against innocent citizens flared up in both countries. And as information about those attacks, including the use of banned chemical weapons and other devices, swept through the blogosphere and through social media, Higgins decided to focus on proving or disproving these reports. So he began to accumulate as much physical data as he could about the attacks.
Some people — even trained journalists — might have looked at a few newsgroups or Facebook pages or YouTube videos, but Higgins went much further: at one point he was watching and cataloging information from as many as 150 YouTube videos every night, posted by eyewitnesses to attacks as well as by militant groups themselves. His presentation at the journalism conference showed how he isolated landmarks and compared them to Google Earth imagery (something Andy Carvin also did during the Arab Spring demonstrations and their aftermath) and also how he verified weaponry based on serial numbers and other markings, working with a rapidly expanding group of fellow investigators and bloggers.
Over the course of a year or so, Eliot was able to prove not only that certain weapons were being used — including chemical weapons and what are called “barrel bombs” — but he also used his mapping and calculation skills to show that in some cases rebel groups were in control of much more sensitive areas than had been reported either by government agencies or the mainstream press. In other words, he didn’t just prove or disprove facts or information that were already in the public domain, he broke news about the conflict. And all from the couch in his flat.
Higgins told the audience in Perugia that he is working on setting up a company or foundation that he hopes to launch soon, which will specialize in the kind of open-source research he has been doing — much of which has been recently done in partnership with Storyful, a user-generated content verification service, and its Google Plus-based “open newsroom.” He has also been working with a number of media outlets and journalistic entities to help reporters and editors become better at the kind of skills he uses in his research.
All open source and publicly available
For me at least, one of the biggest strengths of what Higgins does is that it is all effectively open source — he publishes or makes available all of the videos, facts and assumptions that his conclusions are based on, so that anyone can check them. That stands in contrast to some traditional media organizations, which rely on anonymous government or military sources in the region and often don’t provide much objective evidence that would allow others to verify their conclusions.
But more than anything, Eliot is living proof not only of the idea that the tools of journalism are now available to anyone, but that the skills and functions that used to be included in that term are effectively being disaggregated or unbundled. Just as the eyewitness reporting part of a journalist’s job can be done by anyone, the fact-checking or research function that backs up this reporting can be quite easily done by someone who is smart, methodical and motivated like Eliot Higgins — or like the staff at Storyful, or Andy Carvin (who is now at First Look Media).
In other words, the barriers to entry have effectively been demolished. And just as we have new entities like Vox or 538 aimed at explaining the news, we now have people like Higgins creating new verification engines for proving or disproving the facts behind some of the news. The media ecosystem is growing and adapting.
This doesn’t mean that traditional reporting is no longer valuable, obviously, or that existing media entities with their foreign-reporting staff should be replaced by unemployed accountants working from their flats. What it means is that the practice of journalism is being expanded and broadened — and in some cases that is creating valuable new ways of doing the same things we have always done, but cheaper and more quickly. In my opinion at least, traditional media outlets and journalists shouldn’t see that as a threat, but rather as an opportunity.
I should probably mention up front that this is going to sound like one of those “things were better in my day, young fella!” kind of discussions that old people like myself are fond of having, so if that isn’t your cup of tea, feel free to move on. The subject at hand is what us geezers used to call the “blogosphere” — which is now just known as the internet, or online media, or whatever you want to call it. On the one hand, it’s good that blogging has more or less become mainstream, but part of me still misses what the old blogosphere had to offer.
I’ve been thinking about this for a while, but especially at those times when Dave Winer, one of the original fathers of blogging, writes about the necessity of having your own home on the social web — instead of a parcel of land given to you by one of the big silos — or when someone like blogging veteran Anil Dash writes a post like “The Web We Lost,” which I highly recommend. But it was a post from another long-time blogger, Dan Gillmor, that got me thinking about it this time.
Dan wrote about how some independent developers are working on tools that allow anyone to cross-post from their own blog to another site — such as Slate, where his post also appeared — and to pull comments from Twitter and other networks back to their site and display them along with local comments. These kinds of tools and their support for the “IndieWeb” are important, Dan argues, because:
“We’re in danger of losing what’s made the Internet the most important medium in history – a decentralized platform where the people at the edges of the networks (that would be you and me) don’t need permission to communicate, create and innovate… when we use centralized services like social media sites, however helpful and convenient they may be, we are handing over ultimate control to third parties that profit from our work.”
Blogging grew up — and changed
It isn’t until I see a post like Dan’s that I remember just how much has changed. When I started writing online in the early 2000s, individual blogs were the norm — blogs by people like Justin Hall and Doc Searls and Meg Hourihan of Blogger, and people like my friend and Gigaom founder Om Malik and TechCrunch founder Mike Arrington. At the time, Gigaom was just Om’s thoughts about broadband, and TechCrunch was mostly about Mike meeting (and in some cases offering a couch to) struggling entrepreneurs at his house in Atherton.
Part of what was so great about those early years of blogging was how chaotic it was — a flurry of posts linking to other bloggers (remember linking?), comment flame-wars, and endless discussion about the value of blog widgets like MyBlogLog or your Technorati ranking, or how to set up your RSS feed. Everyone was tinkering with their WordPress or Typepad to embed some new thing or try out a new theme, and there was a natural (if occasionally tense) camaraderie about it.
So what changed? Blogging grew up, for one thing — Om turned his blog into a business, and quite a successful one at that, and Arrington did the same and sold it to AOL. VentureBeat and Mashable and Read/Write and all the others did something similar, and gradually the line between blogging and regular media started to blur, although there are still flare-ups of the old “bloggers vs. journalists” dynamic from time to time. Meanwhile, plenty of individual bloggers got sucked into Twitter or Facebook and stopped blogging altogether.
Obviously, it’s good that more people have social tools with which to express themselves without having to set up their own blog and learn HTML, and there are still independent voices blogging on Medium and other sites. There’s also no question that the social element of Twitter and Facebook is powerful, and getting even more so. But I think we’ve given over much of the conversation to proprietary platforms that remove content at will, and control the data underlying the content we provide — and that is very much a Faustian bargain.
The unedited voice of a single author
Before I start sounding like a World War II veteran who has had a few too many, the other thing that I liked about the blogosphere was just how personal it was. Yes, that often meant someone was up in arms or foaming at the mouth about something — often topics that perhaps didn’t justify the level of outrage being displayed (yes, I’m looking at you, Mike) — but there was still that quintessential element of blogging as defined by Winer: namely, the unedited voice of a person, for better or worse.
That point came back to me when I was speaking with Ben Thompson, a tech analyst who recently launched his own membership-funded blog called Stratechery — written and edited and built solely by him, a kind of throwback to early bloggers like John Gruber of Daring Fireball and Jason Kottke, or Andy Baio of Waxpancake. Ben talked about how “there’s something really powerful about single-author sites that you don’t get anywhere else.”
This is also what appeals to me most about the approach that I think First Look Media is trying to take with its “magazines,” each powered by strong voices with expertise and opinions. But will they be diluted in the same way that Ben argues Nate Silver’s voice has been at the new FiveThirtyEight? Will Glenn Greenwald be as effective or compelling when he is managing a team of other writers? I don’t know. But that’s what I feel like we have lost from the old blogosphere days — that personal connection between a blogger and their readers.
I think (as I argued in a post yesterday) that this kind of connection is the most powerful thing, and potentially also the most valuable thing that digital media provides — I think it’s why we gravitate towards people like Greenwald, or Ezra Klein, or dozens of other brand names, and it’s why using social tools to connect with a community of readers is so important.
We’ve definitely gained a lot as blogs and other forms of digital media have become more commonplace: there are a lot more voices, and that’s good — and they are being listened to by more people. I don’t want to downplay that fact at all. But it feels as though we have lost the personal element, as everyone tries to build businesses, and we’ve allowed proprietary platforms to take over a huge amount of our interaction. So forgive me if I get a little wistful.
If the launches of various new-media entities over the past year — from Beacon’s crowdfunding efforts and Syria Deeply’s topic-focused site to Ezra Klein’s Vox project and Jessica Lessin’s The Information — have taught us anything, it’s that there is no end of experimentation going on when it comes to business models. But can a not very well-known blogger with no team behind them turn their writing into a successful freemium business? Technology analyst Ben Thompson is determined to try: he launched a new membership-based model on his blog Stratechery this week, and I talked with him about what he is trying to do and why.
Thompson is a former business development and marketing manager with Automattic, the company behind the WordPress blog platform, and has also worked for Microsoft in a similar capacity. Over the past year, he has developed a following for his long and thoughtful posts about technology companies such as Box and Apple, and the strategic thinking (or lack of it) behind their businesses — and it’s that following that he is now trying to monetize.
Membership instead of just donations
Instead of a simple donation-style paywall, similar to what Andrew Sullivan has done with his site The Daily Dish (which has raised close to $1 million over the past year), Thompson has a series of membership tiers that are designed to offer different levels of experience and content, on top of the daily and weekly articles he writes for the site (which remain free). The tier that is $3 a month or $30 a year includes the ability to comment, a full RSS feed and a T-shirt, while $10 a month gives readers all of that plus a poster and access to a daily email of article links.
The ultimate tier of membership, which is $30 a month or $300 a year, gives readers all the things they get on the other levels, but also adds a private messaging function through an app called Glassboard, as well as email access to Thompson and “virtual and in-person meetups” — and a book of the drawings that he does for some of his posts. Thompson says he thinks one of the reasons he will succeed where others haven’t is that he has a better business model:
“Most of the ones that writers have set up have been terrible — they’re just leaky paywalls, and so they wind up being basically just donation-based. The thing I like about Andrew’s model is the focus on the individual… I think that’s right. But the business model basically devolves into a donation model.”
Reward tiers instead of just a paywall
By giving readers a series of rewards targeted to specific use cases — whether they are content-based or more community or interaction-based — Thompson said he hopes to get around some of the problems of paywalls. “The thing that bothers me about paywalls is that they punish your best readers, your biggest fans. I think freemium is a much better way to think about it…. the vast majority of people can consume it and never pay, but for those who really like what I have to say, they can pay and they get access to more.”
Thompson said he is also a big believer in the single-voice blog, and he is concerned that some of the newer entrants in the new-media world — such as Nate Silver’s FiveThirtyEight site — have lost sight of what made them successful. Whereas every post and link that Silver used to publish had his voice and carried a certain brand expectation, Thompson said that identity is no longer as powerful because the site has broadened out into so many different topics.
“You see all these sites coming out that are basically just recreating the old newspaper or magazine model. It used to be when I saw a 538 link I would click on every time, because I knew what to expect — but that’s been diluted now. There’s something really powerful about single-author sites that you don’t get anywhere else.”
Less than a thousand true fans
Thompson, who said he has been thinking about this project for years, said that much of his inspiration for Stratechery came from John Gruber’s Daring Fireball site, which is run more or less single-handedly by Gruber, and has become extremely successful with only a relatively small amount of advertising and sponsored content (Thompson points out that Gruber was one of the unsung pioneers of sponsored content in new media with his sponsored RSS feeds, which he introduced a number of years ago).
While Gruber has a big enough following that he can survive solely on advertising and doesn’t need to offer memberships, Thompson said he is trying to diversify his new venture across a number of different monetization approaches: memberships, sponsored content (each post has a sponsor mention at the bottom), a forthcoming podcast that will carry advertising, and speaking engagements and possibly other personal events.
And while Kevin Kelly has written about the concept of “A thousand true fans” being all an independent artist needs to survive, Thompson said that based on his calculations about the combination of advertising — he says he is currently getting about 40,000 unique visitors a week — and memberships, he needs “significantly less” than a thousand subscribers in order to consider his site a success.
Other sites that have taken a membership approach include Techdirt, which started as the personal blog of founder Mike Masnick and has become a business — with much of the value derived from the commenting community on the blog, which businesses can tap into for market intelligence. Techdirt’s membership layer includes things like early access to posts and the ability to take part in special forum discussions, as well as personal time with Masnick.