Crowd-powered journalism becomes crucial when traditional media is unwilling or unable

Amid all the trolling and celebrity hoo-ha that takes place on Twitter (s twtr) and other social-media platforms, occasionally there are events that remind us just how transformative a real-time, crowdsourced information platform can be, and the violent response by local police to civil protests in Ferguson, Missouri on Wednesday is a great example. Just as the world was able to see the impact of riots in Tahrir Square in Egypt during the Arab Spring, or military action against civilians in Ukraine, so Twitter provided a gripping window into the events in Ferguson as they were occurring, like a citizen-powered version of CNN.

The unrest began after police shot and killed an unarmed black man, 18-year-old Michael Brown, in the middle of the afternoon, after what some reported was a scuffle of some kind. Mourners gathered, and so did those protesting what they saw as police racism, and there was apparently some vandalism. The response from the authorities was to send in armored personnel carriers and heavily armed riot squads, who fired tear gas and rubber bullets into the crowds.

Just as it did in Egypt and Ukraine, the stream of updates from Ferguson — both from eyewitnesses and other non-journalists and from professional reporters for various outlets — turned into a feed of breaking news unlike anything that non-Twitter users were getting from the major news networks and cable channels. Most of the latter continued with their regular programming, just as media outlets in Turkey and Ukraine avoided mentioning the growing demonstrations in their cities. In a very real sense, citizen-powered journalism filled the gap left by traditional media, which were either incapable or unwilling to cover the news.

Lines blur between citizen and journalist

Eventually, several reporters from mainstream news outlets — including @WesleyLowery from the Washington Post and @RyanJReilly from the Huffington Post — were detained or arrested by police while they worked in a local McDonald’s franchise, and that attracted the attention of not just their own outlets but other news organizations as well (the two journalists were later released without any formal charges). Up until that point, however, Twitter was one of the few places where you could get real-time coverage of the incident, including the attacks on the media.

Especially in cases like Ferguson, the ability to have those real-time news reports — both verified and unverified — available for free to any user of the network is important, and not just because it allows us to see what is happening to the protesters and their civil rights. It also exposes First Amendment abuses like the dismantling of cameras and other equipment used by media outlets, or the arrest of people for recording the activities of police, which, as my colleague Jeff Roberts points out, is legal, despite what police forces across the country seem to believe (or want to believe).

Although he didn’t specifically mention Twitter as a tool for reporting, First Circuit Court of Appeals judge Kermit Lipez gave one of the best defenses of citizen journalism and why it must be protected by the First Amendment in a 2011 decision, which found that Boston police had violated the rights of a man who videotaped them assaulting a protester:

“Changes in technology and society have made the lines between private citizen and journalist exceedingly difficult to draw. The proliferation of electronic devices with video-recording capability means that many of our images of current events come from bystanders [and] news stories are now just as likely to be broken by a blogger at her computer as a reporter at a major newspaper. Such developments make clear why the news-gathering protections of the First Amendment cannot turn on professional credentials or status.”

Citizen media reporting attacks on media

In Ferguson, Twitter users were able to see photos and video clips of Al Jazeera’s cameras and other equipment being removed after police fired a tear gas canister towards the news crew (police have since said they were just relocating the media to a safer area). They were also able to see Lowery being detained by police, and to follow along in real time as he described having his head slammed into a soda machine and reported how his requests for the names and badge numbers of the officers involved were repeatedly denied. In the absence of any other witnesses to that kind of behavior, Twitter becomes a crucial check on the power of the authorities.

In 2014, in a protest, there are cameras. Filming other cameras. You cannot stop the images from flowing. #Ferguson pic.twitter.com/JjzSUhQghG

— Laurent Dubois (@Soccerpolitics) August 14, 2014

In a blog post about the power of social and citizen media, former hedge-fund analyst Conor Sen gave a plausible description of what might have happened in Ferguson before Twitter: namely, anchors and celebrity reporters from the major cable networks would have shown up long after the news was out, and would have gotten a fairly restricted view of what was happening, since their access to the area and to witnesses would have been made as difficult as possible:

“Anderson Cooper flies in on Monday. The Ferguson police department and local government know the rules of television — keep cameras away from the bad stuff, let Anderson do his report with a police cruiser in the background. Anderson does some interviews, gets a segment on Monday night cable news… the public loses interest, the cameras go away, the police secure the town and the story’s dead in 3 days.”

As sociologist and social-media expert Zeynep Tufekci has written about social-media powered protests and other activity in Turkey, the fact that Twitter allows such information to circulate — and theoretically makes it easier for those outside of a given conflict to know that the authorities are misbehaving, and to collaborate on a response — doesn’t necessarily mean that anything substantive will happen as a result (she has also noted the impact of algorithms on determining what we see and don’t see through social platforms like Facebook).

But regardless of the probability of some larger impact, getting a live perspective on such events is certainly better than not having that information in the first place — or not getting it until much later — and at the moment Twitter (and social media-powered tools like Grasswire and Storyful) are about the best tools we have for making that happen.

Oh, and then a sniper on a tank aimed at me when I tried to ask a question about what roads were open. That happened. #Ferguson

— Elon James White (@elonjames) August 14, 2014

Post and thumbnail images courtesy of Getty Images / Scott Olson

Is an ad-based business model the original sin of the web?

Ethan Zuckerman, director of the Center for Civic Media at MIT and co-founder of the blog network Global Voices, argues in a fascinating post at The Atlantic that the “original sin” of the internet was that almost every web business defaulted to an advertising-based business model — and that this in turn led to the privacy-invading, data-collecting policies that are the foundation of companies like Facebook and Google. But is that true? And if so, what should we do about it?

Zuckerman says his thoughts around advertising and its effects were shaped in part by a presentation that developer Maciej Ceglowski gave at a conference in Germany earlier this year. Ceglowski is the founder of Pinboard, a site that allows users to bookmark and store webpages, and someone who has argued in the past that free, ad-supported services are bad for users, since the companies behind them usually wind up selling to a buyer who ultimately shuts the service down.

Ceglowski describes the arrival of Google as a turning point, since the company — which started out as a kind of science project with no business model whatsoever — eventually created what became AdSense, and showed that advertising could be a huge revenue generator for a web business:

“The whole industry climbed on this life raft, and remains there to this day. Advertising, or the promise of advertising, is the economic foundation of the world wide web. Let me talk about that second formulation a little bit, because we don’t pay enough attention to it. It sounds like advertising, but it’s really something different that doesn’t have a proper name yet. So I’m going to call it: Investor Storytime.”

A fairy tale of advertising revenue

By “investor storytime,” what Ceglowski means is the fairy tale that most web and social companies tell their venture-capital investors and other shareholders — about how much money they will be able to generate once they add advertising to their site or service or app, or aggregate enough user data to make it worth selling that information to someone. Ceglowski calls this process “the motor destroying our online privacy,” the reason why you see facial detection at store shelves and checkout counters, and “garbage cans in London are talking to your cellphone.”

Zuckerman notes that he played a rather critical role in making this future a reality, something he says he regrets, by coding the first “pop-up” ad while he was working at Tripod, an early online portal/community web-hosting company, in the late 1990s (a solution he says was offered to an advertiser because they were concerned about having their advertisement appear on a page that also referred to anal sex). And as advertising has become more ubiquitous, companies have had to come up with more inventive ways of selling ads — and that means using big data:

“Demonstrating that you’re going to target more and better than Facebook requires moving deeper into the world of surveillance—tracking users’ mobile devices as they move through the physical world, assembling more complex user profiles by trading information between data brokers. Once we’ve assumed that advertising is the default model to support the Internet, the next step is obvious: We need more data so we can make our targeted ads appear to be more effective.”

In his post, Zuckerman admits that free or ad-supported content and services have many benefits as well, including the fact that they make the web more widely available — especially to those who couldn’t afford to pay if everything had paywalls — and that being based on advertising probably helped the web spread much more quickly. But he also says that advertising online inevitably means surveillance, since the only important thing is tracking who has actually looked at or clicked on an ad, and knowing as much as possible about them.

Micro-payments, or find a way to fix ads?

So what should we do to solve this problem? Zuckerman’s proposed solution is to implement micro-payments, using Bitcoin or some other method — something that wasn’t possible when the web first arrived. In that way, he says, users will be able to support the things they wish, and won’t have to worry about paying with their personal information instead of cash. He asks: “What would it cost to subscribe to an ad-free Facebook and receive a verifiable promise that your content and metadata wasn’t being resold, and would be deleted within a fixed window?”

In a response to Zuckerman’s post, Jeff Jarvis argues that instead of throwing our hands up and declaring that advertising as a model doesn’t work any more, we should be re-thinking how advertising works and trying to improve it. Although he doesn’t mention it, this seems to be part of what interested VC firm Andreessen Horowitz about BuzzFeed, and caused it to invest $50 million, valuing the company at close to $1 billion. Andreessen Horowitz partner Chris Dixon has talked about the benefits of BuzzFeed’s version of “native advertising” or sponsored content — content that is so appealing and/or useful that it ceases to be advertising.

[tweet 499873329546010625 hide_thread=’true’]

For my part, I think Zuckerman has a point to a certain extent: an ad-based model does encourage companies to try and find out as much about their users as possible, and that often causes them to cross various ethical boundaries. But this isn’t something the internet invented — newspapers and magazines and political campaigns have been doing that kind of data collection for decades. The web just makes it orders of magnitude easier. In other words, it probably would have happened even if advertising wasn’t the foundation for everything.

One of the big flaws in Zuckerman’s proposal is that it would still make large parts of the web unavailable to people without the means to pay, either in Bitcoin or something else. And like Jarvis, I think advertising could become something better — if native advertising is useful or interesting enough, and it meets the needs of its users, then it should work much better than search keywords or pop-ups. That’s not to say we shouldn’t force companies like Facebook to be more transparent about their data collection — we should do that as well, not just let them off the hook by allowing them to charge us directly.

Post and thumbnail images courtesy of Flickr user Thomas Leuthard and Shutterstock / F.Schmidt

Me: What kinds of shows do you like to watch on TV? Daughter: What’s a TV?

The fact that television viewing is changing dramatically — being disrupted by the web, by YouTube (s goog) and other factors — isn’t breaking news. It’s something we report on a lot at Gigaom, and almost daily there is some announcement that helps reinforce that trend, like the fact that Netflix now has more subscription revenue than HBO, or a recent survey reported by Variety that shows YouTube stars are more popular with young internet users than Hollywood stars.

That last piece of news really hit home for me, because it got me thinking again about how my own family consumes what used to be called television, and how much has changed in only a single generation.

I’m old. Let’s get that out of the way right off the bat. I was born a few years before the moon landing, and I remember us all watching it as a family, my brothers and I lying on the carpet staring at the giant black-and-white TV set with the rotary knob for changing channels — something that we kids were required to do before the advent of remote controls. We had a total of about five channels then, as I recall (and we walked five miles to school every day, uphill both ways).

It’s all about Vine and YouTube

Now there’s a whole generation of cord-cutters, something my colleague Janko has written about extensively, and I have one daughter firmly in that camp: when she and her boyfriend got an apartment together, they chose to get high-speed internet and either download everything they want to watch or stream it via an Android set-top box. But my two youngest daughters — one a teenager, one in her 20s — are even further down the curve: like the kids surveyed by Variety, names like PewDiePie and Smosh are more relevant to them than most Hollywood actors.

Neither of them actually admits to liking PewDiePie, a Swedish man who talks about video games and has 29 million subscribers. But they certainly know who he is, and are intimately familiar with his work. And they are unabashed fans of other YouTube creators and also of a growing group of Vine artists — whose work is in some ways more fascinating, because each clip is just seven seconds long.

For them, the stars worth knowing about are YouTubers like Olan Rogers, or Vine artists like Thomas Sanders, who has 3.7 million followers. At this point, I would say 70 percent of their video consumption involves YouTube and Vine.

This method of consuming video has crossed over into other areas as well — so, for example, they both devoured the book The Fault In Our Stars and waited eagerly for the movie because they were already fans of author John Green, one-half of the group known as the Vlog Brothers, who got their start on YouTube and then branched out. Green’s novel hit the best-seller list at Amazon before he had even finished writing it, in part because of his established social following.

It’s not just those kinds of names either, the ones that have already broken through to the mainstream. Both of our younger daughters would rather spend hours of their time with content from someone like Rooster Teeth — another social-web media conglomerate that started with voiced-over Halo game videos — than any regular broadcast TV show, even the ones that are trying desperately to use Twitter and other social media to drive attention to their programs.

The future of TV is social

Rooster Teeth is a fascinating story of a media entity that has reached a significant size without many people ever having even heard of it, and is now a kind of mini-studio for various kinds of mobile and social content. And then there’s the YouTube star known only as Disney Collector, who appears to be a fairly anonymous woman living in Florida, and makes anywhere from $1.6 million to $13 million a year doing short videos in which she reviews children’s toys.

Until recently, you probably could have put Twitch in that category as well: an offshoot of Justin.tv, it grew exponentially by focusing on gameplay videos, and anyone who wasn’t already part of that community likely didn’t notice until reports emerged that Google was going to buy it for $1 billion. I remember someone on This Week in Tech asking me why anyone would pay so much for such a thing, and I said: “Obviously you don’t have young kids.” By that point, my daughters were already spending hours watching video clips of people playing Minecraft.

The girls do watch what might be called “normal” TV, but in almost every case they are programs that have a heavy social component — shows like Doctor Who and Teen Wolf — and in almost every case they discovered them via Tumblr. A group of fans discussing one show will mention another, and they will move on to that show and download whatever they can find. Watching an episode often involves live-tweeting or live-blogging it, and one daughter maintains not just her own Twitter account but a fan-fiction-style account based on a character from one of the shows.

I’m sure not everyone is as deep into this kind of thing as my daughters are, but I find it hard to believe their behavior is that abnormal, and I think smart artists, creators, producers and others in the TV industry are already playing to that kind of emergent behavior — the way Teen Wolf has engaged in a back-and-forth with its online fans. Studios are looking for “crossover stars” like John Green, who can bring their social following with them to books and movies or TV shows. And the evolution of what we call TV continues to accelerate.

Post and thumbnail images courtesy of Thinkstock / Joanna Zieliska

Making fun of Silicon Valley is easy, but the next big thing always looks like a toy

It’s become popular to make fun not just of the “bros” who run a lot of startups — the ones that Businessweek magazine chose to parody on the cover of its latest issue — but of the whole idea of having technology startups in the first place, since so many come up with useless things like Yo, an app that exists solely to send the single word “Yo” to other users. But Y Combinator head Sam Altman argues that out of silliness and irrelevance, sometimes great things are made — and anyone who has followed even the recent history of technology would have a hard time disagreeing.

I confess that I’ve had my own share of fun ridiculing the idea behind Yo, as well as some recent startups such as ReservationHop, which was designed to corner the market in restaurant reservations by mass-booking them under assumed names and then selling them to the highest bidder. But what Altman said in a blog post he wrote in response to the Businessweek story still rings true:

“People often accuse people in Silicon Valley of working on things that don’t matter. Often they’re right. But many very important things start out looking as if they don’t matter, and so it’s a very bad mistake to dismiss everything that looks trivial…. Facebook, Twitter, Reddit, the Internet itself, the iPhone, and on and on and on — most people dismissed these things as incremental or trivial when they first came out.”

Sometimes toys grow up into services

I’ve made the same point before about Twitter, and how it seemed so inconsequential when it first appeared on the scene that I and many others (including our founder Om) ridiculed it as a massive waste of time. What possible purpose could there be in sending 140-character messages to people? It made no sense. After I got finished making fun of Yo, that’s the first thing that occurred to me: I totally failed to see any potential in Twitter — and not just when it launched, but for at least a year after that. Who am I to judge what is worthy?

Chris Dixon, an entrepreneur who is now a partner at Andreessen Horowitz, pointed out in a blog post in 2010 that “the next big thing always starts out looking like a toy,” which is a kind of one-sentence paraphrase of disruption guru Clay Christensen’s theory from The Innovator’s Dilemma. Everything from Japanese cars to cheap disk drives started out looking like something no one in their right mind would take seriously — which is why it was so hard for their competitors to see them coming even when it should have been obvious.

Even the phone looked like a toy

Altman pulled his list of toy-turned-big-deal examples from the fairly recent past, presumably because he knew they would resonate with more people (and perhaps because he is under 30). But there are plenty of others, including the telephone — which many believed was an irritating plaything with little or no business application, a view the telegraph industry was happy to promote — and the television, both of which were seen primarily as entertainment devices rather than things that would ultimately transform the world. As Dixon noted:

“Disruptive technologies are dismissed as toys because when they are first launched they ‘undershoot’ user needs. The first telephone could only carry voices a mile or two. The leading telco of the time, Western Union, passed on acquiring the phone because they didn’t see how it could possibly be useful to businesses and railroads – their primary customers. What they failed to anticipate was how rapidly telephone technology and infrastructure would improve.”

Is Yo going to be listed in that kind of pantheon of global success stories? I’m going to go out on a limb and say probably not. But most people thought Mark Zuckerberg’s idea of a site where university students could post photos and personal details about themselves was a waste of time too, and Facebook recently passed IBM in market capitalization with a value of $190 billion and more than a billion users worldwide. Not bad for a toy.

Post and thumbnail images courtesy of Thinkstock / Yaruta and Shutterstock / Anthony Corella

Wrestling with the always-on social web, and trying to relearn the value of boredom

Sometimes I try to remember what it was like to be bored — not the boredom of a less-than-thrilling job assignment or a forced conversation with someone dull, but the mind-numbing, interminable boredom I remember from before the web. The hours spent in a car or bus with nothing to do, standing in line at the bank, sleep-walking through a university class, or killing time waiting for a friend. Strange as it may sound, these kinds of moments seem almost exotic to me now.

I was talking to a friend recently who doesn’t have a smartphone, and they asked me what was so great about it. That’s easy, I said — you’ll never be bored again. And it’s true, of course. As smartphone users, we have an almost infinite array of time-wasting apps to help us fill those moments: we can read Twitter, look at Instagram or Facebook, play 2048 or Candy Crush, or do dozens of other things.

In effect, boredom has been more or less eradicated, like smallpox or scurvy. If I’m standing in line, waiting for a friend, or just not particularly interested in the person I’m sitting with or the TV show I’m watching, I can flick open one of a hundred different apps and be transported somewhere else. Every spare moment can be filled with activity, from the time I open my eyes in the morning until I close them at night.

“Neither humanities nor science offers courses in boredom. At best, they may acquaint you with the sensation by incurring it. But what is a casual contact to an incurable malaise? The worst monotonous drone coming from a lectern or the eye-splitting textbook in turgid English is nothing in comparison to the psychological Sahara that starts right in your bedroom and spurns the horizon.” — Joseph Brodsky, 1995

Finding value in doing nothing

Of course, this is a hugely positive thing in many ways. Who wants to be bored? It feels so wasteful. Much better to feel as though we’re accomplishing something, even if it’s just pushing a virtual rock up a metaphorical hill in some video game. But now and then I feel like I am missing something — namely, the opportunity to let my thoughts wander, with no particular goal in mind. Artists in particular often talk about the benefits of “lateral thinking,” the kind that only comes when we are busy thinking about something else. And when I do get the chance to spend some time without a phone, I’m reminded of how liberating it can be to just daydream.

I’ve written before about struggling to deal with an overload of notifications and alerts on my phone, and how I solved it in part by switching to Android from the iPhone, which at the time had relatively poor notification management. That helped me get the notification problem under control, but it didn’t help with an even larger problem: namely, how to stop picking up my phone even when there isn’t a notification. That turns out to be a lot harder to do.

But more and more, I’m starting to think that those tiny empty moments I fill by checking Twitter or browsing Instagram are a lot more important than they might appear at first. Even if spending that time staring off into space makes it feel like I’m not accomplishing anything worthwhile, I think I probably am — and there’s research that suggests I’m right: boredom has a lot of positive qualities.

Losing the fear of missing out

Don’t get me wrong, I’m not agreeing with sociologist Sherry Turkle, who believes that technology is making us gadget-addled hermits with no social skills. I don’t want to suddenly get rid of all my devices, or do what Verge writer Paul Miller did and go without the internet for a year. I don’t have any grand ambitions — I just want to try and find a better balance between being on my phone all the time and having some time to think, or maybe even interact with others face-to-face.

“Lately I’ve started worrying that I’m not getting enough boredom in my life. If I’m watching TV, I can fast-forward through commercials. If I’m standing in line at the store, I can check email or play “Angry Birds.” When I run on the treadmill, I listen to my iPod while reading the closed captions on the TV. I’ve eliminated boredom from my life.” — cartoonist Scott Adams

The biggest hurdle is that there’s just so much interesting content out there — and I don’t mean BuzzFeed cat GIFs or Reddit threads. I’m talking about the links that get shared by the thousands of people I follow on Twitter, or the conversations and debates that are occurring around topics I’m interested in. I have no problem putting away 2048 or Reddit, but Twitter is more difficult because I feel like I’m missing out on something potentially fascinating. Why would I choose to be bored instead of reading about something that interests me?

What I’m trying to do a bit more is to remind myself that this isn’t actually the choice that confronts me when I think about checking my phone for the fourteenth time. The choice is between spending a few moments reading through a stream or checking out someone’s photos vs. using those moments to recharge my brain and maybe even stimulate the creative process a bit. Even if it somehow seems less fulfilling, in the long run I think it is probably a better choice.

Post and thumbnail images courtesy of Thinkstock / Chalabala

Twitter may be considering a Facebook-style feed — but would that help its growth or derail it?

After a couple of quarters that had analysts and investors concerned about its growth potential, Twitter managed to turn in a fairly strong performance in the most recent quarter — with more than 120-percent growth in revenue. Some power Twitter users, however, were more interested in something Twitter CEO Dick Costolo mentioned during the conference call: namely, the idea that the company might introduce an algorithmically-filtered feed like Facebook’s.

What Costolo actually said was that he “isn’t ruling out” an algorithmic approach — and he also said the company is considering ways of “surfacing the kinds of great conversations that pop up in peoples’ timelines.” That doesn’t mean Twitter is suddenly going to convert its stream into a Facebook-style curated feed, but it was enough to make some users nervous, especially those who have come to dislike the Facebook experience because the social network keeps tweaking its algorithm.

Facebook has managed its newsfeed this way from the beginning, but it seems to have gotten more irritating for some, especially since the changes seem to be designed to appeal to advertisers rather than actual users — and because some say they have lost much of the reach they used to have (a problem Facebook is happy to solve if you pay to promote your content). Is that the kind of future that Twitter has in mind? And will it ruin the experience?

[tweet 494240846015782912 hide_thread=’true’]

[tweet 494240286558932992 hide_thread=’true’]

When I asked the question (on Twitter, naturally) after the company’s earnings report, a number of users said they would either quit the service altogether or dramatically scale back their usage if Twitter implemented something like the Facebook newsfeed, with a black-box algorithm determining what they saw or didn’t see. Several said that a big part of the appeal of Twitter was that it showed them everything their friends and social connections posted — even if the volume of those posts was sometimes overwhelming.

[tweet 494480221190770688 hide_thread=’true’]

[tweet 494251082621943808 hide_thread=’true’]

Just because it implements some kind of algorithmic curation or filtering doesn’t mean Twitter is going to turn into Facebook overnight, of course. The company might confine that kind of approach to an updated or improved version of the “Discover” tab — which is designed to appeal to new users and increase engagement, but so far doesn’t seem to have had much impact. Or it might use algorithms in order to create beginner streams for new users, as a way of helping with “on-boarding,” while allowing existing users to remain unaffected.

The impetus for using algorithms is fairly obvious: while its user-growth and engagement numbers may have assuaged investors’ concerns for the most recent quarter, Twitter is still behind some of the targets that Costolo has reportedly set in the past — including the one where he said the network would have 400 million users by the end of last year (it has about 250 million now). And if it is ever going to reach those levels, it’s going to have to make the service a lot more intuitive and a lot less work. Algorithms are one way of doing that, because they do the heavy lifting, instead of forcing users to spend time pruning their streams.

As Facebook has shown, however, the algorithm is a double-edged sword: for every new user it appeals to, it is going to irritate — and potentially drive away — some indeterminate number of existing users. And as Twitter itself has acknowledged, those users are the ones who create and post the majority of the content that spurs engagement by the rest of the network. Pissing them off could leave Twitter with nothing but a resting place next to MySpace in the social networking Hall of Shame.

Post and thumbnail images courtesy of Thinkstock / rvlsoft

It’s complicated: Why we need a new etiquette for handling what’s private and what’s public

The private vs. public divide used to be relatively straightforward: things remained private unless you disclosed them to someone, either deliberately or accidentally — but even in the case of accidental disclosure, there was no way for your information to reach the entire planet unless it appeared on the evening news. Now, a tweet or a photo or a status update could suddenly appear on a news website, or be retweeted thousands of times, or be used as evidence of some pernicious social phenomenon you may never even have heard of before.

But you posted those things, so they must be public, right? And because they are public, any use of them is permitted, right?

A universe filled with nuance and slippery ethical slopes is contained in those questions. And while many of us have gotten used to the back-and-forth with Facebook (s fb) over what is private and what is public — a line that has remained fluid throughout the company’s history, and still continues to shift — it’s more than just Facebook. If this were a war, the entire web would be the battleground.

In a recent post on Medium, blogging veteran and ThinkUp co-founder Anil Dash did a good job of describing the shifting terrain around what’s private and what’s public. Although we may be convinced that we appreciate the difference between those two, and that there is some kind of hard dividing line, Dash notes: “In reality, the vast majority of what we do exists on a continuum that is neither clearly public nor strictly private.” And that makes it much harder to decide how to treat it:

“Ultimately, we rely on a set of unspoken social agreements to make it possible to live in public and semi-public spaces. If we vent about our bosses to a friend at a coffee shop, we’re trusting that no one will run in with a camera crew and put that conversation on national TV.”

Twitter: Private, public, or in between?

We’ve seen ample evidence of this tension in recent months with a number of Twitter-related debates. In March, a Twitter discussion got started among women who had suffered sexual abuse, and they used a shared hashtag to tell their stories. A number of sites, including BuzzFeed, collected these tweets and embedded them in a news story about the topic, something that has become fairly standard behavior — but some of those who participated in the discussion were outraged that this was done without their permission.

debate tonight about what qualifies as being a public figure today in the eyes of the media. Simple: If you use social *media* you opted in.

Should the authors of those articles have had to get permission from the users whose tweets they embedded? After all, Twitter is a public network by default — as Gawker writer Hamilton Nolan pointed out — and so those messages were designed to be publicly available. From a legal standpoint, posting things to networks such as Twitter and Facebook without using the various privacy features built into those networks makes them public. But some of the participants in the discussion seemed to see their tweets as being more like a conversation with friends in a public place, not something designed to be broadcast.

“The things you write on Twitter are public. They are published on the world wide web. They can be read almost instantly by anyone with an internet connection on the planet Earth. This is not a bug in Twitter; it is a feature. Twitter is a thing that allows you to publish things, quickly, to the public.” — Hamilton Nolan

In another case, high-school students who posted racist comments on Twitter after President Barack Obama was re-elected in 2012 were singled out and identified by Gawker in a news article that included their tweets, as well as their full names and what schools they attended. Was that an appropriate response to messages that were clearly designed for a small group of friends, as unpleasant as they might be, or was it a form of bullying? What about the response to a single tweet from Justine Sacco that many took to be racist?

Blurring the line between personal and public

As sociologist danah boyd has pointed out during the endless debates about Facebook and privacy, we all have different facets of ourselves that we present in different contexts online — a work identity, a personal identity we display to our friends and family, and so on. The problem is that so many apps and services like Twitter and Facebook encourage us to blur the lines between those different personas (and benefit financially from us doing so, as Dash points out). And so information and behavior that belongs in one sphere slides into another.

The response from Gawker and others to the incident was to argue that the participants in that discussion simply didn’t understand how Twitter works, or were being deliberately naive about how public their comments were — the same kind of response that users get when their embarrassing Facebook posts become more public than they intended. “If you don’t want people to see it, don’t put it on the internet” is the usual refrain. But as Dash points out, there is a whole spectrum of behavior that exists in the nether world between private and public:

“What if the public speech on Facebook and Twitter is more akin to a conversation happening between two people at a restaurant? Or two people speaking quietly at home, albeit near a window that happens to be open to the street? And if more than a billion people are active on various social networking applications each week, are we saying that there are now a billion public figures?”

The right to remain obscure

In some ways, this debate is similar to the one around search engines and the so-called “right to be forgotten,” a right that is in the process of being enshrined in legislation in the European Union. While advocates of free speech and freedom of information are upset that such legislation will allow certain kinds of data to be removed from view (as Google has now done with some news articles involving public figures), supporters of the law say ordinary individuals shouldn’t be forever tarred by comments or behavior that were intended to be ephemeral, but are now preserved for eternity for everyone to see.

[pullquote person=”” attribution=””]To what extent do we have a right to keep certain content obscure?[/pullquote]

In a piece they wrote for The Atlantic last year, Evan Selinger and Woodrow Hartzog argued that instead of privacy or a right to be forgotten, what we are really talking about is obscurity: so certain information may technically be public — gun-registry data, for example — but is usually difficult to find. Search engines like Google have removed the barriers to that kind of obscurity, and that’s great when the information is of significant public interest. But what about when it’s just high-level gossip or digital rubbernecking at the scene of a social accident? To what extent do we have a right to keep certain content obscure?

As Dash points out in his post, media companies and technology platforms like Facebook have a vested interest in keeping the definition of “public” as broad as possible, and our laws are woefully behind when it comes to protecting users. At the same time, however, some attempts to bridge that gap — including the right to be forgotten, and restrictions on free speech and freedom of information in places such as Britain and Germany — arguably go too far in the other direction.

In many ways, what we’re talking about are things that are difficult (perhaps even impossible) to enshrine in law properly, in the same way we don’t look for the law to codify whether we should be allowed to use our cellphones at the dinner table. Some kinds of behavior may benefit from being defined as illegal — posting revealing photos of people without their knowledge, for example, or audio/video recordings they haven’t agreed to — but the rest of it is mostly a quicksand of etiquette and judgment where laws won’t help, and can actually make things worse. We are going to have to figure out the boundaries of behavior ourselves.

Post and thumbnail images courtesy of Flickr user Alexandre Vialle and Thinkstock / rvlsoft as well as Shutterstock / Andrea Michele Piacquadio

Social media has changed the way that war reporting works — and that’s a good thing

We’ve been writing for a long time at Gigaom about the ways in which the web and social media have changed the practice of journalism, so it’s nice to see the New York Times recognizing some of that. In a recent piece, media writer David Carr notes that real-time social tools like Twitter (s twtr) and YouTube (s goog) have altered the way many of us experience events like the civil war in Ukraine or the violence in Gaza. He doesn’t really address whether this is positive or negative, but it’s easy to make the case that we are much better off now.

If Israeli rockets had hit Gaza or Ukrainian rebels had shot down a commercial airliner before the arrival of the social web, most of us would have been forced to rely on reports from traditional journalists working for a handful of mainstream media sources — some of whom would have been parachuted into the region with little to no advance warning, and in some cases with just a sketchy grasp of the context behind the latest incident — and the news would have been filtered through the lens of a CNN anchor or NYT editor. But as Carr points out:

“In the current news ecosystem, we don’t have to wait for the stentorian anchor to arrive and set up shop. Even as some traditional media organizations have pulled back, new players like Vice and BuzzFeed have stepped in to sometimes remarkable effect. Citizen reports from the scene are quickly augmented by journalists. And those journalists on the ground begin writing about what they see, often via Twitter, before consulting with headquarters.”

More personal, and more chaotic

There are downsides to this approach, obviously: In some cases, journalists say things in the heat of the moment that draw negative attention from readers and viewers — or managers and owners of the media outlets they work for — and there are repercussions, as there were for NBC reporter Ayman Mohyeldin and CNN journalist Diana Magnay after they both made comments about the attacks in Gaza. Two years ago, the Jerusalem bureau chief for the New York Times was called on the carpet for remarks she made on Twitter and for a time was assigned a social-media editor to check her tweets before they were published.

Although Carr doesn’t get into it, the other downside that some have mentioned is that the news environment has become much more chaotic, now that everyone with a smartphone can upload photos and report on what is happening around them — including the terrorist groups and armies that are involved in the conflict that is being reported on, and the ultimate victims of their behavior. Hoaxes and misinformation fly just as quickly as the news does, and in some cases are harder to detect, and those mistakes can have real repercussions.

The democratization of news is good

At the same time, however, there are some fairly obvious benefits to the kind of reporting we get now, and I would argue that they outweigh the disadvantages. For one thing, as Carr notes, we get journalism that is much more personal — and while that personal aspect can cause trouble for reporters like Mohyeldin and Magnay when they stray over editorial lines, in the end we get something that is much more moving than mainstream news has typically been. As Carr says:

“It has made for a more visceral, more emotional approach to reporting. War correspondents arriving in a hot zone now provide an on-the-spot moral and physical inventory that seems different from times past. That emotional content, so noticeable when Anderson Cooper was reporting from the Gulf Coast during Hurricane Katrina in 2005, has now become routine, part of the real-time picture all over the web.”

The other major benefit of having so many sources of news is that the process of reporting has become much more democratized, and that has allowed a whole new ecosystem of journalism to evolve — one that includes British blogger Brown Moses, who has become the poster child for crowdsourced journalism about Syria, as well as Storyful’s Open Newsroom and efforts like Grasswire and Checkdesk (I collected some other resources in a recent post about fact-checking).

In the end, things have definitely become much more confusing — and not just for news consumers but for journalists as well — with the explosion of professional and amateur sources and the sheer speed with which reports flow by in our various social streams. But I would argue that the fact we no longer have to rely on a handful of mainstream outlets for our news and analysis is ultimately a good thing.

Post and thumbnail images courtesy of Flickr users Petteri Sulonen and sskennel

Newspaper companies need to stop lying to themselves, says longtime newspaper editor

Media theorist Clay Shirky isn’t the only one telling newspaper companies and print-oriented journalists that they need to wake up and pay attention to the decline of their industry before they run out of time. Former Seattle Times editor David Boardman — who also happens to be president of the American Society of News Editors — wrote in a recent essay that the newspaper business spends too much of its time sugar-coating the reality of what’s happening.

Boardman described listening to a presentation that the president of the Newspaper Association of America gave at the World Newspaper Congress in Turin, Italy. In her speech, Caroline Little painted an uplifting picture of the state of affairs in her industry, a picture that Boardman called “a fiction where papers could invent a new future while holding on tightly to the past” — something similar to what Shirky called “newspaper nostalgia,” in a piece he wrote recently.

In his post, Boardman took each statement made by Little and presented the opposite viewpoint, or at least put each in a little more context: for example, the NAA president noted that total revenue for the U.S. newspaper industry was about $38 billion in 2013 — but what she didn’t mention is that this is about $12 billion or 35 percent lower than it was just seven years ago:

“What she said: The printed newspaper continues to reach more than half of the U.S. adult population. What she didn’t say: But the percentage of Americans who routinely read a printed paper daily continues its dramatic decline, and is somewhere down around 25 percent. ‘Reaching’ in Little’s reference can mean those people read one issue in the past week; it doesn’t mean they are regular daily readers of the printed paper.”

Should newspapers stop printing?

In a separate post, Alan Mutter — also a longtime newspaper editor, who writes a blog called The Newsosaur — collected some of the depressing statistics about the decline of print, most of which were also apparently never mentioned by Little, including the fact that combined print and digital revenues have fallen by more than 55 percent in the past decade, and the industry’s share of the digital advertising market has been cut in half over the same period.

What’s Boardman’s solution? It’s not one that most newspapers will like: He suggests that most should consider giving up their weekday print editions altogether at some point over the next few years, and focus all of their efforts on a single print version on Saturday or Sunday, while pouring all of their resources into digital and mobile. Weekend papers account for a large proportion — in some cases a majority — of the advertising revenue that newspapers bring in, so giving up everything but the Saturday paper wouldn’t be as much of a loss, he argues.

In a recent piece at the Columbia Journalism Review about the New York Times, writer Ryan Chittum argued that the newspaper can’t afford to simply stop printing because the physical version brings in so much revenue. But could it stop printing everything but the Sunday paper? Chittum thinks it might be able to, and so does long-time online journalism watcher Steve Outing. Perhaps new digital-strategy head Arthur Gregg Sulzberger — a co-author of the paper’s much-publicized “innovation report” — is already crunching those numbers for a presentation to his father, the publisher, whose family controls the company’s stock.

What happens when free-speech engines like Twitter and Facebook become megaphones for violence?

Social networks and platforms like Facebook (s fb), Twitter (s twtr) and YouTube (s goog) have given everyone a megaphone they can use to share their views with the world, but what happens — or what should happen — when their views are violent, racist and/or offensive? This is a dilemma that is only growing more intense, especially as militant and terrorist groups in places like Iraq use these platforms to spread messages of hate, including graphic imagery and calls to violence against specific groups of people. How much free speech is too much?

That debate flared up again following an opinion piece that appeared in the Washington Post, written by Ronan Farrow, an MSNBC host and former State Department staffer. In it, Farrow called on social networks like Twitter and Facebook to “do more to stop terrorists from inciting violence,” and argued that if these platforms screen for things like child porn, they should do the same for material that “drives ethnic conflict,” such as calls for violence from Abu Bakr al-Baghdadi, the leader of the Jihadist group known as ISIS.

“Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography. Many, including YouTube, use a similar technique to prevent copyrighted material from hitting the web. Why not, in those overt cases of beheading videos and calls for blood, employ a similar system?”
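
For readers curious what a “similar system” looks like in practice, the general shape is a fingerprint lookup: each upload is reduced to a fingerprint and compared against a maintained list of known-bad fingerprints. The sketch below is a simplified illustration only, not any network's actual implementation: real systems such as PhotoDNA or YouTube's Content ID use robust perceptual fingerprints that survive re-encoding and cropping, rather than the exact SHA-256 match shown here, and the blocklist entries are hypothetical.

```python
# Simplified illustration of hash-blocklist matching, not any network's actual system.
# Real deployments (e.g. PhotoDNA, Content ID) rely on perceptual fingerprints that
# tolerate re-encoding; exact SHA-256 matching is used here only to show the shape.
import hashlib

# Hypothetical blocklist of fingerprints of known prohibited files.
BLOCKED_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test")
}

def fingerprint(data: bytes) -> str:
    """Reduce an upload to a hex digest that stands in for a content fingerprint."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Return True if the upload matches a known prohibited fingerprint."""
    return fingerprint(upload) in BLOCKED_FINGERPRINTS

if __name__ == "__main__":
    print(should_block(b"test"))       # True: matches the blocklisted digest above
    print(should_block(b"something"))  # False: no match, so the upload is allowed
```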

Free speech vs. hate speech — who wins?

In his piece, Farrow acknowledges that there are free-speech issues involved in what he’s suggesting, but argues that “those grey areas don’t excuse a lack of enforcement against direct calls for murder.” And he draws a direct comparison — as others have — between what ISIS and other groups are doing and what happened in Rwanda in the mid-1990s, where the massacre of hundreds of thousands of Tutsis was driven in part by radio broadcasts calling for violence.

In fact, both Twitter and Facebook already do some of what Farrow wants them to do: for example, Twitter’s terms of use specifically forbid threats of violence, and the company has removed recent tweets from ISIS and blocked accounts in what appeared to be retaliation for the posting of beheading videos and other content (Twitter has a policy of not commenting on actions that it takes related to specific accounts, so we don’t know for sure why).

The hard part, however, is drawing a line between egregious threats of violence and political rhetoric, and/or picking sides in a specific conflict. As an unnamed executive at one of the social networks told Farrow: “One person’s terrorist is another person’s freedom fighter.”

In a response to Farrow’s piece, Jillian York — the director for international freedom of expression at the Electronic Frontier Foundation — argues that making an impassioned call for some kind of action by social networks is a lot easier than trying to sort out what specific content to remove. Maybe we could agree on beheading videos, but what about other types of rhetoric? And what about the journalistic value of having these groups posting information, which has become a crucial tool for fact-checking journalists like British blogger Brown Moses?

“It seemed pretty simple for Twitter to take down Al-Shabaab’s account following the Westgate Mall massacre, because there was consistent glorification of violence… but they’ve clearly had a harder time determining whether to take down some of ISIS’ accounts, because many of them simply don’t incite violence. Like them or not… their function seems to be reporting on their land grabs, which does have a certain utility for reporters and other actors.”

Twitter and the free-speech party

As the debate over Farrow’s piece expanded on Twitter, sociologist Zeynep Tufekci — an expert in the impact of social media on conflicts such as the Arab Spring revolutions in Egypt and the more recent demonstrations in Turkey — argued that even free-speech considerations have to be tempered by the potential for inciting actual violence against identifiable groups.

It’s easy to sympathize with this viewpoint, especially after seeing some of the terrible images coming out of Iraq. But at what point does protecting a specific group from theoretical acts of violence win out over the right to free speech? It’s not clear where to draw that line. When the militant Palestinian group Hamas made threats towards Israel during an attack on the Gaza Strip in 2012, should Twitter have blocked the account or removed the tweet? What about the tweets from the official account of the Israeli military that triggered those threats?

What makes this difficult for Twitter in particular is that the company has talked a lot about how it wants to be the “free-speech wing of the free-speech party,” and has fought for the rights of its users on a number of occasions, including an attempt to resist demands that it hand over information about French users who posted homophobic and anti-Semitic comments, and another case in which it tried to resist handing over information about supporters of WikiLeaks to the State Department.

Despite this, even Twitter has been caught between a rock and a hard place, with countries like Russia and Pakistan pressuring the company to remove accounts and use its “country withheld content” tool to block access to tweets that are deemed to be illegal — in some cases merely because they involve opinions that the authorities don’t want distributed. In other words, the company already engages in censorship, although it tries hard not to do so.

Who decides what content should disappear?

Facebook, meanwhile, routinely removes content and accounts for a variety of reasons, and has been criticized by many free-speech advocates and journalists — including Brown Moses — for making crucial evidence of chemical-weapon attacks in Syria vanish by deleting accounts, and for doing so without explanation. Google also removes content, such as the infamous “Innocence of Muslims” video, which sparked a similar debate about the risks of trying to hide inflammatory content.

[tweet 487569374300360704 hide_thread=’true’]

What Farrow and others don’t address is the question of who should be left to make the decision about what content to delete in order to comply with his desire to banish violent imagery. Should we just leave it up to unnamed executives to remove whatever they wish, and to arrive at their own definitions of what is appropriate speech and what isn’t? Handing over such an important principle to the private sector — with virtually no transparency about their decision-making, nor any court of appeal — seems unwise, to put it mildly.

What if there were tools that we could use as individuals to remove or block certain types of content ourselves, the way Chrome extensions like HerpDerp do for YouTube comments? Would that make it better or worse? To be honest, I have no idea. What happens if we use these and other similar kinds of tools to forget a genocide? What I think is pretty clear is that handing over even more of that kind of decision making to faceless executives at Twitter and Facebook is not the right way to go, no matter how troubling that content might be.
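
To make the idea of user-controlled filtering a little more concrete, here is a minimal sketch of what such a tool might do on the client side, assuming a stream of post texts and a personal blocklist of terms. It is a toy example under those assumptions, not how Herp Derp or any real extension actually works, and the sample posts and terms are made up.

```python
# Toy sketch of client-side filtering: the user, not the platform, decides what to hide.
# The post texts and blocked terms below are hypothetical examples.

def hide_unwanted(posts: list[str], blocked_terms: set[str]) -> list[str]:
    """Return only the posts that contain none of the user's blocked terms."""
    return [
        post for post in posts
        if not any(term.lower() in post.lower() for term in blocked_terms)
    ]

if __name__ == "__main__":
    timeline = [
        "Photos from the protest downtown",
        "Graphic footage of the attack",
        "New restaurant opening on Main St",
    ]
    my_blocklist = {"graphic footage"}
    for post in hide_unwanted(timeline, my_blocklist):
        print(post)  # prints the two posts that don't match the blocklist
```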

Post and thumbnail images courtesy of Shutterstock / Aaron Amat

New Snowden leaks show NSA collected the private data of tens of thousands of Americans

It’s been a number of months since there were any new revelations based on the massive trove of top-secret NSA surveillance documents that former security contractor Edward Snowden took with him when he stopped working for the agency, but the Washington Post came out with a big one on Saturday: according to files that Snowden provided to the newspaper, NSA agents recorded and retained the private information of tens of thousands of ordinary Americans — including online chats and emails — even though they were not the targets of an official investigation.

According to the Post’s story, nine out of 10 account holders who were found in a large cache of intercepted conversations were not the actual surveillance target sought by the NSA, but in effect were electronic bystanders caught in a net that the agency had cast in an attempt to catch someone else. Many were Americans, the newspaper said, and nearly half of the files contained names, email addresses and other details. Although many had been redacted or “minimized,” almost 900 files still contained unmasked email addresses.

“Many other files, described as useless by the analysts but nonetheless retained, have a startlingly intimate, even voyeuristic quality. They tell stories of love and heartbreak, illicit sexual liaisons, mental-health crises, political and religious conversions, financial anxieties and disappointed hopes. The daily lives of more than 10,000 account holders who were not targeted are catalogued and recorded nevertheless.”

As the paper explains, the NSA is only legally allowed to target foreign nationals located overseas unless it obtains a warrant from a special surveillance court — a warrant that must be based on a reasonable belief that the target has information about a foreign government or terrorist operations. The government has admitted that American citizens are often swept up in these dragnets, but the scale with which ordinary people are included was not known until now. The NSA also appears to keep this information even though it has little strategic value and compromises the privacy of the users whose data is kept on file.

Are you an American who writes emails in a language other than English? You are a foreigner to the NSA w/o rights. http://t.co/Xl9VpoAnKZ

— Christopher Soghoian (@csoghoian) July 6, 2014

The Post story describes how loosely NSA agents seem to treat the theoretical restriction on collecting information about American citizens: participants in email threads and chat conversations are considered foreign if they use a language other than English, or if they appear to be using an IP address that is located outside the U.S. And there is little to no attempt to minimize the number of unrelated individuals who have their information collected:

“If a target entered an online chat room, the NSA collected the words and identities of every person who posted there, regardless of subject, as well as every person who simply ‘lurked,’ reading passively what other people wrote. In other cases, the NSA designated as its target the Internet protocol, or IP, address of a computer server used by hundreds of people. The NSA treats all content intercepted incidentally from third parties as permissible to retain, store, search and distribute to its government customers.”

The Snowden documents come from a cache of retained information that was gathered under the Foreign Intelligence Surveillance Act — despite the fact that for more than a year, government officials have stated that FISA records were beyond the reach of the rogue NSA contractor, according to the Post. The paper said it reviewed about 160,000 intercepted e-mail and instant-message conversations, some of them hundreds of pages long, and 7,900 documents taken from more than 11,000 online accounts.


Post and thumbnail images courtesy of Flickr user Thomas Leuthard

Can the New York Times kill its blogs without losing the soul of blogging in the process?

The New York Times has been gradually shutting down some of its blogs over the past year or so, including its environmentally-focused Green blog, and this week the newspaper company confirmed that it plans to shut down or absorb at least half of its existing blogs, including its highly-regarded breaking news blog, The Lede. As the Times describes it, the plan is not to get rid of blogging altogether but rather to absorb and even expand blogging-related skills and approaches within the paper as a whole. But will something important be lost in the process?

Assistant managing editor Ian Fisher told Poynter’s Andrew Beaujon that the newspaper is going to continue to provide what he called “bloggy content with a more conversational tone,” but that it will appear throughout the paper’s website, rather than in specific locations called blogs. While high-profile brands like Bits and DealBook will remain, other smaller blogs will be shut down or absorbed into the sections of the paper that fit their topic — although Fisher wouldn’t say which specific blogs were destined for the boneyard.

A blog is just an “artificial container”

As far as the reasoning behind the move is concerned, Fisher mentioned a number of things in his Poynter interview, including one technical reason: namely, the fact that the Times’ blog software doesn’t work well with the paper’s redesigned article pages — and Times staffer Derek Willis suggested there were other technical benefits in a discussion on Twitter. But Fisher also said that many of the blogs didn’t get a lot of traffic, and that not having to fill a specific “container” with content would free up writers to spend their time doing other things:

“[Some blogs] got very, very little traffic, and they required an enormous amount of resources, because a blog is an animal that is always famished… [and the] quality of our items will go up now, now that readers don’t expect us to be filling the artificial container of a blog.”

As Willis pointed out during our Twitter conversation, blogs are — from a technical perspective at least — just one specific kind of publishing format, with posts that appear in reverse chronological order. But for me at least, this is a little like saying that a sonnet is just a specific way of ordering text, featuring iambic pentameter and an offset rhyming scheme. Obviously not every blog post is a poem, but there is something inherent in the practice of blogging (if it is done well) that makes it different from a story or news article.

New York Times building logo, photo by Rani Molla

Blogging pioneer Dave Winer once said that the essence of a blog is “the unedited voice of a person,” and I still subscribe to that view. Blogging has grown up to the point where even something like The Huffington Post is described by some as “a blog,” which effectively stretches the meaning of the term beyond all comprehension. But it’s more than just a reverse-chronological method of publishing, or the fact that you include embedded tweets or a Storify, or even that you link to other sites — although it includes all of those things.

Absorbing can also mean weakening

When blogging is done properly — as Lede writer Robert Mackey often did it — it’s a combination of original reporting, curation and aggregation, synthesis and analysis, and an individual voice or tone, all of it done quickly and in most cases briefly. As Brian Ries of Mashable argued during a discussion of the Times’ decision, the problem with trying to absorb the blogging ethos into the paper as a whole is that not all of those skills are going to be present in every writer.


This reminds me of when newspapers started to absorb their web units into the larger editorial structure. In the early days, the web was a separate operation — in some cases even in a different building, as it was with the Washington Post. The best part about this arrangement was that it allowed those who worked online to develop their own practices and to some extent their own ethos. When those units were absorbed, some of that was watered down or even lost completely, as editors and writers more focused on print took precedence. That arguably retarded the progress of those papers towards a more digital-first future.

In the end, I think that while the motivation behind killing off blogs might be the correct one — that is, a desire to get away from the format as a specific destination and find a way to get everyone to experiment with blog-style writing and reporting, regardless of where they work — the risk is that the latter simply won’t happen. In other words, some of the momentum that having a blog gives to the skills I mentioned above will be lost, and along with it some of the innovation that blogging has brought to the Times.

Post and thumbnail images courtesy of Shutterstock / Alex Kopje and Rani Molla

The New York Times innovation report is great, but it left out one very important thing

A shockwave hit the media industry in May, when an internal “innovation report” prepared for New York Times executives leaked to BuzzFeed. The report makes for fascinating reading, in part because it is a snapshot of a massive media entity that is caught in the throes of wrenching change, unsure how to proceed. But while it contained many things of value, it glossed over one of the most important factors for the paper’s success — and that is whether the content itself, the journalism that the New York Times produces, needs to change.

This question came up recently in a post by Thomas Baekdal, an author and media analyst. In it, Baekdal made the point that the “quality journalism” the innovation report continually refers to — the bedrock, foundational value of the New York Times — is never questioned. In other words, it is assumed that the journalism itself is fine as is, and all that needs to happen is that the paper has to do a better job of marketing it and engaging with readers around it. But is that true? Baekdal says:

“This is something I hear from every single newspaper that I talk with. They are saying the same thing, which is that their journalistic work is top of the line and amazing. The problem is ‘only’ with the secondary thing of how it is presented to the reader. And we have been hearing this for the past five to ten years, and yet the problem still remains. There is a complete and total blind spot in the newspaper industry that part of the problem is also the journalism itself.”

Not just what kind of journalism, but how

Baekdal’s point isn’t that the New York Times produces bad or low-quality content, but just that the paper should be questioning how it reports and writes that content, and whether it meets the needs of the market — just as it is questioning whether its current business model and/or industrialized printing process meets the needs of the market. It’s not a trivial question, but it doesn’t really appear anywhere in the innovation report, at least not in any depth.

New York Times innovation report

This argument got some support this week from an interesting participant: Martin Nisenholtz, the former head of digital operations for the Times — the man who not only started the paper’s website in 1996, but later drove the acquisition of About.com and other innovative efforts on the digital side. In a blog post, Nisenholtz defended Baekdal, and also provided a fascinating glimpse into what could have been an alternate future for the New York Times.

Nisenholtz, now a consultant and journalism professor, describes an interview that Henry Blodget gave to the creators of the Digital Riptide project (a group that included Nisenholtz). The former NYT executive said that one of the things he liked the most about Blodget’s interview was how optimistic he was about the future of journalism in the digital age — in large part because there is so much more of it than ever before, and much of it is of fairly high quality:

“We are awash in news from an almost infinite number of global sources, much of it of very high quality. For this reason, news providers can no longer force their readers to “eat spinach.” Instead, they need to work hard to entice readers with relevant and interesting content, structured for easy access. In a world of almost unlimited choice, the reader is king.”

The Times is no longer alone

As Nisenholtz suggests, that reality is the primary challenge the New York Times is facing: not just that it has to de-emphasize print and adapt to digital, or do a better job of engaging with readers around its content (although it very much has to do all of those things) but that it has to somehow grapple with the fact that it is no longer one of a privileged few — a tiny number of exalted media and journalism producers with a one-way pipe directly into the homes of readers, and therefore a large share of a kind of information oligopoly.

New York Times building logo, photo by Rani Molla

Now, the Times is just one player in a vast and differentiated media landscape — one that makes the previous era look like the Pleistocene Age. Not only does every traditional publisher now have access to the exact same market that the NYT does, but there are a host of new and more nimble players with the same access: dedicated news apps like Circa or Yahoo’s news digest, mobile readers like Flipboard and Zite, and digital-only publishers like BuzzFeed and more recent entrants such as Vox. Many of them do journalism in a completely different way. Nisenholtz’s view from 20 years ago is even more appropriate now:

“My feeling at that time (and today) was that ‘quality’ was – in large part – a function of the user experience, and that – particularly in the dial-up world of the mid-90s – Yahoo was doing that best for exactly the reasons that Baekdal outlines. Putting a newspaper on the web seemed very limiting.”

The competing product that is good enough

Many of those who work at the New York Times (and other legacy media organizations) no doubt console themselves by thinking that while their newer, digital-only competitors may be more technologically savvy, their product — i.e., their journalism — is inferior. And that may even be true in some cases. But as any student of disruption theory knows, the most dangerous competitor isn’t the one whose product is better than yours, it’s the one whose product is good enough.


Many readers — especially those who only want to get a brief update about what is happening in the world, or who want news that is tailored to them in some way, or news that has more of a point of view — will likely look to other outlets, even if the objective “quality” of the Times’ journalism is arguably better. This is the point I think Baekdal is making when he says that newspapers like the Times take more of a supermarket approach to journalism than their competitors. The market’s needs have changed, and it’s not clear whether the Times can change quickly enough to meet them (although apps like NYTNow and features like The Upshot are interesting experiments, and the Times deserves credit for trying them).

In addition to his thoughts on the state of digital media, Nisenholtz also describes a fascinating moment 20 years ago that could have changed the face of online media: as he describes it, when his digital team asked for financial resources to start the website, he also asked for a small sum to finance a “skunk works” research lab to experiment with the web — but his request was ultimately denied. At one point, Nisenholtz says, one member of the team even suggested that the Times should buy Yahoo (he says “we would probably have screwed it up,” but I’m not sure he could have done a worse job than a series of Yahoo CEOs have).

Imagine what might have happened if the Times had started that lab when the web was young — what innovations could it have developed? What new directions could it have found for all that high-quality journalism? And now, the paper struggles to catch up to a market for digital news that may be permanently out of reach.

Post and thumbnail images courtesy of Getty Images / Mario Tama, as well as Rani Molla and Flickr user Abysim

Twitter struggles to remain the free-speech wing of the free-speech party as it suspends terrorist accounts

Twitter (s twtr) hasn’t been having a very good time of it lately: turmoil in the company’s executive ranks — including the recent departure of the chief operating officer and the head of Twitter’s media unit — has raised concerns about deeper issues and the service’s lackluster growth. But the real-time information network has other fires to put out as well, including a fear that the company’s global and financial ambitions may be stifling its previous commitment to free speech.

Twitter recently suspended the account belonging to the Islamic State in Iraq and Syria (ISIS) after the group — which claims to represent radical Sunni militants — posted photographs of its activities, including what appeared to be a mass execution in Iraq. The service has also suspended other accounts related to the group for what seem to be similar reasons, including one that live-tweeted the group’s advance into the city of Mosul.

So far, the company hasn’t commented on why it has taken these steps, but the violent imagery contained in those posts could well be part of the reason — that, and specific threats of violence, which are a breach of Twitter’s terms of use. Others have suggested that the company might also be concerned about a U.S. law that forbids any U.S. person or entity from providing “material support or resources to” an organization that appears on the official list of terrorist groups.

It’s not as though the action against ISIS comes in a vacuum either: in recent months, Twitter has removed or “geo-censored” tweets in Turkey, Ukraine and Russia at the request of governments in those countries. Twitter obviously has to deal with the law in the countries in which it does business — but every time it takes such a step, it engages in a little more censorship, and each time it loses a little bit of the “free-speech wing of the free-speech party” goodwill it built up during the Arab Spring.

(Twitter does sometimes restore the content it blocks: on Tuesday, the service restored access to tweets and accounts in Pakistan that it blocked at the request of the government there, saying: “We have reexamined the requests and, in the absence of additional clarifying information from Pakistani authorities, have determined that restoration of the previously withheld content is warranted”).

Who decides which accounts to censor?

Part of Twitter’s problem is that it doesn’t want to be seen as a tool for terrorist groups, and yet its decision to police this kind of behavior forces it to make choices about whose speech is appropriate and whose isn’t. So the al-Shabaab account has to go, but the Taliban can continue to have an account, and Hamas — which many groups and governments classify as a terrorist organization — was able to post what many saw as a specific threat of violence directed towards Israel during the attacks on the Gaza Strip last year, and Twitter didn’t appear to mind.

But the larger issue is that, whether or not accounts like those run by ISIS are posting troubling or disturbing — or even politically sensitive — images and other information, there’s arguably a public interest in having them continue to do so. As self-trained British journalist and weapons expert Brown Moses has pointed out a number of times, images and videos posted by such militant or even terrorist groups provide an important physical record of what is happening in these countries, and also allow journalists like Moses to verify events. Removing them, as Facebook has done with pages related to Syrian chemical-weapon attacks, makes it harder to do that.

Anthropologist Sarah Kendzior noted in a piece she wrote for Al Jazeera last year — about a similar move to suspend an account belonging to the Somali militant group al-Shabaab — that one of the other frustrating things about Twitter’s moves in these kinds of cases is that the company provides very little transparency about what it is doing or why. For the most part, the only response is a standard disclaimer about how Twitter doesn’t comment on specific accounts or users.

Twitter may be more focused on building up its user base and satisfying the desires of the financial community or the investors in its stock, but that doesn’t mean it can ignore the other elements of its business — and that includes its alleged commitment to maintaining an environment for free speech.

Twitter’s executive turmoil masks a deeper problem: Confusion over what Twitter wants to be

Fans of Silicon Valley’s version of “Game of Thrones” got a front-row seat to a shake-up in Twitter’s executive suite this week, in which the company’s chief operating officer Ali Rowghani was ousted and Chloe Sladden — head of the media unit that has been a big driver of Twitter’s success with TV networks — also left. Somewhere between the backroom intrigue and the cheerful public-facing tweets of support for those departed executives is the source of Twitter’s real challenge: Namely, what does the company want Twitter to be?

But we already know what Twitter is, you protest! It’s a lightweight, real-time information network or platform that allows users anywhere to post things of interest and reach a potential audience of millions. Within that description, however, lies a multitude of experiences — a hall of mirrors in which my version of Twitter is nothing like your version, and nothing like that of the person sitting next to you on the train or the airplane, or at the basketball game.

Is Twitter for connecting dissidents in Ukraine or Turkey with their supporters in other countries, and for speaking truth to power? Yes. Is it for people who want to live-tweet their dissatisfaction with the Oscars or House of Cards or Game of Thrones or the World Cup? Yes. Is it for celebrities who want to reach out to their fans to correct some horrible rumor? Yes. And it is many other things in between.

Who is Twitter intended to serve?

Even those descriptions fail to capture the variations of Twitter usage: some users — in fact, close to a majority of users — never tweet at all, or have tweeted only once. For them, it is a consumption mechanism, or maybe just another source of noise. A smaller group of users (many of them in the media or marketing field) create the vast majority of the content on Twitter, and use tools like Tweetdeck to manage the streams, and complain bitterly (as I have) about the lack of filters and features to help them tame the ocean of information.


Which of these markets is the one that Twitter needs to focus on or amplify? It’s not clear that anyone at Twitter even knows the answer to that question — and I can’t blame them, because it’s a difficult one. As freelance tech analyst Ben Thompson noted in a recent post at his blog Stratechery, a big part of Twitter’s problem is that it was too successful too quickly, before it even realized what it was:

“The initial concept was so good, and so perfectly fit such a large market, that they never needed to go through the process of achieving product market fit… the problem, though, was that by skipping over the wrenching process of finding a market, Twitter still has no idea what their market actually is, and how they might expand it. Twitter is the company equivalent of a lottery winner who never actually learns how to make money.”

According to a number of reports, one of the reasons Ali Rowghani was ejected (and won’t be replaced) is that CEO Dick Costolo wanted to bring control of the product under his purview, rather than the COO’s. Twitter also recently hired a new director of product, former Google Maps executive Daniel Graf, presumably to try and get some traction with users and improve the lackluster growth numbers that investors seem concerned about. Last year, Costolo projected Twitter would have 400 million users by the end of 2013, and it has about 250 million.

A revolving door of product chiefs

As Thompson and others have pointed out, one of the most crucial factors for a tech or consumer-facing company is product-market fit. Twitter has spent years now trying to get that right, and in some ways it seems to be farther from its goal than it has ever been. Co-founder and former CEO Evan Williams tried to shape the product and was ousted, then co-founder Jack Dorsey was supposed to help, then came Michael Sippey. Along the way there have been aborted features like the “Dick bar” and multiple redesigns that are supposed to appeal to new users but appear to be simply irritating the loyal and not attracting anyone.

Photo from Shutterstock/Anthony Corrella

And while Twitter’s numbers fail to impress, newer services that connect people quickly and easily and focus on short messaging — from WhatsApp and Instagram to Snapchat and Whisper — are rocketing skyward growth-wise. This is not lost on Costolo, one source told Business Insider: “When you talk to Dick about messaging, he’s like, ‘Sigh, that should have been us.’”

The media team that Chloe Sladden built up was supposed to be the savior of Twitter, because it brought in large media companies as partners for second-screen type deals like the Olympics with NBC or the Oscars. And reaching out to celebrities to get them to tweet was designed to appeal to users who just want to follow a few high-profile accounts and see what they are doing. But many of the things that were done in the name of both of those efforts — large images, auto-play videos, and so on — have made the service less appealing for others.

Stranded between many worlds

So at this point, Twitter is caught between two (or more) worlds: The catering to media entities and celebs doesn’t seem to have produced enough traction compared to other players like Facebook to make it worthwhile, and there hasn’t been enough of a focus on tools or design features for hard-core users to keep them loyal. In some ways, the company is failing to serve any of its theoretical markets very well — and that includes advertisers, at least until acquisitions like MoPub start to show that they can help solve that particular problem.

As a longtime fan of Saturday Night Live, I can’t help but think of an ancient skit in which a husband and wife are arguing over whether a new product is a floor wax or a dessert topping. “It’s both!” the cheerful salesman (played by Chevy Chase) exclaims. The joke, of course, is that if it’s a good floor wax, it’s probably not going to be a very good dessert topping, and vice versa.

In the same sense, the things that make Twitter useful to advertisers and large media companies and celebrities aren’t necessarily the things that are going to appeal to Turkish dissidents or free-speech advocates or even just fans of the kind of quiet link-sharing that Twitter used to be known for, rather than the stream of frenzied hashtag and multiple-photo blasting that it has become.

Increasing the pressure is the fact that Twitter is a public company, and it has to show the kinds of growth in both users and revenue that can justify its vast market value — something it has so far failed to do — and the public markets are not known for their patience. Not only that, but as previous social-media superstars like MySpace have shown us, the road to short-term market acceptance can also be the road to long-term irrelevance. Best of luck, Dick.

Post and thumbnail images courtesy of Flickr user Mark Strozier as well as Shutterstock / noporn and Shutterstock / Anthony Corella