Journalism and the internet: Is it the best of times? No — but it’s not the worst of times either

Having just written what I consider a defense of the internet’s effect on journalism and the media industry, I didn’t expect to have to do it again so soon. But just after Andrew Leonard’s short-sighted piece in Salon about how the internet has crippled journalism, David Sessions wrote on the same topic in Patrol magazine, and arguably did an even worse job of describing the current state of journalism, calling it a morass of “cynical, unnecessary, mind-numbing, time-wasting content.”

It’s not just the overriding pessimism of both of these pieces that bothers me. It’s the failure to appreciate that their complaints are the same ones that have been made about journalism for decades, combined with an unrestrained longing for some mythical golden age of journalism.

In his piece, Sessions says that he used to be an optimist about the internet, that he rarely read the printed paper or magazines and always felt more at home with digital media because of its “immediacy,” its freedom and its willingness to evolve. But the promise of the web has turned sour, he argues, and the forces unleashed by the rise of Google and Facebook have turned a once-innovative marketplace into what the former writer (now doing his doctorate in modern European history) calls an undifferentiated mass of clickbait and me-too journalism:

Where once the internet media landscape was populated with publications that all had unique visual styles, traffic models, and editorial voices, each one has mission-creeped its way into a version of the same thing: everybody has to cover everything, regardless of whether or not they can add any value to the story, and has to scream at you to stand out in the avalanche of “content” gushing out of your feeds.

The internet didn’t invent clickbait

Sessions’ piece has been tweeted approvingly by many online journalists, who seem to share his feeling that they are “actively making the world a dumber place” (or perhaps they just feel that everyone else is doing that). The internet is bad for writers, Sessions argues later in his essay, because it turns “qualities that should be valued — effort, reflection, revision, editing — into hindrances, and makes the resulting product worth little, both qualitatively and financially.” Good writing is difficult, takes time, and is expensive, he says.


I’m not saying the Patrol magazine co-founder or his fellow critics are wrong. Is there a lot of noise and low-quality writing on the internet? Definitely. Does much of it come from sites that claim to be doing journalism? You bet. Is any of this unique to the internet age? Not even close. Pick any time period within recent human history — especially the ones that were supposed to be a golden age for journalism — and you will find similar complaints.

Newspapers in particular have always been filled with huge quantities of “cynical, unnecessary, mind-numbing, time-wasting content.” As Annalee Newitz of Gawker’s io9 recently described, newspapers at the turn of the last century routinely indulged in shameless clickbait of the highest order, including front-page stories about violent gangs of thieves stealing people’s genitals. Headlines were salacious and in many cases flat-out wrong. Newspapers competed to see who would be the first to print a rumor or some bit of innuendo, especially if it involved a celebrity.

Technology is always seen as negative

Just as Twitter has been criticized by almost everyone (including Sessions) for encouraging a rush of speculation during events like the Boston bombing, and for overwhelming rational thought and reflection, the advent of the telegraph was also seen as a negative force for human understanding, because it transmitted the news too quickly, without giving people time to take the news in. You could quite easily read the excerpt below from an article in the New York Times from 1858 and replace the word “telegraph” with the word “internet.”

[Excerpt from an 1858 New York Times article criticizing the telegraph]

William Randolph Hearst, a giant in the modern media business, was a shameless publicity hound whose newspapers routinely printed half-baked theories and even outright falsehoods in an attempt to attract readers. As BuzzFeed founder Jonah Peretti is fond of pointing out (for obvious reasons), Henry Luce’s burgeoning empire at Time Inc. started by aggregating the news published by competitors in order to steal some of their traffic, and it printed every salacious bit of celebrity gossip or rumor it could get its hands on.

It’s not the worst of times

Even at the time when the Washington Post was producing what many see as the apotheosis of golden-age journalism — the Watergate investigation series by Woodward and Bernstein — it and other newspapers just like it were printing thousands of pages a day filled with trivia and ephemeral nonsense. I haven’t been able to find any, but I have no doubt that newspapers were being criticized for printing nothing but poorly-argued invective and cheap traffic-driving features when Benjamin Franklin was running the Pennsylvania Gazette in the 1700s.

[Embedded tweet 504916570939064320]

Criticizing BuzzFeed because it does listicles — or VICE News because it covers pop culture, or Gawker because it runs the occasional celebrity-bashing post, or Vox because it did an explainer on Gwyneth Paltrow — is like looking at a newspaper and complaining about the horoscopes, advice columns and comic strips. Where’s all the great journalism? The reality is that for most newspapers, those investigative stories and scoops everyone remembers are a fraction of a percent of the total output, and always have been.

Is this the best of times for journalism? No. But it’s hardly the worst of times either. The fact is that there was no “golden age of journalism.” Journalism has always been a messy and chaotic and venal undertaking in many ways — the internet didn’t invent that. All the web has done is provide us with more ways to produce and distribute both ephemeral nonsense and serious journalism in greater quantities. The good part is that it has also made it easier to find the things we care about. What we choose to do with that power, as always, is up to us.

Post and thumbnail images courtesy of Shutterstock / Everett Collection and Thinkstock / Anya Berkut

Journalism is doing just fine, thanks — it’s mass-media business models that are ailing

Is the internet destroying journalism? In a piece at Salon, writer Andrew Leonard argues that it is — primarily because “the economics of news gathering in the Internet age suck,” as he puts it. And it’s easy to see why someone would be drawn to that point of view, given the rapid decline of the print newspaper business and the waves of layoffs and closures that have affected that industry. But what Leonard is actually complaining about is the failure of a specific business model for funding journalism, not the decline of journalism itself.

Obviously, those two things are fairly closely related: Newspapers have represented the front lines of journalism for a generation or more, with deep benches of talent — including foreign correspondents in dozens of countries around the world, and special investigative-reporting teams. And what has funded all of that journalism has been print-advertising revenue, which has been falling off a cliff for the past decade or two: since 2000, more than $45 billion worth of revenue has effectively disappeared from the print newspaper business.

[Chart: newspaper ad revenue]

But while journalism and the print-newspaper or print-magazine industry have close ties to one another, and have since the 1950s or so, that doesn’t mean they are synonymous, or that because one is fatally ill the other must necessarily die. In fact, by some measures, journalism has never been healthier. And there’s every reason to believe that it is actually getting stronger because of the web, not weaker — regardless of what’s happening to print.

Journalism is more than just newspapers

Even Leonard admits that surveys repeatedly show people are reading more news than they ever have before, thanks in large part to the rise of mobile devices, and he agrees that the worst of the SEO-driven content farms have been vanquished. He also notes that a lot of money has been flowing into online content over the past year, including Amazon CEO Jeff Bezos buying the Washington Post for $250 million, eBay founder Pierre Omidyar funding First Look Media for a similar amount, and close to $100 million flowing to BuzzFeed and Vox.

One thing we know for sure: People still want to read the news, and where there is demand there will always be supply. And certainly, if you are a reader, you already are flourishing in a golden age, with access to more content of all kinds than you can possibly consume.

So if readers are being well served, and news reading has never been more popular, then why should we be concerned about the future of journalism? Leonard argues that while readers are getting what they want, “a golden age for readers doesn’t necessarily translate into a golden age for writers or publishers.” For one thing, he says, writers are having a hard time making a living because too many people are willing to work for free — a complaint about the internet’s effect on the media industry that comes up from time to time.

Whenever I write about this subject I get deluged by flame emails and Twitter responses, but I don’t see how more people writing journalism — even for free — is a problem. If what we care about is the future of journalism, then it’s actually a good thing, not a bad thing: the more people doing journalism, the better it gets. What Leonard seems concerned about is a particular economic model for producing and distributing that journalism. But who’s to say that the model whose death we are mourning was any better than a new or different model? Here’s Leonard again:

Yes, there are a handful of high-profile start-ups making waves, but it’s not at all clear that they’ve replaced the hundreds and thousands of metro and foreign desk reporter jobs that have vanished in the last decade… one 2011 study found 44.7 percent fewer reporters working in the [San Francisco] Bay area than a decade ago.

The economics have never been better

Here’s the question implied by Leonard’s argument: Should the internet, or new-media entities like BuzzFeed or Vice or Vox, be judged by whether they have been able to replace the thousands of reporter and editor jobs that have vanished in the last decade? I don’t think they should. That would be a little like judging the early years of the automotive industry based on how many horse or buggy-whip-related jobs it managed to replace. Obviously, Vice and Vox and First Look are not going to reconstruct the kind of print-based news industry that ruled the mass-media world of the 1950s and 1960s. But then why should they?

But for me, the most problematic sentence in Leonard’s piece is the one where he says that “the economics of news gathering in the internet age suck.” That couldn’t be further from the truth. As Henry Blodget of Business Insider argued in a post last year about why we are living in a golden age for journalism, the benefits of news-gathering and distribution in a digital age are numerous, and they arguably make both of those functions cheaper by orders of magnitude — to the point where many of the jobs Leonard is mourning are simply not needed any more.

Is the transition from an old model to a new one causing horrendous economic upheaval? Of course it is. And it’s not easy for editors or reporters or writers of any kind to make the transition from one way of doing things to another — but it can be done, and it will be done. And journalism will be just fine, even if print-based newspapers and magazines are not.

Post and thumbnail images courtesy of Shutterstock / Yeko Photo Studio and Getty Images / Mario Tama

Immersive journalism: What if you could experience a news event in 3D by using an Oculus Rift?

If you’ve heard of the Oculus Rift at all, you probably think of it as the off-the-charts geeky, facemask-style VR headset that’s designed for playing 3D video games. And that’s true — but virtual reality has other applications as well, including potentially journalistic ones: USC fellow and documentary filmmaker Nonny de la Peña, for example, is creating immersive experiences that give participants an inside look at a news story, such as the war in Syria, or the military prison in Guantanamo Bay.

As Wired explains, de la Peña talked about her work at a recent conference in Sweden, and how she got the idea from early versions of “documentary games” like JFK Reloaded, which put players in Dallas at the time of the president’s shooting. So much of journalism is about “capturing a moment in time,” said de la Peña, a former journalist who has written for Newsweek and the New York Times — what better way to do that than by doing it in three dimensions?

De la Peña’s first project was called “Gone Gitmo,” and it used documentary evidence about the detention center and the experiences of its inmates to create a lifelike representation of what being imprisoned there would be like, including audio clips that recreated certain sounds and diary entries that detailed the behavior of guards and other inmates. Another project de la Peña did for the World Economic Forum recreated what it was like to be a child refugee in Syria.

[Embedded video: YouTube ID jN_nbHnHDi4]

What if journalists or documentarians could create realistic three-dimensional depictions of news events like the shooting of an unarmed black man by police in Ferguson, Missouri last week — would that help convey facts and impressions about the event that TV reports or newspaper stories and tweets couldn’t? Would it make it easier for those trying to understand the incident to appreciate how it happened?

One risk of using the kind of approach de la Peña is taking for current events is that there is so much about them that is in dispute: in Ferguson, for example, there is no consensus on how far away from the police officer Michael Brown was when he was shot, whether his back was turned, and whether there was a struggle before shots were fired. But at least with a Rift, those who wanted to explore the different scenarios would be able to do so in a much more realistic way.

Post and thumbnail images courtesy of Thinkstock / Oleksiy Mark

Should Twitter and YouTube remove images of James Foley’s beheading?

Late Tuesday, the terrorist group known as ISIS released a video that appeared to show members of the group beheading freelance journalist James Foley, who was kidnapped almost two years ago while reporting in Syria. As they so often do, screenshots and links to the video circulated rapidly through social media — even as some journalists begged others to stop sharing them — while Twitter and YouTube tried to remove them as quickly as possible. But as well-meaning as their behavior might be, do we really want those platforms to be the ones deciding what content we can see or not see? It’s not an easy question.

When I asked that question on Twitter, Nu Wexler — a member of Twitter’s public policy team — said the company removed screenshots from the video at the request of Foley’s relatives, in accordance with a new policy under which Twitter will remove images of the deceased if the family asks, although it “will consider public interest factors such as the newsworthiness of the content.” A number of people had their accounts suspended after they shared the images, including Zaid Benjamin of Radio Sawa, but media outlets that posted photos did not.

[Embedded tweet 501928035793502209]

It’s easy to understand why the victim’s family and friends wouldn’t want the video or screenshots circulating, just as the families of Wall Street Journal reporter Daniel Pearl — who was beheaded on video by Al-Qaeda in 2002 — and businessman Nick Berg didn’t want their sons’ deaths broadcast across the internet. And it’s not surprising that many of those who knew Foley, including a number of journalists, would implore others not to share those images, especially since doing so could be seen as promoting (even involuntarily) the interests of ISIS.

Who decides what qualifies as violence?

For whatever it’s worth, I think we owe it to Foley — and others who risk their lives to report the news — to watch the video, out of respect for their commitment. But regardless, shouldn’t that be our choice to make? Should Twitter and YouTube be so quick to remove content because it happens to be violent? And who defines what violence is? What if it were a photo of a young Vietnamese girl who had been burned by napalm, or a man being shot by police?

[Embedded tweet 502088678949552128]

Some of those who responded to my question argued that removing images of someone being beheaded is a fairly obvious case where censorship should be required, if only because they are shocking and repulsive — and because Twitter in particular shows users photos and videos automatically now, unlike in the past when you had to click on a link (a change Twitter ironically made to increase engagement with multimedia content). TV networks don’t show violent or graphic images, the argument goes, so why should Twitter or YouTube?

The difference, of course, is that while Twitter may seem more and more like TV all the time — as Zach Seward at Quartz describes it — it’s supposed to be a channel that we control, not one that is moderated by unseen editors somewhere. Twitter has become a global force in part because it is a source of real-time information about conflicts like the Arab Spring in Egypt or the police action in Ferguson, and the company has repeatedly staked its reputation on being the “free-speech wing of the free-speech party.”

Sad that after a year+ of incitement to genocide, jihadi stuff is now being mass scrubbed from Twitter/FB because an American was killed.

Twitter management has been struggling for some time to find a happy medium between censorship and free speech when it comes to ISIS, a group that is notorious for its use of social media to promote its cause: accounts associated with the group have been suspended a number of times, but more keep appearing. Some, including commentator Ronan Farrow, have argued that the company and other social platforms should do a lot more to keep terrorist propaganda and other content out of their networks.

How does Twitter define free speech?

A source at Twitter said that ISIS is especially difficult, because the group is on a U.S. government list of terrorist organizations, and it’s considered a criminal offence to provide “aid or comfort” to such groups — something that could theoretically cover providing them with a platform on social media. But then the Palestinian group Hamas is defined by many as a terrorist group, and it posts on Twitter regularly, including an infamous exchange with the official Twitter account for the Israeli army in 2012.

I deleted the link to the Foley video, but what is the logic? We have been linking to hundreds of ISIS videos beheading FSA & other Syrians

After Ronan Farrow compared ISIS content to the radio broadcasts in Rwanda that many believe helped fuel a genocide in that country in the 1990s, sociologist Zeynep Tufekci argued that in some cases social platforms probably should remove violent content, because of the risk that distributing it will help fuel similar behavior. But others, including First Look Media’s Glenn Greenwald, said leaving those decisions up to corporations like Twitter or YouTube is the last thing that a free society should want to promote.

In some ways, it’s a lot easier to let Twitter or YouTube or Facebook decide what content we should see and not see, since it protects us from being exposed to violent imagery and repulsive behavior. But in some cases it can also prevent us from knowing things that need to be known, as investigative blogger Brown Moses says Facebook does when it removes content posted by dissident groups in Syria. Shouldn’t that be our decision as users?

Post and thumbnail images courtesy of Thinkstock / Yuriz

Twitter vs. Facebook as a news source: Ferguson shows the downsides of an algorithmic filter

While Twitter has been alive with breaking news about the events in Ferguson, Mo. after the shooting of an unarmed black man — video clips posted by participants, live-tweeting of the arrest of journalists, and so on — many users say Facebook has been largely silent on the topic, their feeds filled instead with celebrities taking the ice-bucket challenge. Is this a sign of a fundamental difference between the two platforms? In a sense, yes. But it’s also a testament to the power of the algorithms that Facebook uses to filter what we see in our newsfeeds, and that has some potentially serious social implications.

Part of the reason why Twitter is more news-focused than Facebook has to do with the underlying mechanics of both sites, and the way user behavior has evolved as a result. Because of its brevity, and the ease with which updates can be shared, Twitter is a much more rapid-fire experience than Facebook, and that makes it well suited for quick blasts of information during a breaking-news event like Ferguson.

Flaws in the symmetrical follow model

Facebook has tried to emulate some of those aspects of Twitter, with the real-time activity feed that sits off to the right of the main newsfeed and shows you when someone has liked a post, what they are listening to on Spotify, and so on. But even with that, it’s more difficult to follow a quickly evolving news story easily. And while Twitter has added embedded images and other Facebook-style features over the past year or so, Facebook posts are still filled with a lot more content, which makes it harder to process information quickly.

https://twitter.com/7im/status/501242535360995328

Then there’s the nature of the community: although Facebook has tried to embrace Twitter-style following, which allows users to see updates from others even if they aren’t friends, in most cases people still use the platform the way it was originally designed — in other words, with a symmetrical follow model that requires two people to agree that they are friends before they can see each other’s updates. On Twitter, users decide to follow whomever they wish, and in most cases don’t have to ask for permission (unless someone has protected their account).
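To make that distinction concrete, here is a toy sketch in Python of the two follow models. The class names and data structures are invented for illustration; they have nothing to do with how Facebook or Twitter actually store their social graphs.

```python
# Toy illustration of symmetric vs. asymmetric follow models.
# Hypothetical sketch only -- not how Facebook or Twitter implement their graphs.

class SymmetricGraph:
    """Facebook-style: both users must agree before either sees the other's updates."""
    def __init__(self):
        self.pending = set()   # friend requests awaiting approval: (requester, target)
        self.friends = set()   # confirmed, unordered friendships

    def request(self, a, b):
        self.pending.add((a, b))

    def accept(self, a, b):
        if (a, b) in self.pending:
            self.pending.remove((a, b))
            self.friends.add(frozenset((a, b)))

    def can_see(self, viewer, author):
        return frozenset((viewer, author)) in self.friends


class AsymmetricGraph:
    """Twitter-style: following is a one-way edge; public accounts need no permission."""
    def __init__(self):
        self.follows = set()   # directed edges: (follower, followee)

    def follow(self, follower, followee):
        self.follows.add((follower, followee))

    def can_see(self, viewer, author):
        return (viewer, author) in self.follows


fb = SymmetricGraph()
fb.request("alice", "bob")
print(fb.can_see("alice", "bob"))   # False -- bob hasn't accepted yet
fb.accept("alice", "bob")
print(fb.can_see("alice", "bob"))   # True, and bob can now see alice too

tw = AsymmetricGraph()
tw.follow("alice", "reporter")
print(tw.can_see("alice", "reporter"))   # True immediately, no handshake required
print(tw.can_see("reporter", "alice"))   # False -- following isn't mutual
```

The design difference matters for news: on the asymmetric model, a reader can start pulling updates from reporters and eyewitnesses the moment a story breaks, with no handshake required.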

As tech-blogger Robert Scoble argued during a debate with Anthony De Rosa of Circa, there are ways to fine-tune your Facebook feed so that it becomes more of a news platform. Like Twitter, Facebook allows users to create topic-driven lists, but the site doesn’t spend much time promoting them, and they are difficult to manage (to be fair, Twitter doesn’t make its lists very prominent or easy to use either). Facebook has also tried to become more of a news source via the Newswire it launched along with Storyful earlier this year, and product manager Mike Hudack says the site is working on other ways of surfacing news better.

Better for friendships than news

In the end, Facebook’s model may be better suited for creating a network of actual friends and close relationships, and for keeping the conversation civil, but it isn’t nearly as conducive to following a breaking-news story like Ferguson, unless you have taken the time to construct lists of sources you follow for just such an occasion. And then there’s the other aspect of the Facebook environment that makes it more problematic as a news source: namely, the fact that Facebook’s newsfeed is filtered by the site’s powerful ranking algorithms.

As University of North Carolina sociologist Zeynep Tufekci pointed out in a recent piece on Medium, the Facebook algorithm makes it less likely we will see news like Ferguson, for a number of reasons. One is that the newsfeed is filtered based on our past activity — the things we have clicked “like” on, the things we have chosen to comment on or share, and so on. That keeps the newsfeed more relevant (or so Facebook would no doubt argue) but it makes it substantially less likely that a sudden or surprising event like Ferguson will make its way past the filters:

“I wonder: what if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure. Would Ferguson be buried in algorithmic censorship?”
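To see concretely how that kind of filtering can bury a sudden story, here is a minimal sketch of an engagement-and-affinity ranking of the sort Tufekci is describing. The weights, topics and post data are invented for illustration and are not Facebook’s actual algorithm.

```python
# Hypothetical sketch of engagement-based feed ranking -- invented weights,
# not Facebook's actual newsfeed algorithm.

def rank_feed(posts, user_history):
    """Order posts by predicted engagement, based on what the user engaged with before."""
    def score(post):
        # Topics the user has liked or commented on in the past get a big boost...
        affinity = user_history.get(post["topic"], 0)
        # ...and so do posts that are already collecting likes and comments.
        engagement = post["likes"] + 2 * post["comments"]
        return affinity * 10 + engagement

    return sorted(posts, key=score, reverse=True)


# A user whose history is full of light, feel-good content:
history = {"ice_bucket_challenge": 12, "baby_photos": 8, "breaking_news": 0}

posts = [
    {"topic": "ice_bucket_challenge", "likes": 300, "comments": 40},
    {"topic": "baby_photos", "likes": 120, "comments": 15},
    # A sudden, unfamiliar story starts with no affinity and little engagement,
    # so it sinks to the bottom of the ranked feed:
    {"topic": "breaking_news", "likes": 5, "comments": 2},
]

for post in rank_feed(posts, history):
    print(post["topic"])
# prints: ice_bucket_challenge, baby_photos, breaking_news
```

Because this hypothetical user has never engaged with the breaking story and it hasn’t yet accumulated likes or comments, it ranks last, which is exactly the kind of algorithmic burying Tufekci worries about.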

A technical issue but also a social one

As the term “algorithmic censorship” implies, Tufekci sees this kind of filtering as a societal issue as well as a technical one, since it helps determine which topics we see as important and which we ignore. And David Holmes at Pando Daily has pointed out that if Twitter implements a similar kind of algorithm-driven filtering, which it is rumored to be considering as a way of improving user engagement, it may also lose some of its strength as a news source.

In a sense, Facebook has become like a digital version of a newspaper, an information gatekeeper that dispenses the news it believes users or readers need to know, rather than allowing those readers to decide for themselves. Instead of a team of little-known editors who decide which uprisings to pay attention to and which to ignore, Facebook uses an algorithm whose inner workings are a mystery. Theoretically, the newsfeed ranking is determined according to the desires of its users, but there’s no real way to confirm that this is true.

In the end, we all have to choose the news sources that we trust and the ones that work for us in whatever way we decide is important. And if we choose Facebook, that means we will likely miss certain things as a result of the filtering algorithm — things we may not even realize we are missing — unless the network changes the way it handles breaking news events like Ferguson.

Post and thumbnail images courtesy of Thinkstock / Oleksiy Mark

Crowd-powered journalism becomes crucial when traditional media is unwilling or unable

Amid all the trolling and celebrity hoo-ha that takes place on Twitter and other social-media platforms, occasionally there are events that remind us just how transformative a real-time, crowdsourced information platform can be, and the violent response by local police to civil protests in Ferguson, Missouri on Wednesday is a great example. Just as the world was able to see the impact of riots in Tahrir Square in Egypt during the Arab Spring, or military action against civilians in Ukraine, so Twitter provided a gripping window into the events in Ferguson as they were occurring, like a citizen-powered version of CNN.

The unrest began after police shot and killed an unarmed black man, 18-year-old Michael Brown, in the middle of the afternoon, after what some reported was a scuffle of some kind. Mourners gathered, and so did those protesting what they saw as police racism, and there was apparently some vandalism. The response from the authorities was to send in armored personnel carriers and heavily-armed riot squads, who fired tear gas and rubber bullets into the crowds.

Just as it did in Egypt and Ukraine, the stream of updates from Ferguson, coming from eyewitnesses and other non-journalists as well as professional reporters for various outlets, turned into a feed of breaking news unlike anything that non-Twitter users were getting from the major news networks and cable channels. Most of the latter continued with their regular programming, just as media outlets in Turkey and Ukraine avoided mentioning the growing demonstrations in their cities. In a very real sense, citizen-powered journalism filled the gap left by traditional media, which were either incapable or unwilling to cover the news.

Lines blur between citizen and journalist

Eventually, several reporters from mainstream news outlets — including @WesleyLowery from the Washington Post and @RyanJReilly from the Huffington Post — were detained or arrested by police while they worked in a local McDonald’s franchise, and that attracted the attention of not just the two Posts but other news outlets as well (both journalists were later released without any formal charges). Up until that point, however, Twitter was one of the few places where you could get real-time coverage of the incident, including the attacks on the media.

Especially in cases like Ferguson, the ability to have those real-time news reports — both verified and unverified — available for free to any user of the network is important for two reasons. It allows us to see what is happening to the protesters and their civil rights, and it also reveals First Amendment abuses like the dismantling of cameras and other equipment used by media outlets, or the arrest of people for recording the activities of police, which, as my colleague Jeff Roberts points out, is legal despite what police forces across the country seem to believe (or want to believe).

Although he didn’t specifically mention Twitter as a tool for reporting, First Circuit Court of Appeals judge Kermit Lipez gave one of the best defenses of citizen journalism, and of why it must be protected by the First Amendment, in a decision he handed down in 2011 that found Boston police had infringed on the rights of a man who videotaped them assaulting a protester:

“Changes in technology and society have made the lines between private citizen and journalist exceedingly difficult to draw. The proliferation of electronic devices with video-recording capability means that many of our images of current events come from bystanders, and news stories are now just as likely to be broken by a blogger at her computer as a reporter at a major newspaper. Such developments make clear why the news-gathering protections of the First Amendment cannot turn on professional credentials or status.”

Citizen media reporting attacks on media

In Ferguson, Twitter users were able to see photos and video clips of Al Jazeera’s cameras and other equipment being removed after police fired a tear-gas canister towards the news crew (police have since said they were just relocating the media to a safer area), and they were able to see Lowery being detained by police, follow along in real time as he described having his head slammed into a soda machine, and read how his requests to get the names and badge numbers of the police were repeatedly denied. In the absence of any other witnesses to that kind of behavior, Twitter becomes a crucial check on the power of the authorities.

In 2014, in a protest, there are cameras. Filming other cameras. You cannot stop the images from flowing. #Ferguson pic.twitter.com/JjzSUhQghG

— Laurent Dubois (@Soccerpolitics) August 14, 2014

In a blog post about the power of social and citizen media, former hedge-fund analyst Conor Sen gave a fairly plausible description of what might have happened in Ferguson before Twitter: namely, anchors and celebrity reporters from the major cable networks would have shown up long after the news was out, and would have gotten a fairly restricted view of what was happening, since their access to the area and to witnesses would be made as difficult as possible:

“Anderson Cooper flies in on Monday. The Ferguson police department and local government know the rules of television — keep cameras away from the bad stuff, let Anderson do his report with a police cruiser in the background. Anderson does some interviews, gets a segment on Monday night cable news… the public loses interest, the cameras go away, the police secure the town and the story’s dead in 3 days.”

As sociologist and social-media expert Zeynep Tufekci has written about social-media powered protests and other activity in Turkey, the fact that Twitter allows such information to circulate — and theoretically makes it easier for those outside of a given conflict to know that the authorities are misbehaving, and to collaborate on a response — doesn’t necessarily mean that anything substantive will happen as a result (she has also noted the impact of algorithms on determining what we see and don’t see through social platforms like Facebook).

But regardless of the probability of some larger impact, getting a live perspective on such events is certainly better than not having that information in the first place, or not getting it until much later, and at the moment Twitter and social media-powered tools like Grasswire and Storyful are about the best equipment we have for making that happen.

Oh, and then a sniper on a tank aimed at me when I tried to ask a question about what roads were open. That happened. #Ferguson

— Elon James White (@elonjames) August 14, 2014

Post and thumbnail images courtesy of Getty Images / Scott Olson

Is an ad-based business model the original sin of the web?

Ethan Zuckerman, director of the Center for Civic Media at MIT and co-founder of the blog network Global Voices, argues in a fascinating post at The Atlantic that the “original sin” of the internet was that almost every web business defaulted to an advertising-based business model — and that this in turn led to the privacy-invading, data-collecting policies that are the foundation of companies like Facebook and Google. But is that true? And if so, what should we do about it?

Zuckerman says his thoughts around advertising and its effects were shaped in part by a presentation that developer Maciej Ceglowski gave at a conference in Germany earlier this year. Ceglowski is the founder of Pinboard, a site that allows users to bookmark and store webpages, and someone who has argued in the past that free, ad-supported services are bad for users, since their creators usually wind up having to sell the company to someone who will ultimately shut it down.

Ceglowski describes the arrival of Google as a turning point, since the company — which started out as a kind of science project with no business model whatsoever — eventually created what became AdSense, and showed that advertising could be a huge revenue generator for a web business:

“The whole industry climbed on this life raft, and remains there to this day. Advertising, or the promise of advertising, is the economic foundation of the world wide web. Let me talk about that second formulation a little bit, because we don’t pay enough attention to it. It sounds like advertising, but it’s really something different that doesn’t have a proper name yet. So I’m going to call it: Investor Storytime.”

A fairy tale of advertising revenue

By “investor storytime,” what Ceglowski means is the fairy tale that most web and social companies tell their venture-capital investors and other shareholders — about how much money they will be able to generate once they add advertising to their site or service or app, or aggregate enough user data to make it worth selling that information to someone. Ceglowski calls this process “the motor destroying our online privacy,” the reason why you see facial detection at store shelves and checkout counters, and “garbage cans in London are talking to your cellphone.”


Zuckerman notes that he played a rather critical role in making this future a reality, something he says he regrets, by coding the first “pop-up” ad while he was working at Tripod, an early online portal/community web-hosting company, in the late 1990s (a solution he says was offered to an advertiser because they were concerned about having their advertisement appear on a page that also referred to anal sex). And as advertising has become more ubiquitous, companies have had to come up with more inventive ways of selling ads — and that means using big data:

“Demonstrating that you’re going to target more and better than Facebook requires moving deeper into the world of surveillance—tracking users’ mobile devices as they move through the physical world, assembling more complex user profiles by trading information between data brokers. Once we’ve assumed that advertising is the default model to support the Internet, the next step is obvious: We need more data so we can make our targeted ads appear to be more effective.”

In his post, Zuckerman admits that free or ad-supported content and services have many benefits as well, including the fact that they make the web more widely available — especially to those who couldn’t afford to pay if everything had paywalls — and that being based on advertising probably helped the web spread much more quickly. But he also says that advertising online inevitably means surveillance, since the only important thing is tracking who has actually looked at or clicked on an ad, and knowing as much as possible about them.


Micro-payments, or find a way to fix ads?

So what should we do to solve this problem? Zuckerman’s proposed solution is to implement micro-payments, using Bitcoin or some other method — something that wasn’t possible when the web first arrived. In that way, he says, users will be able to support the things they wish, and won’t have to worry about paying with their personal information instead of cash. He asks: “What would it cost to subscribe to an ad-free Facebook and receive a verifiable promise that your content and metadata wasn’t being resold, and would be deleted within a fixed window?”

In a response to Zuckerman’s post, Jeff Jarvis argues that instead of throwing our hands up and declaring that advertising as a model doesn’t work any more, we should be re-thinking how advertising works and trying to improve it. Although he doesn’t mention it, this seems to be part of what interested the VC firm Andreessen Horowitz in BuzzFeed and led it to invest $50 million in the company at a valuation of close to $1 billion. AH partner Chris Dixon has talked about the benefits of BuzzFeed’s version of “native advertising” or sponsored content — content that is so appealing and/or useful that it ceases to be advertising.

[Embedded tweet 499873329546010625]

For my part, I think Zuckerman has a point to a certain extent: an ad-based model does encourage companies to try and find out as much about their users as possible, and that often causes them to cross various ethical boundaries. But this isn’t something the internet invented — newspapers and magazines and political campaigns have been doing that kind of data collection for decades. The web just makes it orders of magnitude easier. In other words, it probably would have happened even if advertising wasn’t the foundation for everything.

One of the big flaws in Zuckerman’s proposal is that it would still make large parts of the web unavailable to people without the means to pay, either in Bitcoin or something else. And like Jarvis, I think advertising could become something better — if native advertising is useful or interesting enough, and it meets the needs of its users, then it should work much better than search keywords or pop-ups. That’s not to say we shouldn’t force companies like Facebook to be more transparent about their data collection — we should do that as well, not just let them off the hook by allowing them to charge us directly.

Post and thumbnail images courtesy of Flickr user Thomas Leuthard and Shutterstock / F.Schmidt

Me: What kinds of shows do you like to watch on TV? Daughter: What’s a TV?

The fact that television viewing is changing dramatically — being disrupted by the web, by YouTube and other factors — isn’t breaking news. It’s something we report on a lot at Gigaom, and almost daily there is some announcement that helps reinforce that trend, like the fact that Netflix now has more subscription revenue than HBO, or a recent survey reported by Variety that shows YouTube stars are more popular with young internet users than Hollywood stars.

That last piece of news really hit home for me, because it got me thinking again about how my own family consumes what used to be called television, and how much has changed in only a single generation.

I’m old. Let’s get that out of the way right off the bat. I was born a few years before the moon landing, and I remember us all watching it as a family, my brothers and I lying on the carpet staring at the giant black-and-white TV set with the rotary knob for changing channels — something that we kids were required to do before the advent of remote controls. We had a total of about five channels then, as I recall (and we walked five miles to school every day, uphill both ways).

It’s all about Vine and YouTube

Now there’s a whole generation of cord-cutters, something my colleague Janko has written about extensively, and I have one daughter firmly in that camp: when she and her boyfriend got an apartment together, they chose to get high-speed internet and either download everything they want to watch or stream it via an Android set-top box. But my two youngest daughters — one a teenager, one in her 20s — are even further down the curve: like the kids surveyed by Variety, names like PewDiePie and Smosh are more relevant to them than most Hollywood actors.


Neither of them actually admits to liking PewDiePie, a Swedish man who talks about video games and has 29 million subscribers. But they certainly know who he is, and are intimately familiar with his work. And they are unabashed fans of other YouTube creators and also of a growing group of Vine artists — whose work is in some ways more fascinating, because each clip is just six seconds long.

For them, the stars worth knowing about are YouTubers like Olan Rogers, or Vine artists like Thomas Sanders, who has 3.7 million followers. At this point, I would say 70 percent of their video consumption involves YouTube and Vine.

This method of consuming video has crossed over into other areas as well — so, for example, they both devoured the book The Fault In Our Stars and waited eagerly for the movie because they were already fans of author John Green, one half of the duo known as the Vlog Brothers, who got their start on YouTube and then branched out. Green’s novel hit the best-seller list at Amazon before he had even finished writing it, in part because of his established social following.

It’s not just those kinds of names either, the ones that have already broken through to the mainstream. Both of our younger daughters would rather spend hours of their time with content from someone like Rooster Teeth — another social-web media conglomerate that started with voiced-over Halo game videos — than any regular broadcast TV show, even the ones that are trying desperately to use Twitter and other social media to drive attention to their programs.

The future of TV is social

Rooster Teeth is a fascinating story of a media entity that has reached a significant size without many people ever having even heard of it, and is now a kind of mini-studio for various kinds of mobile and social content. And then there’s the YouTube star known only as Disney Collector, who appears to be a fairly anonymous woman living in Florida, and makes anywhere from $1.6 million to $13 million a year doing short videos in which she reviews children’s toys.


Until recently, you probably could have put Twitch in that category as well: an offshoot of Justin.tv, it grew exponentially by focusing on gameplay videos, and anyone who wasn’t already part of that community likely didn’t notice until reports emerged that Google was going to buy it for $1 billion. I remember someone on This Week in Tech asking me why anyone would pay so much for such a thing, and I said: “Obviously you don’t have young kids.” By that point, my daughters were already spending hours watching video clips of people playing Minecraft.

The girls do watch what might be called “normal” TV, but in almost every case the programs have a heavy social component — shows like Doctor Who and Teen Wolf — and they discovered virtually all of them via Tumblr. A group of fans discussing one show will mention another, and the girls will move to that show and download whatever they can find. Watching often involves live-tweeting or live-blogging the episode, and one daughter maintains not just her own Twitter account but a fan-fiction-style account based on a character from one of the shows.

I’m sure not everyone is as deep into this kind of thing as my daughters are, but I find it hard to believe their behavior is that abnormal, and I think smart artists, creators, producers and others in the TV industry are already playing to that kind of emergent behavior — the way Teen Wolf has engaged in a back-and-forth with its online fans. Studios are looking for “crossover stars” like John Green, who can bring their social following with them to books and movies or TV shows. And the evolution of what we call TV continues to accelerate.

Post and thumbnail images courtesy of Thinkstock / Joanna Zielińska

Making fun of Silicon Valley is easy, but the next big thing always looks like a toy

It’s become popular to make fun not just of the “bros” who run a lot of startups — the ones that Businessweek magazine chose to parody on the cover of its latest issue — but of the whole idea of having technology startups in the first place, since so many come up with useless things like Yo, an app that exists solely to send the single word “Yo” to other users. But Y Combinator head Sam Altman argues that out of silliness and irrelevance, sometimes great things are made — and anyone who has followed even the recent history of technology would have a hard time disagreeing.

I confess that I’ve had my own share of fun ridiculing the idea behind Yo, as well as some recent startups such as ReservationHop, which was designed to corner the market in restaurant reservations by mass-booking them under assumed names and then selling them to the highest bidder. But what Altman said in a blog post he wrote in response to the Businessweek story still rings true:

“People often accuse people in Silicon Valley of working on things that don’t matter. Often they’re right. But many very important things start out looking as if they don’t matter, and so it’s a very bad mistake to dismiss everything that looks trivial…. Facebook, Twitter, Reddit, the Internet itself, the iPhone, and on and on and on — most people dismissed these things as incremental or trivial when they first came out.”

Sometimes toys grow up into services

I’ve made the same point before about Twitter, and how it seemed so inconsequential when it first appeared on the scene that I and many others (including our founder Om) ridiculed it as a massive waste of time. What possible purpose could there be in sending 140-character messages to people? It made no sense. After I got finished making fun of Yo, that’s the first thing that occurred to me: I totally failed to see any potential in Twitter — and not just when it launched, but for at least a year after that. Who am I to judge what is worthy?


Chris Dixon, an entrepreneur who is now a partner at Andreessen Horowitz, pointed out in a blog post in 2010 that “the next big thing always starts out looking like a toy,” which is a kind of one-sentence paraphrase of disruption guru Clay Christensen’s theory from The Innovator’s Dilemma. Everything from Japanese cars to cheap disk drives started out looking like something no one in their right mind would take seriously — which is why it was so hard for their competitors to see them coming even when it should have been obvious.

Even the phone looked like a toy

Altman pulled his list of toy-turned-big-deal examples from the fairly recent past, presumably because he knew they would resonate with more people (and perhaps because he is under 30). But there are plenty of others, including the telephone — which many believed was an irritating plaything with little or no business application, a view the telegraph industry was happy to promote — and the television, both of which were seen primarily as entertainment devices rather than things that would ultimately transform the world. As Dixon noted:

“Disruptive technologies are dismissed as toys because when they are first launched they ‘undershoot’ user needs. The first telephone could only carry voices a mile or two. The leading telco of the time, Western Union, passed on acquiring the phone because they didn’t see how it could possibly be useful to businesses and railroads – their primary customers. What they failed to anticipate was how rapidly telephone technology and infrastructure would improve.”

Is Yo going to be listed in that kind of pantheon of global success stories? I’m going to go out on a limb and say probably not. But most people thought Mark Zuckerberg’s idea of a site where university students could post photos and personal details about themselves was a waste of time too, and Facebook recently passed IBM in market capitalization with a value of $190 billion and more than a billion users worldwide. Not bad for a toy.

Post and thumbnail images courtesy of Thinkstock / Yaruta and Shutterstock / Anthony Corella

Wrestling with the always-on social web, and trying to relearn the value of boredom

Sometimes I try to remember what it was like to be bored — not the boredom of a less-than-thrilling job assignment or a forced conversation with someone dull, but the mind-numbing, interminable boredom I remember from before the web. The hours spent in a car or bus with nothing to do, standing in line at the bank, sleep-walking through a university class, or killing time waiting for a friend. Strange as it may sound, these kinds of moments seem almost exotic to me now.

I was talking to a friend recently who doesn’t have a smartphone, and they asked me what was so great about it. That’s easy, I said — you’ll never be bored again. And it’s true, of course. As smartphone users, we have an almost infinite array of time-wasting apps to help us fill those moments: we can read Twitter, look at Instagram or Facebook, play 2048 or Candy Crush, or do dozens of other things.

In effect, boredom has been more or less eradicated, like smallpox or scurvy. If I’m standing in line, waiting for a friend, or just not particularly interested in the person I’m sitting with or the TV show I’m watching, I can flick open one of a hundred different apps and be transported somewhere else. Every spare moment can be filled with activity, from the time I open my eyes in the morning until I close them at night.

“Neither humanities nor science offers courses in boredom. At best, they may acquaint you with the sensation by incurring it. But what is a casual contact to an incurable malaise? The worst monotonous drone coming from a lectern or the eye-splitting textbook in turgid English is nothing in comparison to the psychological Sahara that starts right in your bedroom and spurns the horizon.” — Joseph Brodsky, 1995

Finding value in doing nothing

Of course, this is a hugely positive thing in many ways. Who wants to be bored? It feels so wasteful. Much better to feel as though we’re accomplishing something, even if it’s just pushing a virtual rock up a metaphorical hill in some video game. But now and then I feel like I am missing something — namely, the opportunity to let my thoughts wander, with no particular goal in mind. Artists in particular often talk about the benefits of “lateral thinking,” the kind that only comes when we are busy thinking about something else. And when I do get the chance to spend some time without a phone, I’m reminded of how liberating it can be to just daydream.


I’ve written before about struggling to deal with an overload of notifications and alerts on my phone, and how I solved it in part by switching to Android from the iPhone, which at the time had relatively poor notification management. That helped me get the notification problem under control, but it didn’t help with an even larger problem: namely, how to stop picking up my phone even when there isn’t a notification. That turns out to be a lot harder to do.

But more and more, I’m starting to think that those tiny empty moments I fill by checking Twitter or browsing Instagram are a lot more important than they might appear at first. Even if spending that time staring off into space makes it feel like I’m not accomplishing anything worthwhile, I think I probably am — and there’s research that suggests I’m right: boredom has a lot of positive qualities.

Losing the fear of missing out

Don’t get me wrong, I’m not agreeing with sociologist Sherry Turkle, who believes that technology is making us gadget-addled hermits with no social skills. I don’t want to suddenly get rid of all my devices, or do what Verge writer Paul Miller did and go without the internet for a year. I don’t have any grand ambitions — I just want to try and find a better balance between being on my phone all the time and having some time to think, or maybe even interact with others face-to-face.

“Lately I’ve started worrying that I’m not getting enough boredom in my life. If I’m watching TV, I can fast-forward through commercials. If I’m standing in line at the store, I can check email or play “Angry Birds.” When I run on the treadmill, I listen to my iPod while reading the closed captions on the TV. I’ve eliminated boredom from my life.” — cartoonist Scott Adams

The biggest hurdle is that there’s just so much interesting content out there — and I don’t mean BuzzFeed cat GIFs or Reddit threads. I’m talking about the links that get shared by the thousands of people I follow on Twitter, or the conversations and debates that are occurring around topics I’m interested in. I have no problem putting away 2048 or Reddit, but Twitter is more difficult because I feel like I’m missing out on something potentially fascinating. Why would I choose to be bored instead of reading about something that interests me?

What I’m trying to do a bit more is to remind myself that this isn’t actually the choice that confronts me when I think about checking my phone for the fourteenth time. The choice is between spending a few moments reading through a stream or checking out someone’s photos vs. using those moments to recharge my brain and maybe even stimulate the creative process a bit. Even if it somehow seems less fulfilling, in the long run I think it is probably a better choice.

Post and thumbnail images courtesy of Thinkstock / Chalabala