One big benefit of the social web: Journalism emerges wherever it is needed

Media-industry watchers (including us) tend to focus on how the web and social tools are disrupting existing forms of media and journalism by competing with them, or by offering alternatives to traditional outlets and voices. But the democratization of content production and distribution brought about by the social web can be even more powerful when it helps to fill the gaps where traditional media doesn’t go — either because it doesn’t want to, or because it can’t.

Turkey is one example of that phenomenon at work: citizen journalism of many different kinds became an extremely important source of unbiased reporting during the recent demonstrations against the government there, in large part because the local media weren’t providing it.

Stepping in to fill a gap

Another good example of this, a little closer to home, is a one-man operation called Jersey Shore Hurricane News, which was recently profiled by the Nieman Journalism Lab. Much like the celebrated British blogger known as Brown Moses — who transformed himself from an unemployed accountant into a crucial source of information about the weapons being used in the conflict in Syria — this New Jersey site is the work of one man with little or no background in media or journalism who felt compelled to be of service.

“The man behind the updates was Justin Auciello, the founder and sole operator of Jersey Shore Hurricane News. It’s a Facebook-only news outlet with over 200,000 followers, most of them concentrated in a few counties of New Jersey. Auciello has been building up this following since just before Hurricane Irene hit in 2011. He has no particular background in journalism; by day, he’s an urban planner and consultant.”

Jersey Shore fire

As Nieman writer Caroline O’Donovan describes, Auciello started posting photos and information on Facebook about hurricanes affecting the Jersey Shore, and the page gradually developed its devoted following. Auciello said that he enjoyed the interaction with residents of the region — whom he asked for photos and news submissions after he broadened the site to cover more than just hurricanes — and they came to rely on him.

Serve the community’s needs

What I find fascinating is that Auciello didn’t set out to create a new-media entity, or to compete with existing media providers: he saw a need that wasn’t being filled by existing outlets like the Philadelphia Inquirer and the New Jersey Network, and he chose to contribute his efforts to the community. All of this is done in his spare time, for no pay. And he is as concerned about verification and reporting the truth as any professional journalist — if not more so — saying:

“I generally will not publish anything about a car accident unless I know if there was an injury, because generally the first thing people ask is, are people injured? And that’s a common, 100 percent natural question. You really don’t want to leave unanswered questions, which, in my opinion, always lead to speculation, and that leads to rumors.”

At this point, according to the Nieman Lab, the Jersey Shore Hurricane News relies on Facebook for distribution and brings in zero revenue, although Auciello has received a grant from a New Jersey recovery fund set up to help the area rebuild after Hurricane Sandy, and he has been thanked by the White House for his service to the community. He is trying to expand his media operation through a partnership with a local public-radio station, and is also thinking about moving to the web instead of relying solely on Facebook.

Like Joey Coleman, who set up his own community-funded reporting operation in a small town in Ontario, Canada because he thought it wasn’t being well served, Auciello is a great example of someone who saw a need and decided to fill it — and thanks to the web and digital media, was able to do so. Perhaps this truly is a “golden age for journalism,” as some have argued.

Post and thumbnail photos courtesy of On The Pier Photography and Flickr user Christian Scholz

Twitter’s unlikely birth: The next big thing isn’t just a toy, sometimes it’s a complete accident

It’s tempting to see world-changing companies as the product of one person’s singular vision and willpower — not only does it make things easier to understand, but it caters to our love of the solitary genius, the Einstein or Jobs who sees the world revealed in a flash of insight. But the reality is often very different: in most cases, it is filled with the kind of messy human chaos that is often left out of such stories, and Twitter’s rise to glory is a great example.

Biz Stone, Ev Williams and Jack Dorsey at Current.TV offices in happier times

An excerpt from NYT writer Nick Bilton’s book about the company’s messy birth reinforces the fact that something we now take for granted — that Twitter has become a massively influential media company, one that is planning a public offering that could be worth as much as $20 billion — is so incredibly unlikely that it almost seems like an accident, or rather a chain reaction of accidents, each one more unpredictable than the last. As Bilton says:

“In the Valley, these tales are called ‘the Creation Myth’ because, while based on a true story, they exclude all the turmoil and occasional back stabbing that comes with founding a tech company. And while all origin stories contain some exaggerations, Twitter’s is cobbled together from an uncommon number of them.”

Tripping, falling, stumbling — all the way to success

twitter bird tweets logo drawing

It has been said that the next big thing always starts out as a toy, a statement that is a kind of capsule version of Clay Christensen’s disruption theory, and Twitter certainly falls into that category: for the first two or three years of its life, if not longer, it was dismissed as an irrelevant tool for nerds and narcissists to share what they were having for lunch. But as Bilton’s description makes clear, it was also a fluke that the service even got started in the first place, let alone succeeded and became a multibillion-dollar entity.

Take the place where Twitter co-founder Jack Dorsey reportedly came up with the idea for the service as an SMS-style status update (his original choice for a name, as detailed in a sketch he made, was Stat.us). South Park in San Francisco doesn’t just have dingy, beaten-up playground equipment, as Bilton notes — it is far more popular with homeless people and drug addicts than it is (or was) with CEOs or startup founders. It makes a garage look good.

“For many in Silicon Valley, this playground is hallowed ground. It was here, one breezy day in 2006, according to legend, that Jack Dorsey ordered burritos with two co-workers, scaled a slide and, in a black sweater and green beanie, like a geeked-out Moses on Mount Sinai, presented his idea.”

So what was the most crucial factor in Twitter’s early success? Was it that co-founder Noah Glass, who was later forced out of the company, came up with a catchy name after a frenzied search through the dictionary? Was it that Blogger founder Evan Williams, whose other business making podcast software was going nowhere fast, needed to find something new to focus on? Was it that Twitter fit in so well with the anarchic social atmosphere at South by Southwest, which at the time was the hottest geek conference around?

Chaos and openness is better than a bad plan


It was all of these things and then some. Even in the early days, what struck me most about both the service and the company was that it consistently seemed able to snatch success from the jaws of defeat — just when you thought it was going down for good, after the umpteenth server failure or some high-school-style upheaval in the executive suite, it came back stronger than before. Users complained bitterly about the downtime, and then, when the service returned, they used it even more.

In some ways, it almost seems like the world — or at least certain tech and media-obsessed parts of it — wanted something like Twitter to exist, and were determined to somehow will it into being, despite all the repeated screw-ups and bumps in the road along the way. Users took a simple service that (I would argue) even its founders didn’t really understand completely, and turned it into something that changed the very fabric of the way the world communicates with itself. And not just about TV shows, but about even more important things like revolutions and wars and social phenomena of all kinds.

Al Gore at Current TV offices (with Jack Dorsey in the background)

If there’s one lesson that comes from Twitter’s messy origins and chaotic upbringing, it is that you can do as much damage to an idea by trying to force it into a specific mold as you can by letting it breathe and evolve on its own. It may have been an accident that Twitter was so open and free of constraints in the beginning — something the company tried hard to reverse after it got rid of Williams and started cracking down on third-party developers — but without all of that chaos and confusion, I’m not sure Twitter would exist at all.

Post and thumbnail photos courtesy of Flickr users Stephen Brace and Shawn Campbell

Newspapers may be dying, but the internet didn’t kill them — and journalism is doing just fine

Among the pieces of conventional wisdom that get trotted out whenever the subject of the newspaper industry’s decline comes up, one of the most popular is that the internet is the main culprit: in some cases, it’s the entire internet, and in some cases it’s specific web services like Craigslist. But while the democratization of distribution and the atomization of content have definitely accelerated the decline, journalism professor George Brock argues that newspapers have been on a slippery slope for some time, and that what journalism is going through is a natural evolution rather than a disaster.

Brock — who runs the journalism program at City University in London, England — makes these points in a book he recently published, but also laid some of them out in a blog post entitled “Spike the gloom — journalism has a bright future.” Everyone has a favorite example of the decline of the industry, he says, such as the sale of the Boston Globe for 97 percent less than it sold for two decades ago or the massive rounds of layoffs that continue to sweep through the business.


Newspapers are not the same as journalism

It’s certainly easy to find that kind of evidence of doom, but I think Brock is right when he argues that “this picture of deterioration is one-dimensional, incomplete and out of date,” and that journalism is flourishing if you know where to look. Among the key points he makes in the post:

Journalism is always reinventing itself: Journalism “is forced to re-invent itself at regular intervals” and always has done so, Brock says, whenever the changing context of economics, law, technology and culture shifts the ground beneath it. “Re-invention and experiment are the only constants in journalism’s history.”

Newspapers are not the same as journalism: Journalists confuse the two, says Brock, but the golden age of newspaper journalism in the second half of the 20th century “was, in reality, a long commercial decline. British national papers reached their peak total circulation in the early 1950s.”

Television killed more papers than the internet: More papers were killed off by the arrival of television “than have ever been closed by competition from the internet,” Brock says. The internet made things worse, and helped kill classified ad revenue in particular, but “the decline of print began before the internet was built.”

Demand for news is strong and growing: Newspapers may not be benefiting, but the demand for news remains strong, says Brock. “What has imploded is the effectiveness of the business model of large, general-interest daily papers which require news reporting to be cross-subsidised by advertising revenue.”

Journalism is doing just fine thanks

NYT newspaper stand

Brock goes on to say that some big journalism brands will be able to adapt and some will not — and meanwhile, some of what he calls “the insurgents of news publishing” will go on to become the giants of the future. Among those insurgents, he says, are sites like Talking Points Memo, The Huffington Post and BuzzFeed — the latter of which is following a familiar pattern of disruption by starting with something that is seen as trivial or outside the norm and then gradually building on that and moving further into the mainstream.

In many ways, Brock’s arguments are similar to those advanced by Business Insider founder Henry Blodget in a post about how we are in a “golden age for journalism” — a phrase that Arianna Huffington has also used a number of times to describe the innovation that is occurring in online media. Even New York Times media critic David Carr described the current environment that way during a Q & A last year in Toronto, saying Twitter and other forms of citizen journalism are having a largely positive impact, despite their flaws.

And Brock’s point about BuzzFeed is a good one as well: while the site has been widely criticized for being infantile and/or irrelevant, and many mainstream journalists have scoffed at the idea that it could become anything but a place for cat GIFs, the company is profitable and growing rapidly, and founder Jonah Peretti says it is investing heavily in both breaking news and long-form investigative journalism — something few if any traditional media entities are doing.

Post and thumbnail photos courtesy of Shutterstock / Feng Yu, Will Steacy and Flickr user Monik Marcus

Still wondering why we need a stateless media entity like WikiLeaks? This is why

If it wasn’t already obvious that the U.S. government is targeting journalists as part of its ongoing war on leaks, it should be fairly clear now that Guardian writer Glenn Greenwald’s partner has been detained for nine hours in a British airport and had all of his electronics seized by authorities looking for classified documents like the ones Greenwald got from former NSA contractor Edward Snowden. More than anything, this kind of behavior highlights the value of having a stateless, independent media entity such as WikiLeaks.

And if that wasn’t enough, Guardian editor Alan Rusbridger has written about an almost unprecedented effort by British authorities to force the newspaper to stop reporting on the government’s surveillance of its citizens — including the seizure and destruction of hard drives at the newspaper’s offices and warnings about future action if the reporting continues. Rusbridger said the paper will continue its work, but will do so from the U.S. As he described it:

“And so one of the more bizarre moments in the Guardian’s long history occurred – with two GCHQ security experts overseeing the destruction of hard drives in the Guardian’s basement just to make sure there was nothing in the mangled bits of metal which could possibly be of any interest.”

A pattern of journalistic harassment

Reporter

Moving to the U.S. may not be much of an alternative, however, given the American government’s recent behavior. U.S. authorities have said that Britain took the action it did against Greenwald’s partner, Brazilian citizen David Miranda — under Britain’s Schedule 7 anti-terrorism law — without any direction from the Obama administration, although the U.S. government did acknowledge that British authorities gave it a “heads up” about the detention and search. But should we believe this, knowing that senior security officials have routinely lied about their activities?

Given what has happened with Snowden, it’s entirely believable that the Obama administration asked Britain to take such action, or at least suggested that it would be grateful if it occurred. What’s especially depressing is how quick some defenders of the U.S. security apparatus were to argue that it was Greenwald’s own fault his partner was treated in such a way — as though targeting the families of journalists for unreasonable search and seizure should be considered routine:

https://twitter.com/joshuafoust/status/369549055572987905

As the Free Press and others have pointed out, the detention is just part of a much larger pattern of harassment that has been directed at journalists by the U.S. government over the last year — a pattern that includes veiled threats of prosecution against Greenwald and other journalists who have been involved in leaks, as well as the ongoing quasi-legal measures it has been taking against WikiLeaks founder Julian Assange.

WikiLeaks is already a media entity

While the idea of WikiLeaks as a media entity is not universally accepted, I and others have argued that it deserves to be thought of in that way: journalism professor Jay Rosen has called it the “first stateless news organization,” and Harvard legal scholar Yochai Benkler has made a persuasive case — both in his writings and in testimony at the Bradley Manning trial — that WikiLeaks is a crucial part of what he calls “the networked Fourth Estate.”

The Guardian hard drive shredding scandal demonstrates why it is necessary to publish early publish often and publish globally.

— WikiLeaks (@wikileaks) August 20, 2013

Even Bill Keller, the former New York Times executive editor who has had a somewhat contentious relationship with both Assange and WikiLeaks, has told me that he believes the WikiLeaks founder should be given the same protections as any journalist, and that the attacks on the organization are a serious threat to freedom of the press.

“I would regard an attempt to criminalize WikiLeaks’ publication of these documents as an attack on all of us, and I believe the mainstream media should come to his defense. You don’t have to embrace Julian Assange as a kindred spirit to believe that what he did in publishing those cables falls under the protection of the First Amendment.”

Although WikiLeaks is arguably a media entity in its own right, it also benefits from forming partnerships with existing media players — as it has in the past with The Guardian, the New York Times and others — just as Edward Snowden saw it as valuable to reach out to Greenwald instead of just publishing the NSA documents he had on some random website. Traditional media outlets and journalists not only have a brand value and an existing audience, but they can help put things in context and make their meaning more obvious.

We need Anonymous for journalism

Anonymous

As the U.S. government and others not only put more pressure on the original whistleblowers in such cases — the Bradley Mannings and the Edward Snowdens — but also continue to ratchet up the pressure on the journalists who assist them, it becomes even more important to have some kind of entity like WikiLeaks that can act as a central outlet for such leaks, a place that is theoretically out of reach of U.S. control (if such a thing is even possible).

Even if WikiLeaks isn’t the best candidate for this kind of entity, either because of Assange’s personal behavior or his management style — or both — there arguably needs to be something similar. Perhaps something modeled on the hacker collective Anonymous — a diffuse, leaderless movement united by a common goal — but devoted to journalistic documents might work. Or a combination of Anonymous and the file-sharing site The Pirate Bay, where leakers could send their information and know that it would not fall into the wrong hands. Media outlets have tried to create such entities, but have mostly failed.

Having that kind of stateless, leaderless entity might make it harder for governments to make any headway by attacking individual journalists like Greenwald or even individual leakers. In some ways, it’s unfortunate that such a thing needs to exist at all, but even if we look only at what has happened over the past year, that case has arguably been made. Now all that is required is the motivation and the means to create it.

Post and thumbnail photos courtesy of Flickr users Carolina Georgatou, Jan-Arief Purwanto and Shutterstock / Rob Kint

No, Craigslist is not responsible for the death of newspapers

Maybe it’s the rash of newspaper sales recently — including the acquisition of the Washington Post by Amazon CEO Jeff Bezos and the sale of the Boston Globe to local businessman John Henry — but there seems to be a renewed interest in assigning blame for the rapid decline of the newspaper business, and one name tends to get the majority of the criticism: namely, Craigslist, the free classified-advertising service that some say killed newspapers.

In a recent piece for The New Republic, for example, Alec MacGillis accuses Craigslist founder Craig Newmark of hypocrisy for helping to put together an ethics guide for journalists, a project that Newmark has been working on — and also helping to fund personally — for some time now, along with the Poynter Institute. The New Republic writer argues that this kind of commitment is pretty rich coming from the guy whose service allegedly killed newspapers by sucking the lifeblood out of the print advertising market.

The internet killed newspapers, not Craigslist

Classified local newspaper advertisement and computer mouse

MacGillis seems even more incensed by the fact that Craigslist used to make money by charging for the posting of adult-services ads, although what that has to do with anything isn’t really clear (the company shut down its adult-services section in 2010). Perhaps the point is that the site took money away from entities that produce valuable journalism and other beneficial pursuits — which would make sense if it weren’t for the fact that most newspapers produce plenty of their own disposable and low-brow content, and have since long before the internet came along.

“Ethics for journalists! How wonderful. Are those ethics different than the ones that allow one to make $36 million per year on prostitution ads, thereby making it easier to give away for free the classified listings that were a major source of newspaper revenue? Just checking.”

Leaving that part of his case aside, MacGillis’s argument that Craigslist killed newspapers is absurd, and always has been. As anyone who has followed the industry knows — and as Dan Mitchell points out in a piece at SF Weekly — the printed newspaper business has been decimated by the disruptive effects of the internet itself and by the unbundling of the tasks a newspaper traditionally performed, something Clay Shirky, Emily Bell and Chris Anderson did a good job of outlining in their “post-industrial journalism” report last year, and something disruption guru Clay Christensen has also described.

Was Craigslist a part of this phenomenon? Of course it was. Newmark’s site, which he set up to make it easy for his friends and neighbors to post items they wanted to sell, took advantage of the internet and the social web to become a huge force in classified advertising, and there’s no question that had an effect on the advertising that went to newspapers. But Craigslist wasn’t the only online provider of free ads, by any means, nor was it the only disruptive force that ate into newspaper ad revenue — the entire internet arguably falls into that category, including a little company called Google.

Craigslist is just a scapegoat

The same problem appears in a new study from NYU’s Stern School of Business, which looks at Craigslist’s impact on the newspaper industry and concludes that it siphoned more than $5 billion from the classified advertising market over a period of years — which, according to the study, caused newspapers to implement a range of steps including boosting their subscription prices and putting up paywalls. But just as MacGillis does, the study looks at Craigslist in a vacuum, as though it was the only site on the internet that had any kind of disruptive effect on newspapers, which clearly isn’t the case.


The reality is that the decline of print advertising rates and the resulting effect on newspaper revenue would likely have occurred with or without Craigslist, driven by the explosion of webpages and ad providers and the advertising industry’s increasing desire to focus on digital markets, not print-based ones. And those factors were arguably compounded by the newspaper industry’s focus on dumping commodity news content onto the web without approaching it as a separate market, the way web-native providers did.

Blaming Craigslist for the death of newspapers is like blaming Napster for the decline of the record industry: it makes for a convenient scapegoat, especially when the members of the market that has been disrupted don’t want to focus on how their own mistakes and ignorance helped push them off the cliff.


This post was updated on Thursday to reflect the fact that Craigslist used to charge for adult services but has since shut down that section of the service.

Post and thumbnail images courtesy of Flickr user Zarko Drincic and Shutterstock / Feng Yu

Snooping on your kids: How I felt about my father’s online surveillance of me

(This was written by my middle daughter Meaghan, about the online surveillance of my three children I engaged in when they were younger)

This post is the final entry in a series of four stories about my experiences snooping on my kids and their online behavior over a period of years — in this post, my daughter Meaghan writes about her reaction to my surveillance. Part one in the series is here, part two is here and part three is here.

Last week, my dad wrote here about his experiences keeping an eye on me and my sisters while we were online, using keystroke-recording software, what amounts to “Facebook stalking,” and also following all three of us on Twitter and Tumblr. As a result of it all, he’s received a lot of feedback, most of which seems to be split essentially down the middle. Some people think what my dad did was the right thing — that watching over us on the internet was the responsible thing to do as a parent in this day and age — but others haven’t been so supportive.

In response, my dad and I both thought it would be a worthwhile idea for me to provide an account of my feelings about him “spying” on me.

For one thing, I don’t think spying is really the right word for what he did. Dad never hid his surveillance from me; he asked for my usernames and URLs on various websites, and talked to me about what he was seeing. Which — as is to be expected for a twelve-year-old girl speaking to her father — often led to some embarrassing conversations, and I admit the rebellious teenager in me resented it.

Privacy is a tricky thing to define

Conversations and resentment like that are hard to avoid for parents. But when I was a frequent user on Gaia Online, and even as I discovered Tumblr, I was always aware that my dad was paying attention. He’d check up on my Tumblog every so often, and if my URL had changed, he’d ask me, and I’d give it to him. I rarely felt that I needed to hide my online activity from him (though I suppose I never really tried).

That said, however, I do understand where some of the backlash is coming from. Some parents are very strict about keeping an eye on their kids in regard to cellphone usage, visiting with friends, and dating, which can sometimes backfire on them. Alternatively, some parents are not nearly as diligent, believing that freedom will keep their children on the straight and narrow of their own volition, which can also have unforeseen repercussions.

The concept of online privacy is a difficult one — even governments are still debating it and trying to pin it down, and it’s no different when it’s in the home. It’s understandable to see what my dad did with my sisters and me as a huge breach of trust, and as an invasion of our privacy. Definitely, there are facets of my online life and experiences I’ve had — or wanted to have — that I would have preferred to experience without my father’s supervision. And there have been times when I lamented that “my life is over,” and “you’re the worst, I hate you, get out of my life,” when my dad came to talk to me about what I was doing.

On the other hand, I think having him supervise — and knowing that he was supervising — helped me not only to stay out of trouble and behave appropriately for my age, but also fostered a certain amount of critical thinking about why my dad worried about some of the things I did.

A Panopticon phenomenon

It became something of a Panopticon surveillance phenomenon: by not knowing when my dad was watching, I policed my own behaviour and came to better understand what was good or bad, and why. It left me feeling much better about my experiences online knowing that my dad was there not only keeping me out of trouble, but also keeping an eye out for trouble that might be targeting me. I know that I never added any strangers on MSN or AIM or anything like that, but if I had, there would have been no worry in my mind that any predators or strangers could have taken advantage of me.

Having my dad watching me online never left me feeling like I was unable to do anything, and certainly nothing was ever blocked or password-protected. It wasn’t that I had my dad looking over my shoulder physically as I surfed the internet. The intent behind it was clear, at least to me: “Make mistakes and learn from them.”

I was invited to create my own borders on the internet, and it led me to make a lot of better choices than I might have otherwise. I found a community of writers that fostered my talent and put me on the path to cultivating a hobby I enjoyed. Through that, I found another community of fans who share an appreciation of books, movies and television shows, which helped me to further my writing hobby. Being able to write my own rules when it came to the internet while still having the guiding hand of my father behind me allowed me the space to find what I was really looking for online: companionship.

All in all, my dad’s surveillance of my internet activities has not impacted me negatively in the slightest. I don’t know what my online experiences would have been like if my dad had been completely missing, or too involved in them — I do know that I appreciate what he’s done for me and my sisters. In a way, it almost feels like it’s a specific kind of affection: that my dad cares enough to find out what I’m doing online, but also cares enough that he trusts me to make the right decisions without hurting myself. I think that shows a level of parenting most children would be happy to have.

Images courtesy of Shutterstock users Lightspring, Denis Vrublev and Sergey Nivens

Snooping on your kids: Sometimes surveillance defeats the purpose

This post is the third of four stories about my experiences snooping on my kids and their online behavior over a period of years. Part one is here, part two is here and the final instalment is here.

In the first two installments of this series, I talked about how I started eavesdropping on our two younger daughters’ behavior online — out of a somewhat misplaced desire to protect them from a variety of imagined dangers — and how I learned something about them along the way, despite misgivings about my surveillance activities.

Our youngest daughter proved to be even more of a revelation in some ways, both because of the way the social web has evolved since I started my family spying program about a decade ago, and because of how her reaction to my monitoring made me rethink what I was doing.

In many ways, the evolution of our daughters’ use of the web has been a kind of microcosm of the broader changes in the internet over the past decade: When I started paying close attention to what our oldest was doing online as a teenager (she is 24 now), it was primarily instant messaging — which now seems like an ancient relic of the web, thanks to the rise of texting and apps like Snapchat or Instagram — as well as some websites where you could play rudimentary games or do puzzles. So a simple keystroke-logging program allowed me to eavesdrop quite easily on most of her activity.

The rise of Facebook and the social web

Facebook

By the time I started monitoring our second-oldest daughter and her online behavior as a teenager (she is now 19), she spent some time on websites with games or jokes, but she also started to spend a lot more of her time on sites and services that were more like prototypical social networks: virtual worlds like Habbo Hotel, where the engagement with other users was far more important than the actual surroundings or the simplistic games that were played — and sites like Gaia Online, which offered the ability to write interactive fiction with others who were passionate about the same topics.

In much the same way, we’ve seen the internet evolve from being just a series of static websites through the dawn of what used to be called “Web 2.0” or the interactive web, to the rise of full-fledged — and globe-spanning — social networks like Facebook and Twitter.

Interestingly, all three of our daughters have used Facebook (which started to become popular just as our oldest reached her teenage years), but their usage waned substantially as they grew older — and it is also a much smaller focus for our youngest daughter than it was for our other two at the same age.

In some ways, they seem to see Facebook as almost a necessary evil, like email is to an older generation, rather than something they want to spend a lot of time on for their own purposes. My colleague Eliza Kern has written about this phenomenon, which I think is fairly widespread with younger users.

Facebook gives way to Tumblr and Twitter

Tumblr

If our middle daughter started the trend in our family of being more interested in sites with a social element rather than just games or other activities, our youngest continued it — beginning with sites like Club Penguin as a child, and then moving on to Facebook and others as she became a teenager. What was interesting about her use of the web, however (as opposed to the usual teenager behavior like texting) was how quickly it started to center around Tumblr and Twitter, and how that more or less stymied my attempts to monitor her online activity the same way I had with her older sisters.

While keystroke-logging software worked with a one-on-one IM conversation, it was of no real use for texting (I didn’t really investigate whether there were similar tools for phones, because that seemed a little too draconian even for me) and it didn’t help much with trying to keep an eye on what she and her older sister were doing on Tumblr or Twitter either. All I got was a mess of text without any kind of reference point for who or what they were talking to or about, which didn’t help much.

And so I did what I’m sure plenty of other parents have done in a similar situation: I more or less gave up on the automated snooping and turned to stalking, by friending them on Facebook and following them on Tumblr and Twitter. The difficulty there, of course, is that following someone is a very difficult thing to keep hidden from the person you are following — it becomes obvious as soon as you do it, unless you create a secret account under a pseudonym just for the purpose, which seemed like a lot of effort to go to.

I decide to stop stalking my kids


My daughter’s response to this was fairly predictable: She hated the idea that I was somehow looking over her shoulder while she interacted with her friends and other fans of the TV shows she talked about on Tumblr and Twitter, and I’m sure she felt much like I did when my parents would sit in the dining room and watch my friends and me trying to have a party in the living room — like a giant wet blanket had been dropped on her online life, smothering any chance of spontaneity. When I asked her to change her online name because it seemed a little offensive, she rolled her eyes and complied, but I could tell I had crossed a line.

Both her response and that of her older sister — who also spent most of her time on Tumblr, live-blogging Teen Wolf and Doctor Who and other favorite shows with an online community of fans — somehow made me feel worse than I had felt before, when I was just anonymously snooping on my daughter’s IM conversations. The idea that even my virtual presence on Tumblr or Twitter might prevent them from being able to express themselves or interact with their friends (some of whom they have never met) in an authentic way made me feel like I was robbing them of one of the most powerful features of the social web.

I had become increasingly concerned over the years about the broader invasion of privacy that my monitoring represented, and had also come to the conclusion that all of my surveillance was achieving very little — since it didn’t actually help me understand what they were going through or where potential trouble spots might lie.

But it was the interference with their development as fully functioning social human beings (whatever that means in an online context) that really gave me pause, and finally made me step back from all of my monitoring.

Now I am back to crossing my fingers and hoping for the best, like most parents have done since the beginning of time.

Monday: One of my daughters talks about what it was like to have a snooping parent.

Images courtesy of Shutterstock / Lightspring, Flickr user Gabrielle Colletti and Shutterstock / ollyy

Jack Dorsey on Twitter’s turning point as a news entity: The day a plane landed in the Hudson

After seven years with Twitter as a part of the social-media ecosystem, we’ve become pretty accustomed by now to the idea that the service functions as a real-time news platform — a cross between a social network and a newswire staffed by millions of volunteer journalists, reporting on everything from a revolution in Egypt to the killing of Osama bin Laden. Was there a turning point when Twitter stopped being just a plaything for nerds and started becoming a journalistic entity? Co-founder Jack Dorsey says there was: the day an airplane crash-landed in the middle of the Hudson River in 2009.

Dorsey, who famously sketched out the idea for Twitter in 2000, talked to CNBC as part of the network’s recent documentary entitled “The Twitter Revolution,” and described it as the moment when the world started looking at the service as a potential news source rather than just a tech startup with a funny name. “It just changed everything,” he said. “Suddenly the world turned its attention (to us), because we were the source of news — but it wasn’t us, it was this person in the boat, using the service, which was even more amazing.” You can hear more from Dorsey about creating the experience of Twitter at our RoadMap conference in November in San Francisco.

[Embedded CNBC video: http://plus.cnbc.com/rssvideosearch/action/player/id/3000185240/code/cnbcplayershare]

A sea change in the way the news works

Those comments from Dorsey resonated with me personally, because the landing of US Airways Flight 1549 was definitely a turning point in the way that Twitter was perceived by the traditional newspaper journalists I was working with at the time. Some of us had already begun to see the service as a powerful way of connecting with readers around our work, but few had seen the potential for Twitter to become an actual source of news — a way for the “sources to go direct,” as blogging pioneer Dave Winer has put it.

Even before the Hudson landing, there had already been a few incidents where Twitter had shown a glimpse of that potential: a rash of fires in California, an earthquake in China, and so on. But for whatever reason, the airplane rescue captured the imagination of many more people — journalists and otherwise — perhaps in part because it was such a miraculous event. And Janis Krums, the ferry passenger who took the iconic photo of the plane in the water, inadvertently became the prototype of the Twitter-enabled “citizen journalist.”

Over the next two years, Twitter became a larger and larger force not just in the delivery of traditional news but the actual creation of news — in the sense of those “random acts of journalism” that Andy Carvin of National Public Radio has talked about, like the one in which a computer programmer in Pakistan live-tweeted the U.S. special forces attack on Osama bin Laden’s compound. And by 2011, Carvin would be using Twitter as a crowdsourced real-time newsroom to report on the uprisings in Egypt and elsewhere (he has given the Smithsonian the iPhone that he used to do a lot of his Twitter curation).

A megaphone for the world to use

To reinforce that point, in another clip from the CNBC special, Bahraini activist Maryam Al-Khawaja talks about how Twitter has changed the way that dissidents in her country and elsewhere in the Arab world get their message out and connect with others who can help them or who are fighting similar battles:

[Embedded CNBC video: http://plus.cnbc.com/rssvideosearch/action/player/id/3000187809/code/cnbcplayershare]

The CNBC documentary has other segments as well, including one that follows Twitter CEO Dick Costolo to the gym for his workout, and a look at how social media affected the environment around a high-profile rape case in Torrington, Conn. — but for me, the comments from Jack Dorsey about Twitter’s role in the media just reinforced how far we have come in such a short time.

In many ways, the transformation that was triggered by that photo of Flight 1549 is still underway. Twitter is struggling to figure out what that means for it as a company, and also how it will deal with the conflicts between its own interests in doing business around the world and the restrictions that some countries want to place on the freedom of speech that it allows. But there is no question that, for better or worse, it has changed the way the news works forever.

Images courtesy of Shutterstock / Lightspring and Shutterstock / Vlad Star

Snooping on your kids: what I learned about my daughter, and how it changed our relationship

This post is the second of four stories about my experiences snooping on my kids and their online behavior over a period of years. Part one is here, part three is here and the final instalment is here.

When parents stoop to spying on their children, it’s usually because they are afraid something terrible is happening that they don’t know about — and often they turn out to be right. In my case, I chose to do it partly as a way of learning how to use the tools and partly as a kind of research project into my own children and their online behavior. And I learned a lot.

In the first part of this series, I talked about how reviewing some keystroke-logging software in the early 2000s — designed primarily for businesses to monitor their employees at work — lured me into eavesdropping on my three kids over the course of a decade, using a variety of tools that at times made me feel like I worked for the National Security Agency.

Tracking the online behavior of our first daughter didn’t reveal all that much, apart from the usual teenager angst, but things were somewhat different with our second daughter — in part because she was a different person, obviously, but also because the way she used the internet was different.

As I tried to point out in my first post, I am well aware of the ethical quandary that I dove into when I started this monitoring process, and if I wasn’t already aware of it when I started, I was regularly reminded of it whenever I brought the topic up with friends and fellow parents. Many of them accused me of acting like the secret police, and of not trusting my daughters enough — and yet, at the same time, I thought I could see in some of them a secret jealousy of my abilities, since they all felt the same parental desire I did: namely, to watch over our children in every way possible.

The dawn of the social web

weed joint

Our first daughter was kind of an experiment, since I was new to the tools available, and the social web was also relatively new: there was no Facebook yet, and no Twitter, and blogs were only just becoming popular with a small group of hardcore nerds. LiveJournal was fairly prominent — although my daughter didn’t really use it — but the really big deal, especially for teenagers, was instant messaging via AOL and MSN Messenger and ICQ (anyone remember them?). As far as my oldest was concerned, that was the entire internet.

Apart from one brief mention of marijuana experimentation at a friend’s party, trolling through my daughter’s IM conversations and emails via the aforementioned keystroke-logging software didn’t produce much of interest. There were no secret messages to older men arranging to meet them at a shopping mall, or any of the other bogeymen that parents have been taught to fear when it comes to the internet. And of course, the fact that it was boring was very reassuring.

Our second daughter used instant messaging a fair bit, and I continued using the keystroke-logging program for that purpose, as well as some other tools that pulled in email, etc. But as she moved into her teenage years, she started to spend less time on instant messaging and on childish websites playing silly games, and more time on another category of sites that I had never heard of before: sites that when I look back on it were like early prototypes of social networks — but aimed exclusively at teenagers rather than broadly targeted ones like Facebook or MySpace.

Habbo Hotel and Gaia Online


Habbo Hotel was one example of this phenomenon: a site that used cheesy eight-bit graphics reminiscent of an old handheld videogame to create a world where residents of a giant hotel could set up their own rooms for a variety of purposes — including music, games, or just chat — and then invite people into their rooms and interact with them. At one point, Habbo (which was owned by a Finnish company) was one of the biggest traffic stories on the internet, and my daughter and her friends spent hundreds of hours a month on it. In some ways it was the Facebook of its day.

The hard part for me and my NSA-style surveillance program was that Habbo also proved very difficult to monitor effectively using most of the tools I had — except maybe the one that took random screenshots at regular intervals, which used up a lot of resources. (My brother-in-law actually blocked Habbo Hotel at the router level so that his teenaged children wouldn’t go there, and eventually had to shut the internet off at night because they still managed to find a way around his block.)

The most interesting aspect of my daughter’s internet use was the amount of time she spent on a site called Gaia Online, which as far as I could tell was devoted to games and socializing, primarily around Japanese anime TV shows. But my keystroke-logging program picked up something fascinating after a while, which I admit I wasn’t expecting: My middle daughter, who hadn’t really shown any interest in writing for school purposes, was spending hours every day writing interactive fiction on Gaia Online — long, involved, emotionally complicated stories based around characters from anime shows.

An unexpected insight

gaia online

Gaia Online was one of the first sites I came across that engaged in this kind of interactive fiction, where one writer would start a story and then others would add to it or take it in a different direction — or suggest different plot twists for the original author. This is almost exactly what Wattpad does now — the Toronto-based startup financed by Khosla Ventures allows authors (including some prominent ones like Margaret Atwood) to upload unfinished work and get feedback from readers.

The upshot of all this was that my snooping revealed not so much the questionable behavior I had been afraid of finding, but a whole side of my daughter that I had never really expected to find — a side that voluntarily spent hundreds of hours writing fiction and interacting with friends around that fiction. And while my daughter hasn’t become a famous writer (yet), she still carries on this behavior today, only now it occurs on Tumblr and is based around TV shows like Doctor Who and Teen Wolf. In a sense, this has helped to shape how she interacts with media as an adult, which I find fascinating.

This revelation made me feel even more torn when it came to my surveillance of her: On the one hand, I still felt bad for invading her privacy — something we have talked about since she stopped being a teenager — but I was also grateful in a sense for being able to discover this other side of my daughter, one that was filled with talent and a love of language and creativity. Does that make it worth all the snooping? That’s hard to say. I wouldn’t really wrestle with that question directly until I started to apply the same surveillance approach to our third and youngest daughter.

Tomorrow: How — and why — I decided to stop snooping on my kids.

Images courtesy of Shutterstock / Lightspring, Shutterstock / Vlad Star and Shutterstock / noporn

Snooping on your kids: If the NSA’s tools were available, I probably would have used them

This post is the first of four stories about my experiences snooping on my kids and their online behavior over a period of years. Part two is here, part three is here and the final instalment is here.

This isn’t an easy thing to admit, but I felt a secret twinge of shame when I was reading the recent leaks about the National Security Agency’s surveillance program — the one that allows them to index all the phone calls of suspected threats, scoop up emails and other internet traffic, and even reportedly listen in on real-time voice and text chats. Why? Because I have either used or tried to use similar types of tools (on a much smaller scale, obviously) to snoop on, creep, stalk and otherwise digitally eavesdrop on the behavior of my children over the past decade or so.

While the tools may have changed over the years, and the websites and mobile apps and social networks they used have also evolved — from simple instant messaging and gaming through virtual worlds like Habbo Hotel and Club Penguin, all the way to Instagram, Snapchat and Tumblr — the ethical and social dilemma remains the same for many parents, I think.

The NSA and its defenders have argued that what the agency does is justified — even though it may technically be against the Fourth Amendment — because it allows them to identify potential terrorist threats to the U.S. I made a similar argument to myself about the surreptitious monitoring of my daughters’ online activity: namely, that by doing so, I was helping to identify potential threats to them in the form of drug abuse, poor relationship decisions and other hazards of teenage life. Was I right to do so? To be honest, I’m not sure.

Invasion of privacy or parental right?


I do know one thing: when I casually mentioned to a friend and fellow parent several years ago that I was spying on my then-teenaged daughter while she was on the internet — capturing instant-messaging logs, reading emails, even at one point using “keystroke logging” software to track what she typed — my friend was not supportive at all. Instead, she was horrified. How could I do this, she asked, when it was such an invasion of my children’s privacy?

At the time, I made the same argument that legions of parents before me have probably made, which is that my children really have no expectation of privacy while they are under my roof. In a sense, I figured they were subject to my laws rather than those of the Constitution — within reason, of course — and if I believed that invading their privacy was what was required in order to keep them safe, then I figured I should be entitled to engage in whatever behavior I saw fit. Shouldn’t I?

The hard part about all this, however, is that there’s a lot more involved than just reading your child’s diary or picking up the extension in the living room to try and eavesdrop on a call they are making from the basement. Although I have stopped snooping on my three daughters — since the oldest is now 24, our middle child is 19 and the youngest is almost 16 — I suspect there is so much technology out there that will let you track their every click and status update that you could (as I did) find yourself getting sucked far deeper into monitoring than you ever intended to go.

When I look back at it now, after almost a decade since I first began monitoring their online activity, I can see a number of lessons, some of which are more obvious than others. And I can see how in some ways it was a mistake, but in other ways it showed me things about my children — worthwhile, valuable things — that I would never have learned otherwise. And what’s also interesting is how different all three have been in a number of ways: in their use cases for the internet, in the technologies they chose, and in how all that affected my own approach to eavesdropping on them.

Keystroke capture meets teenager

Free keylogger software by IwantSoft

My interest in all this was triggered in the early 2000s, when I decided to do a review of some software that allowed anyone with access to a computer to capture the keystrokes of a user and store them in a file for viewing later. The software was targeted at employers, but parents were also a potential market — as an alternative to earlier “gatekeeping” software such as Net Nanny, which could be used to block certain websites from young children.

At the time, my oldest daughter — who was then about 13 — had been spending a lot of time talking with friends on Microsoft’s MSN Messenger, and I thought the software would let me eavesdrop a little on her conversations while I reviewed it. I installed it as directed (it was just a driver that loaded before the keyboard driver and stored all the information sent via the keys), and soon I was reading all of my daughter’s chat conversations.

For the most part, this was incredibly boring, I’m happy to say. Our daughter wasn’t the kind of troubled child who cried out for internet monitoring, so there was nothing outlandish like plans to meet up with some 35-year-old in Detroit. There was a lot of talk about boys and homework, and TV shows or books she liked. There wasn’t even any sign of “cyber-bullying,” which had become a big topic of conversation in the media, and which a niece of mine had been subjected to during her teenage years (another reason I was curious to try out the software).

A permanent loss of trust?

father daughter

The only thing remotely interesting that turned up was a conversation about smoking pot one night at a friend’s party. Since 13 seemed a little young for that kind of experimentation, my wife and I had a little chat with our daughter about the wisdom of that kind of activity — without telling her how we found out about it — and that was pretty much the end of it. Eventually, I stopped looking at the emailed chat logs that the software forwarded me (it would send them based on certain word triggers as well) and went back to not paying much attention to what my daughter did online.

After the discussion with my friend and fellow parent who was shocked about my invasion of our daughter’s privacy, I did tell our kids that we had ways of looking over their shoulders online (without going into too much detail) and that we wouldn’t hesitate to use these powers if necessary. Better to be vague, I thought, so that they wouldn’t know what we were capable of — another echo of the NSA’s approach.

Obviously, my daughters’ emotional turmoil and fondness for certain bands isn’t even remotely comparable to the dangers of terrorism, but the parallels with what the NSA does (and what American citizens allow it to do in their name) still seem pretty strong to me. I believed that what I was doing was justified because I wanted to protect my daughters from themselves — but in the end, I decided that the loss of trust was actually much worse than anything I was theoretically saving them from. Is there a lesson for the NSA in there?

Thursday: My surveillance program continues with our middle daughter, and I discover something unexpected about her.

Images courtesy of Shutterstock / Lightspring and Shutterstock / Denis Vrublevski

The “barbell problem” in media: The ends are fine, but the middle is getting squeezed

While in New York this week for a GigaOM event, I had coffee and lunch with a number of media-industry insiders and observers, including Jay Rosen and Clay Shirky – two people I think are among the smartest media analysts in the business. And one thing that kept coming up is what I have chosen to call the “barbell problem” for media, and specifically for newspapers: in other words, the feeling that while both ends of the journalism spectrum are probably going to be fine, the middle is getting squeezed to the point where its future is uncertain at best.

So the New York Times, for example, is going through the same kind of uncertainty and upheaval as the rest of the industry – laying off staff, cutting costs, selling assets. But while the paper’s paywall and other measures may not totally fill the gap caused by the erosion of advertising revenue, the NYT has enough resources not only to survive but to do well. Likewise, the Financial Times and the Wall Street Journal will probably survive and prosper, along with some other large brands.

Some prominent journalism brands will likely be fine

This is exactly why Shirky and his coauthors on the recent “Post-Industrial Journalism” report from Columbia specifically excluded any discussion of the Times from their analysis of the future of journalism. As Shirky described it, it’s like the average driver measuring themselves by looking at someone who races on the Formula One circuit. Practically speaking, there are very few meaningful lessons other newspapers can learn from the New York Times.

Tribune

That’s one end of the barbell. The other end is the ultra-small, hyper-local newspaper – the daily or even weekly broadsheet that serves a small town or region, where the disruptive forces of the Web haven’t made themselves felt as strongly and local shopping flyers are probably still a pretty good business. This is the kind of newspaper that billionaire Warren Buffett is buying up – the kind that still has a lock on a local market. Paywalls may work well here because of the lack of compelling alternatives.

And what’s in the middle? Everything else – medium-sized papers like the Miami Herald or the San Francisco Chronicle or the Boston Globe, as well as most of the larger metro papers like the Chicago Tribune and the Los Angeles Times and the Philadelphia Inquirer. What does their future look like?

Many of these papers have been trying to make paywalls work, but for most the results appear to be fairly lackluster at best – even the Boston Globe, which is far from the worst newspaper in a medium-sized market, has attracted just 28,000 subscribers after more than a year. Its owner, the New York Times, has put it up for sale and may get less than $100 million for it, and that’s after removing the single most damaging part of the business from the equation – namely, the paper’s $200 million or so in pension obligations.

What happens to the news that doesn’t pay?

Those pension obligations are one of the biggest millstones around the necks of traditional media entities. And the bottom line is that even with some reader support, as Rosen and I discussed, these papers are going to have to shrink dramatically or come up with new forms of revenue, which is why the Washington Post is experimenting with what has come to be known as “sponsored content” (something we’ll be talking about more at paidContent Live on April 17).

In a recent post at Slate, writer Matt Yglesias responded to the somewhat fatalistic tone of coverage around the recent Pew report on the state of the media by arguing that as news consumers, we are better off now than we have ever been, thanks to social media and other forces. And it is easy to see how that is the case for certain topics and certain parts of the world – but as Dan Mitchell pointed out in a rebuttal to Yglesias, it isn’t the case for much local coverage of things like municipal affairs and public-policy topics.

So what happens to that kind of coverage as newspapers shrink and even die? If all the things that have subsidized that kind of journalism have been removed – the car ads and travel writing and so on – all these papers are left with is the kind of content that advertisers aren’t interested in and readers don’t want to pay for. What then? ProPublica and the Texas Tribune are interesting publicly supported models, but how scalable are they? Is every state or region going to have one?

Will some form of “citizen journalism” be able to fill this gap – whether local bloggers or some kind of automated Twitter feed? Perhaps. Will newspapers use outsourced services like Journatic or even robot journalists like Narrative Science? In all likelihood it will be a combination of all of these, and possibly other things we haven’t even thought of yet. At this point, the answers are a whole lot murkier than the questions.

Post and thumbnail images courtesy of Flickr users George Kelly and Jan-Arief Purwanto

One big red flag for Facebook investors: Zuckerberg’s iron grip

Facebook’s initial public offering is one of the most eagerly anticipated technology IPOs since Google went public in 2004: It is a launch that values the social network at more than $100 billion, and it has sparked a number of debates about the future of the company, including whether social advertising is effective or not and whether the company is failing to take advantage of mobile properly. But there is another issue investors should be concerned about when they look at Facebook, and it is arguably even more important than any of these operational questions: namely, the iron grip co-founder and CEO Mark Zuckerberg has over the shares of the company.

For someone who only just turned 28, Zuckerberg wields an almost unprecedented amount of power over the fate of Facebook — not just over the votes that go along with shares of the company but also over the board of directors. The secret to this tight grip is buried in the history of the social network, and in particular in the advice Zuckerberg received from Napster co-founder Sean Parker, who was a mentor to the Facebook founder and at one time was president of the social network. The implications of what he helped Zuckerberg construct are profound and create a substantial investment risk.

Sean Parker helped Zuckerberg take control of the company

Long before Facebook became the giant it is today, with almost a billion users and revenue of more than $2 billion, Parker advised its young CEO that he should do everything he could to hang onto control of his company. This advice was based on Parker’s own experiences at Napster and another company called Plaxo, where he was removed by the board. So when Facebook took its initial funding round in 2005, he convinced Zuckerberg to push for control over two seats on the board of directors. And when Parker left the company that same year (after some bad publicity over a drug charge), he gave Zuckerberg his seat as well.

That gave the Facebook CEO control over three of the company’s five board seats. But that was just the beginning: In 2009, the board (which he controls) created a new class of “super-voting” shares that have 10 votes per share, compared with the single vote normal shares have. Through his ownership of a chunk of these super-voting shares, Zuckerberg controls almost 30 percent of the votes in the company. Plus, he has proxy agreements with a number of other shareholders, including co-founder Dustin Moskovitz, that give him control over their shares as well.

In all, these various arrangements give the Facebook CEO control over the board of directors and about 58 percent of the votes in the company. If Zuckerberg decides to do something, he doesn’t even have to ask for permission from the board or put it to a shareholder vote (or if he does the latter, he knows he can prevail). Investors got a tangible example of what this means last month, when Facebook announced it would be acquiring the hot mobile photo-sharing service Instagram for $1 billion. According to a number of reports, Zuckerberg told the board of directors about this billion-dollar deal later — via email.
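To make the arithmetic behind those numbers concrete, here is a minimal sketch in Python of how 10-to-1 super-voting shares plus proxy agreements can turn a modest economic stake into majority voting control. The share counts below are purely hypothetical, chosen only to echo the rough percentages above; they are not Facebook’s actual cap table.

# Hypothetical, illustrative numbers only -- not Facebook's actual cap table.
SUPER_VOTES = 10      # votes per Class B ("super-voting") share
NORMAL_VOTES = 1      # votes per ordinary Class A share

founder_b = 450_000_000      # Class B shares the founder owns outright
proxied_b = 420_000_000      # Class B shares voted on his behalf via proxy agreements
other_b   = 300_000_000      # Class B shares held by other insiders
public_a  = 3_300_000_000    # ordinary Class A shares

total_shares = founder_b + proxied_b + other_b + public_a
total_votes = (founder_b + proxied_b + other_b) * SUPER_VOTES + public_a * NORMAL_VOTES

print(f"Economic stake:        {founder_b / total_shares:.0%}")                            # ~10%
print(f"Votes from own shares: {founder_b * SUPER_VOTES / total_votes:.0%}")               # 30%
print(f"Votes incl. proxies:   {(founder_b + proxied_b) * SUPER_VOTES / total_votes:.0%}") # 58%

The point of the exercise is simply that the 10x multiplier does most of the work: a founder holding roughly a tenth of the shares outstanding can still command well over half the votes once proxy agreements are added.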

Super-voting shares are popular, but are they wise?

The visionary founder who rules the company he created with an iron fist has become a Silicon Valley archetype, thanks in large part to Apple CEO Steve Jobs, who was famously ejected from the company he co-founded because of his erratic behavior. He returned later to rescue Apple from certain disaster and transform it into one of the most valuable companies in history, giving every startup founder a reason to believe that he too should be allowed to control every aspect of his company, regardless of things like shareholder rights.

Google also helped popularize the idea of multiple-voting shares, which gave co-founders Larry Page and Sergey Brin and former CEO Eric Schmidt effective control over the company. In Google’s offering prospectus when it went public in 2004, the co-founders argued this kind of arrangement was necessary in order to ensure Google remained true to their long-term vision rather than getting sidetracked by questions of short-term financial value.

A number of other tech companies have chosen this same route, including both LinkedIn and Zynga. They have also made the argument that dual-class shares are necessary to protect the company and its vision from the vagaries of the daily stock market. (What many of these companies don’t say is that such structures also make it a lot harder to acquire the company, something that has made multiple-voting shares popular with media companies such as the New York Times and Washington Post.)

Multiple-voting shares are a double-edged sword

So why should investors be wary of multiple-voting shares? Because while they protect the long-term vision of the founders of a company, they can also protect the founders from the kind of shareholder action that is often necessary when a company loses its way. Yahoo and AOL, for example, have both been the target of lobbying efforts by activist shareholders who argue they have been mismanaged and that their share price has been damaged. If either AOL or Yahoo had provisions like Facebook does, there would be no hope of ever changing anything at either company.

As a number of shareholder-rights groups and activists pointed out in advance of Facebook’s IPO, tapping the public markets for funding is supposed to bring with it certain responsibilities on the part of corporate executives — including a duty to be answerable to common shareholders and to be held accountable for any failure to act in the best interests of those investors. If companies don’t want to abide by those rules, the argument goes, they should remain private and seek funding from banks or private venture-capital groups.

In a recent report about Facebook’s voting structure, the activist group Institutional Shareholder Services said the dual-class structure is “an autocratic model of governance” that makes Facebook “less viable than a competitor whose governance gives owners a voice proportionate to the economics they have at risk.” The group also criticized the growing trend for such super-voting shares, saying, “Facebook appears to have taken the same outdated dance lessons as many other recent tech sector debutantes.”

Having Zuckerberg control 58 percent of the votes at Facebook isn’t likely to be a problem for anyone as long as the company is making smart decisions about what to do with its money and as long as the stock price is going up. But if the stock starts to go south and Facebook makes some unwise decisions, it will become obvious that such voting structures are a fair-weather friend: They are great to have when things are going well, but they can become a huge liability when things go badly.

Is Jack Dorsey the heir apparent to Steve Jobs?

Before Steve Jobs had even passed away, people had already started playing the “who is the next Steve Jobs” game — trying to come up with names of technology and design visionaries who might be able to don the mantle of the Apple co-founder and CEO. Jeff Bezos of Amazon? Napster co-founder and early Spotify backer Sean Parker? Those names and others have been floated by industry watchers, but listening to Twitter and Square founder Jack Dorsey at GigaOM’s RoadMap conference on Thursday made me think that he is at least as strong a contender for that mantle (if such a thing even exists) as any of them. Could Dorsey change the way we interact with technology and the world around us in as profound a way as Jobs did?

Why do we even need an heir to Steve Jobs? The obvious answer is that we don’t. Jobs was unique, in both positive and negative ways, and the precise combination of those features made him who he was and thus made Apple what it was. No one is going to be the next Steve Jobs because they will have a different combination of strengths and weaknesses, and they may not be as lucky or as smart in specific ways. But when it comes to the role that Jobs played in technology — the role of visionary designer, creator, instigator and disruptor — we need those people more than ever, because visionaries inspire others, even if the things they do themselves don’t always succeed. They change the way we look at the world in fundamental ways.

I haven’t spent a lot of time around Jack Dorsey, but based on his conversation with Om at RoadMap, he clearly spends a lot of time thinking about the big picture behind the technology that he is involved in. So it’s not just about Twitter and how it works — or what it looks like or even how to monetize it — but how it connects us to our own “humanness” as he put it, and enables us to experience things and see through the eyes of others. He described how he found this an incredibly powerful thing during the protests in Iran, and I think others have had a similar response to the events of the Arab Spring and the earthquakes in Japan and Haiti.

And when it comes to Square — the other company that Dorsey is helping to shape and create — it’s not just making payments easier or more efficient that interests him, but how making that easier can help artisans and individuals become fully functioning businesses more easily, and how that could help change society.

Dorsey’s roles with two very different companies have also sparked some comparisons to Jobs, who helped revolutionize animated films with Pixar while also changing the personal-electronics industry at Apple (the differences between Square and Twitter are arguably even more dramatic than those between Pixar and Apple, since Square is a device that people pay for and Twitter is a service). And Dorsey was also forced out of the company he founded, much like Jobs was — after a dispute with former CEO Evan Williams, who funded the company in its early years — and then returned to become its product visionary.

One of the things that is very different about Dorsey and Twitter stems from the fact that Twitter is a service rather than a product. Under Steve Jobs, Apple excelled at product design, but it has been notoriously inept at anything service related: iTunes, to take just one example, is a total mess when it comes to usability and design despite years of evolution, and things like Ping have effectively been stillborn. One of the most powerful things about Twitter, however, is the way in which the service was transformed by its users, with additions like the @ mention and the retweet — features that were never even imagined by its creators. Steve Jobs, by contrast, wouldn’t even let people replace the battery in his products.

From what I can tell, Dorsey also seems to be missing what could charitably be called the “difficult” elements of Jobs’ personality (other people have more blunt terms for it), which are detailed in Walter Isaacson’s biography: the shouting, the merciless humiliation, the ruthlessness even with friends, the crying in meetings, and so on. One of the questions that description of Jobs raised for me is whether those things were a necessary part of his success as a visionary and designer, or simply character flaws. Would Apple products have been the same, or been as revolutionary, if he had been a different kind of person?

So is Jack Dorsey the new Steve Jobs? Probably not. But that said, he clearly has a vision about two fairly significant areas of the technology sphere — the way in which even a simple service like Twitter can change the way we interact with each other and distribute information in a digital and connected world, and the way a simple payment service like Square can potentially transform entrepreneurialism and small businesses. And he is thoughtful about the implications of those things in a way that many product or business-focused technology executives are not (he even has a fascination with the application of Zen Buddhist principles to design, as Steve Jobs did).

Dorsey has already altered the media landscape with Twitter — whether he knew that was what he was doing or not. And he is trying to alter the payment landscape as well with Square, which could ultimately help make it easier for entrepreneurs and small businesses to get paid. Whether those changes will be as massive and transformational as the ones Jobs unleashed remains to be seen, but we could definitely use more visionaries.

Google: fighting shadows with antitrust inquiry

A decade ago, Microsoft finally settled a long-running antitrust case that was launched against the software giant by the Department of Justice, based on accusations that the company was using its monopoly on computer operating systems to crush competitors. Now it is Google’s turn to face the antitrust spotlight: Although it is not the subject of a formal government case yet, the search behemoth is involved in an official inquiry by the Federal Trade Commission, which is investigating whether Google is distorting the market for web search and search-related advertising.

What makes this case more difficult than the one against Microsoft — and ultimately a lot harder to prove — is the question of what a monopoly even means in the age of the web, when the browser has taken over from the operating system as the primary way in which we use our computers and mobile devices. Does Google have a monopoly in any real sense? And if it does, can it be shown that the company is using that position unfairly, causing harm either to competitors and/or to consumers of web services?

Critics of the company argue that both of these things are true. And the list of Google’s enemies is a fairly long one, including fellow giants like Microsoft — one of the main proponents of the antitrust allegations, somewhat ironically — as well as large web services like Expedia and the recommendation service Yelp, which argue that Google is giving its own competing services preferential treatment and thereby distorting the market. And Google isn’t just facing antitrust inquiries in the U.S.: It is also under investigation in several European countries for similar alleged offenses.

A monopoly on search and search-related ads

The charges being leveled at Google are twofold. One is that it has a monopoly on search and on search-related advertising and that this gives it an unreasonable amount of control over how content is found online — since search is the primary way that many people discover websites and services — as well as an ocean of cash from search-related advertising. The second allegation is that Google uses the money from its advertising monopoly to develop or buy services that compete with those from other companies and that it then uses its control over search to give those services preferential treatment in its search results, which provides an unfair advantage.

So in the case of Yelp, whose founder and CEO, Jeremy Stoppelman, testified at a recent Senate hearing about Google’s behavior (which wasn’t part of the official FTC investigation but raised many of the same issues), the claim is that when Google couldn’t acquire the company for its local recommendations, it first tried to steal Yelp’s content and use it without asking, then threatened to remove Yelp from Google’s search results altogether, and finally bought a competing service called Zagat.

This effectively pulls in all the different aspects of the antitrust case against Google, to the extent that there is a case: the use of its giant cash reserves to try and take over Yelp; the “scraping” of Yelp’s content for use in Google’s own local service, Google Places; the pressure on Yelp to play by Google’s rules or face deletion from its all-powerful search index; and finally, the acquisition of a competitor, to which Google is allegedly giving preferential treatment in its search results. Expedia made similar allegations about Google following its purchase of ITA, which provides travel-related information that is used by Expedia and other services (a deal that was reviewed by the FTC).

Having a monopoly isn’t enough for antitrust

As I tried to explain in a recent GigaOM post, antitrust law in the U.S. doesn’t make having a monopoly in a particular market illegal. What the Sherman Act is designed to fight are monopolies that have been achieved through illegal means (i.e., collusion or restraint of trade) and/or monopolies that are being used to harm a particular sector. But it’s even more complex than that. Unfortunately for Yelp and Google’s other critics, it’s not enough just to show that a company with a dominant market position is being unfair to its competitors. It has to be proven that being unfair has some tangible impact on the market, either by restricting choice or raising prices or both.

So Yelp might argue that Google is being unfair by a) taking its content without asking and b) giving its own Zagat results a higher ranking in search (assuming it can even be shown that Google is doing this). But does Google’s behavior have any impact apart from being unfair to Yelp? Does it restrict consumer choice when it comes to recommendations services in any real way? And if it does, will consumers have to pay more for those services? Similar questions would have to be asked about Google’s dominance in search itself or search-related advertising. Does that dominance affect consumers in a tangible way?

Thomas Barnett, the former head of the Justice Department’s antitrust division, argued in a presentation to the Senate committee that Google’s control over search-related ads would lead to higher prices for those ads and that consumers would pay more because the companies buying those ads would inevitably pass those higher costs on to their customers. But this is not obvious at all. Even if someone could prove that prices for search ads are higher than they should be (whatever that means), an antitrust case would then also have to prove that companies were passing those costs on instead of just absorbing them.

One of Barnett’s counterparts, a former antitrust specialist with the New York attorney general’s office who worked on the Microsoft case, argues that Google simply doesn’t fit the description of an illegal monopolist in the way that Microsoft did. One of the main reasons is that Google provides a web service that is free to anyone and that has multiple well-funded competitors (including Microsoft itself). Users are not forced to use Google for search, nor are they forced in any real sense to pick Zagat’s reviews over Yelp’s, or Google Travel’s results over those from Expedia or Travelocity.

Dominant web players rarely last long

I’ve argued before that one of the most powerful arguments against a federal antitrust case against Google is that such investigations rarely have much impact, in many cases because they drag on too long and involve complicated claims that are difficult to prove. Also, the technology market itself usually does a good enough job of destroying or disrupting monopolies without the government’s help. A research paper that looked at several high-profile cases, including Microsoft’s and AT&T’s, came to a similar conclusion: In almost every case, technological change had more of a tangible effect than any government investigation or penalty did.

Google, for example, is under significant pressure from the socialization of the web. The way that people find content and services is being altered by the popularity of social networks like Facebook and Twitter — to the point where search may no longer be the primary way that people find new services. Google is trying to take advantage of that phenomenon by building its own Google+ network, and by adding “+1” recommendation features to its search. But will this be enough? It’s entirely possible that Facebook’s social search (which is currently powered by a partnership with Microsoft) could become a significant competitor to Google.

If and when Google does wind up testifying to the Federal Trade Commission or the Department of Justice, it might even argue that Facebook is the real threat — due to its control over the social-networking market, its refusal to release data to competitors such as Google and its powerful relationships with Microsoft, Skype and other competing services. That might seem like a legal gambit, but there is a lot of truth to it as well. The web is still changing so quickly that even a seemingly unassailable monopoly like Google’s could be over before the government gets around to investigating it.

Newspapers and Social Media: Still Not Really Getting It

Many traditional media entities have embraced social-media services like Twitter and Facebook and blogs — at least to some extent — as tools for reporting and journalism, using them to publish and curate news reports. But newspapers in particular seem to have a hard time accepting the “social” part of these tools, at least when it comes to letting their journalists engage with readers as human beings. A case in point is the new social-media policy introduced at a major newspaper in Canada, which tells its staff not to express personal opinions — even on their personal accounts or pages — and not to engage with readers in the comments.

The policy, which I received from a source close to the Toronto Star, has a number of sensible things to say about using social media, including the fact that these tools “can be valuable sources for story ideas and contacts for journalists, and as a means of connecting directly with the communities we cover.” The paper also says that it “encourages journalists – reporters, columnists, photographers and editors – to take advantage of social media tools in their daily work.” But it warns that any comments posted using such tools “can be circulated beyond their intended audience.”

This all makes perfect sense. Social media is useful for journalism, and it does connect reporters to the communities they cover — better than just about anything else does. And yes, it is wise to be aware of the unintended consequences of even offhand remarks.

No talking about what you do

Then comes the part about being impartial and objective, and that’s when the trouble starts. The policy says that reporters and editors should “never post information on social media that could undermine your credibility with the public or damage the Star’s reputation in any way, including as an impartial source of news.” And that’s not all — the document goes on to say that:

Anything published on social media – whether on Star sites or personal platforms – cannot reveal information about content in development, newsroom issues or Star sources. Negative commentary about your colleagues or workplace will not be tolerated.

In other words, no posting about stories that are being worked on, no comments on newsroom-related topics, no talking about people who might be used or are being used as sources for Star reporting. And this prohibition doesn’t just apply to Star accounts or services under the newspaper’s name — it applies to any comments that a reporter or editor might make on their own personal accounts as well. Obviously the paper doesn’t want staffers bad-mouthing each other or talking about sensitive internal issues (something the New York Times also confronted last year), but a blanket ban on anything related to content seems unnecessarily harsh, not to mention completely unrealistic.

Never talk to your readers

It gets worse. The policy goes on to say that journalists who report for the Star “should not editorialize on the topics they cover,” because readers could construe this as evidence that their news reporting is biased — and then tells reporters and editors that they shouldn’t respond in the online comments on stories. It says:

As well, journalists should refrain from debating issues within the Star’s online comments forum to avoid any suggestion that they may be biased in their reporting.

This last prohibition is a classic case of missing the point completely. According to the Star, comments on news stories apparently exist so that readers can talk amongst themselves, not as something a reporter or editor should get involved in. That’s just wrong. As someone who was intimately involved in social-media strategy for another major metropolitan newspaper in Canada (full disclosure: this paper competes with the Toronto Star to some extent), I can say that one of the main benefits of having comments is the ability for readers to interact with the writers and editors at the paper.

Treating the comments section as something that journalists shouldn’t get involved in turns it into a ghetto, and also contributes to the problems that many newspapers have with flaming and trolls and other issues — why should anyone behave properly in a comment forum if none of the staff at the paper are going to bother getting involved?

Never express an opinion on anything

The Star is far from alone in this short-sighted approach. Apart from a few staffers here and there who make use of Twitter and other social media, most major newspapers have still failed to take advantage of these tools when it comes to building relationships between their writers and readers. The biggest single factor holding them back seems to be fear — namely, a fear that they will no longer be seen as objective, something NYT executive editor Bill Keller reinforced in a recent column, in which he suggested that the paper was one of the few remaining holdouts in a world where everyone feels free to state their opinion.

Here’s a news flash for Bill, and for the rest of the newspaper world: that particular genie is already out of the bottle and has been for some time now. As journalism professor Jay Rosen has argued, the “view from nowhere” that mainstream media continues to try to defend is not only dying, but arguably does readers a disservice — since it often distorts the news in order to maintain a perfectly balanced (and unrealistic) view of events. Some journalists, such as one writer in a recent column in The Atlantic, have started to admit that they have personal interests and causes, but that remains rare.

The point that newspapers and other traditional media are missing is that social media is powerful precisely because it is personal. If you remove the personal aspect, all you have is a glorified news release wire or RSS feed with links to your content — and that has very little power any more. The best way to make social media work is to allow reporters and editors to be themselves, to be human, and to engage with readers through Twitter and Facebook and comments and blogs.

Is there a risk that someone might say something wrong? Of course there is. But without that human touch, there is no point in doing it at all.

Update: Toronto Star spokesman Bob Hepburn got back to me and said that the paper’s policy was “well in line with what mainstream media organizations have always done. We’ve always placed some limitations on journalists in terms of them expressing their opinions, either in the newspaper or outside of the newspaper.”

Post and thumbnail photos courtesy of Flickr user World Economic Forum