NYT, Al-Jazeera Doing An End-Run Around WikiLeaks

The New York Times is considering creating an electronic tip line so that leakers of classified documents can go direct instead of having to use a middleman like WikiLeaks, according to comments made by executive editor Bill Keller in an interview with the Cutline blog. Keller said that the plan is still in its formative stages, but the idea is to create a “kind of EZ Pass lane for leakers,” to make it easier for them to contact the paper and deliver information. And the Times isn’t the only one doing this: Al-Jazeera has already launched its own drop-box for leaks, and recently released thousands of documents related to the conflict between Israel and Palestine.

When WikiLeaks first burst into public view last year with a treasure trove of secret documents about the Iraq war, including a classified video of an American military attack on civilians, one of the first things some media-industry observers wondered was: why didn’t the source of this material — widely believed to be Bradley Manning, a U.S. Army intelligence analyst now being held in a military brig at Quantico, Virginia — just go directly to a newspaper like the New York Times instead of leaking it to some shadowy organization like WikiLeaks? The New York Times probably wondered that too, which is why it’s not surprising to hear that the paper is working on its own digital tip line.

In some ways, it’s surprising that it has taken the NYT and other newspapers this long to come up with this idea. Newspapers and other media outlets have always relied on those with access to secret or confidential information — either about companies or about governments — to deliver material in brown envelopes dropped off at the front desk or handed over in parking garages, as famously happened during the Watergate investigation of the early 1970s. Doug Saunders, the European bureau chief for the Canadian newspaper The Globe and Mail, compared WikiLeaks to a brown envelope when it first came to prominence, and said it was nothing more than a middleman, which to a large extent is true.

The key difference with an entity like WikiLeaks, however, is that it is also a publisher — it can instantly release whatever documents it wishes on its own web site or on dozens of other sites that it has relationships with, although so far it has only released the same documents that the New York Times, The Guardian and other media outlets have (with the names of some individuals redacted to prevent them from being targeted). The main thing that WikiLeaks gains by working with the NYT and other traditional media entities is a broader reach — in effect, publicity for the leaks, as Icelandic MP and early WikiLeaks supporter Birgitta Jónsdóttir explained in a recent speech in Toronto.

So will more leakers go direct to either the New York Times or Al-Jazeera? Possibly. But the one thing that sources gain by going through WikiLeaks instead of a specific media outlet is the knowledge that they aren’t relying on one newspaper’s view of the documents — in other words, that the New York Times doesn’t control what gets released and what doesn’t, or what gets written about and what doesn’t, since WikiLeaks typically works with several competing organizations at once. For anyone who remembers how the Times behaved when it was reporting about the issues leading up to the Iraq War, that could be a very powerful incentive to use WikiLeaks rather than going direct.

But WikiLeaks is about to get some more competition on that front as well: a new organization called OpenLeaks, set up by former WikiLeaks staffer Daniel Domscheit-Berg, is expected to launch soon with a much more distributed model, one developed in part as a response to criticism of WikiLeaks and the behavior of front-man Julian Assange. For better or worse, the organization appears to have opened a Pandora’s box when it comes to political transparency that may never be closed.

Twitter Is A Great Tool, But What Happens When It’s Wrong?

By now, thanks to incidents like the revolution in Tunisia and the recent shooting of congresswoman Gabrielle Giffords in Arizona, most people have come to grips with the fact that Twitter is effectively a real-time news network — like a version of CNN that is powered by hundreds of thousands or even millions of users around the world. But what happens when that real-time news network is spreading misinformation? That happened during the Giffords shooting, when the congresswoman was initially reported to be dead, and there have been other cases since: on Wednesday, for example, reports of a shooting at Oxford Circus in London swept through the Twittersphere but turned out to be a mistake.

The British incident appears to have been caused by two coincidental events: according to several reports, one was an email about a police training exercise involving a shooting at Oxford Circus, which somehow got into the wrong hands and was posted as though it were the real thing. Meanwhile, another Twitter user posted an unrelated message about a TV commercial “shooting” in the area, and the combination of the two fanned several hours of hysteria about buildings being locked down, police sharpshooters being brought in and so on — all of which can be seen in the chronicle of tweets collected by one Twitter observer at the site Exquisite Tweets.

In the case of Rep. Giffords, in the minutes following the initial reports of the shooting, a number of outlets reported that the congresswoman had been killed, and these reports made their way onto Twitter — in some cases because the reporters for those news outlets posted them, and in other cases because users heard or saw the reports and then tweeted about them. For hours after the shooting these erroneous reports continued to circulate, even after the reporters and media outlets themselves had posted corrections. Andy Carvin of National Public Radio, for example, spent a considerable amount of time correcting people about the report that he posted, but it continued to be re-tweeted.

This led to a discussion by a number of journalists (including me) in the days that followed, about how to handle an incorrect tweet. Should it be deleted, to prevent the error from being circulated any further? A number of reporters and bloggers said that it should — but others, such as Salon founder Scott Rosenberg and Carvin (who described his thoughts in this comment at Lost Remote), argued that the error should be allowed to remain, but that whoever posted it should do their best to update Twitter with the correct information, and respond to those re-tweeting it by telling them of the mistake. Craig Silverman of Regret The Error, who wrote a post cataloguing the erroneous reports, has also described a way in which Twitter could implement a correction function, by tying any correction to the original tweet so that everyone who saw the original would then see the update.

The problem with this approach, of course, is that Twitter is by definition a stream of content. Parts of it can be posted on blogs by using a number of tools — including the company’s own Blackbird Pie feature, as well as Storify and Curated.by — but the stream never stops flowing, and during breaking news events it flows so quickly it’s almost impossible to filter it all. And because it is an asynchronous experience, meaning people step away from it and then come back repeatedly, and therefore don’t see every tweet even from the people they follow, there is no way to guarantee that everyone is going to see an update or a correction, or to stop them from re-tweeting incorrect information (although someone suggested Twitter could allow users to block tweets from being re-tweeted).

It’s possible that Twitter might be able to either embed corrections or tie errors and updates together using its so-called Annotations feature, which the company was working on last year and had originally hoped to launch in the fall. But work on that project was apparently put on hold while the company launched a revamped version of its website, and while it sorted out the management changes that saw Dick Costolo take over as CEO from co-founder Evan Williams. It’s not clear whether Annotations will be revived, but the idea behind it was that information about a tweet — metadata such as location or any number of other variables — could be attached to it as it travelled through the network, something that might work for corrections as well.
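
For what it’s worth, here is a rough sketch of how a correction might ride along with a tweet as attached metadata, in the spirit of what Annotations was expected to allow. Since the feature was never released, the “correction” namespace and field names below are invented for illustration and do not correspond to any real Twitter API.

```python
# Hypothetical sketch only: Annotations was expected to let arbitrary key/value
# metadata travel with a tweet. The "correction" namespace and its fields are
# made up for illustration; they are not part of any real Twitter API.
import json
from datetime import datetime, timezone

def annotate_correction(tweet, corrected_by_id, note):
    """Attach a correction annotation pointing at the tweet that supersedes this one."""
    annotations = tweet.setdefault("annotations", [])
    annotations.append({
        "correction": {
            "supersedes": tweet["id"],
            "corrected_by": corrected_by_id,
            "note": note,
            "issued_at": datetime.now(timezone.utc).isoformat(),
        }
    })
    return tweet

# A client rendering the stream could then flag the original wherever it reappears,
# including in re-tweets, instead of hoping users stumble across a follow-up tweet.
original = {"id": 12345, "text": "Rep. Giffords has died, local media report."}
annotate_correction(original, corrected_by_id=12399,
                    note="Earlier reports were wrong; Giffords is alive and in surgery.")
print(json.dumps(original, indent=2))
```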

Twitter isn’t the only medium that has had to worry about corrections, obviously. Traditional media have struggled with the issue as well: newspapers often run corrections days or weeks after a mistake was made, with no real indication of what the actual error was. In a sense, Twitter is like a real-time, distributed version of a news-wire service such as Reuters or Associated Press; when those services post something that is wrong, they simply send out an update to their customers and hope that no one has published the mistake in print or online yet. Twitter’s great strength is that it allows anyone to publish — and re-publish — information instantly, and distribute that information to hundreds of thousands or even millions of people within minutes. But when a mistake gets distributed, there’s no single source that can send out a correction.

That’s the double-edged sword that a truly distributed and real-time news network like Twitter represents: it can spread the news faster than just about anything else available, including CNN, but it can also spread misinformation just as quickly.

Was What Happened in Tunisia a Twitter Revolution?

As it did during the recent shootings in Arizona, the Twitter network provided a ringside seat for another major news event on Friday — the overthrow of a corrupt government in Tunisia, after weeks of protests over repression and economic upheaval. And even as the country’s ruler was being hustled onto a plane, the debate began over whether Twitter had played more of a role in the revolution than just reporting on it as it happened: was this the first real Twitter revolution? The most accurate answer is probably both yes and no. Did it help the protesters, and thus the ultimate goal of overthrowing the government? Undoubtedly. Was it solely responsible for what happened? Hardly.

Among those arguing the question — on Twitter, of course — were foreign affairs commentator Evgeny Morozov, who writes for Foreign Policy magazine, along with Jillian York of Harvard’s Berkman Center for Internet & Society, Ethan Zuckerman — who co-founded Global Voices Online while he was a fellow at the Berkman Center — as well as media theorist Clay Shirky and sociologist Zeynep Tufekci of the University of Maryland. Shirky, responding to Morozov, said that “no one claims social media makes people angry enough to act [but] it helps angry people coordinate their actions.” The Foreign Policy writer, meanwhile, argued in a blog post that Twitter did not play a strong role, asking rhetorically:

Would this revolution have happened if there were no Facebook and Twitter? I think this is a key question to ask. If the answer is “yes,” then the contribution that the Internet has made was minor; there is no way around it.

Jillian York also cautioned against attributing too much of what happened to social media, saying: “Don’t get all techno-utopian. Twitter’s great for spreading news, but this revolution happened offline.” She later amended her comment, however, saying that she definitely believed social media played a role in the day’s events. Tufekci, meanwhile, wondered why there had to be such a dividing line between offline vs. online activity, asking: “I don’t get this was it online or offline dichotomy. The online world is part of the world. It has a role.” She added that trying to answer the question of whether it was a Twitter revolution was “like asking was the French Revolution a printing press revolution?”

There’s no question that Twitter helped to spread information about what was happening in Tunisia, as demonstrated by the tweets and videos and other media collected by Andy Carvin of National Public Radio while the events unfolded. And at least one Tunisian revolutionary, who runs a website called Free Tunisia, directly contradicted Morozov and told a Huffington Post blogger that Twitter — along with cellphones, text messaging and various websites — was crucial to the flow of information and helped protesters gather and plan their demonstrations. Said Bechir Blagui:

They called it the jasmine revolt, Sidi Bouzid revolt, Tunisian revolt… but there is only one name that does justice to what is happening in the homeland: Social media revolution.

The role of social media in activism is something that has been debated a lot over the past year or so, in part because of a piece Malcolm Gladwell wrote pooh-poohing the idea — a piece Shirky responded to, at least in part, in a recent essay on the topic for Foreign Affairs magazine, arguing that social media and other modern communication networks may not directly lead to revolution, but they sure help.

The reality is that Twitter is an information-distribution network — not that different from the telephone or email or text messaging, except that it is real-time (in a way that email is not) and it is massively distributed, in the sense that a message posted by a Tunisian blogger can be re-published thousands of times a second and transmitted halfway around the world to be quoted on television in the blink of an eye. That is a very powerful thing — far more powerful than the telephone or email or even blogging, arguably, because the more rapidly the news is distributed, the more it can create a sense of momentum, helping a revolution to “go viral,” as marketing types like to say. Tufekci also noted that Twitter can “strengthen communities prior to unrest by allowing a parallel public(ish) sphere that is harder to censor.”

So was what happened in Tunisia a Twitter revolution? Not any more than what happened in Poland in 1989 was a telephone revolution. But the reality of modern media is that Twitter and Facebook and other social-media tools can be incredibly useful for spreading the news about revolutions, and that can help them expand and ultimately achieve some kind of effect. Whether that means the world will see more revolutions, or simply revolutions that happen more quickly or are better reported, remains to be seen.

For All Its Flaws, Wikipedia is the Way Information Works Now

Wikipedia, which turns 10 years old this weekend, has taken a lot of heat over the years. There has been criticism of the site’s accuracy, of the so-called “cabal” of editors who decide which changes are accepted and which are not, and of founder Jimmy Wales and various aspects of his personal life and how he manages the non-profit service. But as a Pew Research report released today confirms, Wikipedia has become a crucial aspect of our online lives, and in many ways it has shown us — for better or worse — what all information online is becoming: social, distributed, interactive and (at times) chaotic.

According to Pew’s research, 53 percent of American Internet users said in a survey last year that they regularly look for information on Wikipedia, up from 36 percent when the research center first asked the question in February 2007. Usage by those under the age of 30 is even higher — more than 60 percent of that age group uses the site regularly, compared with just 33 percent of users 65 and older. Based on Pew’s other research, using Wikipedia is more popular than sending instant messages (which less than half of Internet users do) and rating a product or service (which only 32 percent do), and is only a little less popular than using social networking services, which 61 percent of users do regularly.

The term “wiki” — just like the word “blog,” or the name “Google” for that matter — is one of those words that sounds so ridiculous it was hard to imagine anyone using it with a straight face when Wikipedia first emerged in the early 2000s. But despite a weird name and a confusing interface (which the site has recently been trying to improve to make editing easier), Wikipedia took off and became a powerhouse of “crowdsourcing” before most people had even heard that word. In fact, the idea of a wiki has become so powerful that document-leaking organization WikiLeaks adopted the term even though (as many critics like to point out) it doesn’t really function as a wiki at all.

Most people will never edit a Wikipedia page — like most social media or interactive services, it follows the 90-9-1 rule, which holds that 90 percent of users simply consume the content, 9 percent or so contribute occasionally, and only about 1 percent become dedicated contributors. But even with those kinds of numbers, the site has still seen more than 4 billion individual edits in its lifetime, and has more than 127,000 active users. Those include people like Simon Pulsifer, once known as “the king of Wikipedia” because he edited over 100,000 articles on a wide variety of subjects. Why? Because that was his idea of fun, he explained to me once at a web conference (he’s a Wikipedia administrator now).
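
As a back-of-envelope illustration of just how lopsided that split is, here is a tiny sketch; the audience figure is invented for the example and is not a real Wikipedia statistic.

```python
# Illustrating the 90-9-1 participation rule described above with made-up numbers.
monthly_visitors = 400_000_000  # hypothetical audience size, not a real figure

lurkers = int(monthly_visitors * 0.90)      # read only, never edit
occasional = int(monthly_visitors * 0.09)   # edit now and then
dedicated = int(monthly_visitors * 0.01)    # do most of the actual work

print(f"lurkers: {lurkers:,}")
print(f"occasional contributors: {occasional:,}")
print(f"dedicated contributors: {dedicated:,}")
```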

Yes, there will always be people who decide to edit the Natalie Portman page so that it says she is going to marry them, or create fictional pages about people they dislike. But the surprising thing isn’t that this happens — it’s how rarely it happens, and how quickly those errors are found and corrected.

With Twitter, we are starting to see how a Wikipedia-like approach to information scales even further. As events like the Giffords shooting take hold of the national consciousness, Twitter becomes a real-time news service that anyone can contribute to, and it gradually builds a picture of what has happened and what it means. Along the way, there are errors and all kinds of other noise — but over time, it produces a very real and human view of the news. Is it going to replace newspapers and television and other media? No, just as Wikipedia hasn’t replaced encyclopedias (although it has made them less relevant with each passing year). But it is the way information works now, and for all its flaws, Wikipedia and Jimmy Wales were among the first to recognize that.

How Social Media and Mobile Tech Helped in Haiti

Today is the one-year anniversary of the devastating earthquake in Haiti, which killed an estimated 230,000 people and left millions more homeless. As in some other recent catastrophes, tools such as Twitter, text messaging and interactive online maps were used by both victims and rescue workers to co-ordinate relief efforts. The Knight Foundation has released a comprehensive study of the use of technology in the aftermath of the quake, and found that while there is still a lot of work to be done, such tools can make rescue efforts easier and faster.

Haiti quickly became what the report describes as “a living laboratory for new applications such as SMS, interactive online maps and radio-cell phone hybrids.” But while many of the tools were extremely useful in transmitting crucial information, this information often wasn’t used as well as it could have been, for a variety of reasons. The report notes:

As new media activists have pointed out, “Technology is easy, community is hard.” Many of the obstacles to the relief efforts concerned difficulties in dialogue between communities: between international organizations and local Haitian groups, between volunteers and professional humanitarian organizations and between civilians and military.

and

While the democratic approach to information management fuels crowdsourcing, this characteristic can also serve as a limitation in crisis settings. Information may be gathered and assembled in an open, democratic fashion. But often the practical response effort is driven by large organizations that deal with information in a radically different way. Military and international humanitarian organizations manage information within more closed systems.

One of the most powerful new-media and online tools used in the relief efforts, the Knight report says, was Ushahidi — a service developed in Kenya in the wake of the disputed 2007 election, which can be used to aggregate and process information coming in from a variety of sources such as SMS, Twitter and radio, and then plot that information on a map. The service “developed an RSS feed for the U.S. Coast Guard to help them retrieve emergency information [and] a team of four to eight Coast Guard responders retrieved the information and disseminated it to forces on the ground.” A group of students at Georgia Tech’s School of Computer Science converted the Ushahidi data to Google Earth file formats.
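
To give a rough sense of what that kind of conversion involves, here is a minimal sketch that turns a handful of geocoded crisis reports into a KML file Google Earth can open. The report fields and coordinates are invented for illustration, and this is not Ushahidi’s actual export format or API.

```python
# Minimal sketch: convert geocoded crisis reports into a KML file for Google Earth.
# The reports below are fabricated examples, not real Ushahidi data.
from xml.sax.saxutils import escape

reports = [
    {"title": "Collapsed building, people trapped", "lat": 18.5392, "lon": -72.3364},
    {"title": "Medical supplies needed", "lat": 18.5461, "lon": -72.3401},
]

def reports_to_kml(reports):
    placemarks = []
    for r in reports:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{escape(r['title'])}</name>\n"
            # Note: KML lists coordinates as longitude,latitude
            f"    <Point><coordinates>{r['lon']},{r['lat']}</coordinates></Point>\n"
            "  </Placemark>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
        + "\n".join(placemarks)
        + "\n</Document>\n</kml>"
    )

with open("haiti_reports.kml", "w", encoding="utf-8") as f:
    f.write(reports_to_kml(reports))
```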

Crowdsourcing also played a large role in the aftermath of the disaster, the report says: two weeks after the earthquake, the labor-on-demand company CrowdFlower took over management of the workflow of volunteers to “translate, classify and geocode the messages” coming in via the short-code 4636. Later, an outsourcing company called Samasource took over the bulk of the translation and coding work in co-operation with a local Haiti-based group. And accurate maps of the country and the location of survivors and victims were also crowdsourced using the OpenStreetMap standard, the Knight Foundation report says.

One of the biggest problems of crisis response in developing countries lies in finding locations that do not appear on any maps. In some cases, the maps have never been made; in others, rural populations have crowded into urban areas so quickly that maps soon become outdated. These problems were addressed in Haiti by another notable development in information technology: the OpenStreetMap (OSM) Haiti mapping initiative.

Although social media and other tools were important, the report makes a point of cautioning that the Haiti relief effort shouldn’t be seen as a “new-media success story,” because some of the new approaches used did not work very well, due to a lack of co-ordination — and in many cases a lack of understanding of how to use the tools. For example, U.S. Air Force Col. Lee Harvis, the chief medical officer who landed in Port-au-Prince 36 hours after the earthquake, said that he had no knowledge of Ushahidi, and neither did any of the other military doctors operating in the country.

The Knight Foundation report (which was co-produced with Internews) also noted that despite all the new media tools, the single most important tool in Haiti was one that has been crucial in almost every other major disaster over the past 50 years: traditional radio broadcasting. However, the report’s authors noted that social media and other tools helped spread information farther than radio alone could have, and that this was an important aspect of the relief efforts.

Icelandic MP Says It’s Our Duty to Fight For WikiLeaks

Birgitta Jónsdóttir, a member of the Icelandic parliament and an early supporter of WikiLeaks, said that despite having had a falling out with WikiLeaks founder Julian Assange over his role in the organization, she is willing to “stand up and stick my neck out for him” and defend the document-leaking entity against attacks by the U.S. government and others, because doing so is her duty. “We must all stand behind WikiLeaks and defend freedom of information and freedom of speech,” Jónsdóttir said in a presentation at the University of Toronto on Tuesday night, in which she also called on media outlets to support the organization. Jónsdóttir added that “even if they chop the head off WikiLeaks, a thousand more heads will come out.”

The Icelandic MP didn’t talk a lot about the WikiLeaks leader, except to say that “WikiLeaks is bigger than Julian Assange.” But she did talk about how she met him at a conference in Germany in 2009, while she and her party were developing proposed legislation in Iceland called the Icelandic Modern Media Initiative, and Assange was looking for a “transparency haven” that could help the organization. The IMMI legislation is aimed at helping to protect freedom of information and whistleblowers like WikiLeaks who leak documents — something Iceland as a whole is also interested in, because many believe that more whistleblowing could have helped the country avoid its financial meltdown in 2008.

Jónsdóttir and Assange started working together, and in the spring of last year he showed her a copy of the infamous U.S. military video of American helicopter gunships firing on a civilian vehicle during an attack in Iraq. The Icelandic MP described how she watched the video in a crowded cafe and began to cry — and at that point decided to help WikiLeaks get publicity for the video, which she said she was afraid would get lost amid all the other leaked documents on the organization’s website. Jónsdóttir spent her Easter holiday editing the video, including pulling out still photographs to send to various media outlets. WikiLeaks even sent people to the Iraqi village where the attack took place, to confirm whether there were children in the van.

That video was the beginning of an explosion of interest in WikiLeaks, which culminated with the leaking of thousands of U.S. diplomatic cables late last year, and the current attempt by the U.S. government to mount a case against Assange under the Espionage Act. As part of that effort, the Department of Justice has gotten a court order that compels Twitter to release certain information — including messages, IP addresses, payment information and other details — about the personal accounts of Jónsdóttir, Assange, Dutch hacker Rop Gonggrijp and American programmer Jacob Appelbaum. Jónsdóttir has said that she will resist this order, and has hired the Electronic Frontier Foundation to help with her defense.

In her talk, Jónsdóttir also freely admitted that she was completely unprepared for entering government. A member of a loosely-affiliated group of human rights protesters known simply as The Movement, she only volunteered to run for office because there weren’t enough female candidates, she said — and “to my great shock, I actually won, and I was in parliament two weeks later.” But the MP, who is an author and a poet, said that she believed her ignorance of the ways of government was a benefit rather than a disadvantage, because it meant that she could look at everything with fresh eyes and try things that others might not, including pushing forward the idea of the IMMI legislation.

Jónsdóttir said the idea behind the initiative — which was unanimously supported by the Icelandic parliament in a vote last summer — is to create the most advanced freedom-of-information and whistleblower-protection legislation in the world. The group looked at laws protecting freedom of speech and freedom of information in dozens of major countries and cherry-picked what they thought were the best provisions. “The Internet is becoming industrialized and corporatized,” she said. “We need to make sure we don’t lose our freedom of speech and freedom of information.” Here’s a video interview that Jónsdóttir did with the public television station TVO while she was in Toronto.

MySpace Vs. Facebook — There Can Be Only One

As I read about the layoffs at MySpace — the company confirmed today that it is shedding close to half of its staff, or about 500 employees, including virtually the entire international operation — I couldn’t help thinking of the legendary 1986 science-fiction film Highlander, which starred Sean Connery and Christopher Lambert as warriors fighting to become the world’s sole remaining immortal. The tag-line for the movie was “There can be only one,” and that certainly seems to be the case when it comes to social networks.

News Corp. has tried hard to make something of the company it acquired for close to $600 million in 2005: it has changed chief executives repeatedly, to the point where it has almost become comical, and it has refocused several times, with the latest incarnation targeting the entertainment market. The latest redesign pitched the network as the place where people can follow their favorite musicians and other celebrities, and then not long afterwards the network added the ability to integrate user accounts with Facebook — a final sign of how completely it has surrendered to its former foe.

The layoffs also appear to be a sign that no one is rushing forward to take the company off the hands of its corporate parent. News Corp. has made it clear that it is looking to unload the operation, but so far there have been no reports of interest. While some content portals such as Yahoo might be more attracted to the social network once it cuts its staffing levels by 50 percent and takes a huge writedown, the best News Corp. can probably hope for is a Bebo-style deal, like the one that saw AOL shed its own failed social network for a fraction of what it paid.

MySpace’s latest CEO, Mike Jones, did his best to put a positive spin on the downsizing, saying the company has seen a wave of new signups since its recent relaunch and that traffic — particularly mobile traffic — has increased. But the reality is that the social network has been in decline for years now, and there are no signs that it can recover any of that lost ground. And in the (admittedly brief) history of modern technology companies, there are very few that can claim they laid off half of their staff and yet still went on to become successful. The best-case scenario for News Corp. is that it either manages to sell the company to someone, or runs it on a shoestring for a while and then quietly shuts it down.

DoJ Subpoena Proves Twitter’s Value, and Its Weakness

Not that long ago, there was much debate about whether Twitter was just an ephemeral plaything for nerds or a powerful, real-time information network. Now the US Department of Justice has answered the question for us by serving the company with a court order related to WikiLeaks and the case the government is trying to make against WikiLeaks founder Julian Assange. But the subpoena also points out how easy it is for the DoJ to get the information it seeks, because Twitter acts as a central gatekeeper.

To Twitter’s credit, the company has effectively made this process public, unlike some others, including Facebook and Google, that have reportedly received similar orders. The subpoena first came to light on Friday, when Birgitta Jónsdóttir — a member of the Icelandic parliament and an early supporter of WikiLeaks — said on Twitter that she had been informed by the company of a DoJ order. As Glenn Greenwald has reported, the order (a copy of which is embedded below) compels Twitter to turn over not just tweets, but also IP addresses, payment information from their Twitter accounts and various other personal information the government claims is related to its case (Jónsdóttir has said she is going to fight the order).

In addition to Jónsdóttir and accused leaker Bradley Manning, letters about the court order have been sent to Dutch hacker Rop Gonggrijp, also an early supporter of WikiLeaks. According to the official WikiLeaks account on Twitter, similar orders have been sent to Google and Facebook, although neither of those companies has made the federal requests public, if they have in fact received them (a spokesman for Facebook said the company “has no comment to make at this time,” and Google has not responded to an emailed request for comment). The most likely explanation for the orders is that the DoJ is trying to make a case against Assange under the Espionage Act by proving that he conspired with the leaker of the diplomatic cables.

According to Greenwald, the court order sent to Twitter would not have become public at all if the company had not initially refused to comply with the DoJ request and effectively forced it out into the open. Twitter should be congratulated for this (and has been by many users on Twitter since the news broke Friday night). The company didn’t have to fight for this court order to be made public; it could easily have complied with the DoJ subpoena in private, and simply never admitted that it had done so.

The fact that Twitter is being targeted by the government is another sign of how important the network has become as a real-time publishing platform, and also of how centralized the service is — something that could spark interest in distributed and open-source alternatives such as Status.net, just as the downtime suffered by the network early last year did. It is another sign of how much we rely on networks that are controlled by a single corporate entity, as Global Voices founder Ethan Zuckerman pointed out when WikiLeaks was ejected from Amazon’s servers and had its DNS service shut down.

All of this makes it even more important that Twitter has forced the government’s attempts out into the light. One would hope that Facebook and Google — the latter of which has talked a lot in the past about its commitment to freedom of speech, and has taken action in China to protest that government’s digital surveillance of its citizens — would also come clean about any court orders they have received, especially when the DoJ appears determined to make a case that could easily entrap virtually anyone, up to and including reporters for the New York Times.

The US government’s move to “tap” Twitter as a way of engaging in digital surveillance confirms the network’s status as a real-time information network, but also makes it obvious how much we have come to rely on it, and the implications of that dependence. As founder Evan Williams has noted, Twitter effectively makes everyone a publisher — and that means we are all potentially targets for similar court orders.

Memo to Newspapers: Stop Thinking Like a Portal

The story of homeless radio announcer Ted Williams became an Internet sensation this week, as a video of him got passed around on Twitter and in the blogosphere, and quickly led to appearances on the Today Show and job offers from around the country. But the video that started it all — an interview with a reporter from the Columbus Dispatch newspaper in Ohio — is no longer available on YouTube. In yet another example of a newspaper that can’t see the forest for the dead trees, there is just a statement from the video-hosting site that the clip “has been removed due to a copyright claim by The Dispatch.”

A web editor in the Dispatch newsroom seemed confused when asked why the paper ordered YouTube to take the clip down. “It’s our video, and someone put it there without our permission,” he said. All of which is true — the original clip was copied from the Dispatch site and uploaded to YouTube, and therefore the newspaper had a pretty clear copyright claim. The video can still be seen at the Dispatch website, along with other videos related to the Williams story. But how many people are going to watch the video there? Likely a fraction of the 13 million who watched it at YouTube.

In fact, not only does it make little sense to pull a video after it has already been seen by 13 million people — not to mention the fact that there are half a dozen other versions available at YouTube, including one from the Associated Press newswire — but the Williams story might not even have happened if it wasn’t for YouTube. Although the link to the Dispatch site could have been shared on Twitter and other social networks just as easily as the link to the YouTube video was, the newspaper doesn’t allow its video to be embedded, and therefore it likely wouldn’t have spread so far so quickly. Williams might never have come to the attention of any of the companies now offering him jobs.

The larger issue here, of course, is one of control over content, something newspapers and other traditional media outlets seem determined to fight for, whether through copyright takedowns or by putting up paywalls, or by shipping iPad apps that don’t allow users to share or even link to content. Few publishers — apart from The Guardian, which launched an ambitious “open platform” last year, and some equally forward-thinking outlets such as the Journal Register Co. in New Jersey — seem to have really embraced the idea that content can’t be bottled up and locked behind walls any more, and that there is more to be gained by letting it be shared than there is to be lost.

In the late 1990s, everyone wanted to become a “portal” — a destination site where users would get all their email and news and entertainment and so on. Yahoo and AOL and Microsoft spent billions building these businesses. Then along came Google, with its single search box and the complete opposite approach: it does its best to send you away as quickly as possible. That’s because the web giant doesn’t think of itself as a “content” or media company. It is simply providing a service — and to the extent that it does a good job of providing that service, readers are more willing to come back, and to click on related ads. Pretty simple, really.

What has the Dispatch gained by removing its video from YouTube? It hasn’t stopped people from sharing the video, since there are plenty of other versions out there, and it likely hasn’t convinced anyone to go to its website other than readers who were already going there anyway — and even when they get to the video, there are no comments or any other social or community elements to keep them there. All the takedown has done is make the newspaper look like a company that doesn’t really understand what it is doing online, or why.

Update: As noted by a commenter here who lives in Columbus (and who wrote a blog post about the takedown of the video), the Dispatch has created a YouTube channel and uploaded a copy of the Ted Williams video — something it probably should have done before, rather than after (the new version of the video had 136 views at last check). The editor of the paper has also written a blog post about the incident.

Sure, RSS Is Dead — Just Like the Web Is Dead

A brush fire has been swirling through the blogosphere of late over whether RSS is dead, dying, or possibly severely injured and in need of assistance. It seems to have started with a post from UK-based web designer Kroc Camen that got picked up by Hacker News and re-tweeted a lot. The flames were fanned by a blog post from TechCrunch that drove RSS developer Dave Winer into a bit of a Twitter frenzy. But is RSS actually doomed, or even ailing? Not really. Like plenty of other technologies, it is just becoming part of the plumbing of the real-time web.

Camen’s criticisms seem focused on the fact that Firefox doesn’t make it easy to find or subscribe to RSS feeds from within the browser (although Mozilla’s Asa Dotzler takes issue with that claim in a comment near the bottom of the post). Instead of the usual RSS icon, he says, there is nothing except an entry in a menu. But did anyone other than a handful of geeks and tech aficionados make use of those RSS icons? It’s not clear that many regular web users have done so — or ever will. Browsers like Internet Explorer have had built-in support for RSS subscriptions for years, but there’s little sign of it becoming a mainstream habit.
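
For the curious, this is roughly what a browser does behind that RSS icon: it scans a page’s head for link tags that advertise a feed. Here is a minimal sketch using only Python’s standard library, with a made-up sample page.

```python
# Minimal sketch of RSS/Atom feed "autodiscovery": look for <link rel="alternate">
# tags in a page's <head>. The sample HTML below is invented for illustration.
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if (a.get("rel") or "").lower() == "alternate" and a.get("type") in FEED_TYPES:
            self.feeds.append(a.get("href"))

page = """<html><head>
<link rel="alternate" type="application/rss+xml" title="Posts" href="/feed/">
</head><body>...</body></html>"""

finder = FeedLinkFinder()
finder.feed(page)
print(finder.feeds)  # ['/feed/']
```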

So can we say that RSS is dead? Sure — in the same way that HTML is dead, or the web itself is dead (if the “RSS is dead” idea seems familiar, that’s because it has reared its head several times before). There used to be plenty of HTML editors out there, which allowed people to create their own websites and web pages, but they never really went mainstream either, and HTML has evolved to the point where using it effectively is a specialty requiring actual programming skills. Is that a bad thing? Not if you make a living as a web designer. Hypertext markup language has become part of the plumbing of the web, and now allows far more utility than it used to.

In a similar vein, Wired magazine made the argument that the web is dead — based on some faulty data and a perception that apps for devices like the iPhone and iPad are taking over from the regular web. While there is some reason for concern about walled gardens such as Facebook and the control Apple has over its ecosystem — as both the web’s inventor Sir Tim Berners-Lee and law professor Tim Wu have argued in separate opinion pieces recently — the reality is that the web is continuing to evolve, and apps could well be just an interim step in that evolution.

In the same way, RSS has become a crucial part of how web content gets fed from blogs and other sites into real-time services such as Twitter and Facebook, as well as aggregation apps like Flipboard, as Flipboard CEO Mike McCue noted during the debate between Winer and TechCrunch. Do Twitter and Facebook compete with RSS to some extent, in terms of content discovery? Sure they do — but they also benefit from it. Along with real-time publishing tools such as PubSubHubbub, RSS is one of the things that provides a foundation for the apps and services we see all around us (including real-time search).

The fact that RSS is fading in terms of user awareness is actually a good thing rather than a bad thing. The sooner people can forget about it because it just works in the background, the better off we will all be — in the same way that many of us have forgotten (if we ever knew) how the internal-combustion engine works, because we no longer have to pull over and fix it ourselves.

Is What WikiLeaks Does Journalism? Good Question

While the U.S. government tries to determine whether what WikiLeaks and front-man Julian Assange have done qualifies as espionage, media theorists and critics alike continue to debate whether releasing those classified diplomatic cables qualifies as journalism. It’s more than just an academic question — if it is journalism in some sense, then Assange and WikiLeaks should be protected by the First Amendment and freedom of the press. The fact that no one can seem to agree on this question emphasizes just how deeply the media and journalism have been disrupted, to the point where we aren’t even sure what they are any more.

The debate flared up again on the Thursday just before Christmas, with a back-and-forth Twitter discussion involving a number of media critics and journalists, including MIT Technology Review editor and author Jason Pontin, New York University professor Jay Rosen, blogger Aaron Bady, freelance writer and author Tim Carmody and several other occasional contributors. Pontin seems to have started the debate by saying — in a comment about a piece Bruce Sterling wrote on WikiLeaks and Assange — that the WikiLeaks founder was a hacker, not a journalist.

Pontin’s point, which he elaborated on in subsequent tweets, seemed to be that because Assange’s primary intent is to destabilize a secretive state or government apparatus through technological means, then what he is doing isn’t journalism. Not everyone was buying this, however. Aaron Bady — who wrote a well-regarded post on Assange and WikiLeaks’ motives — asked why he couldn’t be a hacker *and* a journalist at the same time, and argued that perhaps society needs to protect the act of journalism, regardless of who practices it.

Rosen, meanwhile, was adamant that WikiLeaks is a journalistic entity, period, and journalism prof and author Jeff Jarvis made the same point. Tim Carmody argued that the principle of freedom of the press enshrined in the First Amendment was designed to protect individuals who published pamphlets and handed them out in the street just as much as it was to protect large media entities, and Aaron Bady made a point that I have tried to make as well, which is that it’s difficult to criminalize what WikiLeaks has done without also making a criminal out of the New York Times.

This debate has been going on since before the diplomatic cables were released, ever since Julian Assange first made headlines with leaked video footage of American soldiers firing on unarmed civilians in Iraq. At the time, Rosen — who runs an experimental journalism lab at NYU — called WikiLeaks “the first stateless news organization,” and described where he saw it fitting into a new ecosystem of news. Not everyone agreed, however: critics of this idea said that journalism had to have some civic function and/or had to involve journalists analyzing and sorting through the information.

Like Rosen and others, I’ve tried to argue that in the current era, media — a broad term that includes what we think of as journalism — has been dis-aggregated or atomized; in other words, split into its component parts, parts that include what WikiLeaks does. In some cases, these may be things that we didn’t even realize were separate parts of the process to begin with, because they have always been joined together. And in some cases they merge different parts that were previously separate, in confusing ways, such as the distinction between a source and a publisher. WikiLeaks, for example, can be seen as both.

And while it is clearly not run by journalists — and to a great extent relies on journalists at the New York Times, The Guardian and other news outlets to do the heavy lifting in terms of analyzing the documents it holds and distributes — I think an argument can be made that WikiLeaks is at least an instrument of journalism. In other words, it is part of the larger ecosystem of news media that has been developing with the advent of blogs, wikis, Twitter and all the other publishing tools we have now, which, as Twitter co-founder Ev Williams has argued (correctly, I think), are important ways of getting us closer to the truth.

Among those taking part in the Twitter debate on Thursday was Chris Anderson, a professor of media culture in New York who also writes for the Nieman Journalism Lab, and someone who has tried to clarify what journalism as an ecosystem really means and how we can distinguish between the different parts of this new process. In one post at the Nieman Lab blog, for example, he plotted the new pieces of this ecosystem on a graph with two axes: one going from “institutionalized” to “de-institutionalized” and the other going from “pure commentary” to “fact-gathering.” While WikiLeaks doesn’t appear on Anderson’s graph, it is clearly part of that process, just as the New York Times is.

Regardless of what we think about Julian Assange or WikiLeaks — or any of the other WikiLeaks-style organizations that seem to be emerging — this is the new reality of media. It may be confusing, but it is the best we have, so we had better start getting used to how it works.

What the Media Need to Learn About the Web — and Fast

Traditional media — publishers of newspapers, magazines and other print publications — have had a decade or more to get used to the idea of the web and the disruptive effect it is having on their businesses, but many continue to drag their feet when it comes to adapting. Some experiment with paywalls, while others hope that iPad apps will be the solution to their problems, now that Apple allows them to charge users directly through the tablet. But the lessons of how to adapt to the web and take advantage of it are not complicated, if media outlets are willing to listen. And these lessons don’t just apply to mainstream media either — anyone whose business involves putting content online needs to think hard about applying them.

Newspapers in particular continue to come under pressure from the digital world: eMarketer recently estimated that online advertising will eclipse newspaper advertising this year for the first time — a further sign of the declining importance of newspapers in the online commercial ecosystem, where Facebook and Twitter are getting a lot more interest from advertisers than any traditional publication. Online, newspapers and magazines are just another source of content and pageviews or clickthroughs — they are no longer the default place for brand building or awareness advertising, nor are they even among the most popular venues for it.

Rupert Murdoch, among others, seems to believe that paywalls are the route to success online, and recently installed one at the Times of London and the Sunday Times in England. But paywalls are mostly a rearguard action that newspapers and magazines are fighting to try to keep some of their subscribers paying for the product, rather than just getting it for free through the web. The editors of the Times have said they are happy with the response to their paywall, even though the site’s readership dropped by more than 99 percent after subscriptions were introduced. That suggests it is far more important to the paper to keep even a few thousand paying readers than to appeal to the vast number of potential readers who will now never see the site’s content.

It’s true that the Wall Street Journal and the Economist, among others, have been successful in getting readers and users to pay for their content — but it’s also true that not every publication can be the Wall Street Journal or the Economist. Whether you are a newspaper or magazine publisher, or whether you have some other business that depends on online publishing of content in some way, here are some of the lessons that you need to absorb to take advantage of the web:

* Forget about being a destination: In the old days, it was enough to “build it and they will come,” and so everyone from AOL and Yahoo to existing publishers of content tried to make their sites a destination for users, complete with walls designed to keep them from leaving. But Google showed that successful businesses can be built by actually sending people away, and others — including The Guardian newspaper in Britain — have shown that value can be generated by distributing your content to wherever people are, via open APIs and other tools, rather than expecting them to come to you (a rough sketch of what consuming such an API might look like appears after this list).

* Don’t just talk about being social: Social media is a hot term, but the reality is that all media is becoming social, and that includes advertising and other forms of media content. Whether you are writing newspaper stories or publishing blog posts on your company blog, you will get feedback from readers and/or users — and you had better be in a position to respond, and then take advantage of the feedback you get. If you don’t, or if you block your employees from using Twitter and Facebook and other such tools, you will not get any benefit, and you will be worse off as a result.

* Get to know your community: This is something that new media outlets such as The Huffington Post have done very well — reaching out to readers and users, providing a number of different ways for them to share and interact with the site. News sites like Toronto-based OpenFile are designed around the idea that every member of a community has something to offer, and that allowing these ideas into the process via “crowdsourcing” can generate a lot of value. Even some older media players such as the Journal Register newspaper chain have been getting this message, and opening up what they call a “community newsroom” as a way of building bridges with readers.

* Use all the tools available to you: Large media entities — and large companies of all kinds — often have a “not invented here” mentality that requires them to build or develop everything in house. But one of the benefits of the distributed web is that there are other services you can integrate with easily in order to get the benefit of their networks, without having to reinvent the wheel. Groupon is a great example: many publishers and websites are implementing “daily deal” offers through a partnership with Groupon, while others are using a white-label service from a competitor called Tippr. Take a look around you and make use of what you can. David Weinberger, co-author of The Cluetrain Manifesto, described the web as “small pieces, loosely joined.”

* Don’t pave the cart paths: Media outlets, including a number of leading newspapers and magazines, seem to feel that the ideal way of using a new technology such as the iPad is to take existing content from their websites or print publications and simply dump it on the device — in much the same way that many publications did with CD-ROMs when they first arrived on the scene. Why bother putting your content on the iPad if you aren’t going to take advantage of the features of the device, including the ability to share content? And yet, many major media apps provide no way for users to share or even link to the content they provide.

* Be prepared to “burn the boats”: Venture capitalist Marc Andreessen wrote about how media entities in some cases should “burn the boats,” as the conquistador Hernán Cortés is said to have done in order to show that he was fully committed to his cause and would never retreat. The idea is that if you are still mostly focused on your existing non-web operations, and always see those as the most important, then you will inevitably fail to be as aggressive as you need to be when it comes to competing with online-only counterparts, and that could spell doom. The Christian Science Monitor and several other papers shut down their print operations completely and went web-only. Obviously that isn’t for everyone, but sometimes drastic action is required.
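
To make the first point above a little more concrete, here is a rough sketch of what consuming a publisher’s open content API might look like. The endpoint, parameters and response shape are invented for illustration; they are not any real publisher’s API, and The Guardian’s actual Open Platform differs in its details.

```python
# Hypothetical sketch of distributing content via an open API: a publisher exposes
# articles as JSON that third-party sites and apps can query. The endpoint and
# response shape below are made up for illustration.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://api.example-news.com/v1/search"  # fictional endpoint

def search_articles(query, api_key, page_size=10):
    params = urlencode({"q": query, "api-key": api_key, "page-size": page_size})
    with urlopen(f"{BASE_URL}?{params}") as resp:
        payload = json.load(resp)
    # Assumes the API returns {"results": [{"headline": ..., "url": ...}, ...]}
    return [(item["headline"], item["url"]) for item in payload.get("results", [])]

if __name__ == "__main__":
    for headline, url in search_articles("wikileaks", api_key="YOUR_KEY"):
        print(headline, "->", url)
```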

It seems unlikely that Rupert Murdoch will ever be convinced that he has made a mistake with his paywalls, despite a track record of poor judgment calls such as the purchase of MySpace. And other newspapers and publishers of all kinds are free to make similar mistakes. But if you are engaged in a business that involves content and you want to remain competitive online, you have to become just as web-focused and adaptable as your online-only counterparts — or you will wind up cornering the market in things that most people no longer want, or at least no longer want to pay for.

Google Fights Growing Battle Over “Search Neutrality”

The European Union, which has been investigating Google’s dominance in web search as a result of complaints from several competitors, is broadening that investigation to include other aspects of the company’s business, EU officials announced today. The EU opened the original case last month, and has now added two German complaints to it — one made by a group of media outlets and one by a mapping company, both of which claim that Google unfairly favors its own properties and has refused to compensate publishers for their content.

The original case was opened last month by EU competition commissioner Joaquin Almunia, and an official statement from the commission said that investigators would be looking at “complaints by search service providers about unfavourable treatment of their services in Google’s unpaid and sponsored search results, coupled with an alleged preferential placement of Google’s own services.”

It isn’t just the EU that has raised concerns about Google treating its own assets and services differently in search results: in a recent Wall Street Journal story on the same issue, a number of competitors in a variety of markets — including TripAdvisor, WebMD and CitySearch — complained about this preferential treatment by the web giant. Google responded with a blog post saying it was concerned only about producing the best results for users, regardless of whose service was being presented in those results.

Although competition laws are somewhat different in Europe than they are in the United States — where antitrust investigators have to show that consumers have been harmed by an abuse of monopoly power, not just that competitors have been harmed — the EU investigation is sure to increase the heat on the web giant. And it comes at an especially inopportune time, since Google is trying to get federal approval for its purchase of travel-information service ITA. Competitors have complained that if Google buys the company, it will be incorporated into travel-related search results in an unfair way.

Washington Post columnist Steve Pearlstein raised similar concerns about Google’s growing dominance in a recent piece, arguing that the company should be prevented from buying major players in other markets because it is so dominant in web search. Google responded by arguing that it competes with plenty of other companies when it comes to acquisitions, and there has been no evidence shown that consumers have been harmed by its growth (I think Pearlstein’s argument is flawed, as I tried to point out in this blog post). Pearlstein has since responded to Google here.

There seems to be a growing attempt to pin Google down based in part on the concept of “search neutrality” — the idea that the web giant should be agnostic when it comes to search results, in the same way net neutrality is designed to keep carriers from penalizing competitors. But should search be considered a utility in that sense? That’s a tough question. In many ways, the complaints from mapping companies and others seem to be driven in part by sour grapes over Google’s success and their own inability to take advantage of the web properly, as Om argues in a recent GigaOM Pro report (subscription required).

Let’s Be Careful About Calling This a Cyber-War

Terms like “cyber-war” have been used a lot in the wake of the recent denial-of-service attacks on MasterCard, Visa and other entities that cut off support for WikiLeaks. But do these attacks really qualify? An analysis by network security firm Arbor Networks suggests that they don’t, and that what we have seen from the group Anonymous and “Operation Payback” is more like vandalism or civil disobedience. And we should be careful about tossing around terms like cyber-war — some believe the government is just itching to find an excuse to adopt unprecedented Internet monitoring powers, and cyber-war would be just the ticket.

The “info-war” description has been used by a number of media outlets in referring to the activities of Anonymous, the loosely organized group of hackers — associated with the counter-culture website known as 4chan — who have been using a number of Twitter accounts and other online forums to co-ordinate the attacks on MasterCard and others over the past week. But the idea got a big boost from John Perry Barlow, an online veteran and co-founder of the Electronic Frontier Foundation, who said on Twitter that:

The first serious infowar is now engaged. The field of battle is WikiLeaks. You are the troops.

As stirring an image as that might be, however — especially to suburban teenagers downloading a DDoS script from Anonymous, who might like to think of themselves as warriors in the battle for truth and justice — there is no real indication that Operation Payback has even come close to being a real “info-war.” While the attacks have been getting more complex, in the sense that they are using a number of different exploits, Arbor Networks says its research shows that they are still relatively puny and unsophisticated compared with other hacking incidents in the past.

Distributed denial-of-service attacks like the kind Operation Payback has been involved with have been ramping up in size, Arbor says, with large “flooding attacks” reaching 50 gigabits per second of traffic or more, something that can overwhelm data centers and carrier backbones.
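To put those bandwidth figures in rough perspective, here is a quick back-of-the-envelope sketch in Python. The attack rate, uplink capacity, volunteer count and per-volunteer upload speed are all assumed round numbers chosen for illustration, not figures taken from Arbor’s report:

```python
# Back-of-the-envelope comparison of attack bandwidth vs. link capacity.
# Every figure below is an illustrative assumption, not a measurement
# from Arbor's report.

flood_attack_gbps = 50     # assumed rate of a high-end "flooding attack"
uplink_gbps = 10           # assumed capacity of a single data-center uplink

print(f"A {flood_attack_gbps} Gbps flood is "
      f"{flood_attack_gbps / uplink_gbps:.0f}x a {uplink_gbps} Gbps uplink")

# A voluntary effort like Operation Payback depends on participants' home
# upload bandwidth, which is tiny per machine by comparison.
participants = 3000        # assumed number of volunteers running the tool
upload_mbps = 1.0          # assumed upload bandwidth per participant
combined_gbps = participants * upload_mbps / 1000

print(f"{participants} volunteers at {upload_mbps} Mbps each: "
      f"about {combined_gbps:.0f} Gbps combined")
```

On those assumptions, the volunteer effort tops out at a few gigabits per second, an order of magnitude below the high-end floods Arbor describes, which fits its characterization of the attacks as relatively small.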

So were the Operation Payback strikes against Amazon, MasterCard, Visa and a Swiss bank (which cut off funds belonging to WikiLeaks) in this category? No, says Arbor.

Were these attacks massive high-end flooding DDoS or very sophisticated application level attacks? Neither. Despite the thousands of tweets, press articles and endless hype, most of the attacks over the last week were both relatively small and unsophisticated. In short, other than the intense media scrutiny, the attacks were unremarkable.

In other words, the most impressive thing about the attacks is the name of the easily downloadable tool they employ, which hackers like to call a “Low Orbit Ion Cannon” or LOIC for short (there are also a couple of related programs with minor modifications that are known as the “High Orbit Ion Cannon” and the “Geosynchronous Orbit Ion Cannon”). But unlike a real ion cannon, the ones used by Operation Payback only managed to take down the websites of their victims for a few hours at most.

As Arbor notes in its blog post on the attacks, however, real cyber-war is something the U.S. government and other governments are very interested in, for a variety of reasons — and it has a lot more to do with malicious worms such as Stuxnet, which seeks out and disables specific machinery in a deliberate wave of sabotage, than with DDoS attacks run by voluntary botnets such as the one organized by Anonymous. And among other things — as investigative journalist Seymour Hersh noted in a recent New Yorker piece entitled “The Online Threat: Should We Be Worried About a Cyber War?” — such a war would give the military even more justification for monitoring and potentially having back-door access to networks and systems, allegedly to defend against foreign attacks.

How Big Should We Let Google Get? Wrong Question

While Google is busy trying to compete with the growing power of Facebook, there are still those who believe the government needs to do something to blunt Google’s own expanding power. Washington Post business columnist Steven Pearlstein is the latest to join this crowd, with a piece entitled “Time to Loosen Google’s Grip?,” in which he argues that the company needs to be prevented from buying its way into new markets and new technologies. Not surprisingly, Google disagrees — the company’s deputy general counsel has written a response to Pearlstein in which he argues that Google competes fair and square with lots of other companies, and that its acquisitions are not likely to cause any harm.

So who is right? Obviously the government has the authority to approve or not approve acquisitions such as Google’s potential purchase of ITA, the travel-software firm that the company agreed to acquire in July — which some have argued would give Google too much control over the online travel search-and-booking market (since ITA powers dozens of other sites and services in that market). But does Pearlstein’s argument hold water? Not really. More than anything, his complaint seems to be that Google is really big and has a lot of money, so we should stop it from buying things.

Pearlstein starts out by noting that Google isn’t just a web search company any more, but is moving into “operating system and application software, mobile telephone software, e-mail, Web browsers, maps, and video aggregation.” Not to be unkind, but did Pearlstein just notice that Google has a mapping service and is doing video aggregation? Surely those wars are long over now. But no, the WaPo columnist suggests the company shouldn’t have been allowed to buy YouTube, because it had a “dominant position” in its market. This, of course, ignores the fact that there wasn’t even a market for what YouTube had when Google bought it, which is why many people thought the deal was a bad idea.

Pearlstein’s motivation becomes obvious when he says things like “The question now is how much bigger and more dominant we want this innovative and ambitious company to become,” or that he has a problem with “allowing Google to buy its way into new markets and new technologies.” Since when do we decide how big companies are allowed to become, or whether they should be able to enter new markets? Antitrust laws were designed to prevent companies from using their monopoly power to negative effect in specific markets, not simply to keep companies from becoming large. But Pearlstein seems to be arguing that they should be broadened to cover any big company that buys other big companies:

Decades of cramped judicial opinions have so limited application of antitrust laws that each transaction can be considered only in terms of how it affects the narrowly defined niche market that an acquiring company hopes to enter.

The Washington Post columnist also trots out the “network effect” argument, which he says results in a market where “a few companies get very big very fast, the others die away and new competitors rarely emerge.” So how then do we explain the fact that Facebook arose out of nowhere and completely displaced massive existing networks like MySpace and Friendster? And while Google may be dominant in search and search-related advertising, the company has so far failed to extend that dominance into any other major market, including operating systems (where it competes with a company you may have heard of called Microsoft), mobile phone software and web-based application software. In fact, Google arguably has far more failed acquisitions and new market entries than it does successful ones.
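For readers unfamiliar with the term, the “network effect” argument is often formalized, loosely, with Metcalfe’s law: a network’s value grows roughly with the square of its user count, so the biggest network tends to pull away from the rest. The Python sketch below simply illustrates that idea with made-up user counts; it is not drawn from Pearlstein’s column or from Google’s response:

```python
# Illustrative sketch of the "network effect" argument: under Metcalfe's law,
# a network's value scales roughly with n^2 (the number of possible pairwise
# connections), so a network twice as large is about four times as valuable.
# The user counts below are made-up round numbers, not real market data.

def metcalfe_value(users: int) -> int:
    """Possible pairwise connections, a rough proxy for network value."""
    return users * (users - 1) // 2

for users in (1_000_000, 10_000_000, 100_000_000):
    print(f"{users:>11,} users -> {metcalfe_value(users):>26,} possible links")
```

Even on that simple model, though, raw size did not save MySpace or Friendster, which is exactly the point above.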

Google’s deputy counsel also makes a fairly powerful point in his defence of the company’s acquisitions, which is that antitrust laws are meant to protect consumers, not other businesses or competitors, and — so far at least — there is virtually no compelling evidence that the company’s purchases have made the web or any of its features either harder to use or more expensive for consumers, or removed any choice. If anything, in fact, Google has been the single biggest force in making formerly paid services free. That’s going to make an antitrust case pretty hard to argue, regardless of what Mr. Pearlstein thinks.