Is What WikiLeaks Does Journalism? Good Question

While the U.S. government tries to determine whether what WikiLeaks and front-man Julian Assange have done qualifies as espionage, media theorists and critics alike continue to debate whether releasing those classified diplomatic cables qualifies as journalism. It’s more than just an academic question — if it is journalism in some sense, then Assange and WikiLeaks should be protected by the First Amendment and freedom of the press. The fact that no one can seem to agree on this question emphasizes just how deeply the media and journalism have been disrupted, to the point where we aren’t even sure what they are any more.

The debate flared up again on the Thursday just before Christmas, with a back-and-forth Twitter discussion involving a number of media critics and journalists, including MIT Technology Review editor and author Jason Pontin, New York University professor Jay Rosen, Aaron Bady, freelance writer and author Tim Carmody and several other occasional contributors. Pontin seems to have started the debate by saying — in a comment about a piece Bruce Sterling wrote on WikiLeaks and Assange — that the WikiLeaks founder was a hacker, not a journalist.

Pontin’s point, which he elaborated on in subsequent tweets, seemed to be that because Assange’s primary intent is to destabilize a secretive state or government apparatus through technological means, what he is doing isn’t journalism. Not everyone was buying this, however. Aaron Bady — who wrote a well-regarded post on Assange and WikiLeaks’ motives — asked why he couldn’t be a hacker *and* a journalist at the same time, and argued that perhaps society needs to protect the act of journalism, regardless of who practices it.

Rosen, meanwhile, was adamant that WikiLeaks is a journalistic entity, period, and journalism prof and author Jeff Jarvis made the same point. Tim Carmody argued that the principle of freedom of the press enshrined in the First Amendment was designed to protect individuals who published pamphlets and handed them out in the street just as much as it was to protect large media entities, and Aaron Bady made a point that I have tried to make as well, which is that it’s difficult to criminalize what WikiLeaks has done without also making a criminal out of the New York Times.

This debate has been going on since before the diplomatic cables were released, ever since Julian Assange first made headlines with leaked video footage of American soldiers firing on unarmed civilians in Iraq. At the time, Rosen — who runs an experimental journalism lab at NYU — called WikiLeaks “the first stateless news organization,” and described where he saw it fitting into a new ecosystem of news. Not everyone agreed, however: critics of this idea said that journalism had to have some civic function and/or had to involve journalists analyzing and sorting through the information.

Like Rosen and others, I’ve tried to argue that in the current era, media — a broad term that includes what we think of as journalism — has been disaggregated or atomized; in other words, split into its component parts, parts that include what WikiLeaks does. In some cases, these may be things that we didn’t even realize were separate parts of the process to begin with, because they have always been joined together. And in some cases, parts that were previously separate have been merged in confusing ways, blurring distinctions such as the one between a source and a publisher. WikiLeaks, for example, can be seen as both.

And while it is clearly not run by journalists — and to a great extent relies on journalists at the New York Times, The Guardian and other news outlets to do the heavy lifting in terms of analysis of the documents it holds and distributes — I think an argument can be made that WikiLeaks is at least an instrument of journalism. In other words, it is a part of the larger ecosystem of news media that has been developing with the advent of blogs, wikis, Twitter and all the other publishing tools we have now, which Twitter co-founder Ev Williams has argued, correctly in my view, are important ways of getting us closer to the truth.

Among those taking part in the Twitter debate on Thursday was Chris Anderson, a professor of media culture in New York who also writes for the Nieman Journalism Lab, and someone who has tried to clarify what journalism as an ecosystem really means and how we can distinguish between the different parts of this new process. In one post at the Nieman Lab blog, for example, he plotted the new pieces of this ecosystem on a graph with two axes: one going from “institutionalized” to “de-institutionalized” and the other going from “pure commentary” to “fact-gathering.” While WikiLeaks doesn’t appear on Anderson’s graph, it is clearly part of that process, just as the New York Times is.

Regardless of what we think about Julian Assange or WikiLeaks — or any of the other WikiLeaks-style organizations that seem to be emerging — this is the new reality of media. It may be confusing, but it is the best we have, so we had better start getting used to how it works.

What the Media Need to Learn About the Web — and Fast

Traditional media — publishers of newspapers, magazines and other print publications — have had more than a decade to get used to the idea of the web and the disruptive effect it is having on their businesses, but many continue to drag their feet when it comes to adapting. Some experiment with paywalls, while others hope that iPad apps will be the solution to their problems, now that Apple allows them to charge users directly through the tablet. But the lessons of how to adapt to the web and take advantage of it are not complicated, if media outlets are willing to listen. And these lessons don’t just apply to mainstream media either — anyone whose business involves putting content online needs to think hard about applying them.

Newspapers in particular continue to come under pressure from the digital world: eMarketer recently estimated that online advertising will eclipse newspaper advertising this year for the first time — a further sign of the declining importance of newspapers in the online commercial ecosystem, where Facebook and Twitter are getting a lot more interest from advertisers than any traditional publication. Online, newspapers and magazines are just another source of content and pageviews or clickthroughs — they are no longer the default place for brand building or awareness advertising, nor are they even one of the most popular.

Rupert Murdoch, among others, seems to believe that paywalls are the route to success online, and recently installed one at the Times of London and the Sunday Times in England. But paywalls are mostly a rearguard action that newspapers and magazines are fighting to try and keep some of their subscribers paying for the product, rather than just getting it for free through the web. The editors of the Times have said that they are happy with the response to their paywall, even though their readership dropped by more than 99 percent following the introduction of subscriptions for the website. That suggests it is far more important to the paper to keep even a few thousand paying readers than to appeal to the vast number of potential readers who will now never see the site’s content.

It’s true that the Wall Street Journal and the Economist, among others, have been successful in getting readers and users to pay for their content — but it’s also true that not every publication can be the Wall Street Journal or the Economist. Whether you are a newspaper or magazine publisher, or whether you have some other business that depends on online publishing of content in some way, here are some of the lessons that you need to absorb to take advantage of the web:

* Forget about being a destination: In the old days, it was enough to “build it and they will come,” and so everyone from AOL and Yahoo to existing publishers of content tried to make their sites a destination for users, complete with walls designed to keep them from leaving. But Google showed that successful businesses can be built by actually sending people away, and others — including The Guardian newspaper in Britain — have shown that value can be generated by distributing your content to wherever people are, via open APIs and other tools, rather than expecting them to come to you. (A rough sketch of what pulling content from such an open API looks like follows this list.)

* Don’t just talk about being social: Social media is a hot term, but the reality is that all media is becoming social, and that includes advertising and other forms of media content. Whether you are writing newspaper stories or publishing blog posts on your company blog, you will get feedback from readers and/or users — and you had better be in a position to respond, and then take advantage of the feedback you get. If you don’t, or if you block your employees from using Twitter and Facebook and other such tools, you will not get any benefit, and you will be worse off as a result.

* Get to know your community: This is something that new media outlets such as The Huffington Post have done very well — reaching out to readers and users, providing a number of different ways for them to share and interact with the site. News sites like Toronto-based OpenFile are designed around the idea that every member of a community has something to offer, and that allowing these ideas into the process via “crowdsourcing” can generate a lot of value. Even some older media players such as the Journal Register newspaper chain have been getting this message, and opening up what they call a “community newsroom” as a way of building bridges with readers.

* Use all the tools available to you: Large media entities — and large companies of all kinds — often have a “not invented here” mentality that requires them to build or develop everything in-house. But one of the benefits of the distributed web is that there are other services you can integrate with easily in order to get the benefit of their networks, without having to reinvent the wheel. Groupon is a great example: many publishers and websites are implementing “daily deal” offers through a partnership with Groupon, while others are using a white-label service from a competitor called Tippr. David Weinberger, co-author of The Cluetrain Manifesto, called the web “small pieces, loosely joined”; take a look around you and make use of what you can.

* Don’t pave the cart paths: Media outlets, including a number of leading newspapers and magazines, seem to feel that the ideal way of using a new technology such as the iPad is to take existing content from their websites or print publications and simply dump it on the device — in much the same way that many publications did with CD-ROMs when they first arrived on the scene. Why bother putting your content on the iPad if you aren’t going to take advantage of the features of the device, including the ability to share content? And yet, many major media apps provide no way for users to share or even link to the content they provide.

* Be prepared to “burn the boats”: Venture capitalist Marc Andreessen wrote about how media entities in some cases should “burn the boats,” as the conquistador Hernán Cortés is said to have done in order to show that he was fully committed to his cause and would never retreat. The idea is that if you are still mostly focused on your existing non-web operations, and always see those as the most important, then you will inevitably fail to be as aggressive as you need to be when it comes to competing with online-only counterparts, and that could spell doom. The Christian Science Monitor and several other papers shut down their print operations completely and went web-only. Obviously that isn’t for everyone, but sometimes drastic action is required.
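
To make the open-API point in the first item above a little more concrete, here is a minimal sketch of what consuming such an API looks like from the outside, using the Guardian’s Open Platform content API as the example. The endpoint, parameters and response fields shown are assumptions based on its public documentation and may not match the live service exactly.

```python
# A minimal sketch of consuming an open content API (the Guardian's Open
# Platform is used as the example). The endpoint, parameters and response
# fields are assumptions and may differ from the current documentation;
# swap in whatever API and key you actually have access to.
import json
import urllib.parse
import urllib.request

API_ENDPOINT = "https://content.guardianapis.com/search"  # assumed endpoint
API_KEY = "test"  # the Guardian has offered a limited public test key

def fetch_articles(query, page_size=5):
    """Fetch a handful of articles matching `query` from the content API."""
    params = urllib.parse.urlencode({
        "q": query,
        "page-size": page_size,
        "api-key": API_KEY,
    })
    with urllib.request.urlopen(f"{API_ENDPOINT}?{params}") as resp:
        data = json.load(resp)
    # Assumed JSON shape: a "response" wrapper containing a list of "results".
    return [
        (item.get("webTitle"), item.get("webUrl"))
        for item in data.get("response", {}).get("results", [])
    ]

if __name__ == "__main__":
    for title, url in fetch_articles("wikileaks"):
        print(f"{title} -> {url}")
```

The specific service matters less than the pattern: once content is exposed this way, it can show up wherever readers already are, instead of only on the publisher’s own site.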

It seems unlikely that Rupert Murdoch will ever be convinced that he has made a mistake with his paywalls, despite a track record of poor judgment calls such as the purchase of MySpace. And other newspapers and publishers of all kinds are free to make similar mistakes. But if you are engaged in a business that involves content and you want to remain competitive online, you have to become just as web-focused and adaptable as your online-only counterparts — or you will wind up cornering the market in things that most people no longer want, or at least no longer want to pay for.

Ingram Family Christmas Letter for 2010

I’ve taken a little time off from deciphering classified U.S. diplomatic cables on the WikiLeaks website to bring you some news about the Ingram family — or my little branch of it anyway. As usual, I am going to leave out most of the disappointments and exaggerate the highlights until they are all out of proportion, because that’s how I roll in these Christmas letters. As just one sign of what a great year it has been for the entire Ingram clan, I am typing these words on my iPad — one of the few times that I have been able to get it out of the clutches of one of my lovely daughters, who seem to believe that I got it for them to play Angry Birds or Fruit Ninja.

The year started as most of our recent years have: with a lovely New Year’s party up in the frozen north country near Buckhorn (yes, there really is such a place), at the Farm with Marc and Kris and several other friends and family members. We skated on the pond near the old farmhouse and played ice bocce, a challenging game involving frozen Tide bottles filled with water, and even did a little skating and hiking on the trails around the property, when the weather co-operated. Then it was back to the city and back to reality. Caitlin headed back to McMaster for the last part of her second year of nursing, and Meaghan went back to Grade 11 and her musical theatre obsession, and Zoe went back to finish off Grade 6 — the end of primary school.

At the same time, I made a life-changing decision. No, I didn’t decide to shave off my beard or convert to the Church of the Subgenius (already a member, I am happy to say). I left the Globe and Mail after 15 years working there in a variety of writing and editing roles — most recently as the paper’s first “community editor,” helping reporters and editors try to understand Twitter and Facebook and comments on news stories and how to handle all these new tools for “social media.” On January 18, I became a senior writer with a technology blog network based in San Francisco called GigaOM, named after founder Om Malik, who I got to know several years ago.

Leaving the Globe was hard, and not just because my mother doesn’t really know what to say now when people ask her what I do for a living. I worked with a lot of great people, and I enjoyed being part of a great media company, but it was time to move on, and if you like writing about technology and how it is changing the media and changing our lives — and I do — then the web is where you need to do it (I think this Internet thing is really going to take off). GigaOM is a great outfit with a terrific team of writers and editors, and visiting San Francisco every couple of months is pretty great too, even if it is rainy and cool a lot of the time (I did get to meet Craig Newmark of Craigslist though).

But enough about me. As we have most years, we visited Ottawa for Winterlude with Becky’s sister Barb and her family, as well as Becky’s brother Dave and his family. We skated the canal and stuffed our faces with beaver tails and poutine and maple taffy rolled up on a stick, and a great time was had by all. In March, we headed down to Florida with Meaghan and Zoe, and visited Becky’s mom Edie and her boyfriend Ron at Ron’s place on the east side of Florida — where we took in a baseball game — then headed over for some time on the west side near the Gulf, where Edie still has a place. Coming back to winter was hard, but by then spring was on its way.

In May, we took a fantastic trip to California with some friends, renting cars and driving up Highway 101 north of San Francisco through Sonoma wine country (where we stopped at a number of great wineries, both big and small) to a little town called Redway, where Kris’s family has a couple of cabins deep in a redwood forest, built by her grandfather. We spent a week there, hiking through the giant trees in Humboldt Redwoods State Park, driving the winding mountain roads out to Shelter Cove on the “Lost Coast,” kayaking with some sea lions near the tiny town of Trinidad, and hiking through Fern Canyon — where they filmed part of Jurassic Park because it looks like the dawn of time. On the way back to San Francisco we stopped at a small airfield and went for rides in a glider as well as a restored open-cockpit biplane, which was incredibly fun. And we also did some typical San Francisco things, like climbing the Coit Tower and visiting Alcatraz.

May also saw the fifth annual Mesh conference, which drew a sell-out crowd to hear people like the Privacy Commissioner of Canada and author Joseph Menn talk about privacy online. The team at the excellent TVO show The Agenda even showed up at Mesh to film a panel on the topic — which I was a member of — and host Steve Paikin did a terrific job with it as usual. Mesh put on its first spin-off conference in November as well, called MeshMarketing, which was also a great success.

In June, we had Zoe’s graduation from Grade 6, which was a star-studded event that involved a team of hair-dressers known as sister Caitlin and her friends. And Meaghan went off to spend the entire summer at a camp near the Ingram summer homestead in the Ottawa Valley, where she was a counsellor and kitchen staff and had a fantastic time. At one point during the summer, she had her little sister Zoe and about six of Zoe’s cousins and friends there as well, and she was so professional that she only tormented them a tiny bit here and there. Caitlin spent the summer taking courses at McMaster, since jobs in nursing proved to be elusive.

Becky and I spent the summer working at the cottage, sitting out on the porch overlooking the lake, with a laptop set up on a table on wheels — and we picked the perfect summer to do it, since the temperatures were in the 30s for weeks at a time. The downside was that we were working, but the upside was that during breaks we could go for a swim, or take a paddle in the new canoe we bought (to replace the one that got crushed by the same tree that took out the corner of the cottage last year). And in August we had a great party at the Farm for Becky’s 50th, with cake and ice cream and champagne down by the pond and a wonderful crowd of friends and family.

The fall saw Meaghan move into Grade 12, where she has been working like a trouper on the school musical, getting up early and staying late on school days and weekends, along with working at her job at the deli at the local Metro (which did not survive the year, unfortunately). Zoe moved into Grade 7, and seemed to go from being a child to being quite the young lady almost overnight — although she continues to play hockey on both a house league and a select team, where she is a great defenceman and a sometime goalie. And Caitlin started her third year of nursing, and even managed to squeeze in some time to see her family now and then.

We visited The Farm to do the usual annual cutting down of harmless trees and had a giant bonfire. And I did a couple of quick trips to San Francisco — one in November where I dropped in on Twitter and a second one in December where I attended a party and some complete strangers I met decided to take a photo that looked like we had just dropped the hottest album of the year. The year ended with a fantastic retreat weekend at Blue Mountain near Collingwood organized as a working mini-vacation for the Mesh team and their families. We had a day of meetings but also some great food and skiing and swimming in the outdoor heated pools and hot tubs, topped off by a great Scandinavian spa day with a one-hour Swedish massage followed by a series of hot pools, cold plunges, steam rooms and saunas. A pretty fantastic end to the year.

We hope your year was just as good, and that all of your friends and loved ones are happy and well, and that you get a chance to see them over the holidays. And if we haven’t seen you in a while, please know that you are in our thoughts and that we would love to get together sometime. Give us a ring or drop us a line at [email protected] or [email protected]. All the best.

Google Fights Growing Battle Over “Search Neutrality”

The European Union, which has been investigating Google’s dominance in web search as a result of complaints from several competitors, is broadening that investigation to include other aspects of the company’s business, EU officials announced today. The EU opened the original case last month, and has now added two German complaints to it — one made by a group of media outlets and one by a mapping company, both of which claim that Google is favoring its own properties unfairly and has also refused to compensate publishers for their content.

The original case was opened last month by EU competition commissioner Joaquin Almunia, and an official statement from the commission said that investigators would be looking at “complaints by search service providers about unfavourable treatment of their services in Google’s unpaid and sponsored search results, coupled with an alleged preferential placement of Google’s own services.”

It isn’t just the EU that has raised concerns about Google treating its own assets and services differently in search results: in a recent Wall Street Journal story on the same issue, a number of competitors in a variety of markets — including TripAdvisor, WebMD and CitySearch — complained about this preferential treatment by the web giant. Google responded with a blog post saying it was concerned only about producing the best results for users, regardless of whose service was being presented in those results.

Although competition laws are somewhat different in Europe than they are in the United States — where antitrust investigators have to show that consumers have been harmed by an abuse of monopoly power, not just that competitors have been harmed — the EU investigation is sure to increase the heat on the web giant. And it comes at an especially inopportune time, since Google is trying to get federal approval for its purchase of travel-information service ITA. Competitors have complained that if Google buys the company, it will be incorporated into travel-related search results in an unfair way.

Washington Post columnist Steven Pearlstein raised similar concerns about Google’s growing dominance in a recent piece, arguing that the company should be prevented from buying major players in other markets because it is so dominant in web search. Google responded by arguing that it competes with plenty of other companies when it comes to acquisitions, and there has been no evidence shown that consumers have been harmed by its growth (I think Pearlstein’s argument is flawed, as I tried to point out in this blog post). Pearlstein has since responded to Google here.

There seems to be a growing attempt to pin Google down based in part on the concept of “search neutrality” — the idea that the web giant should be agnostic when it comes to search results, in the same way net neutrality is designed to keep carriers from penalizing competitors. But should search be considered a utility in that sense? That’s a tough question. In many ways, the complaints from mapping companies and others seem to be driven in part by sour grapes over Google’s success and their own inability to take advantage of the web properly, as Om argues in a recent GigaOM Pro report (subscription required).

Let’s Be Careful About Calling This a Cyber-War

Terms like “cyber-war” have been used a lot in the wake of the recent denial-of-service attacks on MasterCard, Visa and other entities that cut off support for WikiLeaks. But do these attacks really qualify? An analysis by network security firm Arbor Networks suggests that they don’t, and that what we have seen from the group Anonymous and “Operation Payback” is more like vandalism or civil disobedience. And we should be careful about tossing around terms like cyber-war — some believe the government is just itching to find an excuse to adopt unprecedented Internet monitoring powers, and cyber-war would be just the ticket.

The “info-war” description has been used by a number of media outlets in referring to the activities of Anonymous, the loosely organized group of hackers — associated with the counter-culture website known as 4chan — who have been using a number of Twitter accounts and other online forums to co-ordinate the attacks on MasterCard and others over the past week. But the idea got a big boost from John Perry Barlow, an online veteran and co-founder of the Electronic Frontier Foundation, who said on Twitter that:

The first serious infowar is now engaged. The field of battle is WikiLeaks. You are the troops.

As stirring an image as that might be, however — especially to suburban teenagers downloading a DDoS script from Anonymous, who might like to think of themselves as warriors in the battle for truth and justice — there is no real indication that Operation Payback has even come close to being a real “info-war.” While the attacks have been getting more complex, in the sense that they are using a number of different exploits, Arbor Networks says its research shows that they are still relatively puny and unsophisticated compared with other hacking incidents in the past.

Distributed denial-of-service attacks like the kind Operation Payback has been involved with have been ramping up in size, Arbor says, with large “flooding attacks” reaching 50 gigabits per second of traffic or more, enough to overwhelm data centers and carrier backbones.

So were the Operation Payback strikes against Amazon, MasterCard, Visa and a Swedish bank (which cut off funds belonging to WikiLeaks) in this category? No, says Arbor.

Were these attacks massive high-end flooding DDoS or very sophisticated application level attacks? Neither. Despite the thousands of tweets, press articles and endless hype, most of the attacks over the last week were both relatively small and unsophisticated. In short, other than intense media scrutiny, the attacks were unremarkable.

In other words, the most impressive thing about the attacks is the name of the easily downloadable tool they employ, which hackers like to call a “Low Orbit Ion Cannon” or LOIC for short (there are also a couple of related programs with minor modifications that are known as the “High Orbit Ion Cannon” and the “Geosynchronous Orbit Ion Cannon”). But unlike a real ion cannon, the ones used by Operation Payback only managed to take down the websites of their victims for a few hours at most.

As Arbor notes in its blog post on the attacks, however, real cyber-war is something the U.S. government and other governments are very interested in, for a variety of reasons — and it has a lot more to do with malicious worms such as Stuxnet, which seeks out and disables specific machinery in a deliberate wave of sabotage, than it does with DDoS attacks run by voluntary botnets such as the one organized by Anonymous. And among other things — as investigative journalist Seymour Hersh noted in a recent New Yorker piece entitled “The Online Threat: Should We Be Worried About a Cyber War?” — such a war would give the military even more justification for monitoring and potentially having back-door access to networks and systems, allegedly to defend against foreign attacks.

How Big Should We Let Google Get? Wrong Question

While Google is busy trying to compete with the growing power of Facebook, there are still those who believe that the government needs to do something to blunt the growing power of Google. Washington Post business columnist Steven Pearlstein is the latest to join this crowd, with a piece entitled “Time to Loosen Google’s Grip?,” in which he argues that the company needs to be prevented from buying its way into new markets and new technologies. Not surprisingly, Google disagrees — the company’s deputy general counsel has written a response to Pearlstein in which he argues that Google competes fair and square with lots of other companies, and that its acquisitions are not likely to cause any harm.

So who is right? Obviously the government has the authority to approve or not approve acquisitions such as Google’s potential purchase of ITA, the travel-software firm that the company agreed to acquire in July — which some have argued would give Google too much control over the online travel search-and-booking market (since ITA powers dozens of other sites and services in that market). But does Pearlstein’s argument hold water? Not really. More than anything, his complaint seems to be that Google is really big and has a lot of money, so we should stop it from buying things.

Pearlstein starts out by noting that Google isn’t just a web search company any more, but is moving into “operating system and application software, mobile telephone software, e-mail, Web browsers, maps, and video aggregation.” Not to be unkind, but did Pearlstein just notice that Google has a mapping service and is doing video aggregation? Surely those wars are long over now. But no, the WaPo columnist suggests the company shouldn’t have been allowed to buy YouTube, because it had a “dominant position” in its market. This, of course, ignores the fact that there wasn’t even a market for what YouTube had when Google bought it, which is why many people thought the deal was a bad idea.

Pearlstein’s motivation becomes obvious when he says things like “The question now is how much bigger and more dominant we want this innovative and ambitious company to become,” or that he has a problem with “allowing Google to buy its way into new markets and new technologies.” Since when do we decide how big companies are allowed to become, or whether they should be able to enter new markets? Antitrust laws were designed to prevent companies from using their monopoly power to negative effect in specific markets, not simply to keep companies from becoming large. But Pearlstein seems to be arguing that they should be broadened to cover any big company that buys other big companies:

Decades of cramped judicial opinions have so limited application of antitrust laws that each transaction can be considered only in terms of how it affects the narrowly defined niche market that an acquiring company hopes to enter.

The Washington Post columnist also trots out the “network effect” argument, which he says results in a market where “a few companies get very big very fast, the others die away and new competitors rarely emerge.” So how then do we explain the fact that Facebook arose out of nowhere and completely displaced massive existing networks like MySpace and Friendster? And while Google may be dominant in search and search-related advertising, the company has so far failed to extend that dominance into any other major market, including operating systems (where it competes with a company you may have heard of called Microsoft), mobile phone software and web-based application software. In fact, Google arguably has far more failed acquisitions and new market entries than it does successful ones.

Google’s deputy counsel also makes a fairly powerful point in his defence of the company’s acquisitions, which is that antitrust laws are meant to protect consumers, not other businesses or competitors, and — so far at least — there is virtually no compelling evidence that the company’s purchases have made the web or any of its features either harder to use or more expensive for consumers, or removed any choice. If anything, in fact, Google has been the single biggest force in making formerly paid services free. That’s going to make an antitrust case pretty hard to argue, regardless of what Mr. Pearlstein thinks.

Facebook Draws a Map of the Connected World

If there’s one thing you get when you have close to 600 million users the way Facebook does, it’s a lot of data about how they are all connected — and when you plot those inter-relationships based on location, as one of the company’s engineers found, you get a world map made up of social connections. There are gaps in the data, of course, with dark spots in China and other countries that block the social network (or have large competitors of their own, as Russia does), but the result is quite an amazing picture of a connected world. If that’s what an intern at Facebook can come up with, imagine what else would be possible with that data.

The visualization is the work of Paul Butler, an intern on Facebook’s data infrastructure engineering team. As he described in a blog post, he started by taking a sample of about ten million pairs of friends from the Facebook data warehouse, then combined that with each user’s current city and added up the number of friends between each pair of cities, and merged that with the longitude and latitude of each city. And then to make the data more visible, Butler says he “defined weights for each pair of cities as a function of the Euclidean distance between them and the number of friends between them.”

I was a bit taken aback by what I saw. The blob had turned into a surprisingly detailed map of the world. Not only were continents visible, certain international borders were apparent as well. What really struck me, though, was knowing that the lines didn’t represent coasts or rivers or political borders, but real human relationships.

(image)
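
For anyone curious about the mechanics, here is a rough sketch in Python of the kind of aggregation Butler describes: count the friendships between each pair of cities, then give each pair a weight based on the distance between them and the number of friends. The sample data and the particular weighting formula are stand-ins of mine, not Facebook’s actual code.

```python
# A toy version of the aggregation Paul Butler describes: count friendships
# between pairs of cities, then weight each city pair by a function of the
# Euclidean distance between them and the number of friendships. The weight
# formula below is a stand-in, not Facebook's implementation.
import math
from collections import Counter

# (user_city, friend_city) pairs sampled from a friendship list; each city is
# a (longitude, latitude) tuple. These few rows are made-up sample data.
friend_city_pairs = [
    ((-79.4, 43.7), (-122.4, 37.8)),   # Toronto <-> San Francisco
    ((-79.4, 43.7), (-122.4, 37.8)),
    ((-0.1, 51.5), (-79.4, 43.7)),     # London <-> Toronto
]

def euclidean(a, b):
    """Straight-line distance between two (lon, lat) points, per the Euclidean
    distance Butler mentions (great-circle distance would be more accurate)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Add up the number of friendships between each pair of cities.
pair_counts = Counter(tuple(sorted(pair)) for pair in friend_city_pairs)

# Define a weight for each pair as a function of distance and friend count;
# one plausible choice favours many friendships over short distances.
weights = {
    (a, b): count / (1.0 + euclidean(a, b))
    for (a, b), count in pair_counts.items()
}

for (a, b), weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{a} <-> {b}: {pair_counts[(a, b)]} friendships, weight {weight:.3f}")
```

Rendered for millions of city pairs, with brighter lines for higher weights, that kind of table is what turns into the map.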

What Butler did with the data is similar to — although much more elaborate than — what a programmer outside Facebook tried to do with some of the site’s profile data, before he was threatened with a lawsuit. Pete Warden scraped information from millions of profiles and then analyzed it to see the connections between states and between countries, and drew interactive maps based on the number of those connections. But Facebook threatened him with a lawsuit and he was forced to delete the data, because his scraping of user profiles was against the site’s terms of service.

Amazon, WikiLeaks and the Need For an Open Cloud Host

As the WikiLeaks saga continues, with founder Julian Assange facing potential extradition to Sweden (although not for leaking secret documents) and the U.S. considering espionage charges against him, it’s easy to overlook some of the key issues that have arisen out of the affair — particularly those raised by Amazon’s removal of WikiLeaks from its servers, out of concern about the legality of the content being hosted there. At least one senior technologist thinks that this could raise red flags about the utility of cloud computing, while programmer and open-web advocate Dave Winer believes that the incident reinforces the need for an open cloud host of some kind.

In the Wall Street Journal yesterday, Dr. Joseph Reger — chief technology officer for Fujitsu Technology Solutions — said that Amazon’s decision to withdraw hosting for WikiLeaks from its EC2 servers is “bad news for the new IT paradigm of cloud computing,” and ultimately calls “the security and availability of cloud services into question.” Although Amazon maintained that it was simply enforcing its terms of service — which prevent companies from hosting content to which they do not have the rights, or content that could lead to injury — Reger said that the company’s actions would cause many to lose faith in the cloud.

The Fujitsu executive also raised the issue of whether cloud providers should even be in the business of assessing the legality or morality of the content on their servers, asking: “Should providers of cloud services constantly review whether any of their customers are pursuing an unpopular or immoral activity and continually make value judgments as to whether they are willing to continue the service?” Deciding whether content is legal, he said, “is not the job of providers. It has to be judged by a court of law.” Reger has a point: is Amazon going to start reviewing all the content on its servers just in case someone has uploaded something to which they don’t own the rights?

As pointed out by Ethan Zuckerman and Rebecca MacKinnon — both of whom are affiliated with Harvard’s Berkman Center for Internet & Society, and are the co-founders of Global Voices Online — the Internet may seem like a giant open commons where we share our thoughts, but it is effectively the domain of large corporations. And any of them can cut off our access or our ability to host content whenever they wish, according to terms of service and service-level agreements that are often vague and easy to bend in whatever direction a company wants them to go.

All of this has led Winer, who developed the RSS syndication format and other web technologies, to call for a “web trust” that can reliably and safely store documents of all kinds — whether they are WikiLeaks cables or personal Twitter accounts — in such a way that they are free from both corporate and government intervention, an entity that is “part news organization, university, library and foundation.” Winer said in a blog post that he has been discussing this idea with Brewster Kahle, the founder of Archive.org, which has been building a public archive of the web for years as well as an Open Library of e-books.

When WikiLeaks was first removed from Amazon, and then had its DNS listing deleted by EveryDNS (ironically, it has since gotten support from Canadian provider EasyDNS, which many mistakenly assumed was its original host), we raised the idea of a “stateless, independent data haven” that could host the documents, something WikiLeaks has been trying to create in Iceland. Luckily for Assange, his organization has secure hosting from a Swedish company whose servers are located deep inside a mountain — and which says it has no plans to stop providing service — as well as support from the country’s Pirate Party and other backers.

But what about those who don’t have the kind of resources and support that WikiLeaks does? They are at the mercy of Amazon and other hosting companies — and while Google has refused requests to pull down information in the past, citing free speech, it could just as easily change its mind at some point down the road. Winer’s proposal may never get off the ground, but it is a worthwhile effort nonetheless.

Top Twitter Trend for 2010: No, It Wasn’t Justin Bieber

The year isn’t quite over yet, but Twitter has already come out with the top trending topics for 2010, and surprisingly enough Justin Bieber — the guy who is so popular that Twitter had to modify the way it calculates trending topics — did not take the top slot. That went to the Gulf oil spill. Soccer and movies were also top discussion topics, relegating Mr. Bieber to the number eight spot on Twitter’s list (although he did get number one on the people-related trend list). The numbers came from Twitter’s analysis of more than 25 billion tweets sent during the year.

Trending topics have been a somewhat controversial issue for Twitter over the past week or so, with a number of users accusing the company of censoring its trends to keep WikiLeaks from being a top discussion topic. Twitter eventually posted an explanation of how it arrives at the top trends, noting that the feature is designed to show topics that are being discussed more than they have been previously — in other words, if Bieber discussion is hot and heavy for days at a time, then that becomes the benchmark and it will not become a trending topic until it goes above that level.
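
To illustrate the general idea (a trend is a spike relative to a topic’s own recent baseline, rather than raw volume), here is a toy sketch in Python. The window sizes, threshold and sample counts are invented, and this is emphatically not Twitter’s actual algorithm.

```python
# A toy illustration of trend detection as "discussion above a topic's own
# recent baseline" rather than raw volume. The window sizes, threshold and
# sample counts are invented; this is not Twitter's actual algorithm.
from statistics import mean

def is_trending(hourly_counts, baseline_hours=72, recent_hours=1, threshold=3.0):
    """Return True if the most recent hour(s) exceed the trailing baseline by
    `threshold` times. A topic with consistently huge volume raises its own
    baseline and so stops registering as a trend."""
    baseline = hourly_counts[-(baseline_hours + recent_hours):-recent_hours]
    recent = hourly_counts[-recent_hours:]
    if not baseline:
        return False
    base_rate = mean(baseline) or 1e-9  # avoid division by zero
    return mean(recent) / base_rate >= threshold

# A topic that is always huge: high volume, but no spike over its baseline.
bieber = [9000] * 72 + [9500]
# A topic that jumps from near-silence to heavy discussion in the last hour.
oil_spill = [40] * 72 + [2500]

print(is_trending(bieber))     # False: the volume is high but flat
print(is_trending(oil_spill))  # True: a clear spike over the baseline
```

By that logic, a topic that is always enormous keeps raising its own benchmark, which is exactly the effect Twitter describes.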

(Please read the rest of this post at GigaOM here).

Now That We Have the Web, Do We Need Associated Press?

According to media analyst Clay Shirky, author of Here Comes Everybody, the list of things that the Internet has killed — or is in the process of killing — includes media syndication of the kind that the Associated Press and other newswires are built on. In a look at what 2011 will bring for media, written for the Nieman Journalism Lab, Shirky says this process, which is “a key part of the economic structure of the news business,” is next in line for widespread disruption.

In fact, as Shirky himself admits, the kind of distribution that a newswire engages in has been in decline for some time now. Newspapers still push content to The Associated Press, hoping to get the benefit of the syndication it offers, but the only ones getting any benefit are tiny newspapers and websites that rely on the wire because they can’t produce enough content by themselves. While the web and RSS and other digital syndication models are not perfect, the need to have a combination one-stop shop for content and Big Brother-style copyright cop is dwindling. Says Shirky:

Put simply, syndication makes little sense in a world with URLs. When news outlets were segmented by geography, having live human beings sitting around in ten thousand separate markets deciding which stories to pull off the wire was a service. Now it’s just a cost.

Even the newswire itself realizes this, of course, and it has been trying desperately for the past year or two to find some way of shoring up the crumbling walls of its former gatekeeper status. It has railed against Google News and threatened to file claims against everyone from the web giant to individual bloggers because of the use of even tiny excerpts of its content, but still its media castle continues to erode.

As Shirky notes in his piece, the AP has also been talking for some time now about changing the nature of its relationship with member papers, and keeping some of its content to itself — requiring members to link to that content on the AP website, rather than running it on their own sites. The wire service, which was originally formed to distribute content produced by its members, seems to want to become a destination, now that the Internet allows anyone to distribute content far and wide without the AP’s help.

One interesting sub-plot is that Google is working on developing better attribution for content that appears in Google News, according to a recent blog post entitled “Giving credit where credit is due.” The idea is that publishers will tag their content with special tags so that the search engine can recognize who originally created a story — and presumably use this as a way of determining which of those 45 carbon-copy versions of a story it should highlight in Google News. Shirky is right that this could improve things for users, but make things substantially worse for newspapers and wire services:

Giving credit where credit is due will reward original work, whether scoops, hot news, or unique analysis or perspective. This will be great for readers. It may not, however, be so great for newspapers, or at least not for their revenues, because most of what shows up in a newspaper isn’t original or unique. It’s the first four grafs of something ripped off the wire and lightly re-written, a process repeated countless times a day with no new value being added to the story.

The AP isn’t completely dead yet, mind you. The service has its own news staff, who generate their own stories, just as Reuters and Bloomberg and other wire services do. Google’s pending change to attribution rules could actually help the AP when it comes to these internally produced stories — but it could also do substantial damage to the service at the same time, by shifting the spotlight to the member papers that create the original stories AP would traditionally get credit for. In a world where syndication is available to anyone with an Internet connection, what is AP selling?
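
As a footnote on the attribution mechanism mentioned above, here is a small sketch of how a publisher might declare the original source of a story in its page metadata, and how a crawler could read it back. The metatag names used here, syndication-source and original-source, are the ones Google proposed at the time as I understand it; treat the exact names, and the example URLs, as assumptions.

```python
# A small sketch of source-attribution tagging: a page declares which outlet
# originally reported a story via metatags, and a crawler reads them back.
# The tag names "syndication-source" and "original-source" and the example
# URLs are assumptions for illustration.
from html.parser import HTMLParser

EXAMPLE_ARTICLE = """
<html><head>
  <meta name="original-source" content="http://example-paper.com/scoop">
  <meta name="syndication-source" content="http://example-wire.com/story">
</head><body>...</body></html>
"""

class AttributionParser(HTMLParser):
    """Collect attribution metatags from an article's HTML head."""
    def __init__(self):
        super().__init__()
        self.attribution = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name") in ("original-source", "syndication-source"):
            self.attribution[attrs["name"]] = attrs.get("content")

parser = AttributionParser()
parser.feed(EXAMPLE_ARTICLE)
print(parser.attribution)
# {'original-source': 'http://example-paper.com/scoop',
#  'syndication-source': 'http://example-wire.com/story'}
```

A news crawler that trusted those tags could then group the carbon copies of a wire story under whichever page claims, and deserves, the original credit.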

Lessons From The Atlantic: Cannibalize Yourself First

Everywhere you look, newspapers and magazines are trying to figure out how to evolve in an online world. Some have merged with online outlets, like Newsweek did with The Daily Beast, while others — including the New York Times — are busy putting up paywalls to try and retain readers. But The Atlantic took a more radical approach to surviving in the web era: it set out to deliberately disrupt its own business, rather than letting someone else do it, and while the experiment is not over yet, it seems to be paying dividends for the magazine’s parent company.

A feature in the New York Times on Sunday details how the magazine, which has been around for over 150 years, has not turned a profit for more than a decade — but is now looking at recording a healthy profit for 2010 of almost $2 million. How did it manage to do such a thing? According to Atlantic Media president Justin Smith, who joined the company at a low point three years ago, the magazine imagined itself as a venture-capital backed startup in Silicon Valley “whose mission was to attack and disrupt The Atlantic.” As he described it to the New York Times:

In essence, we brainstormed the question, “What would we do if the goal was to aggressively cannibalize ourselves?”

The first thing to do was to remove the walls — both literal and figurative — between the web side and the print side of the publication, both in terms of the business operations and the editorial division. Another wall that came down was the website’s paywall (are you listening, New York Times?). Younger writers with web experience were hired, and advertising staff were given the freedom to sell print or online ads, so long as they hit their targets. The magazine also branched out into conferences and other brand-extension experiments, and it hired superstar blogger Andrew Sullivan away from Time magazine.

The result? Revenue at The Atlantic has almost doubled since 2005, hitting $32 million this year, of which half is made up of advertising revenue. Digital advertising accounts for almost 40 percent of that number, compared with less than 15 percent at some other traditional print publications, and the amount of digital ad revenue is up by close to 70 percent over 2009. The addition of traffic draws like Sullivan has undoubtedly helped — he accounts for almost 25 percent of the site’s 4.8 million monthly unique visitors, a number that is up 50 percent over last year.

As the NYT feature notes, not every traditional publication is going to be able to do what The Atlantic has done — or at least, not as easily. It is a relatively small business compared with giants such as Newsweek and Time magazine, and has a single motivated owner. But The Atlantic had plenty of one thing that was crucial to its success: desperation. According to owner David Bradley, who bought the magazine in 1999, “Atlantic had so serially failed that it was overwhelmingly likely the next thing we would do was fail, and the next thing we would do was fail.”

That sense of desperation provided just the impetus that the magazine needed to remake itself — and not just a little, but from the top down and from the inside out. There are plenty of traditional media outlets who could use a bit more of that desperation themselves, as they tinker and fidget instead of making the hard changes that need to be made.

Is WikiLeaks the Beginning of a New Form of Media?

As WikiLeaks continues to release classified diplomatic cables, and fights to remain online and solvent, it is becoming increasingly clear that what is happening has less to do with WikiLeaks itself, and more to do with what seems to be a new form of media emerging: not a news or journalism entity specifically, but a kind of media middleman that exposes secret or undiscovered information, which can then become a source of news. Could WikiLeaks — and the other similar efforts it appears to be spawning — become a crucial new part of the digital media ecosystem?

Over the past couple of weeks, we’ve seen WikiLeaks attacked by the U.S. government — now apparently considering espionage charges against leader Julian Assange for publishing the cables — and shut down by companies such as PayPal and Amazon (which seems to see no irony in selling a book made up of the WikiLeaks cables). Both of those companies have in turn come under attack by Anonymous, a rogue group of hackers who targeted their websites as part of what the group called Operation Payback, although the group appears to be moving away from denial-of-service attacks to less destructive attention-getting strategies.

Meanwhile, WikiLeaks has been making itself so distributed — by setting up over a thousand mirror sites through which it can publish documents automatically, as well as moving servers to several different hosts — that it seems almost unassailable, even if Assange is found guilty of something. The WikiLeaks founder has said that in addition to the mirror sites, BitTorrent archives of the cables have been provided to 10,000 sources who could continue to publish them even if WikiLeaks was somehow taken offline.

And it’s not just WikiLeaks any more: a new spin-off group called OpenLeaks, formed in part by a splinter faction from within WikiLeaks, says it is launching next week with much the same mandate as its predecessor — to make documents public whether governments and companies want them to be or not. And another group calling itself BrusselsLeaks is apparently also looking to create the kind of document clearinghouse that WikiLeaks has set up, but with a focus, as its name suggests, on the European Union institutions in Brussels.

As Evgeny Morozov notes in a piece written for the New York Times, and in a summary of that piece on his blog at Foreign Policy magazine, WikiLeaks has come to serve as a kind of middleman for media outlets such as the NYT and The Guardian. Although these entities have investigative teams, they can’t possibly find everything — and there is so much more information out there to comb through. What organizations such as WikiLeaks and OpenLeaks could provide is a single source for such documents, as well as a way of publicizing that these secrets have been revealed, something that WikiLeaks has done very well.

Do newspapers and other media need WikiLeaks? Some would argue that the sources who went to Assange could just as easily have gone to the NYT or The Guardian directly. So why didn’t they? Possibly because they wanted the information to be spread more widely than just one media outlet, or were worried that one newspaper might not report on the cables properly if they were the only ones with that information. In a sense, as my former colleague Doug Saunders — the European bureau chief for Canadian newspaper The Globe and Mail — has noted, WikiLeaks is not that different from the brown envelope that the leaker behind the Watergate scandal delivered documents in.

In this era of real-time publishing and the ubiquitous web, however, the power of that brown envelope has been amplified a thousandfold, and its reach is far broader than was ever possible before, and that changes the game entirely.

A Day Spent Without My Arm — I Mean, My Phone

If you’ve used a smartphone — like an iPhone or an Android, or one of the newer BlackBerrys — for a fairly long time, here’s a challenge: go for a day or two without your phone, and see how it feels. And I don’t mean going skiing or hiking the Appalachian Trail or something like that either; try to go without it during a regular day in a city, or better still try to do without it when you are on a business trip to an unfamiliar city. I did that — not deliberately, mind you — on a recent day in San Francisco, when my iPhone suddenly decided to lock me out (maybe I wasn’t paying enough attention to it). And it was a painful experience.

Why was it painful? Simply put, I was disconnected. And I don’t mean that I couldn’t make phone calls — in fact, that was the part about the phone I missed the least. But I couldn’t look up where I was in Google Maps or any other GPS-based service, to try and find out where I was going, or measure how long it was going to take me, or get directions on how to get there (I was trying to get to the Apple store, so they could help me fix the phone, which suddenly started asking me for a passcode, even though I hadn’t set one). Particularly in an unfamiliar city, this kind of tool is hugely useful — and even in the city I normally live in, I use it all the time.

But it was more than just that. I couldn’t take photos of my surroundings, which is another thing I like to do a lot (especially in a city as great-looking as San Francisco), because the iPhone is my main camera, and I have it with me at all times. I used to like to snap shots and upload them to Flickr or Facebook — and now I share them with Instagram — a service that posts your photos to a stream your friends can follow and comment on, but also automatically cross-posts them to other services as well, including Twitter, Flickr, Facebook and Tumblr.

Twitter and Facebook were the other two things I missed. As anyone who follows me on Twitter (I’m @mathewi) probably knows, I share a lot on Twitter — thoughts, but mostly links to interesting content. It’s become an integral part of my day (and often of my night as well); not just posting things that I come across, but reading and commenting on the things that others post. I know it’s an overused term, but it really is a conversation, and it was something I missed a lot. And probably above all else, I missed being able to do that while killing time waiting in line — like the line I was waiting in to get my phone fixed.

But it was more than that. I missed the ability to look up anything I was curious about in Google at a moment’s notice. What is that building? Why is it called that? What does that sign mean? Why is there a giant bow and arrow sticking in the ground near the Embarcadero, which is right on the bay in San Francisco? Lots of questions like that occurred to me, but I was incapable of finding the answers. Sure, I could have bought a guidebook or something, or I suppose I could have stopped someone, but the ability to do it from a handheld device on a whim is something I have become fairly addicted to. And I have learned a lot as well.

This isn’t about the iPhone either — I am a big fan of the iPhone, but I think it’s fine if other people use a BlackBerry or an Android. My point is that smartphones have changed our lives in hundreds of tiny ways, and it isn’t until we try to spend a day or two without them that we find out exactly how dependent we are on them. Is that a good thing? I don’t know, to be honest. Maybe not. Maybe I should remember more things, instead of relying on my ability to look them up in Google. But I do know that having those tools at my fingertips is incredibly powerful — it may change the world, but it has certainly changed mine. And I think for the better.

WikiLeaks Gets Its Own “Axis of Evil” Defence Network

If the WikiLeaks saga was a comic book, it would be starting to look a lot like the Justice League of America vs. the League of Supervillains — or maybe it’s more like Star Wars, with the plucky rebel alliance up against the might of the Empire. As the U.S. government and a variety of corporations such as Visa and PayPal keep up the pressure on the document-leaking organization that they see as a traitor and a scofflaw, a rough alliance of WikiLeaks supporters has taken it upon itself to wage a cyber-war in its defense.

Leading the fight is a shadowy group called Operation Payback, which in turn is loosely affiliated with Anonymous, an organization (although that term makes it sound more co-ordinated than it really is) that grew out of the alternative website 4chan, and became infamous for its attacks on Scientology, among other things. At last check, the Operation Payback site itself was offline — another symptom of the back-and-forth battle in which the group has been co-ordinating “distributed denial of service” or DDoS attacks on Amazon, PayPal, Visa and MasterCard.

All of those corporations have cut off support for WikiLeaks in the past week, despite the fact that it’s not clear the organization has actually done anything illegal by publishing classified military documents — something the New York Times and The Guardian have also done. In a statement on its website, Operation Payback quoted digital guru John Perry Barlow, co-founder of the Electronic Frontier Foundation, who said on Twitter that “The first serious infowar is now engaged. The field of battle is WikiLeaks. You are the troops.” Operation Payback added that:

While we don’t have much of an affiliation with WikiLeaks, we fight for the same reasons. We want transparency and we counter censorship. The attempts to silence WikiLeaks are long strides closer to a world where we can not say what we think and are unable to express our opinions and ideas.

It’s not clear how much disruption the group and its supporters have been able to create, however. MasterCard’s website was down for at least part of Wednesday, but the company said its cardholders and payment systems were not affected. PayPal said that it suffered a denial-of-service attack on Monday but that it was dealt with fairly rapidly, and Visa has not reported any issues at all so far. The website for the Swedish bank that froze WikiLeaks’ founder Julian Assange’s accounts went down for at least part of Tuesday, but the bank’s other operations appeared unaffected.

In other words, the Empire remains strong. Meanwhile, after sending out a plea for ways to keep the site up and running following the removal of DNS services by its provider EveryDNS, the organization now has over 1,200 mirror sites set up — many of them in Europe — through which it can publish any documents instantly. The site has also taken a number of other steps that will make it virtually impossible to remove it completely from the Internet (including having at least some of its servers hosted by The Pirate Bay, the file-sharing network based in Sweden) and Assange has said that there are over 10,000 sites that have full copies of the diplomatic cables.

Has WikiLeaks Actually Done Anything Illegal?

The cyber-noose is tightening around WikiLeaks: Visa has joined the list of corporations that will no longer allow their users to send payments to the organization, which is looking for funding support as it continues to release thousands of classified U.S. diplomatic cables. MasterCard has done the same, and so has online payment service PayPal. All three have said they have legal concerns about dealing with WikiLeaks — but is there any real justification for this? Not really. In fact, it’s not clear that what WikiLeaks is doing is even illegal.

As media analyst Jeff Jarvis and others have pointed out, Visa and MasterCard and other payment services allow online users to send funds to a wide range of questionable entities, including sites that offer pornography. So why are they so concerned about WikiLeaks? Visa said that it had suspended support for payments to WikiLeaks while it “investigates” the organization, while MasterCard said that its rules prohibit customers from “directly or indirectly engaging in or facilitating any action that is illegal.” PayPal also said its terms of use prevents the service from being used to “encourage, promote, facilitate or instruct others to engage in illegal activity.”

Needless to say, the phrasing of those rules casts a pretty wide net — not just engaging in illegal activity, but encouraging it or instructing others in how to engage in it. But do even these broad rules apply to what WikiLeaks is doing? It’s not clear that they do. All the organization has done is to publish classified documents that originally belonged to the U.S. government — something that may be uncomfortable and embarrassing, but is not obviously illegal (even the Justice Department doesn’t seem too sure about whether WikiLeaks is guilty of anything). The only obvious crime that was involved in the release of those diplomatic cables was committed by the person who originally took them, since doing so is an offence under the U.S. Espionage Act.

Publishing those documents is not illegal — or at least, not yet, which is why Senator Joe Lieberman (I-Conn), the chairman of the Homeland Security and Governmental Affairs committee, has put forward his proposed SHIELD law (which stands for Securing Human Intelligence and Enforcing Lawful Dissemination), which would make it a crime to publish leaked classified information if doing so endangered U.S. agents or was otherwise not in the national interest. And this law would not just apply to WikiLeaks, but potentially any mainstream or online publication or media outlet that chose to publish any of the information, since — as I have tried to argue before — WikiLeaks is effectively a media entity.

Interestingly enough, while companies such as Amazon, Visa, MasterCard and PayPal have cut off the organization, Facebook released a statement saying that it has no issue with WikiLeaks — although so far no classified cables have been posted to the site’s Facebook page. Meanwhile, WikiLeaks’ leader Julian Assange is in court in London facing possible extradition to Sweden on sexual assault charges.