Is What WikiLeaks Does Journalism? Good Question

While the U.S. government tries to determine whether what WikiLeaks and front-man Julian Assange have done qualifies as espionage, media theorists and critics alike continue to debate whether releasing those classified diplomatic cables qualifies as journalism. It’s more than just an academic question — if it is journalism in some sense, then Assange and WikiLeaks should be protected by the First Amendment and freedom of the press. The fact that no one can seem to agree on this question emphasizes just how deeply the media and journalism have been disrupted, to the point where we aren’t even sure what they are any more.

The debate flared up again on the Thursday just before Christmas, with a back-and-forth Twitter discussion involving a number of media critics and journalists, including MIT Technology Review editor and author Jason Pontin, New York University professor Jay Rosen, blogger Aaron Bady, freelance writer and author Tim Carmody and several other occasional contributors. Pontin seems to have started the debate by saying — in a comment about a piece Bruce Sterling wrote on WikiLeaks and Assange — that the WikiLeaks founder was a hacker, not a journalist.

Pontin’s point, which he elaborated on in subsequent tweets, seemed to be that because Assange’s primary intent is to destabilize a secretive state or government apparatus through technological means, what he is doing isn’t journalism. Not everyone was buying this, however. Aaron Bady — who wrote a well-regarded post on Assange and WikiLeaks’ motives — asked why Assange couldn’t be a hacker *and* a journalist at the same time, and argued that perhaps society needs to protect the act of journalism, regardless of who practices it.

Rosen, meanwhile, was adamant that WikiLeaks is a journalistic entity, period, and journalism professor and author Jeff Jarvis made the same point. Tim Carmody argued that the principle of freedom of the press enshrined in the First Amendment was designed to protect individuals who published pamphlets and handed them out in the street just as much as it was to protect large media entities. And Aaron Bady made a point that I have tried to make as well: it’s difficult to criminalize what WikiLeaks has done without also making a criminal out of the New York Times.

This debate has been going on since before the diplomatic cables were released, ever since Julian Assange first made headlines with leaked video footage of American soldiers firing on unarmed civilians in Iraq. At the time, Rosen — who runs an experimental journalism lab at NYU — called WikiLeaks “the first stateless news organization,” and described where he saw it fitting into a new ecosystem of news. Not everyone agreed, however: critics of this idea said that journalism had to have some civic function, or had to involve journalists analyzing and sorting through the information.

Like Rosen and others, I’ve tried to argue that in the current era, media — a broad term that includes what we think of as journalism — has been disaggregated, or atomized: split into its component parts, parts that include what WikiLeaks does. Some of these may be things we didn’t even realize were separate parts of the process, because they have always been joined together. In other cases, the atomization merges parts that were previously distinct, in confusing ways, such as the distinction between a source and a publisher. WikiLeaks, for example, can be seen as both.

And while it is clearly not run by journalists — and to a great extent relies on journalists at the New York Times, The Guardian and other news outlets to do the heavy lifting in analyzing the documents it holds and distributes — I think an argument can be made that WikiLeaks is at least an instrument of journalism. In other words, it is part of the larger ecosystem of news media that has been developing with the advent of blogs, wikis, Twitter and all the other publishing tools we have now — tools that, as Twitter founder Ev Williams has argued (correctly, I think), are important ways of getting us closer to the truth.

Among those taking part in the Twitter debate on Thursday was Chris Anderson, a professor of media culture in New York who also writes for the Nieman Journalism Lab, and someone who has tried to clarify what journalism as an ecosystem really means and how we can distinguish between the different parts of this new process. In one post at the Nieman Lab blog, for example, he plotted the new pieces of this ecosystem on a graph with two axes: one going from “institutionalized” to “de-institutionalized” and the other going from “pure commentary” to “fact-gathering.” While WikiLeaks doesn’t appear on Anderson’s graph, it is clearly part of that process, just as the New York Times is.

Regardless of what we think about Julian Assange or WikiLeaks — or any of the other WikiLeaks-style organizations that seem to be emerging — this is the new reality of media. It may be confusing, but it is the best we have, so we had better start getting used to how it works.

What the Media Need to Learn About the Web — and Fast

Traditional media — publishers of newspapers, magazines and other print publications — have had more than a decade to get used to the idea of the web and the disruptive effect it is having on their businesses, but many continue to drag their feet when it comes to adapting. Some experiment with paywalls, while others hope that iPad apps will be the solution to their problems, now that Apple allows them to charge users directly through the tablet. But the lessons of how to adapt to the web and take advantage of it are not complicated, if media outlets are willing to listen. And these lessons don’t just apply to mainstream media, either — anyone whose business involves putting content online needs to think hard about applying them.

Newspapers in particular continue to come under pressure from the digital world: eMarketer recently estimated that online advertising will eclipse newspaper advertising this year for the first time — a further sign of the declining importance of newspapers in the online commercial ecosystem, where Facebook and Twitter are getting a lot more interest from advertisers than any traditional publication. Online, newspapers and magazines are just another source of content and pageviews or clickthroughs — they are no longer the default place for brand building or awareness advertising, nor are they even one of the most popular.

Rupert Murdoch, among others, seems to believe that paywalls are the route to success online, and recently installed one at the Times of London and the Sunday Times in England. But paywalls are mostly a rearguard action, a way for newspapers and magazines to keep some of their subscribers paying for the product rather than just getting it for free through the web. The editors of the Times have said they are happy with the response to their paywall, even though their readership dropped by more than 99 percent following the introduction of subscriptions for the website. That suggests it is far more important to the paper to keep even a few thousand paying readers than to appeal to the vast number of potential readers who will now never see the site’s content.

It’s true that the Wall Street Journal and the Economist, among others, have been successful in getting readers to pay for their content — but it’s also true that not every publication can be the Wall Street Journal or the Economist. Whether you publish a newspaper or magazine, or run some other business that depends on publishing content online, here are some of the lessons you need to absorb to take advantage of the web:

* Forget about being a destination: In the old days, it was enough to “build it and they will come,” and so everyone from AOL and Yahoo to existing publishers of content tried to make their sites a destination for users, complete with walls designed to keep them from leaving. But Google showed that successful businesses can be built by actually sending people away, and others — including The Guardian newspaper in Britain — have shown that value can be generated by distributing your content to wherever people are, via open APIs and other tools, rather than expecting them to come to you (see the API sketch after this list).

* Don’t just talk about being social: Social media is a hot term, but the reality is that all media is becoming social, and that includes advertising and other forms of media content. Whether you are writing newspaper stories or publishing blog posts on your company blog, you will get feedback from readers and/or users — and you had better be in a position to respond, and then take advantage of the feedback you get. If you don’t, or if you block your employees from using Twitter and Facebook and other such tools, you will not get any benefit, and you will be worse off as a result.

* Get to know your community: This is something that new media outlets such as The Huffington Post have done very well — reaching out to readers and users, providing a number of different ways for them to share and interact with the site. News sites like Toronto-based OpenFile are designed around the idea that every member of a community has something to offer, and that allowing these ideas into the process via “crowdsourcing” can generate a lot of value. Even some older media players such as the Journal Register newspaper chain have been getting this message, and opening up what they call a “community newsroom” as a way of building bridges with readers.

* Use all the tools available to you: Large media entities — and large companies of all kinds — often have a “not invented here” mentality that requires them to build or develop everything in house. But one of the benefits of the distributed web is that there are other services you can integrate with easily in order to get the benefit of their networks, without having to reinvent the wheel. Groupon is a great example: many publishers and websites are implementing “daily deal” offers through a partnership with Groupon, while others are using a white-label service from a competitor called Tippr. Take a look around you and make use of what you can; as David Weinberger, co-author of The Cluetrain Manifesto, put it, the web is “small pieces, loosely joined.”

* Don’t pave the cart paths: Media outlets, including a number of leading newspapers and magazines, seem to feel that the ideal way of using a new technology such as the iPad is to take existing content from their websites or print publications and simply dump it on the device — in much the same way that many publications did with CD-ROMs when they first arrived on the scene. Why bother putting your content on the iPad if you aren’t going to take advantage of the features of the device, including the ability to share content? And yet, many major media apps provide no way for users to share or even link to the content they provide.

* Be prepared to “burn the boats”: Venture capitalist Marc Andreessen wrote about how media entities in some cases should “burn the boats,” as the conquistador Hernán Cortés is said to have done in order to show that he was fully committed to his cause and would never retreat. The idea is that if you are still mostly focused on your existing non-web operations, and always see those as the most important, you will inevitably fail to be as aggressive as you need to be when it comes to competing with online-only counterparts, and that could spell doom. The Christian Science Monitor and several other papers have shut down their print operations and gone web-only. Obviously that isn’t for everyone, but sometimes drastic action is required.
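Here is a minimal sketch of what consuming that kind of open content API looks like in practice, modeled loosely on The Guardian’s Open Platform. The endpoint, parameters and response shape shown are illustrative assumptions, so check the current documentation before relying on them:

```python
import requests  # third-party HTTP library: pip install requests

# Illustrative endpoint modeled on The Guardian's Open Platform content API.
# The URL, parameters and JSON shape are assumptions for this sketch;
# a real integration would use a registered API key.
API_URL = "https://content.guardianapis.com/search"

response = requests.get(API_URL, params={"q": "wikileaks", "api-key": "test"})
response.raise_for_status()

# Print the headline and URL of each matching article.
for item in response.json()["response"]["results"]:
    print(item["webTitle"], "->", item["webUrl"])
```

The inversion in the bullet above is exactly this: instead of forcing readers to visit your site, anyone can pull your headlines and content into their own pages and apps.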

It seems unlikely that Rupert Murdoch will ever be convinced that he has made a mistake with his paywalls, despite a track record of poor judgment calls such as the purchase of MySpace. And other newspapers and publishers of all kinds are free to make similar mistakes. But if you are engaged in a business that involves content and you want to remain competitive online, you have to become just as web-focused and adaptable as your online-only counterparts — or you will wind up cornering the market in things that most people no longer want, or at least no longer want to pay for.

Merry Christmas 2010 From The Ingrams

I’ve taken a little time off from deciphering classified U.S. diplomatic cables on the WikiLeaks website to bring you some news about the Ingram family — or my little branch of it, anyway. As usual, I am going to leave out most of the disappointments and exaggerate the highlights until they are all out of proportion, because that’s how I roll in these Christmas letters. As just one sign of what a great year it has been for the entire Ingram clan, I am typing these words on my iPad — one of the few times that I have been able to get it out of the clutches of one of my lovely daughters, who seem to believe that I got it for them to play Angry Birds or Fruit Ninja.

The year started as most of our recent years have: with a lovely New Year’s party up in the frozen north country near Buckhorn (yes, there really is such a place), at the Farm with Marc and Kris and several other friends and family members. We skated on the pond near the old farmhouse and played ice bocce, a challenging game involving frozen Tide bottles filled with water, and even did a little hiking on the trails around the property, when the weather co-operated. Then it was back to the city and back to reality. Caitlin headed back to McMaster for the last part of her second year of nursing, Meaghan went back to Grade 11 and her musical theatre obsession, and Zoe went back to finish off Grade 6 — the end of primary school.

At the same time, I made a life-changing decision. No, I didn’t decide to shave off my beard or convert to the Church of the SubGenius (already a member, I am happy to say). I left the Globe and Mail after 15 years working there in a variety of writing and editing roles — most recently as the paper’s first “community editor,” helping reporters and editors try to understand Twitter and Facebook and comments on news stories, and how to handle all these new tools for “social media.” On January 18, I became a senior writer with a technology blog network based in San Francisco called GigaOM, named after founder Om Malik, whom I got to know several years ago.

Leaving the Globe was hard, and not just because my mother doesn’t really know what to say now when people ask her what I do for a living. I worked with a lot of great people, and I enjoyed being part of a great media company, but it was time to move on, and if you like writing about technology and how it is changing the media and changing our lives — and I do — then the web is where you need to do it (I think this Internet thing is really going to take off). GigaOM is a great outfit with a terrific team of writers and editors, and visiting San Francisco every couple of months is pretty great too, even if it is rainy and cool a lot of the time.

But enough about me. As we have most years, we visited Ottawa for Winterlude with Becky’s sister Barb and her family, as well as Becky’s brother Dave and his family. We skated the canal and stuffed our faces with beaver tails and poutine and maple taffy rolled up on a stick, and a great time was had by all. In March, we headed down to Florida with Meaghan and Zoe, and visited Becky’s mom Edie and her boyfriend Ron at Ron’s place on the east side of Florida — where we took in a baseball game — then headed over for some time on the west side near the Gulf, where Edie still has a place. Coming back to winter was hard, but by then spring was on its way.

In May, we took a fantastic trip to California with some friends, renting cars and driving up Highway 101 north of San Francisco through Sonoma wine country (where we stopped at a number of great wineries, both big and small) to a little town called Redway, where Kris’s family has a couple of cabins deep in a redwood forest, built by her grandfather. We spent a week there, hiking among the giant trees in Humboldt Redwoods State Park, driving the winding mountain roads out to Shelter Cove on the “Lost Coast,” kayaking with sea lions near the tiny town of Trinidad, and hiking through Fern Canyon — where they filmed part of The Lost World: Jurassic Park, because it looks like the dawn of time. On the way back to San Francisco we stopped at a small airfield and went for rides in a glider as well as a restored open-cockpit biplane, which was incredibly fun. And we also did some typical San Francisco things, like climbing Coit Tower and visiting Alcatraz.

May also saw the fifth annual Mesh conference, which drew a sell-out crowd to hear people like the Privacy Commissioner of Canada and author Joseph Menn talk about privacy online. The team at the excellent TVO show The Agenda even showed up at Mesh to film a panel on the topic — which I was a member of — and host Steve Paikin did a terrific job with it as usual. Mesh put on its first spin-off conference in November as well, called MeshMarketing, which was also a great success.

In June, we had Zoe’s graduation from Grade 6, which was a star-studded event that involved a team of hairdressers known as sister Caitlin and her friends. And Meaghan went off to spend the entire summer at a camp near the Ingram summer homestead in the Ottawa Valley, where she was a counsellor and kitchen staff and had a fantastic time. At one point during the summer, she had her little sister Zoe and about six of Zoe’s cousins and friends there as well, and she was so professional that she only tormented them a tiny bit here and there. Caitlin spent the summer taking courses at McMaster, since jobs in nursing proved to be elusive.

Becky and I spent the summer working at the cottage, sitting out on the porch overlooking the lake, with a laptop set up on a table on wheels — and we picked the perfect summer to do it, since the temperatures were in the 30s for weeks at a time. The downside was that we were working, but the upside was that during breaks we could go for a swim, or take a paddle in the new canoe we bought (to replace the one that got crushed by the same tree that took out the corner of the cottage last year). And in August we had a great party at the Farm for Becky’s 50th, with cake and ice cream and champagne down by the pond and a wonderful crowd of friends and family.

The fall saw Meaghan move into Grade 12, where she has been working like a trouper on the school musical, getting up early and staying late on school days and weekends, along with working at her job at the deli at the local Metro (which did not survive the year, unfortunately). Zoe moved into Grade 7, and seemed to go from being a child to being quite the young lady almost overnight — although she continues to play hockey on both a house league and a select team, where she is a great defenceman and a sometime goalie. And Caitlin started her third year of nursing, and even managed to squeeze in some time to see her family now and then.

The year ended with a fantastic retreat weekend at Blue Mountain near Collingwood, organized as a working mini-vacation for the Mesh team and their families. We had a day of meetings at the Westin, but also a couple of days and nights of great food and skiing and swimming in the outdoor heated pools and hot tubs, topped off by a great Scandinavian spa day for the husbands and wives, with a one-hour Swedish massage followed by a series of hot pools, cold plunges, steam rooms and sauna treatments. A pretty fantastic end to a great and challenging year.

We hope your year was just as good, and that all of your friends and loved ones are happy and well, and that you get a chance to see them over the holidays. And if we haven’t seen you in a while, please know that you are in our thoughts and that we would love to get together sometime. Give us a ring or drop us a line at [email protected] or [email protected]. All the best.

Google Fights Growing Battle Over “Search Neutrality”

The European Union, which has been investigating Google’s dominance in web search as a result of complaints from several competitors, is broadening that investigation to include other aspects of the company’s business, EU officials announced today. The EU opened the original case last month, and has now added two German complaints to it — one made by a group of media outlets and one by a mapping company, both of which claim that Google favors its own properties unfairly and has refused to compensate publishers for their content.

The original case was opened last month by EU competition commissioner Joaquin Almunia, and an official statement from the commission said that investigators would be looking at “complaints by search service providers about unfavourable treatment of their services in Google’s unpaid and sponsored search results, coupled with an alleged preferential placement of Google’s own services.”

It isn’t just the EU that has raised concerns about Google treating its own assets and services differently in search results: in a recent Wall Street Journal story on the same issue, a number of competitors in a variety of markets — including TripAdvisor, WebMD and CitySearch — complained about this preferential treatment by the web giant. Google responded with a blog post saying it was concerned only about producing the best results for users, regardless of whose service was being presented in those results.

Although competition laws are somewhat different in Europe than they are in the United States — where antitrust investigators have to show that consumers have been harmed by an abuse of monopoly power, not just that competitors have been harmed — the EU investigation is sure to increase the heat on the web giant. And it comes at an especially inopportune time, since Google is trying to get federal approval for its purchase of travel-information service ITA. Competitors have complained that if Google buys the company, it will be incorporated into travel-related search results in an unfair way.

Washington Post columnist Steven Pearlstein raised similar concerns about Google’s growing dominance in a recent piece, arguing that the company should be prevented from buying major players in other markets because it is so dominant in web search. Google responded by arguing that it competes with plenty of other companies when it comes to acquisitions, and that no evidence has been shown that consumers have been harmed by its growth (I think Pearlstein’s argument is flawed, as I tried to point out in this blog post). Pearlstein has since responded to Google here.

There seems to be a growing attempt to pin Google down based in part on the concept of “search neutrality” — the idea that the web giant should be agnostic when it comes to search results, in the same way net neutrality is designed to keep carriers from penalizing competitors. But should search be considered a utility in that sense? That’s a tough question. In many ways, the complaints from mapping companies and others seem to be driven in part by sour grapes over Google’s success and their own inability to take advantage of the web properly, as Om argues in a recent GigaOM Pro report (subscription required).

Let’s Be Careful About Calling This a Cyber-War

Terms like “cyber-war” have been used a lot in the wake of the recent denial-of-service attacks on MasterCard, Visa and other entities that cut off support for WikiLeaks. But do these attacks really qualify? An analysis by network security firm Arbor Networks suggests that they don’t, and that what we have seen from the group Anonymous and “Operation Payback” is more like vandalism or civil disobedience. And we should be careful about tossing around terms like cyber-war — some believe the government is just itching to find an excuse to adopt unprecedented Internet monitoring powers, and cyber-war would be just the ticket.

The “info-war” description has been used by a number of media outlets in referring to the activities of Anonymous, the loosely organized group of hackers — associated with the counter-culture website known as 4chan — who have been using a number of Twitter accounts and other online forums to co-ordinate the attacks on MasterCard and others over the past week. But the idea got a big boost from John Perry Barlow, an online veteran and co-founder of the Electronic Frontier Foundation, who said on Twitter:

The first serious infowar is now engaged. The field of battle is WikiLeaks. You are the troops.

As stirring an image as that might be, however — especially to suburban teenagers who download a DDoS script from Anonymous and like to think of themselves as warriors in the battle for truth and justice — there is no real indication that Operation Payback has come close to being a real “info-war.” While the attacks have become more complex, in the sense that they use a number of different exploits, Arbor Networks says its research shows they are still relatively puny and unsophisticated compared with past hacking incidents.

Distributed denial-of-service attacks of the kind Operation Payback has been involved in have been ramping up in size, Arbor says, with large “flooding attacks” reaching 50 gigabits per second of traffic or more. At that rate a target is absorbing upwards of 20 terabytes of junk traffic an hour, enough to overwhelm data centers and carrier backbones.

So were the Operation Payback strikes against Amazon, MasterCard, Visa and the Swiss bank that cut off funds belonging to WikiLeaks in this category? No, says Arbor.

Were these attacks massive high-end flooding DDoS or very sophisticated application level attacks? Neither. Despite the thousands of tweets, press articles and endless hype, most of the attacks over the last week were both relatively small and unsophisticated. In short, other than the intense media scrutiny, the attacks were unremarkable.

In other words, the most impressive thing about the attacks is the name of the easily downloadable tool they employ, which hackers like to call a “Low Orbit Ion Cannon” or LOIC for short (there are also a couple of related programs with minor modifications that are known as the “High Orbit Ion Cannon” and the “Geosynchronous Orbit Ion Cannon”). But unlike a real ion cannon, the ones used by Operation Payback only managed to take down the websites of their victims for a few hours at most.

As Arbor notes in its blog post on the attacks, however, real cyber-war is something the U.S. government and other governments are very interested in, for a variety of reasons — and it has a lot more to do with malicious worms such as Stuxnet, which seeks out and disables specific machinery in a deliberate wave of sabotage, than with DDoS attacks run by voluntary botnets such as the one organized by Anonymous. And among other things — as investigative journalist Seymour Hersh noted in a recent New Yorker piece entitled “The Online Threat: Should We Be Worried About a Cyber War?” — such a war would give the military even more justification for monitoring, and potentially having back-door access to, networks and systems, allegedly to defend against foreign attacks.

How Big Should We Let Google Get? Wrong Question

While Google is busy trying to compete with the growing power of Facebook, there are still those who believe that the government needs to do something to blunt the growing power of Google. Washington Post business columnist Steven Pearlstein is the latest to join this crowd, with a piece entitled “Time to Loosen Google’s Grip?,” in which he argues that the company needs to be prevented from buying its way into new markets and new technologies. Not surprisingly, Google disagrees — the company’s deputy general counsel has written a response to Pearlstein in which he argues that Google competes fair and square with lots of other companies, and that its acquisitions are not likely to cause any harm.

So who is right? Obviously the government has the authority to approve or not approve acquisitions such as Google’s potential purchase of ITA, the travel-software firm that the company agreed to acquire in July — which some have argued would give Google too much control over the online travel search-and-booking market (since ITA powers dozens of other sites and services in that market). But does Pearlstein’s argument hold water? Not really. More than anything, his complaint seems to be that Google is really big and has a lot of money, so we should stop it from buying things.

Pearlstein starts out by noting that Google isn’t just a web search company any more, but is moving into “operating system and application software, mobile telephone software, e-mail, Web browsers, maps, and video aggregation.” Not to be unkind, but did Pearlstein just notice that Google has a mapping service and is doing video aggregation? Surely those wars are long over now. But no, the WaPo columnist suggests the company shouldn’t have been allowed to buy YouTube, because it had a “dominant position” in its market. This, of course, ignores the fact that there wasn’t even a market for what YouTube had when Google bought it, which is why many people thought the deal was a bad idea.

Pearlstein’s motivation becomes obvious when he says things like “The question now is how much bigger and more dominant we want this innovative and ambitious company to become,” or that he has a problem with “allowing Google to buy its way into new markets and new technologies.” Since when do we decide how big companies are allowed to become, or whether they should be able to enter new markets? Antitrust laws were designed to prevent companies from using their monopoly power to negative effect in specific markets, not simply to keep companies from becoming large. But Pearlstein seems to be arguing that they should be broadened to cover any big company that buys other big companies:

Decades of cramped judicial opinions have so limited application of antitrust laws that each transaction can be considered only in terms of how it affects the narrowly defined niche market that an acquiring company hopes to enter.

The Washington Post columnist also trots out the “network effect” argument, which he says results in a market where “a few companies get very big very fast, the others die away and new competitors rarely emerge.” So how then do we explain the fact that Facebook arose out of nowhere and completely displaced massive existing networks like MySpace and Friendster? And while Google may be dominant in search and search-related advertising, the company has so far failed to extend that dominance into any other major market, including operating systems (where it competes with a company you may have heard of called Microsoft), mobile phone software and web-based application software. In fact, Google arguably has far more failed acquisitions and new market entries than it does successful ones.

Google’s deputy general counsel also makes a fairly powerful point in his defence of the company’s acquisitions: antitrust laws are meant to protect consumers, not other businesses or competitors, and — so far at least — there is virtually no compelling evidence that the company’s purchases have made the web or any of its features harder to use or more expensive for consumers, or removed any choice. If anything, Google has been the single biggest force in making formerly paid services free. That’s going to make an antitrust case pretty hard to argue, regardless of what Mr. Pearlstein thinks.

Facebook Draws a Map of the Connected World

If there’s one thing you get when you have close to 600 million users the way Facebook does, it’s a lot of data about how they are all connected — and when you plot those relationships based on location, as one of the company’s interns found, you get a world map made up of social connections. There are gaps in the data, of course, with dark spots in China and other countries that block the social network (or have large competitors of their own, as Russia does), but the result is quite an amazing picture of a connected world. If that’s what an intern at Facebook can come up with, imagine what else would be possible with that data.

The visualization is the work of Paul Butler, an intern on Facebook’s data infrastructure engineering team. As he described in a blog post, he started by taking a sample of about ten million pairs of friends from the Facebook data warehouse, then combined that with each user’s current city and added up the number of friends between each pair of cities, and merged that with the longitude and latitude of each city. And then to make the data more visible, Butler says he “defined weights for each pair of cities as a function of the Euclidean distance between them and the number of friends between them.”

I was a bit taken aback by what I saw. The blob had turned into a surprisingly detailed map of the world. Not only were continents visible, certain international borders were apparent as well. What really struck me, though, was knowing that the lines didn’t represent coasts or rivers or political borders, but real human relationships.

(image)

What Butler did with the data is similar to — although much more elaborate than — what a programmer outside Facebook tried to do with some of the site’s profile data, before he was threatened with a lawsuit. Pete Warden scraped information from millions of profiles and then analyzed it to see the connections between states and between countries, and drew interactive maps based on the number of those connections. But Facebook threatened him with a lawsuit and he was forced to delete the data, because his scraping of user profiles was against the site’s terms of service.
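For the curious, here is a rough sketch in Python of the city-pair aggregation and distance-based weighting Butler describes. The field names and toy data are made up for illustration, and the weight function is just one plausible form; Butler reportedly did his work against Facebook’s internal warehouse:

```python
import math
from collections import Counter

# Toy stand-in for the sampled friend pairs, already resolved to each
# user's current city (Butler started from roughly ten million pairs).
friend_pairs = [("Toronto", "London"), ("Toronto", "London"), ("London", "Mumbai")]

# Each city's (latitude, longitude): a hypothetical lookup table.
city_coords = {
    "Toronto": (43.7, -79.4),
    "London": (51.5, -0.1),
    "Mumbai": (19.1, 72.9),
}

# Add up the number of friendships between each pair of cities.
pair_counts = Counter(tuple(sorted(pair)) for pair in friend_pairs)

def edge_weight(city_a, city_b, count):
    """One plausible weighting: friend count relative to Euclidean
    distance, so dense short-range links don't wash out long ones."""
    (lat1, lon1), (lat2, lon2) = city_coords[city_a], city_coords[city_b]
    distance = math.hypot(lat1 - lat2, lon1 - lon2)
    return count / (1.0 + distance)

# Each weighted edge becomes one line on the map.
for (a, b), n in pair_counts.items():
    print(f"{a} -- {b}: {n} friendships, weight {edge_weight(a, b, n):.3f}")
```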

Amazon, WikiLeaks and the Need For an Open Cloud Host

As the WikiLeaks saga continues, with founder Julian Assange facing potential extradition to Sweden (although not for leaking secret documents) and the U.S. considering espionage charges against him, it’s easy to overlook some of the key issues that have arisen out of the affair — particularly those raised by Amazon’s removal of WikiLeaks from its servers, out of concern about the legality of the content being hosted there. At least one senior technologist thinks that this could raise red flags about the utility of cloud computing, while programmer and open-web advocate Dave Winer believes that the incident reinforces the need for an open cloud host of some kind.

In the Wall Street Journal yesterday, Dr. Joseph Reger — chief technology officer for Fujitsu Technology Solutions — said that Amazon’s decision to withdraw hosting for WikiLeaks from its EC2 servers is “bad news for the new IT paradigm of cloud computing,” and ultimately calls “the security and availability of cloud services into question.” Although Amazon maintained that it was simply enforcing its terms of service — which prevent companies from hosting content to which they do not have the rights, or content that could lead to injury — Reger said that the company’s actions would cause many to lose faith in the cloud.

The Fujitsu executive also raised the issue of whether cloud providers should even be in the business of assessing the legality or morality of the content on their servers, asking: “Should providers of cloud services constantly review whether any of their customers are pursuing an unpopular or immoral activity and continually make value judgments as to whether they are willing to continue the service?” Deciding whether content is legal, he said, “is not the job of providers. It has to be judged by a court of law.” Reger has a point: is Amazon going to start reviewing all the content on its servers just in case someone has uploaded something to which they don’t own the rights?

As pointed out by Ethan Zuckerman and Rebecca MacKinnon — both of whom are affiliated with the Berkman Center for Internet & Society at Harvard, and are co-founders of Global Voices Online — the Internet may seem like a giant open commons where we share our thoughts, but it is effectively the domain of large corporations. And any of them can cut off our access or our ability to host content whenever they wish, according to terms of service and service-level agreements that are often vague and easy to bend in whatever direction a company wants them to go.

All of this has led Winer, who developed the RSS syndication format and other web technologies, to call for a “web trust” that can reliably and safely store documents of all kinds — whether they are WikiLeaks cables or personal Twitter accounts — in such a way that they are free from both corporate and government intervention, an entity that is “part news organization, university, library and foundation.” Winer said in a blog post that he has been discussing this idea with Brewster Kahle, the founder of Archive.org, which has been building a public archive of the web for years as well as an Open Library of e-books.

When WikiLeaks was first removed from Amazon, and then had its DNS listing deleted by EveryDNS (ironically, it has since gotten support from Canadian provider EasyDNS, which many mistakenly assumed was its original host), we raised the idea of a “stateless, independent data haven” that could host the documents, something WikiLeaks has been trying to create in Iceland. Luckily for Assange, his organization has secure hosting from a Swedish company whose servers are located deep inside a mountain — and says it has no plans to stop providing service to the organization — as well as the country’s Pirate Party and other supporters.

But what about those who don’t have the kind of resources and support that WikiLeaks does? They are at the mercy of Amazon and other hosting companies — and while Google has refused requests to pull down information in the past, citing free speech, it could just as easily change its mind at some point down the road. Winer’s proposal may never get off the ground, but it is a worthwhile effort nonetheless.

Top Twitter Trend for 2010: No, It Wasn’t Justin Bieber

The year isn’t quite over yet, but Twitter has already come out with the top trending topics for 2010, and surprisingly enough Justin Bieber — the guy who is so popular that Twitter had to modify the way it calculates trending topics — did not take the top slot. That went to the Gulf oil spill. Soccer and movies were also top discussion topics, relegating Mr. Bieber to the number eight spot on Twitter’s list (although he did get number one on the people-related trend list). The numbers came from Twitter’s analysis of more than 25 billion tweets sent during the year.

Trending topics have been a somewhat controversial issue for Twitter over the past week or so, with a number of users accusing the company of censoring its trends to keep WikiLeaks from being a top discussion topic. Twitter eventually posted an explanation of how it arrives at the top trends, noting that the feature is designed to show topics that are being discussed more than they have been previously — in other words, if Bieber discussion is hot and heavy for days at a time, then that becomes the benchmark and it will not become a trending topic until it goes above that level.
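To make the mechanics concrete, here is a toy sketch of that kind of velocity-based trend detection. It is my own illustration of the principle Twitter describes, not the company’s actual algorithm:

```python
from collections import Counter

def trending(current, baseline, min_mentions=100):
    """Rank topics by how far current volume exceeds each topic's own
    historical baseline, rather than by raw volume alone."""
    scores = {}
    for topic, count in current.items():
        if count < min_mentions:
            continue  # ignore noise from barely-mentioned topics
        scores[topic] = count / max(baseline.get(topic, 0), 1)
    return sorted(scores, key=scores.get, reverse=True)

# Bieber has enormous raw volume, but an equally enormous baseline,
# so the oil spill's sudden spike is what actually trends.
current = Counter({"bieber": 50000, "oilspill": 20000})
baseline = Counter({"bieber": 45000, "oilspill": 500})
print(trending(current, baseline))  # ['oilspill', 'bieber']
```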

(Please read the rest of this post at GigaOM here).

Now That We Have the Web, Do We Need Associated Press?

According to media analyst Clay Shirky, author of Here Comes Everybody, the list of things that the Internet has killed — or is in the process of killing — includes media syndication of the kind that the Associated Press and other newswires are built on. In a look at what 2011 will bring for media, written for the Nieman Journalism Lab, Shirky says this process, which is “a key part of the economic structure of the news business,” is next in line for widespread disruption.

In fact, as Shirky himself admits, the kind of distribution a newswire engages in has been in decline for some time now. Newspapers still push content to the Associated Press, hoping to get the benefit of the syndication it offers, but the only ones getting much benefit are the tiny newspapers and websites that rely on the wire because they can’t produce enough content by themselves. While the web and RSS and other digital syndication models are not perfect, the need for a combination one-stop content shop and Big Brother-style copyright cop is dwindling. Says Shirky:

Put simply, syndication makes little sense in a world with URLs. When news outlets were segmented by geography, having live human beings sitting around in ten thousand separate markets deciding which stories to pull off the wire was a service. Now it’s just a cost.

Even the newswire itself realizes this, of course, and it has been trying desperately for the past year or two to find some way of shoring up the crumbling walls of its former gatekeeper status. It has railed against Google News and threatened to file claims against everyone from the web giant to individual bloggers because of the use of even tiny excerpts of its content, but still its media castle continues to erode.

As Shirky notes in his piece, the AP has also been talking for some time now about changing the nature of its relationship with member papers, and keeping some of its content to itself — requiring members to link to that content on the AP website, rather than running it on their own sites. The wire service, which was originally formed to distribute content produced by its members, seems to want to become a destination, now that the Internet allows anyone to distribute content far and wide without the AP’s help.

One interesting sub-plot is that Google is working on developing better attribution for content that appears in Google News, according to a recent blog post entitled “Giving credit where credit is due.” The idea is that publishers will tag their content with special tags so that the search engine can recognize who originally created a story — and presumably use this as a way of determining which of those 45 carbon-copy versions of a story it should highlight in Google News. Shirky is right that this could improve things for users, but make things substantially worse for newspapers and wire services:

Giving credit where credit is due will reward original work, whether scoops, hot news, or unique analysis or perspective. This will be great for readers. It may not, however, be so great for newspapers, or at least not for their revenues, because most of what shows up in a newspaper isn’t original or unique. It’s the first four grafs of something ripped off the wire and lightly re-written, a process repeated countless times a day with no new value being added to the story.
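For reference, the mechanism Google described is a pair of metadata tags, syndication-source and original-source, that publishers add to an article’s HTML. A minimal example might look like this (the URLs are placeholders):

```html
<!-- Placed in the <head> of a syndicated copy of a story. -->
<!-- Points to the preferred version of this syndicated article: -->
<meta name="syndication-source" content="http://example.com/wire_story.html">
<!-- Credits the outlet that broke the story in the first place: -->
<meta name="original-source" content="http://example.com/original_scoop.html">
```

A wire story republished by a member paper would carry syndication-source pointing at the preferred copy, which is precisely the disambiguation Shirky is talking about.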

The AP isn’t completely dead yet, mind you. The service has its own news staff, who generate their own stories, just as Reuters and Bloomberg and other wire services do. Google’s pending change to attribution could actually help the AP when it comes to these internally produced stories — but it could also do substantial damage to the service at the same time, by shifting the spotlight to the member papers that create the original stories the AP would traditionally get credit for. In a world where syndication is available to anyone with an Internet connection, what is the AP selling?