The Ingrams Christmas Letter for 2009

We started the year with a great fondue dinner and some pool in Buckhorn at the home of our friends Marc and Kris, and some skating and shinny on the pond. We went to Ottawa for our annual Winterlude visit and did some skating on the canal and had poutine and Beaver Tails, as usual. Then we went south to visit Becky’s mom and dad in Florida and did some playing in the waves and lying around soaking up the sun on the white beach at Siesta Key and some shuffleboard back at the Bay Indies park where Becky’s parents have a place. And we headed off with Becky’s family to Busch Gardens for some rollercoaster riding and other forms of assorted merriment.

Meaghan went on a school trip and saw the statue of the lady holding the fire, and had a birthday (something she almost always does once a year). Becky and I went out west because I was invited to be part of a panel at an arts conference at the Banff Springs Hotel. We stayed in a hotel in town that had a spa in the basement, and went for a run down by the river, and had a delicious dinner in a restaurant just down the hill from the Banff Springs that I think used to be the clubhouse for the golf course. There were the usual summer hijinks at the Farm weekend, as we call them, which included the next generation this time — Zoe made a new friend named Fox and we took the kids for a swim in the old quarry.


Online collaboration tools like Mendeley are growing

The idea that the Internet might be used for scientific collaboration shouldn’t come as much of a surprise, since the Web’s predecessor was originally created as a way to connect researchers at different institutions so they could solve problems together. That said, collaboration has accelerated over the past several years, thanks in part to the increasing popularity of “social media” or Web 2.0 tools, which have collectively lowered the barriers to online interaction.

A number of social networks and services devoted specifically to scientific research have sprung up and are growing quickly, including one called Mendeley. An online collaboration tool, it lets scientists and researchers upload research papers; the software combs through each paper for bibliographic data (author, title and so on), which is then matched against other research already in the database.

“You can just drag and drop your collection of PDFs into the software and it’ll automatically extract all the bibliographic data – all of the stuff that you’d usually have to type in manually,” co-founder Victor Henning told the BBC. “What Mendeley is designed to do is give you recommendations which complement your existing library.”
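
Mendeley hasn’t published the details of its extraction pipeline, but the basic step it automates (pulling embedded author and title metadata out of a pile of PDFs) can be sketched in a few lines of Python. This is only an illustration of the idea, not Mendeley’s code; the pypdf library and the folder name are assumptions chosen for the example.

```python
# A minimal sketch of the kind of bibliographic extraction Mendeley automates:
# read every PDF in a folder and pull out any embedded title and author fields.
# Illustration only; this is not Mendeley's pipeline, the folder name is a
# placeholder, and pypdf is a third-party library (pip install pypdf).
from pathlib import Path

from pypdf import PdfReader


def extract_bibliographic_data(folder: str) -> list[dict]:
    """Return rough bibliographic records for every PDF found under `folder`."""
    records = []
    for pdf_path in Path(folder).glob("*.pdf"):
        metadata = PdfReader(pdf_path).metadata or {}
        records.append({
            "file": pdf_path.name,
            "title": metadata.get("/Title", "unknown"),
            "author": metadata.get("/Author", "unknown"),
        })
    return records


if __name__ == "__main__":
    for record in extract_bibliographic_data("my_papers"):  # placeholder folder
        print(record)
```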

The software has become popular with some scientists at highly ranked research institutions such as Stanford, Harvard and Cambridge. Henning says the service has about 70,000 users and is growing at a rate of 40 per cent every month.

Many scientists from different disciplines have also adopted the “open source” model favoured by the Linux free software movement and supporters of Wikipedia, the open-source encyclopedia. The Polymath Project, for example, uses blogs and wikis to allow people to collaborate on solving complex mathematical problems.

In less than two months, Polymath participants “had worked out an elementary proof, and a manuscript describing the proof is currently being written,” Walter Jessen, a bioinformatician and cancer biologist at Cincinnati Children’s Hospital, told LinuxInsider. “The project demonstrated that many people could work together to solve difficult mathematical problems.”

Another open-source science project is Bizarro’s Bioinformatics Organization, which started in 1998 and uses wiki software to let researchers post models, questions, experiments and discoveries related to biology and informatics. Scientists were “looking for a central location for their open source projects,” founder Jeff Bizarro told LinuxInsider. Today, the organization has 27,000 members from all around the world.

If Bizarro’s organization is like Facebook or Wikipedia, a collaborative network called ResearchGate has more in common with LinkedIn, the corporate social network. The service lets scientists connect research done by others to their own work, in order to spot patterns or relationships worth pursuing, and it also lets them create profiles and find other researchers in similar or related disciplines.

ResearchGate, which has 180,000 members, says it wants to create something called “Science 2.0” using social media tools. In this environment, “communication between scientists will accelerate the distribution of new knowledge. Without anonymous review processes, the concept of open-access journals will assure research quality. Science is collaboration, so scientific social networks will facilitate and improve the way scientists collaborate.”

Some scientists are using even newer tools to collaborate — including Google Wave, the new service launched by the search giant that some describe as a combination of email, instant messaging and a wiki.

“Google Wave offers two specific things,” Cameron Neylon, senior scientist at Britain’s Science and Technology Facilities Council, told the BBC. “What it looks like is this cross of e-mail and instant-messaging, which is great fun. Where it really wins for science is that actually these documents or ‘Waves’ can be made automated so we can connect up documents and ideas with each other.” The power lies in allowing scientists to share a range of objects, he says, from pictures and text to raw data.

Will these new social tools help produce any penicillin or DNA-type breakthroughs? Scientists and researchers who use them say it’s just a matter of time.

FTC rules require disclosure of promotional relationships

Not that long ago, blogs were a mysterious animal that most reputable companies shied away from, an untrustworthy medium populated by cranks in their pyjamas. But at some point over the past year or two, blogs became mainstream, and companies like Microsoft and Hewlett-Packard and even Wal-Mart started turning to them for some good, old-fashioned “word of mouth” advertising. Reaching out to bloggers with offers of gifts seemed like an easy way to do this, and many bloggers were eager to accept — but not all of them disclosed their relationships with the company whose products they had received.

Disclosing such relationships is now mandatory, the Federal Trade Commission said in newly released rules governing advertising and promotional material on the Internet. The new amendments are the first updates to the FTC’s guide for advertisers since 1980. According to the new rules, a post by a blogger who is paid cash or receives a gift is considered an endorsement, and the relationship must be disclosed. The FTC says messages posted on Twitter are covered as well.

Richard Cleland, a staff attorney at the FTC’s Bureau of Consumer Protection, told a Fortune blog: “We’re required to update our rules periodically to ensure that they address relevant issues in the marketplace. Social media has become a relevant marketing force, so we started looking at it.” The FTC official said that the commission was concerned about so-called “pay-per-post” websites, where bloggers receive cash or in-kind gifts in return for positive blog reviews.

“The issue here,” he says, “is whether, if the consumer knew of the relationship between the advertisers and the blogger, would it affect the credibility of the blogger’s statements?” Cleland told the Wall Street Journal that the FTC looks at it “from the perspective of the consumer and the principle being that a consumer has the right to know when they’re being pitched a product. It doesn’t matter whether it’s an email or Twitter or someone standing on a street corner.”

One of the companies that likely gave rise to the FTC’s concerns is Izea — formerly known as PayPerPost. The startup’s business involves helping companies with products or services they want to market find bloggers who are willing to write about them in return for payment. Although Izea didn’t initially require bloggers to disclose these payments anywhere, it changed its rules after much criticism from other bloggers such as TechCrunch. Izea founder Ted Murphy says that his company is working with the FTC on standards for disclosure regarding both paid blog posts and Twitter messages.

Not everyone is enamored of the FTC’s new moves, however. Jeff Jarvis, who is both a blogger and a journalism professor at the City University of New York, says that while he despises companies like Izea that pay bloggers for their opinions, he sees the guides as “dreadful overreach that will drag a lot of innocent people into a bureaucratic dragnet.” Among other things, Jarvis says that he is concerned that the rules only apply to bloggers and Twitter, but other forms of publishing are considered able to regulate themselves. Dan Gillmor, director of the Knight Center for Digital Media Entrepreneurship, said on Twitter that the new rules “are big-brotherish in extreme, unworkable and downright dangerous.”

As a number of observers have noted, however – and as Richard Cleland himself admitted in several interviews about the new rules – the FTC doesn’t have a huge staff available for enforcement, and certainly doesn’t have the kind of staffing it would need to police the entire blogosphere or Twittersphere. As a result, it will likely rely on consumer complaints to bring cases forward for investigation.

Unloading Skype gets complicated for eBay

When eBay bought Skype for $2.6-billion in 2005, it seemed like a marriage made in heaven, at least for Janus Friis and Niklas Zennstrom, co-founders of the free phone-calling app. But there was trouble in paradise from the beginning, trouble that included the Skype duo’s dodgy track record (they co-founded Kazaa, a quasi-legal music sharing network) and a conspicuous lack of synergies between the two companies. Over the past year or so, however, the relationship has gone from merely troubled to outright toxic.

Friis and Zennstrom left the company in 2007, right around the time that eBay took an almost $1-billion writedown on the price it paid for Skype. Although there were no concrete reports at the time of any trouble between the two companies – the Skype founders had by this time launched a new Web-based TV venture called Joost, which has also run into difficulty – the peace and quiet didn’t last long. In March, a company owned by Friis and Zennstrom stopped licensing its peer-to-peer technology (which powers Skype) to eBay. The auction company sued, and the two have been locked in a court battle ever since.

That battle, not surprisingly, has complicated the sale of Skype, which eBay has been trying to unload for some time now. Several months ago, it announced that it had sold 65 per cent of the company for $1.9-billion to a consortium of investors, including several U.S.-based private equity firms – Silver Lake, Index Ventures and Andreessen Horowitz – as well as the Canada Pension Plan Investment Board. All of the prospective buyers have been named in the Skype suit, which according to a statement of claim is accumulating damages at the rate of $75-million every day.

In an interesting twist, the lawsuit also mentions Mike Volpi, a general partner at Index Ventures who was previously the chief executive officer of Joost and until recently was a member of its board of directors. He has since been removed from the board, and the company said last week that it is conducting an investigation into his actions while he was CEO.

With the launch of the Skype founders’ lawsuit, more than just the sale of Skype is in jeopardy. eBay admitted in a securities filing that the claims could leave it unable to operate the free phone-calling service, although it has said in the past that it is working on its own peer-to-peer networking system to replace the one that Skype’s founders still own the rights to. Even if it does create its own version of Skype’s peer-to-peer technology, however, it’s possible that Friis and Zennstrom could argue that it is an unlicensed copy of their software and keep eBay tied up in court for months, if not years.

The irony, of course, is that the money the Skype co-founders are using to fight this epic legal battle – over technology that eBay arguably should have locked up in the first place – is coming from the bags of loot they got from eBay four years ago.

Facebook and Google get their hands slapped

Should Facebook and Google users in the U.S. thank the Canadian government for protecting their privacy? A pretty good case could be made that they should. Both Internet giants have had their hands slapped by Canada’s Privacy Commissioner and have had to alter their policies as a result (although Facebook is still weighing its full response to the Commissioner’s findings), and those changes have wound up protecting the privacy of U.S. users as well.

In the case of Facebook, the Privacy Commissioner’s office filed notice last week that the social-network provider’s protection of personal data didn’t meet federal standards on a number of points — 22 of them, to be exact. The Commissioner’s office advised Facebook to alter its practices to bring them into compliance, or possibly face court proceedings that would compel the company to abide by the rules.

One of the aspects of Facebook’s privacy protections that caught the Commissioner’s eye was the amount of personal data that is transmitted to or shared with the creators of third-party applications that Facebook users often agree to add to their profiles. Under the company’s rules, these third-party apps don’t have to provide much detail about what they plan to do with your personal data, and they collect a lot of data that isn’t really necessary, according to Privacy Commissioner Jennifer Stoddart.

This is something that many users (and programmers) have noted, but in Canada that kind of personal data collection and retention isn’t just an irritation or a curiosity: it’s potentially a breach of Canadian law. The law in question is the federal Personal Information Protection and Electronic Documents Act (or PIPEDA), which sets strict limits on what information can be collected, the amount of disclosure required, the purposes to which it can be put, and how long it can legally be retained. It is different in many key respects from U.S. privacy laws.

The Facebook investigation raised what the Commissioner’s office called “significant concerns around the sharing of users’ personal information with third-party developers creating Facebook applications such as games and quizzes.” The agency said that the company “lacks adequate safeguards to effectively restrict these outside developers from accessing profile information.”

The Commissioner’s report recommended a number of changes, including “technological measures to ensure that developers can only access the user information actually required to run a specific application” as well as taking steps to “prevent the disclosure of personal information of any of the user’s friends who are not themselves signing up for an application.” The investigation also found that Facebook has a policy of indefinitely keeping the personal information of people who have deactivated their accounts.
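
The “technological measures” the report calls for boil down to data minimization: an application should receive only the profile fields it has declared it needs. Below is a minimal sketch of that principle, with entirely hypothetical field names and permissions; it is not Facebook’s actual platform code.

```python
# Illustrative sketch of the data-minimization idea in the Commissioner's
# recommendation: pass a third-party app only the profile fields it has
# declared it needs, rather than the full profile. The field names and the
# declared permissions are hypothetical; this is not Facebook's platform code.

FULL_PROFILE = {
    "name": "Example User",
    "birthday": "1984-02-04",
    "hometown": "Toronto",
    "friend_list": ["alice", "bob"],
    "religious_views": "undisclosed",
}

# What a hypothetical quiz application says it needs in order to run.
DECLARED_FIELDS = {"name", "hometown"}


def profile_for_app(profile: dict, declared_fields: set) -> dict:
    """Return only the profile fields the application actually requires."""
    return {key: value for key, value in profile.items() if key in declared_fields}


print(profile_for_app(FULL_PROFILE, DECLARED_FIELDS))
# {'name': 'Example User', 'hometown': 'Toronto'}
```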

This is the second time that Canada has stepped in to warn a major Internet player that it was falling short of Canadian privacy rules. Last year, Google came under fire from the Commissioner’s office over its Street View service, which hasn’t even launched in Canada yet. After reports emerged that cars belonging to Google had been seen filming in Toronto and other major cities, the federal agency released a statement calling on the company to change its methods to better protect people’s privacy.

In particular, the Commissioner said that revealing the faces of specific individuals without their consent was a breach of Canadian privacy laws, and so was revealing personal information such as car license plates. In the U.S., taking a photograph of someone in a public place without their consent is legal, but in Canada such photos are considered an invasion of privacy, unless they are taken for artistic or journalistic purposes, such as reporting on a news event.

Google responded by using automated technology to blur the faces of people in its Street View photo montages – a feature it is also rolling out in the U.S. and other jurisdictions as well.

Microsoft files a click-fraud lawsuit

Just as banks and credit-card companies routinely file lawsuits and press criminal charges against those who counterfeit money or use unauthorized credit cards, Microsoft has filed a lawsuit against three Canadians it accuses of the 21st-century equivalent: namely, “click fraud.” In the same way that cash — followed by checks and credit cards — was the currency that made capitalism function, the currency that drives value for giant Web businesses like Microsoft and Google is clicks.

According to Microsoft, its first-ever click-fraud lawsuit was filed against a family from Vancouver — Eric Lam and Gordon Lam (who are believed to be brothers) and Melanie Suen (believed to be their mother) — alleging that they engaged in repeated click-fraud attacks against online ads related to auto insurance and the multiplayer online game World of Warcraft. The company is asking for an injunction forcing the three to stop the behaviour, and is also asking the court to award it more than $750,000 in damages.

The crime itself isn’t nearly as straightforward as counterfeiting money or stealing credit cards. Click fraud involves rigging online advertising through the use of software scripts or other nefarious schemes such as “click farms,” and in a case like the one against the Lams, it involves jacking up the number of clicks on a competitor’s ads so that the competitor has to pay more, since keyword-related search advertising such as that offered by Google and Microsoft is priced per click through an auction process. The more clicks, the more you pay for your ads.
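
The arithmetic behind that claim is straightforward: under pay-per-click pricing, every fraudulent click gets billed at the advertiser’s going rate, so a sustained attack inflates the victim’s spend in direct proportion. The figures below are invented purely to illustrate the math; they are not taken from the Microsoft suit.

```python
# Back-of-the-envelope illustration of why inflated clicks hurt a competitor
# under pay-per-click pricing. All figures are invented for this example and
# are not taken from the Microsoft lawsuit.

cost_per_click = 8.50              # assumed bid for an auto-insurance keyword, in dollars
legitimate_clicks_per_day = 400    # assumed normal traffic
fraudulent_clicks_per_day = 1200   # assumed clicks generated by a script or click farm

honest_spend = cost_per_click * legitimate_clicks_per_day
attacked_spend = cost_per_click * (legitimate_clicks_per_day + fraudulent_clicks_per_day)

print(f"Normal daily spend:       ${honest_spend:,.2f}")
print(f"Daily spend under attack: ${attacked_spend:,.2f}")
print(f"Extra cost over 30 days:  ${(attacked_spend - honest_spend) * 30:,.2f}")
```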

Microsoft says in its suit that it had to pay some of its advertisers $1.5-million in ad credits to compensate them for the actions of the trio. Ironically, before it identified the Lams as the source of the click fraud, Microsoft says it actually gave the family an advertising credit to compensate them for the click fraud that was occurring.

“By engaging in a widespread scheme that generated invalid clicks on links to online ads that were displayed in response to search requests on Microsoft’s network, defendants disrupted the advertising campaigns of their competitors, obtained increased user traffic for their own ads at a much lower cost than they could have otherwise, and caused substantial damage to Microsoft,” the lawsuit alleges.

The software giant describes how it spent more than a year tracking the clicks on certain types of search-related keywords in markets such as auto insurance, and noticed that there were “a large number of exact match-type keywords being searched, and within a short period of time, the top sponsored site results were being clicked, which indicated that automated or ‘click farm-generated’ click fraud was occurring on the Microsoft network.” One of the difficult things about such a case is proving that the clicks were actually fraudulent, rather than just a coincidence.
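
Microsoft’s actual detection methods aren’t public, but a common first-pass heuristic for the pattern it describes (a burst of clicks on the top sponsored results within a short period) is a simple sliding-window count. The thresholds and sample timestamps below are assumptions for illustration only.

```python
# A simplified first-pass heuristic for spotting the pattern described above:
# count clicks on a keyword's top sponsored result inside a sliding time window
# and flag anything far above a plausible rate. Thresholds and sample data are
# assumptions for illustration; Microsoft's real detection pipeline is not public.
from collections import deque


def flag_click_bursts(click_times, window_seconds=60, max_clicks_per_window=20):
    """Yield timestamps at which the rolling click count exceeds the threshold."""
    window = deque()
    for t in sorted(click_times):
        window.append(t)
        while t - window[0] > window_seconds:
            window.popleft()
        if len(window) > max_clicks_per_window:
            yield t


# Hypothetical click timestamps (in seconds) on the top "auto insurance" ad:
# steady background traffic plus a 30-second burst starting at t=300.
clicks = list(range(0, 600, 30)) + list(range(300, 330))
print(list(flag_click_bursts(clicks)))
```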

Both Microsoft and Google (which has a dominant share of the market for online keyword-related advertising) have been on the other side of a click-fraud legal case in the past. In 2006, Google had to pay $90-million to settle a class-action suit launched by advertisers who claimed that they paid too much for their online advertising, while Microsoft has been sued for something very similar.

The CIA’s venture business is going strong

Dozens of venture capital firms have come and gone since the great Internet bubble of the late 1990s, but one relatively little-known firm has stuck around and continued to invest heavily in some of the Web’s leading technologies. It’s probably a lot easier to do this if you only have one investor to answer to, and In-Q-Tel has that in spades: its sole controlling shareholder is the U.S. government, in the form of the Central Intelligence Agency.

Created in 1999, In-Q-Tel is the venture capital unit of the federal spy agency, and was set up as a way for the CIA to stay in touch with “cutting-edge technology” that might be useful to its purpose as the U.S. government’s intelligence arm. According to the CIA site, the idea for such a venture came from Dr. Ruth David, a former CIA Deputy Director for Science and Technology. The agency’s rationale was that “as an information-based agency, the CIA must be at the cutting edge of information technology in order to maintain its competitive edge and provide its customers with intelligence that is both timely and relevant.”

As the CIA site notes, the agency has been involved in many leading technologies over the past few decades, including the U-2 and SR-71 reconnaissance airplanes and the Corona surveillance satellites — and, of course, the Internet itself was a spinoff of a research project developed by the Defense Advanced Research Projects Agency, or DARPA. But the agency apparently realized that the pace of technology development was accelerating beyond its ability to keep up, and that investing in startups might be a way of keeping track of those new technologies, and possibly benefiting from them as well.

As the CIA says itself: “One of the great leaps of faith the Agency took in this venture was to recognize, early on, that private sector businessmen were better equipped than it was to design the Corporation and create its work program.”

One of In-Q-Tel’s investments from several years ago has paid off both strategically and financially: the company invested in a small geo-targeting and satellite-mapping firm called Keyhole, which was bought by Google in 2004 and whose product was renamed Google Earth. Although Google did not say how much it paid for the company, In-Q-Tel sold more than 5,000 shares of Google following the deal and pocketed about $2.2-million. As of 2006, In-Q-Tel had reportedly invested more than $130-million in about 90 businesses, Keyhole among them.

The company invests in several broad areas, including: application software and analytics; bio-, nano- and chemical technologies; communications and infrastructure; digital identity and security; and embedded systems and power. Its investments include companies such as Asankya — which is working on a way of speeding up Internet traffic using a proprietary technology — and Decru, which makes highly secure data-storage products. The company has also invested in Attensity, which does advanced text analysis on large quantities of data, something the CIA no doubt does a fair bit of, and a company called Stratify that specializes in analyzing “unstructured data.”

All four of those investments have potential implications for the future of the Web, since both Asankya and Decru’s technologies are used in “cloud computing” infrastructure, and Attensity and Stratify’s products are useful for taking the existing Web and adding layers of meaning or understanding on top of it — along the lines of what Web creator Tim Berners-Lee has called the “semantic Web.”

Although not every investment the company has made has paid off, even formerly skeptical Silicon Valley investment watchers have grudgingly admitted that In-Q-Tel has done pretty well for itself, and one of its founders told the Washington Post as far back as 2005 that while he still saw the venture firm as an “unproved experiment,” it was already “far more successful than I ever dreamed it would be.”

Islamic clerics consider issuing a fatwa against Facebook

As one of the top — if not *the* top — social-networking sites, Facebook tends to draw a disproportionate amount of criticism from those concerned about the effect the site has on young minds or the welfare of society as a whole. And no one is more concerned about those risks than the various religious groups who routinely try to ban, block or otherwise crack down on the scourge of modern social networking.

The latest to make this kind of move — or at least a threat in that direction — was a group of Islamic clerics meeting in Indonesia. They said they were contemplating a “fatwa,” or religious edict, on Facebook that would require observant Muslims to practice what amounts to “safe social networking” or face the disapproval of their imams.

In a nutshell, a spokesman for the group said that Facebook and similar sites could be used for appropriate purposes such as education about the faith or keeping in touch with one’s family and friends, but noted that they could also present a temptation.

“The clerics think it is necessary to set an edict on virtual networking, because this online relationship could lead to lust, which is forbidden in Islam,” said Nabil Haroen, a spokesman for the Lirboyo Islamic boarding school, which was hosting the event. The head of the council of imams said that the growing number of Facebook users in Indonesia was a controversial subject among Muslim leaders and that he favored a ban because of possible sexual content.

“People using Facebook can be driven to engage in distasteful, pornographic chatting,” said Amidan, who — like most Indonesians — uses only one name. Another spokesman for the clerics said that “spreading ill words about others, gossiping and other things that go against religious teaching on social networking sites in the virtual world are forbidden according to Islamic law.” Despite these protests, one Muslim group on Facebook has 48,000 members, while a Muslim fan page has 18,000 fans.

The Indonesian clerics’ move would not be the first time the country has stepped in to block social-networking sites for religious reasons. Last year, the country ordered its largest ISPs to block YouTube and MySpace because both carried Fitna, an anti-Islam film released by Dutch politician Geert Wilders, and the government said that seeing it might “disturb relations between faiths.”

And Muslims aren’t the only religious groups to be concerned about the Internet and the intrusion of social networking into the lives of the faithful. Although Pope Benedict and other senior members of the Catholic clergy have made positive statements about the benefits of the Internet — and even at one point created a Catholic version of Facebook called Xt3.com to appeal to young people — not everyone is quite so sanguine about these new services.

In a recent address to his flock, the Bishop of Paisley, the Rt. Rev. Philip Tartaglia, warned the faithful that “In dialogue with others we need to be wary of the inane chatter that can go on in the digital world which does nothing to promote growth in understanding and tolerance.” He also raised concerns about who young people might contact through the networks, saying: “What parent has not wondered what their child is doing on the internet? What material are they accessing? Who are they talking to in social networking sites?”

Jewish groups have yet to raise substantial concerns about social networking and its effects from a religious point of view — although there are Orthodox adherents who believe Facebook and similar sites can lure the faithful away from the path of righteousness — but they have become concerned about the use of Facebook as a tool to spread hatred of their faith, including a number of groups that deny the Holocaust. So far, Facebook has said that it believes the groups fall under the category of freedom of speech, and it has resisted efforts to close them.

Facebook is making money on apps

By now, everyone seems to have become pretty comfortable with the idea that Facebook is a revenue-generating enterprise. Although originally there was a lot of skepticism about whether the social network would be able to produce much revenue, advertising deals with companies like Microsoft (which invested $240-million for 1.6 per cent of the company in 2007) have established that Facebook is definitely producing plenty — as much as $500-million in revenue this year.

What’s even more fascinating, however, is the amount of money that is being generated by the Facebook “ecosystem” — that is, the considerable number of applications, tools and games that are built using Facebook’s F8 platform. According to some estimates (and they are just estimates), app developers as a whole could bring in almost as much or possibly even more revenue this year than Facebook itself.

As Eric Eldon points out, coming up with an overall estimate of what Facebook app developers are making is difficult — if not impossible. Many developers don’t want to say publicly what they bring in, for competitive reasons. But some of the estimates that have appeared, particularly about the largest app developers, are accepted by most observers as being fairly accurate. Zynga, for example, which makes the Texas Hold ‘Em app, is believed to have a “run rate” that would produce revenue this year of about $100-million.

That’s a single developer (although it has multiple apps in its stable). About half of Zynga’s revenue is estimated to come from its Facebook apps, and the other half from its apps on MySpace. Playfish is expected to make about $30-million this year. In all, Eldon estimates that the top handful of developers make about $150-million, followed by several other tiers of smaller developers who collectively make another $150-million or so. Several sources said total developer revenue could hit $500-million by the end of this year.

Online gaming industry players told Advertising Age magazine the same thing, with many of them estimating developer revenue could hit $500-million or more. “It wouldn’t surprise me if apps on Facebook generate more revenue this year than Facebook,” LivingSocial CEO Tim O’Shaughnessy told the magazine. LivingSocial currently has the most popular Facebook app, “Pick Your Five.” Mark Pincus of Zynga recently wrote up some of his tips on how to make money through Facebook apps for the Facebook developers blog.

One interesting thing to note is that Zynga and several other developers make money in a variety of ways — not just through banner advertising, as many online media outlets do, but also through CPA (cost per action) payments, as well as the sale of virtual goods. In Zynga’s case, the company makes about a third of its revenue from the sale of poker chips for its Texas Hold ‘Em games. The virtual goods market is one that is already well established in other countries: in China, for example, gaming company Tencent made $1-billion in revenue from the sale of goods and services in its virtual world, including clothing for avatars.

Is there a lesson here for other companies such as Twitter, which is searching for revenue-generating opportunities for its fast-growing service? Clearly, there is. If you can build a social network that attracts the kind of devoted users that Facebook has, and in large enough numbers, you can generate substantial amounts of revenue both for yourself and your partners. And if Facebook is developing an integrated payment system (as it is rumoured to be), the revenue potential could be set to explode.

Europe protests the US control of ICANN

The popular perception of the Internet is that it is inherently global in nature, an international network that is equally open and accessible regardless of what country you happen to be in or what language you speak. And for the most part (with the exception of some totalitarian states such as China) that is the case. However, the keys to this particular kingdom belong to one country: namely, the U.S., through its control of the Internet Corporation for Assigned Names and Numbers.

Some critics don’t like that state of affairs and are trying to change it — including Viviane Reding, the European Union’s commissioner for information society and media. The EU official said this week that ICANN should not be overseen by the U.S. government (the agency operates under an agreement with the Commerce Department), but should be run democratically by a group of states. While the U.S. has done a good job of managing the process, she said, “in the long run, it is not defendable that the government department of only one country has oversight of an Internet function which is used by hundreds of millions of people in countries all over the world.”

ICANN is the non-profit entity that controls the Internet’s domain name structure — including the creation of new “top-level” domains such as .xxx and .mobi — and is in charge of handing out domain names and IP addresses. It manages this process through a related body known as IANA (the Internet Assigned Numbers Authority), which maintains the root zone data served by the 13 root nameserver systems that sit at the top of the hierarchy for translating domain names into IP addresses. Until relatively recently, these root servers were all located in the United States, in most cases inside government offices.
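
The end result of that plumbing is visible from any machine: ask the resolver to translate a name into addresses. Here is a quick sketch using only Python’s standard socket module; the hostname is just an example.

```python
# The translation that ICANN and IANA's root zone ultimately anchor: ask the
# local resolver to turn a domain name into IP addresses. Standard library only;
# the hostname is just an example.
import socket


def resolve(hostname: str) -> list[str]:
    """Return the IPv4/IPv6 addresses the DNS hierarchy maps this name to."""
    results = socket.getaddrinfo(hostname, None)
    return sorted({entry[4][0] for entry in results})


print(resolve("icann.org"))
```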

The EU commissioner said in her recent address that she hopes President Obama continues with the Clinton administration’s plan to privatize ICANN, and she suggested that a perfect opportunity for such a move is coming in September, when the agency’s agreement with the U.S. government expires. Among other things, Reding said that she would like ICANN to be regulated by an international tribunal (any legal disputes involving the agency are currently handled by the courts in California, since it is located there).

There have been discussions in the past about the United Nations taking over control of ICANN, and other EU critics have requested that the European Union investigate the U.S. agency for restraint of trade because of its control of the plumbing of the Internet. Consumer advocate Ralph Nader has also criticized ICANN.

Will the September expiry of the U.S. government’s deal with ICANN be the opportunity that Viviane Reding and others are hoping for? Will the international community — either the EU itself or the UN — take over the reins of the Internet? There certainly seems to be increasing interest in doing so, and the Obama administration is ideologically much closer to the earlier Clinton administration, which began the process of releasing ICANN from government oversight. All that remains is for the last ties to be cut, and for the agency to become as international as the Internet whose plumbing it manages.

It’s all about dematerialization

If climate-change experts are correct in thinking that we could be close to a “tipping point” that might accelerate the already substantial climate change we’ve seen to date, the pressure to find solutions is sure to intensify. One of the potential mitigating factors is something that economists and scientists like to call “dematerialization.” Simply put, this is the process by which products and services with greater environmental impact are gradually replaced by those with less impact. As Wikipedia defines it, the term refers to the absolute or relative reduction in the quantity of materials required to serve economic functions in society.

Theoretically, the IT industry should be a great contributor to this process, for a variety of reasons. For example, computers have become much smaller, cheaper and more efficient in terms of computing power per watt of energy. As this piece from Hewlett-Packard notes, the fastest computer in 1946 performed about 5,000 operations a second, weighed more than 30 tons, and consumed almost 200 kilowatts of electricity. In contrast, an off-the-shelf laptop purchased today has thousands of times the processing power, weighs just a few pounds, and consumes less than one-thousandth the electricity.
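
Put in energy-per-operation terms, that comparison is stark. The 1946 figures are the ones cited above; the modern-laptop figures in the sketch below are rough assumptions rather than measurements, so treat the result as an order-of-magnitude illustration.

```python
# Rough energy-per-operation comparison. The 1946 numbers come from the HP piece
# cited above; the modern-laptop numbers are coarse assumptions for illustration.

ops_per_second_1946 = 5_000
power_watts_1946 = 200_000               # roughly 200 kilowatts

ops_per_second_modern = 50_000_000_000   # assumed: tens of billions of simple ops per second
power_watts_modern = 50                  # assumed: about 50 watts under load

joules_per_op_1946 = power_watts_1946 / ops_per_second_1946
joules_per_op_modern = power_watts_modern / ops_per_second_modern

print(f"1946:  {joules_per_op_1946:.1f} joules per operation")
print(f"Today: {joules_per_op_modern:.1e} joules per operation")
print(f"Improvement: roughly {joules_per_op_1946 / joules_per_op_modern:,.0f}x")
```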

Dematerialization can have “rebound” effects, however, as a number of researchers have noted. Because electronic devices, computers included, have become smaller and cheaper to buy and to operate, there are also a lot more of them, as this paper points out, and that abundance can encourage a consumer attitude of “replace rather than repair.” As the researchers note, while dematerialization “may be the case on a per-unit basis, the increasing number of units produced can cause an overall trend toward materialization with time.” And then there are all-new categories of devices as well: according to Apple, the lifecycle carbon footprint of an iPhone amounts to 121 pounds of CO2-equivalent greenhouse gas emissions over an expected three-year lifetime of use.

While IT services might lead to improvements in a number of ways — thanks to developments such as videoconferencing, telework, and electronic commerce — it’s difficult to draw a straight line connecting these services to an overall reduction in environmental impact, according to some scientists who have looked at the issue. While IT can improve the efficiency of existing operations and create new, more efficient ways of doing the same things, “having the means to use less materials does not mean that they will be adopted, nor does it guarantee that their adoption will actually lead to dematerialization.” As an example, despite the arrival of personal computers and other electronic services such as email, the total consumption of paper has doubled since 1950.

So while dematerialization has promise as a mitigating factor in climate change — and the contributions of the IT industry on that score are many — it is far from being a panacea, since the effects of such changes in the IT business can be more than counterbalanced by contrary activity elsewhere in the global economy.

America has a new Chief Technology Officer

So America has a new “Chief Technology Officer” (although the actual name of the position is Associate Director for Technology in the Office of Science and Technology Policy). Aneesh Chopra, formerly Virginia’s Secretary of Technology, didn’t get any resounding cheers from the hardcore geeks — in part because he isn’t from the center of the geek-o-sphere, Silicon Valley — but even having the position at all is a big step up from previous administrations, so most industry observers are fairly positive about the appointment. But will the position become simply a platform for simplistic solutions, such as a broadband-for-all policy, as some fear, or will it have broader and more far-reaching effects, as some are recommending?

Those who are hoping for more depth than a simple broadband rollout strategy will likely be pinning their hopes on the newly announced President’s Council of Advisors on Science and Technology (PCAST), a non-policy making body that President Obama named on Monday. Perhaps the most prominent name on the list is Eric Schmidt, the CEO of Google. As one observer noted, it wasn’t that long ago that the search-engine giant was seen as an outsider in Washington, with little or no ties to the government and no real influence. That has definitely changed with the arrival of Barack Obama and his technology-friendly White House.

The full PCAST roll call (the complete list is here) also includes Craig Mundie, the Chief Research and Strategy Officer at Microsoft Corporation; Shirley Ann Jackson, president of Rensselaer Polytechnic Institute and former Chair of the US Nuclear Regulatory Commission; James Gates Jr., director of the Center for String and Particle Theory at the University of Maryland; and Rosina Bierbaum, dean of the School of Natural Resources and Environment at the University of Michigan. Breadth is something that the Obama administration seems to have more or less covered, since the list includes specialists and experts in chemistry, physics, biology, geology, computer science, engineering, environmental science, astrophysics and internal medicine.

The President said that the PCAST board would be charged with advising him “about national strategies to nurture and sustain a culture of scientific innovation.” Some hope that Schmidt and Mundie will be able to sway Obama on the topic of net neutrality, something both are interested in. And those hoping for a broad mandate can also take comfort in the fact that the government has pledged to spend 3 per cent of the country’s GDP, or about $415.2-billion, on science and technology research, development and education — the largest financial commitment in America’s history (larger even than the original moon-shot space program). Among other things, Obama has proposed doubling the budgets for the National Science Foundation, the Department of Energy’s Office of Science, and the National Institute of Standards and Technology.

Could these kinds of investments be jeopardized by the shaky economy? No doubt. But the President has made it clear that investment in science and technology research and development is a priority for his government, and he arguably has an advisory group with enough intellectual heft and real-world experience to make it a reality.

Britannica tries to eat a little of Wikipedia’s lunch

After years of more or less ignoring its open-source competitor, the venerable Encyclopedia Britannica will soon be taking a page from Wikipedia’s playbook and allowing members of the public to contribute to articles and other content at Britannica.com. That’s according to Jorge Cauz, president of the 240-year-old institution, which at one time was synonymous with knowledge in many Western households and schools. The Britannica head told the Sydney Morning Herald in Australia and The Times in the UK that Britannica plans to offer the new features on its website soon.

Mr. Cauz made it clear, however, that anything submitted by users will have to be vetted by one of the encyclopedia’s staff of paid researchers before it appears either on the website or in the actual print version of the EB. “We’re not trying to be a wiki – that’s the last thing we want to be,” he told The Times. “Britannica doesn’t offer that voyeuristic benefit. Users won’t be able to write anything they want and have it published.” The changes — which are just part of the creation of a larger Britannica community portal — were first described last June, and it was made clear then that Britannica didn’t plan on letting the whole “crowd-sourcing” thing get out of hand: “We are not abdicating our responsibility as publishers or burying it under the now-fashionable ‘wisdom of the crowds,’” a blog post said at the time.

Of course, the Encyclopedia Britannica head probably knows that in many cases, users can’t just write anything they want and have it published in Wikipedia either. There are thousands of volunteer editors and administrators working on the open-source encyclopedia who check page changes for accuracy and to make sure they uphold the Wikipedia principles of fairness and a “neutral point of view.” While there are some pages that can be edited freely, where mistakes might not be noticed quickly, other pages (including the one about President George Bush) are “locked,” and can be edited only by established or Wikipedia-sanctioned editors. A well-known comparison between Britannica and Wikipedia found that the error rates of the two were roughly equivalent. Wikipedia founder Jimmy Wales has also recently proposed changes that would restrict editing even further.

It also seems pretty obvious from the Britannica president’s comments that he is a) more than a little jealous of Wikipedia’s traffic numbers (the open-source encyclopedia gets about 6 million visitors a day, while Britannica gets about 1.5 million a day) and b) irritated that Google features Wikipedia links so prominently in the search results for many common terms that people might otherwise go to Britannica.com for. “If I were to be the CEO of Google or the founders of Google I would be very [displeased] that the best search engine in the world continues to provide as a first link, Wikipedia,” Cauz told the Sydney Morning Herald. “Is this the best they can do?” He also made it clear that he sees Wikipedia as the fast-food version of knowledge, saying many people turn to it for answers, but that many people are also “happy to eat McDonalds every day.”

Britannica isn’t the only one to try to take the Wikipedia model and blend it with the authoritative voice of the expert: a project called Citizendium, which started up a little over a year ago, was created by Larry Sanger — a co-founder of Wikipedia — as an attempt to build a “crowd-sourced” encyclopedia with input from subject-matter experts rather than just anyone. Google has taken some steps in that direction as well, with a service called Knol (derived from “knowledge”), which encourages experts to create Wikipedia-style entries on specific subjects. But neither Knol nor Citizendium has gained much traction, and neither is anywhere near challenging Wikipedia for the title of “the people’s encyclopedia.” Whether Britannica’s changes can put it back in the race remains to be seen.

China cracks down on Twitter

For a brief moment 20 years ago, a lone figure blocked the path of a giant tank in Tiananmen Square, and it seemed as though Chinese dissidents might be able to shake the pillars of power in that vast country and bring some semblance of democracy to China. In the wake of the pro-democracy protests, however, the Chinese government redoubled its crackdown on dissidents and crushed any hint of dissent. As the 20th anniversary of the Tiananmen Square demonstrations approaches, Chinese authorities seem more determined than ever not to allow even a hint of unrest to appear anywhere — including the Internet.

Early this week, journalists and bloggers in China started noticing that Twitter, the chat-style social networking application, was no longer available — and that users were also having difficulty accessing other popular websites and services such as Google-owned YouTube and the Yahoo-owned photo-sharing website Flickr. Blogging platforms such as WordPress and Blogger (also owned by Google) were said to be unavailable as well. According to several reports, the Chinese authorities had also shut down message boards on thousands of websites used by college students.

Blocking and/or filtering services is nothing new for the Chinese government. Even widely used and official services such as Google, Yahoo and Microsoft’s MSN Search are filtered by the authorities to exclude any reference to the events in Tiananmen Square in 1989, and all of the major search engines have in the past supplied names of specific users to the Chinese government. As more and more official services have been blocked or filtered, the popularity of social-networking sites such as Twitter has increased. As a number of observers have noted, Twitter takes on even more significance in a country like China, not just because it allows people to exchange messages easily, but because the use of Chinese characters allows Twitter posts to include far more information than a similar English message.
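
That information-density point is easy to demonstrate: a Chinese character usually carries a whole word or morpheme, so the same sentence fits into far fewer of Twitter’s 140 characters. The sentence pair below is just an illustrative example.

```python
# Why a 140-character limit stretches further in Chinese: the same sentence
# takes far fewer characters. The sentence pair is just an illustrative example.

english = "The government has blocked access to Twitter."
chinese = "政府封锁了推特。"

print(len(english), "characters in English")   # 45
print(len(chinese), "characters in Chinese")   # 8
```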

As the Committee to Protect Journalists details in a recent piece, the Chinese government has been cracking down on virtually every form of media: newspapers published in Hong Kong that might mention the anniversary are blocked from delivery to mainland China, and radio broadcasts are interrupted by commercials if the topic comes up; even a former Communist Party leader’s memoirs are no longer available through official channels (according to the New York Times), presumably because they mention the events of June 3 and 4, 1989. Several prominent Chinese citizens associated with the demonstration were either detained or warned to leave the capital before the anniversary.

Despite the official crackdowns, however, Twitter users seemed determined to find a way around what Chinese dissidents and other critics refer to as the “Great Firewall of China.” As the blockage moved into its second day, users were exchanging tips on using so-called “anonymous proxy” software such as Hotspot Shield (which hides the IP address of a user who runs the software) and other similar technological workarounds.

The RIAA switches tactics

The Recording Industry Association of America has been waging war against copyright infringement — in the form of illegal downloads — since the file-sharing app known as Napster first appeared on the scene a decade ago. For much of that time, the RIAA’s preferred form of attack has been the lawsuit, with the record-industry body filing claims against tens of thousands of individuals for financial damages, including several high-profile cases that have targeted single mothers, war veterans, and teenagers for sharing as few as a dozen songs. Now, the RIAA has apparently decided to switch tactics, according to a recent report in the Wall Street Journal, and will no longer sue individual downloaders.

Instead, the group plans to strike deals with Internet service providers under which the ISPs will agree to contact individual file-sharers when notified by the RIAA that illegal activity is occurring. The RIAA hopes that, after a customer has ignored two warnings, the ISP will agree to shut off that customer’s access to the Internet.
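
Mechanically, the proposed arrangement is a simple tally: the ISP counts infringement notices per account and suspends access once the count reaches the “three strikes” threshold. The sketch below is entirely hypothetical (no ISP’s actual system is described in the article); it only illustrates the flow.

```python
# A minimal sketch of the notice-and-cut-off flow described above: tally
# infringement notices per account and suspend access after a set number of
# strikes. Entirely hypothetical; no ISP's actual system is described here,
# and the threshold simply mirrors the "three strikes" idea.
from collections import defaultdict

STRIKES_BEFORE_CUTOFF = 3

notice_counts = defaultdict(int)   # account id -> notices received so far
suspended_accounts = set()


def handle_notice(account_id: str) -> str:
    """Record one infringement notice for an account and return the resulting action."""
    if account_id in suspended_accounts:
        return "already suspended"
    notice_counts[account_id] += 1
    if notice_counts[account_id] >= STRIKES_BEFORE_CUTOFF:
        suspended_accounts.add(account_id)
        return "access suspended"
    return f"warning {notice_counts[account_id]} sent"


for _ in range(3):
    print(handle_notice("customer-123"))
# warning 1 sent / warning 2 sent / access suspended
```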

What this means is that the record-industry group is trading a head-on assault for a back-room negotiation with Internet providers, in the hope that it can do an end-run around the problems its previous strategy produced. Not only was the lawsuit approach a magnet for negative publicity, but it also ran into repeated legal roadblocks. In one of the latest, for example, a U.S. court ruled that the RIAA would have to show that songs had actually been downloaded before it could prove infringement. Unfortunately for the RIAA, however, its new strategy could cause some legal headaches as well, since the “three strikes” approach is almost certain to be challenged in court.

One of the biggest issues with such an approach is that it assumes the RIAA has correctly identified a) the specific songs being infringed and b) the individual doing the infringing. If it makes a mistake at either step, an innocent user has their Internet access cut off without any recourse. And over the years that the record industry has been pursuing its legal strategy, there has been plenty of evidence that its ability to do either is far from perfect. The group has repeatedly misidentified not only the individuals doing the sharing — who in many cases are not the customers to whom the Internet account belongs — but has also misidentified songs as infringing when they were legally acquired.

The “three strikes” approach is not a new one. The French government has proposed legislation that would require ISPs to cut off the access of users who repeatedly infringe copyright, although the legislation has yet to become law. An amendment passed by the European Parliament earlier this year strongly urged France and other European Union countries not to adopt this kind of legislation, but it was later overturned by the EU Council of Ministers, and according to the most recent reports France is going ahead with its plan. Britain’s record-industry lobby group managed to convince several ISPs to warn their users when notified by the lobby group, but they have stopped short of actually cutting off their users’ Internet access after a third “strike.”

The RIAA may think that it has come up with a clever way around its earlier legal hassles, but it could find that trying to make ISPs into copyright police faces just as many hurdles, if not more.