If you spend a lot of time on Twitter, you’ve probably already seen Twitter spam: messages from normal-sounding accounts that pop up with links they want you to click on, or with straight marketing pitches. Check their profiles and they usually have no followers, and in many cases they have sent out thousands of tweets despite having been on Twitter for only a matter of months. More recently, Twitter has been hit by the political version of this phenomenon — also known as “astroturfing” — in which dummy accounts either re-tweet links to attack ads and articles, or post comments designed to get a specific hashtag or link to trend or show up in search.
Researchers at Indiana University have set up a project to try to detect and analyze these kinds of tweets — and they clearly have a sense of humor, since they named the project Truthy, after the satirical idea of “truthiness” popularized by comedian Stephen Colbert on his faux news show The Colbert Report. As MIT’s Technology Review describes, the project was inspired by a research paper from earlier this year that looked at a 2008 special Senate election and found that almost a dozen Twitter accounts were repeating the same negative tweets about the candidates. Truthy’s algorithms build “diffusion networks” for individual memes, mapping how a hashtag or link spreads from account to account, which helps signal when a meme is likely spreading organically and when it could be manufactured.
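The Truthy team hasn’t published its code in this article, but the basic idea behind a diffusion network is easy to sketch. The toy example below — hypothetical account names, made-up thresholds, and the networkx library standing in for whatever Truthy actually uses — builds a retweet graph for a single meme and flags it when a handful of accounts generate most of the traffic, which is one crude signal of manufactured spread.

```python
# Toy sketch of a meme "diffusion network": nodes are accounts, edges are retweets.
# Hypothetical data and thresholds -- not Truthy's actual algorithm.
from collections import Counter

import networkx as nx

# Each tuple is (retweeter, original_author) for one retweet of a given hashtag.
retweets = [
    ("account_a", "seed_account"),
    ("account_b", "seed_account"),
    ("account_b", "seed_account"),
    ("account_c", "seed_account"),
    ("account_c", "account_b"),
    ("account_d", "seed_account"),
]

def diffusion_stats(edges):
    """Build the retweet graph and measure how concentrated the activity is."""
    graph = nx.DiGraph()
    graph.add_edges_from(edges)

    activity = Counter(src for src, _ in edges)   # retweets sent per account
    total = sum(activity.values())
    top_two = sum(count for _, count in activity.most_common(2))

    return {
        "accounts": graph.number_of_nodes(),
        "retweets": total,
        "top2_share": top_two / total,            # share of traffic from just 2 accounts
    }

stats = diffusion_stats(retweets)
# A meme where a couple of accounts produce most of the retweets looks more
# like astroturf than one spread evenly across many users (a crude heuristic).
suspicious = stats["top2_share"] > 0.6 and stats["accounts"] < 20
print(stats, "suspicious:", suspicious)
```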
The point of astroturfing, of course, is to create the illusion of grassroots support for something (hence the name) by generating what appear to be genuine messages about a candidate or an issue. So, for example, Truthy noticed during the run-up to today’s mid-term elections that several Twitter accounts — including ones named @PeaceKaren_25 and @HopeMarie_25, both of which Twitter has since shut down — were tweeting and re-tweeting tens of thousands of identical messages linking to and promoting House minority leader John Boehner’s website.
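One of the simplest signals in a case like this is sheer repetition: real users rarely post the same sentence tens of thousands of times. A back-of-the-envelope check might look like the sketch below, with hypothetical tweet text and a made-up cutoff, nothing drawn from Twitter’s or Truthy’s actual systems.

```python
# Crude repetition check: what fraction of an account's tweets are near-duplicates?
# Hypothetical tweets and threshold, purely for illustration.
import re
from collections import Counter

def normalize(text):
    """Lowercase, strip URLs and collapse whitespace so trivial variations match."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def duplicate_ratio(tweets):
    counts = Counter(normalize(t) for t in tweets)
    most_repeated = counts.most_common(1)[0][1]
    return most_repeated / len(tweets)

tweets = [
    "Great plan for jobs! http://example.com/boehner",
    "Great plan for jobs! http://example.com/boehner?utm=2",
    "great plan for jobs!  http://example.com/boehner",
    "Watching the debate tonight",
]

ratio = duplicate_ratio(tweets)
print(f"duplicate ratio: {ratio:.2f}")  # 0.75 -- most tweets are the same message
# An account whose ratio stays this high over thousands of tweets looks a lot
# more like a bot or a sock puppet than a person.
```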
In another case, a group of 10 accounts was using the hashtag #ampat to promote the website Freedomist.com and to distribute negative attacks on Democrat Chris Coons, and an account called @GoRogueRunSarah promoted links to a website displaying pro-Palin and anti-Muslim propaganda. According to researcher Filippo Menczer, the Truthy team noticed that certain terms were tweeted so heavily they actually showed up in Google’s trending searches. Twitter, of course, would much rather have companies and political parties buy one of its “Promoted Trends,” as the Washington Post did by paying for the hashtag #election during today’s elections.
That astroturfing is occurring on Twitter shouldn’t really come as any surprise. Just as the rise of email gave birth to spam, and the rise of Google and other search engines produced an entire industry devoted to gaming search results through SEO, so Twitter’s rise as a real-time communications medium has made it a convenient platform for the same techniques, adapted to take advantage of the social nature of the network. As Twitter continues to grow, we are likely to see more of this kind of thing rather than less. Hopefully, tools like the Truthy project will make it easier to detect whether PeaceKaren_25 is a real person or a bot.
I asked Twitter for comment on the astroturfing attempts Truthy has been uncovering, and Del Harvey — the service’s head of Trust & Safety — said the company routinely flags accounts that are either highlighted by its back-end spam-detection systems or reported by users. Twitter evaluates these reports “independently of subject matter [as] part of our context-free policy,” she said, presumably so that there are no suspicions about the service playing favorites. Harvey added that “we’ve been aware of Truthy’s work and have suspended some incriminated accounts [and] the investigation is ongoing.”