Twitter may be considering a Facebook-style feed — but would that help its growth or derail it?

After a couple of quarters that had analysts and investors concerned about its growth potential, Twitter managed to turn in a fairly strong performance in the most recent quarter — with revenue growth of more than 120 percent. Some power Twitter users, however, were more interested in something Twitter CEO Dick Costolo mentioned during the conference call: namely, the idea that the company might introduce an algorithmically filtered feed like Facebook’s.

What Costolo actually said was that he “isn’t ruling out” an algorithmic approach — and he also said the company is considering ways of “surfacing the kinds of great conversations that pop up in people’s timelines.” That doesn’t mean Twitter is suddenly going to convert its stream into a Facebook-style curated feed, but it was enough to make some users nervous, especially those who have come to dislike the Facebook experience because the social network keeps tweaking its algorithm.

Facebook has managed its newsfeed this way from the beginning, but the practice has grown more irritating for some users, in part because the changes appear designed to appeal to advertisers rather than to actual users — and because some say they have lost much of the reach they once had (a problem Facebook is happy to solve if you pay to promote your content). Is that the kind of future Twitter has in mind? And will it ruin the experience?

When I asked the question (on Twitter, naturally) after the company’s earnings report, a number of users said they would either quit the service altogether or dramatically scale back their usage if Twitter implemented something like the Facebook newsfeed, with a black-box algorithm determining what they saw or didn’t see. Several said that a big part of the appeal of Twitter was that it showed them everything their friends and social connections posted — even if the volume of those posts was sometimes overwhelming.

Just because it implements some kind of algorithmic curation or filtering doesn’t mean Twitter is going to turn into Facebook overnight, of course. The company might confine that kind of approach to an updated or improved version of the “Discover” tab — which is designed to appeal to new users and increase engagement, but so far doesn’t seem to have had much impact. Or it might use algorithms in order to create beginner streams for new users, as a way of helping with “on-boarding,” while allowing existing users to remain unaffected.

The impetus for using algorithms is fairly obvious: while its user-growth and engagement numbers may have assuaged investors’ concerns for the most recent quarter, Twitter is still behind some of the targets that Costolo has reportedly set in the past — including the one where he said the network would have 400 million users by the end of last year (it has about 250 million now). And if it is ever going to reach those levels, it’s going to have to make the service a lot more intuitive and a lot less work. Algorithms are one way of doing that, because they do the heavy lifting, instead of forcing users to spend time pruning their streams.
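
To make that trade-off concrete, here is a minimal sketch in Python of what the heavy lifting could look like; the `Tweet` fields, the weights and the cutoff are purely hypothetical illustrations, not anything Twitter has described.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    likes: int                   # hypothetical engagement signals
    retweets: int
    replies: int
    from_close_connection: bool  # e.g. an account the reader interacts with often

def score(tweet: Tweet) -> float:
    """Toy relevance score: weight engagement, boost close connections."""
    engagement = tweet.likes + 2.0 * tweet.retweets + 1.5 * tweet.replies
    return engagement * (2.0 if tweet.from_close_connection else 1.0)

def curated_timeline(tweets: list[Tweet], limit: int = 20) -> list[Tweet]:
    """Instead of showing every tweet in reverse-chronological order,
    keep only the highest-scoring items on the user's behalf."""
    return sorted(tweets, key=score, reverse=True)[:limit]
```

Whatever the actual signals turn out to be, the structural point is the same: anything that scores poorly simply never appears, which is precisely the black-box behavior that worries heavy users.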

As Facebook has shown, however, the algorithm is a double-edged sword: for every new user it appeals to, it is going to irritate — and potentially drive away — some indeterminate number of existing users. And as Twitter itself has acknowledged, those users are the ones who create and post the majority of the content that spurs engagement by the rest of the network. Pissing them off could leave Twitter with nothing but a resting place next to MySpace in the social networking Hall of Shame.

Post and thumbnail images courtesy of Thinkstock / rvlsoft

It’s complicated: Why we need a new etiquette for handling what’s private and what’s public

The private vs. public divide used to be relatively straightforward: things remained private unless you disclosed them to someone, either deliberately or accidentally — but even in the case of accidental disclosure, there was no way for your information to reach the entire planet unless it appeared on the evening news. Now, a tweet or a photo or a status update could suddenly appear on a news website, or be retweeted thousands of times, or be used as evidence of some pernicious social phenomenon you may never even have heard of before.

But you posted those things, so they must be public, right? And because they are public, any use of them is permitted, right?

A universe filled with nuance and slippery ethical slopes is contained in those questions. And while many of us have gotten used to the back-and-forth with Facebook (s fb) over what is private and what is public — a line that has remained fluid throughout the company’s history, and continues to shift — it’s more than just Facebook. If this were a war, the entire web would be the battleground.

In a recent post on Medium, blogging veteran and ThinkUp co-founder Anil Dash did a good job of describing the shifting terrain around what’s private and what’s public. Although we may be convinced that we appreciate the difference between those two, and that there is some kind of hard dividing line, Dash notes: “In reality, the vast majority of what we do exists on a continuum that is neither clearly public nor strictly private.” And that makes it much harder to decide how to treat it:

“Ultimately, we rely on a set of unspoken social agreements to make it possible to live in public and semi-public spaces. If we vent about our bosses to a friend at a coffee shop, we’re trusting that no one will run in with a camera crew and put that conversation on national TV.”

Twitter: Private, public, or in between?

We’ve seen ample evidence of this tension in recent months with a number of Twitter-related debates. In March, a Twitter discussion got started among women who had suffered sexual abuse, and they used the hashtag #yesallwomen to share their stories. A number of sites, including BuzzFeed, collected these tweets and embedded them in a news story about the topic, something that has become fairly standard behavior — but some of those who participated in the discussion were outraged that this was done without their permission.

“debate tonight about what qualifies as being a public figure today in the eyes of the media. Simple: If you use social *media* you opted in.”

Should the authors of those articles have had to get permission from the users whose tweets they embedded? After all, Twitter is a public network by default — as Gawker writer Hamilton Nolan pointed out — and so those messages were designed to be publicly available. From a legal standpoint, posting things to networks such as Twitter and Facebook without using the various privacy features built into those networks makes them public. But some of the participants in the #yesallwomen discussion seemed to see their tweets as being more like a conversation with friends in a public place, not something designed to be broadcast.

“The things you write on Twitter are public. They are published on the world wide web. They can be read almost instantly by anyone with an internet connection on the planet Earth. This is not a bug in Twitter; it is a feature. Twitter is a thing that allows you to publish things, quickly, to the public.” — Hamilton Nolan

In another case, high-school students who posted racist comments on Twitter after President Barack Obama was re-elected in 2012 were singled out and identified by Gawker in a news article that included their tweets, as well as their full names and what schools they attended. Was that an appropriate response to messages that were clearly designed for a small group of friends, as unpleasant as they might be, or was it a form of bullying? What about the response to a single tweet from Justine Sacco that many took to be racist?

Blurring the line between personal and public

As sociologist danah boyd has pointed out during the endless debates about Facebook and privacy, we all have different facets of ourselves that we present in different contexts online — a work identity, a personal identity we display to our friends and family, and so on. The problem is that so many apps and services like Twitter and Facebook encourage us to blur the lines between those different personas (and benefit financially from us doing so, as Dash points out). And so information and behavior that belongs in one sphere slides into another.

The response from Gawker and others to the #yesallwomen incident was to argue that the participants in that discussion simply don’t understand how Twitter works, or were being deliberately naive about how public their comments were — the same kind of response that users get when their embarrassing Facebook posts become more public than they intended. “If you don’t want people to see it, don’t put it on the internet” is the usual refrain. But as Dash points out, there is a whole spectrum of behavior that exists in the nether world between private and public:

“What if the public speech on Facebook and Twitter is more akin to a conversation happening between two people at a restaurant? Or two people speaking quietly at home, albeit near a window that happens to be open to the street? And if more than a billion people are active on various social networking applications each week, are we saying that there are now a billion public figures?”

The right to remain obscure

In some ways, this debate is similar to the one around search engines and the so-called “right to be forgotten,” a right that is in the process of being enshrined in legislation in the European Union. While advocates of free speech and freedom of information are upset that such legislation will allow certain kinds of data to be removed from view (as Google has now done with some news articles involving public figures), supporters of the law say ordinary individuals shouldn’t be forever tarred by comments or behavior that were intended to be ephemeral, but are now preserved for eternity for everyone to see.

In a piece they wrote for The Atlantic last year, Evan Selinger and Woodrow Hartzog argued that instead of privacy or a right to be forgotten, what we are really talking about is obscurity: so certain information may technically be public — gun-registry data, for example — but is usually difficult to find. Search engines like Google have removed the barriers to that kind of obscurity, and that’s great when the information is of significant public interest. But what about when it’s just high-level gossip or digital rubbernecking at the scene of a social accident? To what extent do we have a right to keep certain content obscure?

As Dash points out in his post, media companies and technology platforms like Facebook have a vested interest in keeping the definition of “public” as broad as possible, and our laws are woefully behind when it comes to protecting users. At the same time, however, some attempts to bridge that gap — including the right to be forgotten, and restrictions on free speech and freedom of information in places such as Britain and Germany — arguably go too far in the other direction.

In many ways, what we’re talking about are things that are difficult (perhaps even impossible) to enshrine in law properly, in the same way we don’t look for the law to codify whether we should be allowed to use our cellphones at the dinner table. Some kinds of behavior may benefit from being defined as illegal — posting revealing photos of people without their knowledge, for example, or audio/video recordings they haven’t agreed to — but the rest of it is mostly a quicksand of etiquette and judgment where laws won’t help, and can actually make things worse. We are going to have to figure out the boundaries of behavior ourselves.

Post and thumbnail images courtesy of Flickr user Alexandre Vialle and Thinkstock / rvlsoft as well as Shutterstock / Andrea Michele Piacquadio

Social media has changed the way that war reporting works — and that’s a good thing

We’ve been writing for a long time at Gigaom about the ways in which the web and social media have changed the practice of journalism, so it’s nice to see the New York Times recognizing some of that. In a recent piece, media writer David Carr notes that real-time social tools like Twitter (s twtr) and YouTube (s goog) have altered the way many of us experience events like the civil war in Ukraine or the violence in Gaza. He doesn’t really address whether this is positive or negative, but it’s easy to make the case that we are much better off now.

If Israeli rockets had hit Gaza or Ukrainian rebels had shot down a commercial airliner before the arrival of the social web, most of us would have been forced to rely on reports from traditional journalists working for a handful of mainstream media sources — some of whom would have been parachuted into the region with little to no advance warning, and in some cases with just a sketchy grasp of the context behind the latest incident — and the news would have been filtered through the lens of a CNN anchor or NYT editor. But as Carr points out:

“In the current news ecosystem, we don’t have to wait for the stentorian anchor to arrive and set up shop. Even as some traditional media organizations have pulled back, new players like Vice and BuzzFeed have stepped in to sometimes remarkable effect. Citizen reports from the scene are quickly augmented by journalists. And those journalists on the ground begin writing about what they see, often via Twitter, before consulting with headquarters.”

More personal, and more chaotic

There are downsides to this approach, obviously: In some cases, journalists say things in the heat of the moment that draw negative attention from readers and viewers — or managers and owners of the media outlets they work for — and there are repercussions, as there were for NBC reporter Ayman Mohyeldin and CNN journalist Diana Magnay after they both made comments about the attacks in Gaza. Two years ago, the Jerusalem bureau chief for the New York Times was called on the carpet for remarks she made on Twitter and for a time was assigned a social-media editor to check her tweets before they were published.

Although Carr doesn’t get into it, the other downside that some have mentioned is that the news environment has become much more chaotic, now that everyone with a smartphone can upload photos and report on what is happening around them — including the terrorist groups and armies that are involved in the conflict that is being reported on, and the ultimate victims of their behavior. Hoaxes and misinformation fly just as quickly as the news does, and in some cases are harder to detect, and those mistakes can have real repercussions.

The democratization of news is good

At the same time, however, there are some fairly obvious benefits to the kind of reporting we get now, and I would argue that they outweigh the disadvantages. For one thing, as Carr notes, we get journalism that is much more personal — and while that personal aspect can cause trouble for reporters like Mohyeldin and Magnay when they stray over editorial lines, in the end we get something that is much more moving than mainstream news has typically been. As Carr says:

“It has made for a more visceral, more emotional approach to reporting. War correspondents arriving in a hot zone now provide an on-the-spot moral and physical inventory that seems different from times past. That emotional content, so noticeable when Anderson Cooper was reporting from the Gulf Coast during Hurricane Katrina in 2005, has now become routine, part of the real-time picture all over the web.”

The other major benefit of having so many sources of news is that the process of reporting has become much more democratized, and that has allowed a whole new ecosystem of journalism to evolve — one that includes British blogger Brown Moses, who has become the poster child for crowdsourced journalism about Syria, as well as Storyful’s Open Newsroom and efforts like Grasswire and Checkdesk (I collected some other resources in a recent post about fact-checking).

In the end, things have definitely become much more confusing — and not just for news consumers but for journalists as well — with the explosion of professional and amateur sources and the sheer speed with which reports flow by in our various social streams. But I would argue that the fact we no longer have to rely on a handful of mainstream outlets for our news and analysis is ultimately a good thing.

Post and thumbnail images courtesy of Flickr users Petteri Sulonen and sskennel

Newspaper companies need to stop lying to themselves, says longtime newspaper editor

Media theorist Clay Shirky isn’t the only one telling newspaper companies and print-oriented journalists that they need to wake up and pay attention to the decline of their industry before they run out of time. Former Seattle Times editor David Boardman — who also happens to be president of the American Society of News Editors — wrote in a recent essay that the newspaper business spends too much of its time sugar-coating the reality of what’s happening.

Boardman described listening to a presentation that the president of the Newspaper Association of America gave at the World Newspaper Congress in Turin, Italy. In her speech, Caroline Little painted an uplifting picture of the state of affairs in her industry, a picture that Boardman called “a fiction where papers could invent a new future while holding on tightly to the past” — something similar to what Shirky called “newspaper nostalgia,” in a piece he wrote recently.

In his post, Boardman took each statement made by Little and presented the opposite viewpoint, or at least put each in a little more context: for example, the NAA president noted that total revenue for the U.S. newspaper industry was about $38 billion in 2013 — but what she didn’t mention is that this is about $12 billion or 35 percent lower than it was just seven years ago:

“What she said: The printed newspaper continues to reach more than half of the U.S. adult population. What she didn’t say: But the percentage of Americans who routinely read a printed paper daily continues its dramatic decline, and is somewhere down around 25 percent. ‘Reaching’ in Little’s reference can mean those people read one issue in the past week; it doesn’t mean they are regular daily readers of the printed paper.”

Should newspapers stop printing?

In a separate post, Alan Mutter — also a longtime newspaper editor, who writes a blog called The Newsosaur — collected some of the depressing statistics about the decline of print, most of which Little also apparently left unmentioned, including the fact that combined print and digital revenues have fallen by more than 55 percent in the past decade, and that the industry’s share of the digital advertising market has been cut in half over the same period.

What’s Boardman’s solution? It’s not one that most newspapers will like: he suggests that most of them should consider giving up their weekday print editions altogether at some point over the next few years, concentrating their print efforts on a single weekend edition while pouring the rest of their resources into digital and mobile. Weekend papers account for a large proportion — in some cases a majority — of the advertising revenue that newspapers bring in, so giving up everything but the weekend paper wouldn’t be as much of a loss, he argues.

In a recent piece at the Columbia Journalism Review about the New York Times, writer Ryan Chittum argued that the newspaper can’t afford to simply stop printing because the physical version brings in so much revenue. But could it stop printing everything but the Sunday paper? Chittum thinks it might be able to, and so does long-time online journalism watcher Steve Outing. Perhaps new digital-strategy head Arthur Gregg Sulzberger — a co-author of the paper’s much-publicized “innovation report” — is already crunching those numbers for a presentation to his father, the publisher, whose family controls the company’s stock.

What happens when free-speech engines like Twitter and Facebook become megaphones for violence?

Social networks and platforms like Facebook (s fb), Twitter (s twtr) and YouTube (s goog) have given everyone a megaphone they can use to share their views with the world, but what happens — or what should happen — when their views are violent, racist and/or offensive? This is a dilemma that is only growing more intense, especially as militant and terrorist groups in places like Iraq use these platforms to spread messages of hate, including graphic imagery and calls to violence against specific groups of people. How much free speech is too much?

That debate flared up again following an opinion piece that appeared in the Washington Post, written by Ronan Farrow, an MSNBC host and former State Department staffer. In it, Farrow called on social networks like Twitter and Facebook to “do more to stop terrorists from inciting violence,” and argued that if these platforms screen for things like child porn, they should do the same for material that “drives ethnic conflict,” such as calls for violence from Abu Bakr al-Baghdadi, the leader of the Jihadist group known as ISIS.

“Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography. Many, including YouTube, use a similar technique to prevent copyrighted material from hitting the web. Why not, in those overt cases of beheading videos and calls for blood, employ a similar system?”
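
Farrow is pointing at systems in the same family as Microsoft’s PhotoDNA and YouTube’s Content ID, which compare uploads against fingerprints of already-known material. As a rough sketch only (real systems use perceptual hashes that survive re-encoding and cropping, plus large private databases, none of which is reproduced here), the matching step might look something like this in Python:

```python
import hashlib

# Hypothetical blocklist of fingerprints of known prohibited files.
KNOWN_PROHIBITED_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Compute a fingerprint for an uploaded file (a plain cryptographic
    hash here, purely to keep the sketch short)."""
    return hashlib.sha256(data).hexdigest()

def register_prohibited(data: bytes) -> None:
    """Add a known prohibited file to the blocklist."""
    KNOWN_PROHIBITED_HASHES.add(fingerprint(data))

def allowed_to_post(upload: bytes) -> bool:
    """Reject any upload whose fingerprint matches the blocklist."""
    return fingerprint(upload) not in KNOWN_PROHIBITED_HASHES
```

The matching itself is the easy part; as the rest of this piece argues, the hard part is deciding what goes on the blocklist and who gets to put it there.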

Free speech vs. hate speech — who wins?

In his piece, Farrow acknowledges that there are free-speech issues involved in what he’s suggesting, but argues that “those grey areas don’t excuse a lack of enforcement against direct calls for murder.” And he draws a direct comparison — as others have — between what ISIS and other groups are doing and what happened in Rwanda in the mid-1990s, where the massacre of hundreds of thousands of Tutsis was driven in part by radio broadcasts calling for violence.

In fact, both Twitter and Facebook already do some of what Farrow wants them to do: for example, Twitter’s terms of use specifically forbid threats of violence, and the company has removed recent tweets from ISIS and blocked accounts in what appeared to be retaliation for the posting of beheading videos and other content (Twitter has a policy of not commenting on actions that it takes related to specific accounts, so we don’t know for sure why).

The hard part, however, is drawing a line between egregious threats of violence and political rhetoric, and/or picking sides in a specific conflict. As an unnamed executive at one of the social networks told Farrow: “One person’s terrorist is another person’s freedom fighter.”

In a response to Farrow’s piece, Jillian York — the director for international freedom of expression at the Electronic Frontier Foundation — argues that making an impassioned call for some kind of action by social networks is a lot easier than sorting out which specific content should be removed. Maybe we could agree on beheading videos, but what about other types of rhetoric? And what about the journalistic value of the information these groups post, which has become a crucial tool for fact-checking journalists like British blogger Brown Moses?

“It seemed pretty simple for Twitter to take down Al-Shabaab’s account following the Westgate Mall massacre, because there was consistent glorification of violence… but they’ve clearly had a harder time determining whether to take down some of ISIS’ accounts, because many of them simply don’t incite violence. Like them or not… their function seems to be reporting on their land grabs, which does have a certain utility for reporters and other actors.”

Twitter and the free-speech party

As the debate over Farrow’s piece expanded on Twitter, sociologist Zeynep Tufekci — an expert on the impact of social media on conflicts such as the Arab Spring revolutions in Egypt and the more recent demonstrations in Turkey — argued that even free-speech considerations have to be tempered by the potential for inciting actual violence against identifiable groups.

It’s easy to sympathize with this viewpoint, especially after seeing some of the terrible images coming out of Iraq. But at what point does protecting a specific group from theoretical acts of violence win out over the right to free speech? It’s not clear where to draw that line. When the militant Palestinian group Hamas made threats towards Israel during an attack on the Gaza Strip in 2012, should Twitter have blocked the account or removed the tweets? What about the tweets from the official account of the Israeli military that triggered those threats?

What makes this difficult for Twitter in particular is that the company has talked a lot about how it wants to be the “free-speech wing of the free-speech party,” and has fought for the rights of its users on a number of occasions, including an attempt to resist demands that it hand over information about French users who posted homophobic and anti-Semitic comments, and another case in which it tried to resist handing over information about supporters of WikiLeaks to the U.S. Justice Department.

Despite this, even Twitter has been caught between a rock and a hard place, with countries like Russia and Pakistan pressuring the company to remove accounts and use its “country withheld content” tool to block access to tweets that are deemed to be illegal — in some cases merely because they involve opinions that the authorities don’t want distributed. In other words, the company already engages in censorship, although it tries hard not to do so.

Who decides what content should disappear?

Facebook, meanwhile, routinely removes content and accounts for a variety of reasons, and has been criticized by many free-speech advocates and journalists — including Brown Moses — for making crucial evidence of chemical-weapon attacks in Syria vanish by deleting accounts, and for doing so without explanation. Google also removes content, such as the infamous “Innocence of Muslims” video, which sparked a similar debate about the risks of trying to hide inflammatory content.

What Farrow and others don’t address is who should decide which content gets deleted in order to banish violent imagery, as he wants. Should we just leave it up to unnamed executives to remove whatever they wish, and to arrive at their own definitions of what is appropriate speech and what isn’t? Handing over such an important principle to the private sector — with virtually no transparency about its decision-making, nor any court of appeal — seems unwise, to put it mildly.

What if there were tools that we could use as individuals to remove or block certain types of content ourselves, the way Chrome extensions like HerpDerp do for YouTube comments? Would that make it better or worse? To be honest, I have no idea. What happens if we use these and other similar kinds of tools to forget a genocide? What I think is pretty clear is that handing over even more of that kind of decision making to faceless executives at Twitter and Facebook is not the right way to go, no matter how troubling that content might be.
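
For what it’s worth, here is a minimal sketch in Python of what such a user-side filter could look like; the blocked-term list and the function are hypothetical, and real moderation is obviously much harder than keyword matching.

```python
# Hypothetical, user-maintained list of terms this particular reader
# never wants to see in their stream.
MY_BLOCKED_TERMS = {"beheading", "execution video"}

def should_hide(post_text: str, blocked_terms: set[str] = MY_BLOCKED_TERMS) -> bool:
    """Client-side filtering: the decision about what disappears stays
    with the reader, not with an executive at the platform."""
    text = post_text.lower()
    return any(term in text for term in blocked_terms)
```

The appeal of this approach is that the judgment stays with the individual reader, though as the paragraph above asks, tools like this could also let us collectively filter out things we arguably need to see.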

Post and thumbnail images courtesy of Shutterstock / Aaron Amat

New Snowden leaks show NSA collected the private data of tens of thousands of Americans

It’s been a number of months since there were any new revelations based on the massive trove of top-secret NSA surveillance documents that former security contractor Edward Snowden took with him when he left the service, but the Washington Post came out with a big one on Saturday: according to files that Snowden provided to the newspaper, NSA agents recorded and retained the private information of tens of thousands of ordinary Americans — including online chats and emails — even though they were not the target of an official investigation.

According to the Post’s story, nine out of 10 account holders who were found in a large cache of intercepted conversations were not the actual surveillance targets sought by the NSA, but in effect were electronic bystanders caught in a net that the agency had cast in an attempt to catch someone else. Many were Americans, the newspaper said, and nearly half of the files contained names, email addresses and other details. Although many had been redacted or “minimized,” almost 900 files still contained unmasked email addresses.

“Many other files, described as useless by the analysts but nonetheless retained, have a startlingly intimate, even voyeuristic quality. They tell stories of love and heartbreak, illicit sexual liaisons, mental-health crises, political and religious conversions, financial anxieties and disappointed hopes. The daily lives of more than 10,000 account holders who were not targeted are catalogued and recorded nevertheless.”

As the paper explains, the NSA is only legally allowed to target foreign nationals located overseas unless it obtains a warrant from a special surveillance court — a warrant that must be based on a reasonable belief that the target has information about a foreign government or terrorist operations. The government has admitted that American citizens are often swept up in these dragnets, but the scale with which ordinary people are included was not known until now. The NSA also appears to keep this information even though it has little strategic value and compromises the privacy of the users whose data is kept on file.

Are you an American who writes emails in a language other than English? You are a foreigner to the NSA w/o rights. http://t.co/Xl9VpoAnKZ

— Christopher Soghoian (@csoghoian) July 6, 2014

The Post story describes how loosely NSA agents seem to treat the theoretical restriction on collecting information about American citizens: participants in email threads and chat conversations are considered foreign if they use a language other than English, or if they appear to be using an IP address that is located outside the U.S. And there is little to no attempt to minimize the number of unrelated individuals who have their information collected:

“If a target entered an online chat room, the NSA collected the words and identities of every person who posted there, regardless of subject, as well as every person who simply ‘lurked,’ reading passively what other people wrote. In other cases, the NSA designated as its target the Internet protocol, or IP, address of a computer server used by hundreds of people. The NSA treats all content intercepted incidentally from third parties as permissible to retain, store, search and distribute to its government customers.”
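
To underline how crude the criteria described above are, here is a minimal sketch in Python of the kind of heuristic the Post describes; the type and field names are hypothetical illustrations, not anything taken from actual NSA systems.

```python
from dataclasses import dataclass

@dataclass
class ChatParticipant:
    language: str    # language detected in the person's messages
    ip_country: str  # country the connecting IP address appears to be in

def presumed_foreign(person: ChatParticipant) -> bool:
    """The loose test the Post describes: anyone writing in a language other
    than English, or connecting from an apparently non-U.S. address, is
    presumed to be a foreigner for collection purposes."""
    return person.language != "English" or person.ip_country != "US"
```

By that logic, an American writing to family in Spanish, or checking email while on vacation abroad, would be swept in, which is exactly the point Soghoian’s tweet above makes.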

The Snowden documents come from a cache of retained information that was gathered under the Foreign Intelligence Surveillance Act — despite the fact that for more than a year, government officials have stated that FISA records were beyond the reach of the rogue NSA contractor, according to the Post. The paper said it reviewed about 160,000 intercepted e-mail and instant-message conversations, some of them hundreds of pages long, and 7,900 documents taken from more than 11,000 online accounts.

Post and thumbnail images courtesy of Flickr user Thomas Leuthard