Venture-capital funding turns out to be a mixed blessing for media

Venture-capital funding is often a double-edged sword for startups. It allows them to grow quickly without having to worry about profitability, but it also arguably encourages them to take irrational risks, including some that ultimately prove fatal, in order to produce the kind of large returns that VC funds rely on.

The dilemma that this can create for media companies in particular was thrown into sharp relief earlier this month when a trifecta of news came out about some of the most high-profile digital-media ventures of the last decade. Here are the highlights:

— BuzzFeed is on track to miss its revenue targets by as much as 20 percent, according to a recent report by the Wall Street Journal. The company had been talking about a public share offering next year, but analysts say an IPO is likely on hold due to its lackluster financial performance. After its most recent financing round in 2016, an investment of $200 million from NBCUniversal that doubled the Comcast subsidiary’s holdings in the company, BuzzFeed had a valuation of $1.7 billion. As analysts noted at the time, this number wasn’t much larger than what the company was worth in 2015, which suggested that it wasn’t growing quickly enough to justify a higher value.

— Mashable has agreed to sell itself to Ziff Davis for about $50 million, according to reports from both the Journal and Bloomberg. That’s less than one quarter of what the company was worth as recently as last year, when it closed a $15-million round of funding from Time Warner. Not long afterward, Mashable laid off most of its news team, and “pivoted” to focus on video, a change driven in part by Facebook’s seemingly insatiable demand for video content. Mashable, which founder Pete Cashmore started in his home in Aberdeen in 2005 at the age of 19, has been rumored to be looking for a buyer for some time.

— Vice is also likely to miss its revenue targets for this year, according to several reports. It had a market value of $5.7 billion earlier this year after private equity firm TPG invested $450 million in the company. Disney also has a significant stake, having invested $400 million in 2015 (giving Vice a market value of about $4 billion at the time), in addition to a $250-million investment made in 2014 through A&E Networks, a partnership between Disney and Hearst. Vice has talked in the past about possibly doing an initial public share offering, but it has also named Disney as a potential acquirer.

Amid all the angst fueled by these revelations, there was a glimmer of good news from Axios, a startup run by Politico co-founder Jim VandeHei, which said it had raised $20 million from investors including Lerer Hippeau Ventures (also an investor in BuzzFeed) and NBCUniversal. But will Axios’s funding ultimately lead to disappointment?

Obviously, BuzzFeed and Vice aren’t failures by any normal definition of that word. They have hundreds of millions of dollars in revenue and are theoretically worth billions of dollars. Skeptics, however, will note that those billions are private-market valuations, notional value that can disappear in an instant, as it did in Mashable’s case, and that neither company appears to be anywhere close to turning a profit.

Is any of this venture capital’s fault? That depends on who you talk to. Although CUNY journalism professor Jeff Jarvis celebrated Axios taking venture funding, others were not quite so quick to say VC is always good for media startups.

Talking Points Memo founder Josh Marshall says much of the investment in media companies was driven by false expectations, but now “investors are realizing that scale cannot replicate the kind of business model lock-in, price premiums and revenue stability people thought it would.” The bottom line, Marshall says, is that “the future that VCs and other investors were investing hundreds of millions of dollars in probably doesn’t exist.”

BuzzFeed, for example, built a business dedicated at least in part to producing content, including video, that would work well on Facebook. But the returns on that content appear to be much lower than expected. Is that because the expectations BuzzFeed and its investors had were too high, or did Facebook make changes that undermined those expectations? Or did the landscape change in other ways?

At one point, the company was said to be projecting revenues of as much as $500 million for last year, but it was forced to scale those forecasts back and likely pulled in about half that amount. For this year, BuzzFeed executives were reportedly looking for growth of 35 percent but the company appears to have achieved dramatically less than that.

If the Journal is correct, BuzzFeed likely increased its revenues by less than 10 percent to about $280 million. That’s not a great performance for a company that is seen as a fast-growing digital superstar, and it makes its alleged $1.7 billion value look awfully rich. One of the bets that VCs made was that digital-media companies like BuzzFeed could grow at rates similar to technology startups, and could therefore justify the same kinds of valuations, but that doesn’t appear to be the case.
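
As a back-of-the-envelope check, those reported figures roughly hang together. Here is a minimal sketch of the arithmetic, using only the approximate numbers cited above (none of them audited results):

```python
# Rough consistency check of the BuzzFeed revenue figures reported above.
revenue_2016 = 250_000_000   # "about half" of the $500M projection
target_growth = 0.35         # growth executives reportedly wanted for this year
revenue_2017 = 280_000_000   # the Journal's reported estimate

target_2017 = revenue_2016 * (1 + target_growth)   # ~$337M implied target
shortfall = 1 - revenue_2017 / target_2017         # ~17% below target
print(f"Implied target: ~${target_2017 / 1e6:.0f}M; shortfall: ~{shortfall:.0%}")
```

A shortfall of roughly 17 percent is consistent with the Journal’s report that the company is on track to miss its targets by as much as 20 percent.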

As for Vice, co-founder and CEO Shane Smith has said multiple times over the past year that the company had a $1 billion “run rate,” meaning it was on track to generate that much in annual revenue. But according to the Journal, it is expected to only have revenues of about $800 million this year.

As a number of observers noted after the BuzzFeed and Mashable news broke, the reality could be that these businesses are not failures at all, but simply aren’t worth as much as either their founders or investors might have hoped. Part of that could be Facebook’s fault, or the dominance that it and Google exert over the advertising industry. But part of it could also be over-inflated expectations of a pot of gold at the end of the digital-media rainbow.

In some countries, fake news on Facebook is a matter of life and death

Misinformation distributed by social platforms like Facebook has become a major issue in the United States, thanks to all the attention focused on Russian troll armies trying to influence the 2016 presidential election. But in some countries, “fake news” doesn’t just interfere with people’s views about who to vote for—it leads to people being arrested, jailed, and in some cases even killed. And Facebook doesn’t seem to be doing a lot about it.

Southeast Asia is one place where the social network is fomenting ethnic and political tensions in dangerous ways, according to a number of journalists who cover the region. This effect can be seen in countries like Thailand and Cambodia, but it has become increasingly severe in Myanmar, where the Rohingya people are being persecuted, driven from their homes and in some cases raped and killed.

“As complicated as Facebook’s impacts on the politics of the United States are, the impact in Asia may be even more tricky, the consequences more severe and the ecosystem less examined, both by Facebook and most people in the US,” says Christina Larson, who has written about the region for a number of outlets including Bloomberg and The Atlantic.

As the situation has escalated over the past six months, observers in Myanmar have reported waves of Facebook-based misinformation and propaganda aimed at stoking anti-Rohingya fervor, including fabricated reports that families were setting fire to their own homes in an attempt to generate sympathy. More than 600,000 people have been forced from their homes so far, and an untold number have died in the process.

One of the main sources of anti-Rohingya propaganda is Ma Ba Tha, a group of radical Buddhist monks who have been preaching that the Rohingya are less than human, or that they are trying to overrun the country and convert everyone to Islam. The leader of the group, Ashin Wirathu, has been banned from preaching, but he has been able to spread his message far and wide thanks to an orchestrated Facebook campaign.

Larson and others say the problem is compounded by the fact that a majority of Myanmar residents rely on Facebook for their news, even as media literacy remains strikingly low, largely because smartphones and social media are still relatively new there.

Until 2014, the SIM cards required to use smartphones were prohibitively expensive, because they were only available from the country’s government-controlled telecom carrier. After the industry was opened up, cheap smartphones and $1 SIM cards flooded the market, available from every street vendor, and almost all of them came with Facebook installed by default.

“Facebook has basically become the way that people do everything,” says Paul Mozur, a New York Times reporter who covers Myanmar. “It replaces newspapers, it displaces outreach campaigns by NGOs and other agencies trying to reach people especially in remote areas, it replaces just about everything.”

Wirathu, the leader of the anti-Rohingya movement, used to print out paper pamphlets and flyers to spread his incendiary messages, Mozur says, but now he just posts fake images on Facebook and gets 100 times the reach.

Many of those who have been thrust into this new world of smartphones and social networks in Myanmar “just aren’t used to the level of misinformation or disinformation that’s happening on Facebook,” says Mozur. “Suddenly they’re subject to the full force of an information war coming out of Yangon, orchestrated by much more sophisticated sources, and it’s easy for them to become pawns in that war.”

And what is Facebook doing to help? Not much, some observers say. The social network has relationships with nongovernmental organizations in the region, but only a couple of actual staffers on the ground. “It’s become a bit like an absentee landlord in Southeast Asia,” according to Phil Robertson, deputy director of Human Rights Watch’s Asia division.

A Facebook spokesman told CJR the company works hard to keep hate speech and content that celebrates or incites violence off the platform, that it is working with non-profit groups in Myanmar to raise awareness of its community standards policies, and that it has local-language pages that offer tips on safety and security.

But the problem continues, and it is arguably far more serious than anything a page of safety tips can address. At one point, Mozur says, messages were spreading on Facebook Messenger claiming that Muslims were planning an attack on 9/11, while a separate chain letter warned that Buddhists were planning to attack on the same day.

“I don’t know who was behind those messages, it could have been like four people, but it literally brought the country to a standstill,” he says. “A lot of times these rifts are there already, and so in a certain sense I guess Facebook is a mirror, holding itself up to the differences in society. But social media can also become a real catalyst for the violence.”

Christina Larson says there’s a debate to be had about what hate speech means in a particular context and how to define it, “but what I would consider dangerous speech is advocating that the Rohingya need to leave Myanmar, and sharing doctored images of them supposedly burning their own houses to create a media spectacle.”

In a way, she says, these images—which were liked and shared tens of thousands of times—”gave cover for military action and human rights violations, including violence and rape. You can’t say social media kills people any more than you can say guns kill people, but certainly social media shaped public opinion in a way that seems to have played a part in the escalation of violence against the Rohingya population.”

Facebook’s approach to countries like Myanmar and others in the region often strikes those on the ground as not just out of touch but actively cavalier. In its recent split-feed tests, for example, users in countries like Cambodia and Slovakia had news articles moved to a completely separate feed, which local non-profit groups and media outlets say significantly impacted their ability to reach people with crucial information.

It’s one thing to tread carefully around issues like free speech, Larson says, “but if you’re going to run A/B testing, where you change an algorithm and see what you think consumers like best, for God’s sake, stick to stable democracies. Don’t pick a place where there’s an authoritarian regime that is busy locking up opposition leaders, and Facebook is a primary way that activists communicate about their government.”

In many ways, Myanmar is an example of the future Mark Zuckerberg seems to want: A country in which most people are connected through the social network and get virtually all of their news from it. And yet, the outcome of that isn’t a utopian vision of a better world, it’s exactly the opposite—a world where ethnic and cultural tensions are inflamed and weaponized. And Facebook’s response looks almost completely inadequate to the dangers it has helped unleash.

 

————————————————————————-

“Facebook has become a bit like an absentee landlord in Southeast Asia,” says Phil Robertson, deputy director of Human Rights Watch’s Asia division. “When Buddhist extremists start instigating action against Muslims [in Myanmar], looking around for the local Facebook representative is hopeless — there isn’t one. Instead, it’s sort of, complain into the void and hope some relief arrives before it’s too late.”

Why is Facebook so useful to the junta? First, its insistence on a “real name-only” policy makes for easy tracking of dissidents. Even in cases where people successfully mask their names, their web of social connections makes them potentially easy to identify. (In the U.S., sex workers have already found themselves inadvertently exposed by Facebook’s data-aggregation and friend suggestions.) Hard-to-navigate privacy settings can mean that what people mistakenly think of as private speech, limited to a small group of friends, is often anything but. “If you make a certain kind of comment online, you can quickly be sent to prison in Thailand,” says iLaw researcher Anon Chawalawan.

But the BBC has reported that one unintended impact was dramatically shrinking the number of people who would see published items. “Out of all the countries in the world, why Cambodia? This couldn’t have come at a worse time,” a Cambodian blogger told the BBC, explaining that the number of people who saw her public video had dropped by more than 80 percent. “Suddenly I realized, wow, they actually hold so much power.… [Facebook] can crush us just like that if they want to.”

http://foreignpolicy.com/2017/11/07/facebook-cant-cope-with-the-world-its-created/

 

The crackdown has already claimed two NGOs, more than a dozen radio stations, and the local offices of two independent media outlets, Radio Free Asia and The Cambodia Daily. Hun Sen’s main opposition, the Cambodian National Rescue Party (CNRP), could be dissolved entirely at a Supreme Court hearing on 16 November.


Facebook surpassed TV as Cambodians’ most popular source of news last year, according to a survey from the Asia Foundation, with roughly half of respondents saying they used the social media network.

The platform helped power the CNRP’s gains against the governing Cambodian People’s Party (CPP) in the 2013 national elections and has been one of the only places for dissent in a country ranked 132nd out of 180 countries in Reporters Without Borders’ 2017 World Press Freedom Index.

Hun Sen’s longtime rival, Sam Rainsy, the exiled former president of the CNRP who runs a popular page of his own, said his traffic had dipped 20% since the start of the Facebook test. Unlike the prime minister, whom he accused of buying Facebook supporters from foreign “click farms”, Mr Rainsy said he could not pay to sponsor his posts to put them in front of more users in their usual News Feeds.

“Facebook’s latest initiative would possibly give an even stronger competitive edge to authoritarian and corrupt politicians,” he said.

Leang Phannara, web editor for Post Khmer, the Khmer-language version of independent English daily the Phnom Penh Post, said Khmer Facebook posts were reaching 45% fewer people, while web traffic was down 35%. The only way to recapture that audience was to pay to sponsor posts, he said.

“It’s a pay-to-play scenario,” Mr Phannara said.

http://www.bbc.com/news/world-asia-41801071

 

Phil Robertson, deputy director of Asia Division of Human Rights Watch, said the Rohingya were forced to get the word out about their cause on Facebook and Twitter because the few media outlets in Myanmar that exercise independence in reporting on the situation in Rakhine face threats of boycotts and retaliation.
Not many media outlets in the country, he said, were willing to take the risk of alienating their readers, advertisers, and in some cases their staff by calling out the Burmese government for the campaign of ethnic cleansing it is engaged in.

“Of course, the problem with social media is that their policing mechanisms can be used for harassment by those willing to mount a concerted campaign of filing complaints against specific Facebook pages or Twitter feeds,” Robertson added. “We’ve seen an explosion of Rakhine and Burman nationalists using Twitter, retweeting hateful messages and gory images, so it would not surprise me at all if some of those nationalists, using bot accounts and pages apparently set up en masse, are now going on the attack against Rohingya on Facebook.”

(Many Rohingya refugees and activists said their pages had been blocked or banned from Facebook because they were posting photos and videos of anti-Rohingya violence. Facebook said it was leaving some such posts up for news purposes but was removing those it said were promoting or celebrating violence).

“I believe [Facebook] is trying to suppress freedom [of] expression and dissent by colluding with the genocidaires in Myanmar regime,” the activist and journalist Mohammad Anwar told the Guardian. Anwar, whose allegations of censorship were first reported by the Daily Beast, shared screenshots of numerous posts that had been removed by Facebook for violating community standards. Several of the posts comprised only text, he said, and described military operations against Rohingya villages in Rakhine.

The Kuala Lumpur-based journalist, who works for the site RohingyaBlogger.com, said that his reports come from a network of 45 correspondents and citizen journalists in Rakhine.

https://www.theguardian.com/technology/2017/sep/20/facebook-rohingya-muslims-myanmar

 

Laura Haigh, Amnesty International’s Burma researcher, told The Daily Beast there appears to be a targeted campaign in Burma to report Rohingya accounts to Facebook and get them shut down.

Mohammad Anwar, a Kuala Lumpur-based Rohingya activist and journalist with the site RohingyaBlogger.com, told The Daily Beast that Facebook has repeatedly deleted his posts about violence in Rakhine State, and has threatened to disable his account.

https://www.thedailybeast.com/exclusive-rohingya-activists-say-facebook-silences-them

 

“In a lot of these countries, Facebook is the de facto public square,” said Cynthia Wong, a senior internet researcher for Human Rights Watch. “Because of that, it raises really strong questions about Facebook needing to take on more responsibility for the harms their platform has contributed to.”

 

Fake news demonizing Muslims, particularly reports spreading fears of terrorism or Islamic fundamentalism, has sometimes led to disastrous consequences. Those reports have spread like wildfire on Facebook, where Buddhist nationalist groups like Ma Ba Tha have gained prominence by building legions of followers.

That’s what happened in the region of Bago, north of Yangon, on June 23, when a Buddhist mob reportedly destroyed homes and forced dozens of villagers to flee after rumors spread online that a new building in the village was going to be a Muslim school.

MIDO, which regularly monitors Burmese hate speech on Facebook as part of a research project with Oxford University, found that only 10% of the postings it reported according to its own definition of hate speech were eventually taken down by Facebook. The reporting mechanism is clunky, and the process is opaque, said MIDO’s Htaike Htaike Aung.

https://www.buzzfeed.com/meghara/how-fake-news-and-online-hate-are-making-life-hell-for?utm_term=.rtwBAO7JM#.hjg18qx63

 

Much of India’s false news is spread through WhatsApp, a popular messaging app. One message that made the rounds in November, just after the government announced an overhaul of the country’s cash, claimed that a newly released 2,000 rupee bank note would contain a GPS tracking nano-chip that could locate bank notes hidden as far as 390 feet underground. Another rumor, about salt shortages last November, prompted a rush on salt in four Indian states. In southern India, a rumor about a measles and rubella vaccine thwarted a government immunization drive.

Many false stories have led to violence. In May, rumors about child abductors in a village triggered several lynchings and the deaths of seven people. In August, rumors about an occult gang chopping off women’s braids in northern India spread panic, and a low-caste woman was killed.

Some stories exacerbate India’s rising religious and caste tensions. This week, for instance, images purportedly showing attacks against Hindus by “Rohingya Islamic terrorists” in Burma circulated on social media in India, stoking hatred in Hindu-majority India against Muslim Rohingya.

“There was one video with two people being beheaded, and the text was saying these were Indian soldiers being killed in Pakistan. When I found the original video, it was actually taken from footage of a gang war in Brazil,” said Pankaj Jain, founder of SMHoaxSlayer.com, a website that fact-checks circulating rumors on social media in India. “They’ll tell you this is fresh, these are images the media is not showing you, if you’re a true Indian patriot, you will forward this message.”

https://www.washingtonpost.com/world/asia_pacific/indias-millions-of-new-internet-users-are-falling-for-fake-news%E2%80%94sometimes-with-deadly-consequences/2017/10/01/f078eaee-9f7f-11e7-8ed4-a750b67c552b_story.html?utm_term=.d2317b71eac3

 

New York Times technology reporter Paul Mozur says in Myanmar, Facebook is everywhere.

“The entire internet is Facebook and Facebook is the internet. Most people don’t necessarily know how to operate or get on and navigate regular websites. They live, eat, sleep and breathe Facebook.” Facebook users in Myanmar grew from about 2 million in 2014 to more than 30 million today.

Which is why the misinformation spread on Facebook can be so dangerous.

Mozur says Facebook has become a breeding ground for pernicious posts about the Rohingya. “In particular, the ones that seem most problematic are government channels that have put a lot of propaganda out there, saying everything from the Rohingya are burning their own villages, to showing bodies of soldiers who may be from other conflicts but saying this is the result of a Rohingya attack, to more nuanced stuff like calling the Rohingya ‘Bengalis’ and saying they don’t belong in the country.”

These posts are widely shared and generate thousands of likes.

https://www.pri.org/stories/2017-11-01/myanmar-fake-news-spread-facebook-stokes-ethnic-violence

 

Social media messaging has driven much of the rage in Myanmar. Though widespread access to cellphones only started a few years ago, mobile penetration is now about 90 percent. For many people, Facebook is their only source of news, and they have little experience in sifting fake news from credible reporting.

One widely shared message on Facebook, from a spokesman for Ms. Aung San Suu Kyi’s office, emphasized that biscuits from the World Food Program, a United Nations agency, had been found at a Rohingya militant training camp. The United Nations called the post “irresponsible.”

 

(Craig Mod) Almost all of the farmers we spoke with were Facebook users. None had heard of Twitter. How they used Facebook was not dissimilar to how many of us in the West see and think of Twitter: as a source of news, a place where you can follow your interests. The majority, however, didn’t see the social platform as a place to be particularly social or to connect with and stay up to date on comings and goings within their villages.

https://www.theatlantic.com/technology/archive/2016/01/the-facebook-loving-farmers-of-myanmar/424812/

 

Stevan Dojcinovic, who runs an independent nonprofit investigative news outlet in Serbia, wrote a New York Times op-ed titled “Hey, Mark Zuckerberg: My Democracy Isn’t Your Laboratory,” in which he says: “for us, changes like this can be disastrous. Attracting viewers to a story relies, above all, on making the process as simple as possible. Even one extra click can make a world of difference. This is an existential threat, not only to my organization and others like it but also to the ability of citizens in all of the countries subject to Facebook’s experimentation to discover the truth about their societies and their leaders.”

That’s why Mark Zuckerberg’s arbitrary experiments are so dangerous. The major TV channels, mainstream newspapers and organized-crime-run outlets will have no trouble buying Facebook ads or finding other ways to reach their audiences. It’s small, alternative organizations like mine that will suffer. A private company, accountable to no one, has taken over the world’s media ecosystem. It is now responsible for what happens there. By picking small countries with shaky democratic institutions to be experimental subjects, it is showing a cynical lack of concern for how its decisions affect the most vulnerable.

 

 

Congress is trying to do an end run around one of the pillars of online free speech

Free speech on the Internet is a controversial topic these days, thanks to Russian-backed troll armies distributing misinformation on Twitter and Facebook, Nazi sympathizers preaching hate, and the daily harassment that women and people of color are subjected to on many social platforms.

For all of its flaws, however, the freedom that the web allows is a critical part of what makes it such a powerful tool, not just for tweeting or sharing baby photos but for journalism of all kinds, including “citizen journalism,” crowdsourcing, eyewitness reporting and collaborative journalism. The web gives anyone the ability to publish, in some cases anonymously, and while that can facilitate hateful behavior, it can also reveal important secrets.

Free-speech advocates — including the Electronic Frontier Foundation and the Center for Democracy and Technology — are afraid that a bill currently making its way through Congress could significantly weaken those freedoms, and that the repercussions for online speech could be severe.

In the United States, one of the most critical planks supporting free expression online is a section of the 1996 Communications Decency Act known as Section 230, often referred to as the “safe harbor” clause, which the EFF describes as “the most important law protecting Internet speech.”

Section 230 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In a nutshell, this clause gives any online service provider immunity from legal liability for the content that its members or users post (with exceptions for federal criminal law and intellectual-property claims).

This means that platforms like Facebook and Twitter and Amazon can’t be sued if one of their users publishes something that is libelous or offensive. But it also protects much smaller companies, platforms, and online communities from similar kinds of liability, and it protects digital news companies and online publishers from being taken to court over the comments that readers post on articles.

The bill that the EFF and others are so concerned about is called the Stop Enabling Sex Traffickers Act, or SESTA, which would amend Section 230. The bill was approved by the Senate commerce committee this week.

According to its main sponsor, Republican Senator Rob Portman of Ohio, the legislation is supposed to make it easier to crack down on sex trafficking, which is facilitated in some cases through online services like Backpage, a provider of adult classified-ad listings that is currently facing a potential grand jury indictment.

Most people would agree that bringing an end to sex trafficking is a noble goal — although there are those who disagree about whether SESTA will be able to do so (some experts believe it could actually expose sex trafficking victims to more harm, and make it more difficult to stop the practice). But in the process of reaching that goal, the proposed law could blast a large hole right through the free-speech protections of Section 230.

“An Internet without Section 230 is one that diminishes the voice of the individual online, limits our access to information and diverse platforms for our speech, and pressures all intermediaries to act as gatekeepers and judge user content,” says Nuala O’Connor, president and CEO of the Center for Democracy and Technology.

While it is celebrated by free-speech advocates, not everyone is a fan of Section 230. Some observers say there is a growing belief in Washington that the law gives Internet companies too much freedom, and that its protections should be loosened so the government can hold Facebook and Google accountable for things like fake news and hate speech.

There appears to be “increasing skepticism about Section 230 inside the Beltway, and in fact increasing skepticism about Silicon Valley,” says Eric Goldman, an expert in Internet law at Santa Clara University. “There’s a widespread fear that Internet companies are causing society’s ills rather than just holding a mirror up to them.”

SESTA’s critics warn that the proposed law could lead to a significant smothering of online speech of all kinds, not just speech about sex trafficking. That’s because the bill creates a new kind of liability by making it a crime to “knowingly facilitate, assist or support” any such activity.

Daphne Keller of the Stanford Center for Internet and Society says that the new law could push some platforms and publishers to crack down on a wide variety of speech, to avoid the threat of lawsuits. It would give them “a reason to err on the side of removing Internet users’ speech in response to any controversy,” she says, “and in response to false or mistaken allegations, which are often levied against online speech.”

Cindy Cohn, executive director of the EFF, said in an interview that she fears the bill will put pressure on small websites and online communities in particular, and some might decide to shut down for fear of lawsuits, while others might never get into the market at all. And the web in general would ultimately be the poorer for it.

“I worry about this a lot, because we’re already in a place where only a few places are hosting people’s speech, and now there’s a lot more pressure on them to limit what people can say on these platforms,” says Cohn. “It will shrink the number of voices because it will shrink the number of places that are willing to host those voices. Ultimately it won’t be worth it to host a bulletin board or comments, and that will just entrench the big guys.”

Goldman says that even after an amendment this week that tried to tighten up the definition of what constitutes “knowledge of conduct,” the language in the bill is still far too broad, and could wind up catching all kinds of other activity in its net.

Not only that, but he says SESTA could potentially create a kind of boomerang effect, by creating a perverse incentive for some sites to ignore all sexually related posts or behavior — since doing anything about them would suggest knowledge, and therefore liability if they miss something.

“If a site decides the best strategy is to dial back its efforts to moderate content” so that they can claim not to have knowledge, he says, “the bill could have the counterproductive result of exacerbating other types of antisocial behavior, because some companies won’t bother to moderate at all.”

Senator Ron Wyden, who co-wrote the original Section 230 clause of the Communications Decency Act, has said he opposes SESTA because of the damage it could do to online speech, and to startups that rely on Section 230’s protections to remain in business, or even to make their businesses viable at all. “The bill that we’re looking at today is the wrong answer to a serious problem,” he told the Senate commerce committee in September.

This week, Wyden put a hold on the bill, in the hope that some senators might reconsider their support. But the pressure to do something about sex trafficking is intensifying, and with industry groups like the Internet Association behind it and widespread support in Congress, observers say SESTA stands a good chance of becoming law.

And if it does, it could significantly curtail speech online, in ways that will affect not just large social platforms like Facebook and Twitter but media sites and online publishers of all kinds.

Twitter bots are interfering in more than just elections, and Google isn’t helping

By now, most of us are probably familiar with the idea that large numbers of fake and automated Twitter and Facebook accounts, many of them run by trolls linked to the Russian government, created and amplified misinformation in an attempt to interfere with the 2016 election. But this wasn’t just a one-off incident—trolls of all kinds continue to use bots to try to influence public opinion in a variety of ways.

To take one of the most recent examples, there is some evidence that automated Twitter accounts have been distributing and promoting controversial race-related content during the gubernatorial race in Virginia, which is currently underway. According to a study by Discourse Intelligence, whose work was financed by the National Education Association, more than a dozen partially or fully automated accounts were involved.

The activity relates to a video advertisement produced by the Latino Victory Fund, which shows a child having a nightmare in which a supporter of Republican candidate Ed Gillespie chases immigrant children in a pickup truck that is decorated with a Confederate flag. The study said the accounts had the potential to reach over 650,000 people.

One of the biggest problems with this kind of misinformation, from a media point of view, stems from the way the media industry now functions, and particularly its focus on traffic-generating clickbait and other revenue-driven behavior: if the message being promoted by fake and automated accounts becomes loud or persistent enough, it is often picked up by traditional media outlets, which can exacerbate the problem by giving it legitimacy.

In one prominent case, a fake and largely automated Twitter account run under the invented persona of Jenna Abrams, a Trump-loving young woman, was widely quoted not just on right-wing news sites such as Breitbart or on conservative-leaning networks like Fox News, but in plenty of other places as well, including USA Today and even the Washington Post. The account was created by a Russian “troll factory.”

In each of these kinds of cases, the life-cycle or trajectory of such bits of misinformation reinforces just how fragmented and chaotic the media landscape has become: Misinformation from notorious troll playgrounds like 4chan or Reddit makes its way to Twitter and/or Facebook, gets promoted there by both automated accounts and unwitting accomplices, and then gets highlighted on news channels and websites.

Mainstream media outlets like Fox News, for example, helped promote the idea that “anti-fa” or anti-fascist groups were planning a weekend uprising in an attempt to overthrow the US government, an idea that got traction initially on Reddit and 4chan and appears to have been created by alt-right and fake news sites such as InfoWars.

After the Texas church shooting over the weekend, tweets from alt-right personality Mike Cernovich—who was also instrumental in promoting the so-called “Pizzagate” conspiracy theory that went viral during the 2016 election—were highlighted in Google search, in the Twitter “carousel” that appears at the top of some search results. The tweets contained misinformation about the alleged shooter’s background, including reports that he was a member of an “anti-fa” group and that he had recently converted to Islam.

Google has come under fire—and deservedly so—for a number of such cases, including one in which a misleading report from 4chan appeared at the top of search results for information on the mass shooting in Las Vegas. The company apologized, and senior executives have said privately that they are trying hard to avoid a repeat of such incidents, but the misinformation that surfaced in tweets about the Texas shooting shows there is still much work to be done.

The search giant got off relatively easily at the recent hearings before both the Senate and House intelligence committees, with most of the criticism and attention focused on the behavior of social networks like Twitter and Facebook. And while Google might argue that it’s Twitter’s fault if misinformation is promoted by trolls during an election, if those tweets show up prominently in its search results, then it is also Google’s problem.

The giant tech platforms all say they are doing their best to make headway against misinformation and the fake and automated accounts that spread it, but critics note that until recently the companies denied much of this activity was even occurring. Facebook, for example, initially insisted there was no evidence that Russian-backed accounts were targeting fake news and divisive ads at US voters.

At the Congressional hearings, representatives for Google, Facebook and Twitter all maintained that fake and automated activity is a relatively small part of what appears on their networks, but some senators were skeptical.

Twitter, for example, reiterated to Congress the same statistic it has used for years, which is that bots and fake accounts represent less than 5% of the total number of users, or about 15 million accounts. But researchers have calculated that as much as 15% of the company’s user base is made up of fake and automated accounts, which would put the total closer to 50 million. And a significant part of their activity appears to be orchestrated.
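
For scale, here is a minimal sketch of what those two percentages imply in absolute terms, assuming a user base of roughly 330 million monthly active users (a figure the reports themselves do not give):

```python
# What the competing bot estimates imply, given an assumed user base.
monthly_active_users = 330_000_000   # assumption, not from the cited reports

twitter_estimate = 0.05 * monthly_active_users      # "less than 5%"
researcher_estimate = 0.15 * monthly_active_users   # "as much as 15%"

print(f"Twitter's figure:     ~{twitter_estimate / 1e6:.0f} million accounts")
print(f"Researchers' figure:  ~{researcher_estimate / 1e6:.0f} million accounts")
```

That works out to roughly 16 million accounts by Twitter’s math and close to 50 million by the researchers’, in line with the totals cited above.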

Whether any of this activity is actually influencing voters in one direction or another is harder to say. Some Russian-influenced activity during the 2016 election appeared to be designed to push voters towards one candidate or another, but much of it—as described in Facebook’s internal security report, released in April—seemed to be designed to just cause general chaos and uncertainty, or to inflame political divisions on issues like race.

As with most things involving this kind of behavior, it’s also difficult (if not impossible) to say exactly how much of this was organized by malicious agents intent on disrupting the election in favor of one candidate or another, and how much of it was simply random bad actors trying to cause trouble.

The Internet Research Agency, a Kremlin-linked entity that employed a “troll army” to promote misleading stories during the election, is the most well-known of the organized actors employing these methods. But there are undoubtedly more, both within and outside Russia, and all three of the tech giants admitted at the Congressional hearings that they have only scratched the surface when it comes to finding or cracking down on this kind of behavior.

 

Tech platforms would like to have their cake and eat it too

The major tech companies did what they probably hoped was the requisite amount of bowing and scraping before the assembled members of both the House and the Senate intelligence committees on Wednesday, after being called on the carpet for their role in distributing Russian-backed ads and fake news during the 2016 election. But tangible commitments from the tech giants were few and far between.

Once the political rhetoric was swept away, representatives from Facebook, Google, and Twitter admitted they make money (in some cases quite a lot of it, as Facebook reported a record profit of $4.7 billion for the latest quarter) from their advertising businesses. And because of the structure of their platforms, all admitted that some of that money inevitably comes from fake accounts, including—as it turns out—agents of the Russian government.

Google, in fact, said that while Twitter has banned the Russian government-backed media outlet RT from its platform, the search giant had no plans to stop RT from advertising on YouTube, which has reportedly become a significant part of the Russian outlet’s media campaign. Why? Because its behavior hasn’t breached Google’s rules, the company said.

In a nutshell, the trio were adamant (in a deferential way, of course) that while they look and behave very much like media companies, they will resist attempts to force them to abide by the same kinds of rules. Each committed to taking steps to add disclosure to their ads, in the hope that doing so might blunt the need for legislation, which the Senate is currently working on.

The three repeated many of the same tropes in their testimony that they trotted out in the Senate judiciary committee meeting on Tuesday. Namely, that malicious behavior by fake accounts created by Russian troll farms was relatively minor in scope compared to the size of their vast platforms, that they recognize how disturbing these incidents were—and they feel terrible about it—and that they are working hard to prevent it from happening again.

In all three cases, the companies appear to be trying desperately to have their cake and eat it too: Arguing that the number of fake accounts or dubious ads or malicious actors represents only a tiny fraction of the activity on their platforms (0.004 percent, according to Facebook) while telling advertisers and corporate users how effective their advertising and reach is.

As more than one senator pointed out during questioning, one of the best advertisements for the effectiveness of the platforms is the amount of influence that Russia’s troll farms were able to purchase for so little money. And advertisers are clearly getting that message loud and clear.

Facebook, for example, admitted at the hearings that almost 150 million users were exposed to the fake ads and accounts that were created by the Kremlin-backed entity known as the Internet Research Agency, after initially saying just a few million were exposed (and even earlier claiming there was no evidence of Russian involvement at all). And what did all of that exposure cost the Russian outfit? About $100,000.
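
Taken at face value, those numbers imply an astonishingly low cost per person reached. A minimal sketch of the arithmetic, using only the figures reported above:

```python
# Cost-per-user arithmetic for the Internet Research Agency's reported spending.
users_exposed = 150_000_000   # Facebook's revised exposure estimate
ad_spend = 100_000            # reported spending, in dollars

cost_per_user = ad_spend / users_exposed
print(f"~${cost_per_user:.5f} per user reached")   # about $0.00067, or 1/15 of a cent
```

In other words, less than a tenth of a cent per user, which helps explain why senators saw the episode as an advertisement for the platforms’ reach.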

In some cases, campaigns that cost just $1,200—several of which were displayed by senators during the hearings and released to the public afterwards, including pages with names like South United and Blacktivists—got the fake accounts huge numbers of followers and engagement.

https://twitter.com/dnvolz/status/925796721002799105

While Facebook in particular tried hard to keep the conversation focused on the advertising issue, several members of the Senate committee pointed out that a far larger problem is the reach and influence of so-called “organic” posts—which don’t cost anyone anything and, according to Facebook’s general counsel, are as a result far more difficult to track.

This is a crucial point. Unlike traditional media outlets, where advertising and editorial are kept relatively separate, one of the core features of a social network like Facebook is that virtually any piece of content on the platform can become an ad. That feature has helped the company pull tens of billions of dollars of advertising away from traditional media entities, to the point where it and Google now control a majority of the digital ad business.

And what exactly are the platforms doing to try and prevent similar problems in the future? This was a question repeated over and over throughout the proceedings, but the answer isn’t at all clear, and in fact it got murkier and murkier as the hearings continued.

All three of the companies said they are working on improving their automated systems so they could detect potential fake or malicious accounts better and faster—Twitter claimed it has gotten twice as good as it used to be, and now challenges 4 million potentially fake accounts every week. Facebook talked about partnering with other companies on a cyber-threat team, and said it is doubling the number of people it has working on security to 20,000.

When pressed, however, all three admitted that they probably haven’t discovered all of the malicious activity on their platforms, and that there is likely to be much more to come—including more Russian-linked activity. And what, if anything, should Congress be doing about that? Shrugs all around (but deferential shrugs, of course).

Each of the platforms also demurred when pressed on some of the steps that senators and members of the House committee thought might be worthwhile, such as notifying users who had been the target of fake ads and accounts. Too difficult, Facebook said.

The tech platforms each have their own reasons for trying to jiu-jitsu their way out of the government’s clutches. In Twitter’s case, it is desperately trying to hang onto its status as a network that permits anonymity and stresses free speech, something that came under fire repeatedly during the hearings. But a lack of action could inflame the desire of some legislators to regulate the tech giants, since many believe they are already too powerful.

And what form might that legislation take? That remains to be seen, but proposals from critics have so far run the gamut from requiring better advertising disclosure to subjecting some or all of the tech giants to the full weight of US antitrust legislation, or fine-tuning the “safe harbor” that internet giants currently enjoy when it comes to offensive content.

As Senator Dianne Feinstein put it during the hearings: “You created these platforms, and now they’re being misused. And you have to be the ones who do something about it—or we will.” And that is likely to strike fear into the hearts of even the most powerful tech giant.