Facebook shuts down potential Russian troll network ahead of midterms

The memory of what happened during the 2016 election is likely still fresh for Mark Zuckerberg—how he failed to take action against a Russian troll network running a misinformation campaign aimed at influencing the election, and was ordered to appear before Congress for a dressing down. This time around, the Facebook CEO is doing his best to crack down on similar behavior before it becomes a problem: The company said Tuesday it has shut down more than 30 accounts and pages that were exhibiting behavior similar to that of the former Russian troll farm known as the Internet Research Agency.

In its blog post announcing the move, Facebook said it couldn’t confirm whether the disinformation tactics it identified (which it called “coordinated inauthentic behavior”) came from Russian sources, but some observers appear to have already jumped to that conclusion. Democratic Senator Mark Warner, vice-chairman of the Senate Intelligence Committee, said in a prepared statement and on Twitter that he believes the campaign was also the work of Russian intelligence agencies. “More evidence the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation,” Warner said.

According to Facebook, the accounts in question were “more careful to cover their tracks” than the Internet Research Agency was, by using virtual private networks to disguise their location, or paying third parties to run ads. “As we’ve told law enforcement and Congress, we still don’t have firm evidence to say with certainty who’s behind this effort,” Facebook said. The company admitted, however, that some of the activity was consistent with what it saw during the election, and that there was some evidence of a connection between the latest group of accounts and the Internet Research Agency accounts disabled last year.

Congressional white paper proposes sweeping changes to how tech platforms are regulated

There have been multiple sessions in Congress over the past year looking at the failures of digital platforms such as Facebook, Google and Twitter, including their failure to properly limit the actions of trolls spreading misinformation during the 2016 election. But there have been very few concrete proposals from the government on how to deal with those failures, how to blunt the virtual monopoly some of the platforms have on certain kinds of information, or how to handle user privacy.

Democratic Senator Mark Warner hopes to fill that gap with a white paper he has been circulating in governmental and tech circles over the past few weeks, according to a report from Axios (which obtained a copy of the paper from an unknown source). The proposals contained in the paper are wide-ranging, and in some cases may even be politically impossible, but at least someone has started an official discussion about some of the potential steps Washington could take.

The paper states that the revelations of the past year, including evidence that Russian trolls manipulated Facebook, have “revealed the dark underbelly of an entire ecosystem.” It goes on to say:

“The speed with which these products have grown and come to dominate nearly every aspect of our social, political and economic lives has in many ways obscured the shortcomings of their creators in anticipating the harmful effects of their use. Government has failed to adapt and has been incapable or unwilling to adequately address the impacts of these trends on privacy, competition, and public discourse.”

When it comes to misinformation, the Warner paper proposes that platforms be required to label automated or “bot” accounts, and to do more to identify who is behind anonymous or pseudonymous accounts, with failure to do either potentially met by Federal Trade Commission sanctions. But would either of these things actually help solve the issues Congress is concerned about?

Experts in misinformation say bots are just one part of the problem, and that the behavior of what are sometimes called “cyborgs”—partially automated accounts run by human beings—is also important. And while anonymity can be a shield for some trolls, others are more than happy to engage in all kinds of bad behavior under their real names. The paper also admits that identifying users could backfire if it invades the privacy of journalists or others who have real reasons for wanting to remain anonymous.

One other significant change the Warner paper proposes is to Section 230 of the Communications Decency Act, which gives the platforms immunity from prosecution for content uploaded by their users. Because users complain that harassing or defamatory material is often re-uploaded after being removed, the white paper recommends that Section 230 be amended so that the platforms could face sanctions if they don’t prevent such material from reappearing.

In addition, the paper argues that the US should pass privacy-protection legislation similar to the General Data Protection Regulation now in force in Europe, including the right to data portability and what is often called “the right to be forgotten.” It notes, however, that in order to have a GDPR-like regime, the US would have to create a central body to administer the law, something it doesn’t currently have.

Republicans still convinced Facebook and Twitter are biased against them

If there’s one thing we can count on in these uncertain times, it’s that no matter what evidence they are presented with, many Republican members of Congress will remain convinced that the major social platforms are in league against them and are secretly using their algorithms to down-rank conservative content. The Judiciary Committee of the House of Representatives held a hearing in April on this topic—one that spent most of its time trying to decide whether Facebook had somehow censored the right-wing YouTube duo known as Diamond & Silk—and it held a second hearing on Tuesday.

The executives testifying before the committee were Monika Bickert, Head of Global Policy Management at Facebook; Juniper Downs, YouTube’s Global Head of Public Policy and Government Relations; and Nick Pickles, Senior Strategist for Public Policy at Twitter. Committee chairman Bob Goodlatte said the hearing’s purpose was to “look at concerns regarding a lack of transparency and potential bias in the filtering practices of social media companies [and] how they can be better stewards of free speech.” But as with the first hearing, most of the discussion on Tuesday focused on individual claims by members of Congress that one or more of the social platforms was censoring conservative views.

Republican Lamar Smith of Texas asked why Google censored search terms like “Jesus, Chick-Fil-A and the Catholic religion,” although he couldn’t provide any evidence for his claim. Iowa Republican Steve King asked Facebook why right-wing news site Gateway Pundit had seen its traffic drop. Neither comment drew much response from the platforms (Facebook said it couldn’t comment on individual pages). On a more serious note, Goodlatte and others also raised the question of whether the social platforms should still be protected by Section 230 of the Communications Decency Act.

For the most part, the platforms stuck to their argument that they are neutral when it comes to content, and that they don’t deliberately prejudice their algorithms against conservative posts. But it was clear the repeated allegations of bias have hit their mark, and the platforms seem nervous. As The Washington Post reported last month, both Facebook and Twitter had back-room meetings with conservative celebrities and pundits to reassure them they aren’t biased, and at the beginning of Tuesday’s committee hearing, Monika Bickert of Facebook apologized for “mishandling” the Diamond & Silk situation.

Some Democratic members said the platforms weren’t doing enough to remove offensive content, including sites peddling dangerous conspiracy theories such as Infowars. Ted Lieu of California, meanwhile, said the hearing was a waste of time, and that members of the committee should have been investigating the Russian infiltration of the NRA, instead of “how many Facebook likes Diamond & Silk should be entitled to have.” He said the only thing worse than a video from Alex Jones of the conspiracy site Infowars was the idea of the US government holding a hearing about content published on a private platform.

Here’s more on the social platforms and their struggles with Congress:

  • Facebook as utility: In addition to criticizing Facebook for allegedly restricting the traffic of Gateway Pundit, Republican Steve King mused during the hearing about whether the social network and other massive tech platforms should be subject to the ultimate penalty. “What about converting the large behemoth organizations that we’re talking about here into public utilities?” he asked.
  • No Infowars ban: One theme that Democratic members returned to multiple times during the hearing was why Facebook wouldn’t just ban misinformation providers such as Infowars. “How many strikes does a conspiracy theorist who attacks grieving parents and student survivors of a mass shooting get?” asked Ted Deutch of Florida. Bickert said fake news doesn’t breach the site’s terms of service, but tweaks to the News Feed algorithm are designed to down-rank such sites.
  • Appeasement: Just days before the hearing, a group of senior media executives met with Facebook and some criticized the company for bending over backwards to appease conservatives, according to The Wall Street Journal. BuzzFeed editor Ben Smith said the number of conservative news sites at the meeting suggested Facebook had bought into the idea “that mainstream outlets such as the New York Times are liberal and should be counterbalanced by right-leaning opinion outlets.”
  • Three strikes: While Juniper Downs of YouTube was fairly straightforward on how many strikes a news outlet had before being blocked for posting offensive content (three), Facebook was not nearly as forthcoming. When asked how many times a site like Infowars would be able to post content that breached the site’s guidelines, Bickert waffled and would only tell the committee that “the threshold varies depending on the severity of the infractions.”
  • QAnon fans: New York Times writer Kevin Roose pointed out on Twitter that the live comments on YouTube’s stream of the Judiciary hearing were filled with conspiracy theorists who appeared to believe the QAnon conspiracy, a series of rumors spread on various Internet forums about an alleged coup against the “deep state.” Roose called this ironic juxtaposition “kind of perfect.”

Other notable stories:

A man who worked for a Facebook contractor in Dublin moderating content on the social network said in a documentary aired on Britain’s Channel 4 network that the company lets far-right fringe groups get away with posting content that others are banned for, including hate speech. Facebook posted a response that said these examples were mistakes and that it would retrain its moderators so they wouldn’t happen again.

Karen Ho and Alexandria Neason write for CJR about the return of former long-time WNYC radio host Leonard Lopate, who has a new show on WBAI, a progressive station based in Brooklyn. Lopate was suspended from WNYC and eventually fired after reports of inappropriate conduct.

A federal judge lifted a controversial order that would have required the Los Angeles Times to remove information it had published in a story about a former Glendale police detective who was accused of working with the Mexican mafia. The information was supposed to have been sealed by the court, but was posted to a public database of court documents by mistake. The judge said the paper could publish the information but he warned it to be careful because of the danger it might put the defendant in.

Isaac Lee, the head of content for Univision and architect of the company’s Fusion expansion, is stepping down from his position and plans to start his own TV production company, according to a report in Variety magazine. Univision recently changed CEOs and said it is looking to sell some of its holdings, including Gizmodo Media Group and The Onion, acquisitions Lee spearheaded.

Andy Kroll writes for California Sunday Magazine about Congressman Adam Schiff, the highest-ranking Democrat on the House Intelligence Committee, and how he has gone from being a mild-mannered politician without much of a public profile to the unlikely hero of the Democratic party for his role in pushing for an investigation of the Trump campaign’s ties to the Russian government.

What if you had to reinvent the media ecosystem from the ground up? Civil is trying

Imagine, for a moment, that the media ecosystem as we know it has ceased to exist. There are still journalists and readers, but all the traditional distribution methods and revenue streams are unavailable. How would you design a new ecosystem from scratch? How would you build a financially viable publishing platform that would also inherently support journalistic values?

This, in a nutshell, is what Civil founder Matthew Iles is trying to build: A global platform for independent journalism, powered by blockchain technology and cryptocurrency, governed by an open-source constitution—including an advisory council that will act as a kind of Supreme Court to adjudicate disputes—and run as a non-profit foundation. In addition, there’s a related for-profit company called Civil Media, which will sell services of various kinds to platform users and publishers.

Civil hasn’t launched its cryptocurrency yet, but the platform already hosts newsrooms like Popula, which is run by writer Maria Bustillos, and Block Club Chicago (a reboot of the Chicago version of the former DNA Info network) as well as a New York-based project called Documented that is tracking immigration issues. Each one has gotten seed funding from a $1-million pool provided by Civil. Civil itself is funded by a $5 million grant from ConsenSys, a developer working with the Ethereum blockchain.

It’s an ambitious goal, to not only launch a new blockchain platform, but also a crowdsourced constitution, a foundation and a council of expert advisers all at the same time. It’s a bit like creating a virtual country, complete with citizens who vote, an economy, a court system and a government—but the structure of this country is unlike anything that has come before it. As Vivian Schiller, the former NPR and Twitter executive who recently joined Civil to run its non-profit foundation, put it in a Medium piece about her new job:

Elon Musk’s transition from hero to zero is almost complete

If you have an anti-Elon Musk take, you should probably publish it soon, because they are piling up. The latest was triggered by his attempt to help rescue a group of young soccer players trapped in a flooded cave in Thailand. A piece at Gizmodo said Musk’s attempt was a classic example of his empty promises, and made fun of the fact that no one wanted the mini-sub he developed. But the piece itself seems like a great example of something else: Namely, a desire to see the worst in Elon Musk, no matter what. It says:

The weird difference between some of Musk’s famous vaporiffic moonshots and the kid-sized submarine is that Musk actually built the sub. But it’s nothing more than a useless stunt. Not only did Musk show up too late to help, he showed up with a tool that wasn’t even helpful.

A similar sentiment triggered dozens of scathing Twitter memes about the dumb and publicity-hungry billionaire showing up after something is all over with the stupid invention that isn’t even necessary. But at the end of the Gizmodo piece, an update notes that Musk posted part of an email exchange he had with the man co-ordinating the rescue effort, in which the man encouraged Musk to hurry up developing the mini-sub. In other words, it wasn’t just some billionaire’s feeble attempt at PR.

Did this change anyone’s mind about Musk or the cave rescue? Not appreciably. After his tweet explaining the email exchange, new pieces appeared taking shots at him for denigrating **, because he said in his email that the man was a ** rather than an expert in cave rescues.

How did Musk suddenly become the poster child for bad billionaires? Not that long ago, he was a little-known engineering nerd working on an affordable electric car. What a great idea, everyone thought. He did a small cameo in Iron Man 2, and it seemed cute. Then it turned out he was building a reusable rocket that might go to Mars. Another great idea! Especially when it actually worked.

So what happened? The electric car turned out to be the Tesla, which is unaffordable for most normal people but took off with wealthy tech executives. Then Musk—who seems incapable of not doing five things at once—started a bunch of crazy-sounding side projects, like the Hyperloop, or his plan to dig tunnels underneath Los Angeles to avoid traffic. Almost all of these projects were seen as expensive toys designed by a short-attention-span billionaire, like his desire to shoot a Tesla into space.

Musk has also taken fire for the amount of debt he has raised to fund Tesla, even as he has come up short on production of the latest model, and he responded in a somewhat childish way by attacking the media for reporting on him. At one point, he even proposed starting a service that would automatically rank sources of trustworthy journalism, a service he sarcastically said would be called Pravda—which of course is the name of a notoriously unreliable Russian government newspaper.

When he was still a plucky, little-known entrepreneur, Musk’s try-anything attitude and somewhat wacky and combative Twitter persona seemed endearing. But now that he is running several billion-dollar enterprises and dating an Internet celebrity (singer **, also known as Grimes), the way he shoots from the lip on almost any topic makes his Twitter account a target-rich environment for anyone wanting to cut him down to size. And there is no shortage of people who seem eager to do so.

YouTube rolls out a plan to crack down on misinformation and fund journalism

While Facebook has taken the brunt of the criticism over fake news, YouTube has also become a target of late for those who believe the video-sharing site isn’t doing enough to stem the flow of misinformation. Sociologist Zeynep Tufekci has called it “an engine for radicalization,” because the YouTube algorithm continually suggests extreme content, and a former Google engineer who worked on the algorithm agrees, telling CJR this behavior was designed as a way of boosting engagement.

The Google-owned site appears to have heard some of these criticisms, because it just announced new features it says should help cut down on the spread of misinformation through the platform, along with a $25 million funding program the company says is aimed at fostering innovation at news organizations—money that comes from the recently announced $300-million Google News Initiative. The first feature being rolled out is an “information panel” that will pop up on top of search results involving breaking news stories, with links to news articles about the event from “authoritative sources.”

And who defines what qualifies as an authoritative source? YouTube, of course. According to the announcement, Fox News fits into that category, something a number of observers say is problematic at best. In any case, the feature is designed to help avoid some of the embarrassing moments YouTube has suffered in the past, when conspiracy theories and hoaxes popped up among the top recommended videos for news events such as the shooting at a high school in Parkland, Florida. At one point, most of the top 10 recommended videos about that event said the victims were “crisis actors.”

Twitter finally ramps up its crackdown on fake and automated accounts

For years now, Twitter has been accused of being too soft on trolls, spam, and fake accounts. But the service appears to be trying to make up for lost time: According to a report from The Washington Post, based on anonymous sources with knowledge of the company’s inner workings, Twitter has dramatically ramped up the rate at which it suspends fake accounts. It is now suspending as many as one million every day, and has shut down over 70 million since April. But the process could backfire.

One reason why the company hasn’t taken concerted action against such fakes in the past is that they boost the service’s user numbers, which makes it look more popular, and therefore satisfies investors. That helps explain why Twitter’s share price dropped by as much as 8 percent on Monday, the first trading opportunity after the Post report came out: Investors were concerned that weeding out fakes might hit Twitter’s user numbers. But Twitter’s chief financial officer sought to reassure them:

Whether or not the accounts suspended so far were largely inactive, the moves are still likely to have an impact on Twitter’s user base, and market watchers say that could trim some of the recent enthusiasm for the stock. If the inactive accounts were the low-hanging fruit, then future suspensions could have even more impact on its numbers, as Twitter has to suspend some of the more active accounts (assuming it wants to continue its campaign to root them out).

One interesting aspect of the Post story is that a group of Twitter employees concerned about trolls and fakes appears to have mounted a kind of guerrilla campaign within the company to get it to see the dangers of allowing such accounts to run rampant. The story describes a “white hat” attempt to call attention to the problem, and says the project—code-named Operation Megaphone—remained secret from Twitter executives, including head of trust and safety Del Harvey.

“The name of the operation referred to the virtual megaphones — such as fake accounts and automation — that abusers of Twitter’s platforms use to drown out other voices. The program, also known as a white hat operation, was part of a broader plan to get the company to treat disinformation campaigns by governments differently than it did more traditional problems such as spam, which is aimed at tricking individual users as opposed to shaping the political climate.”

Another reason Twitter might have been reluctant to pursue an all-out campaign against fakes and trolls is that doing so could open the company up to further charges of being biased against conservatives, something it has already been fighting hard to deny at private dinners between CEO Jack Dorsey and prominent conservative commentators and celebrities. If someone showed that many of the suspended accounts were right-wing, that would give critics even more ammunition.

Meanwhile, at least one conservative Twitter user believes the company should use its expanded suspension powers on other kinds of fakes, such as the “fake news” purveyors he says are covering his administration unfairly, including The New York Times and the Post. After the Washington Post report was published, President Donald Trump tweeted:

What should Facebook be doing to stop the WhatsApp rumor mill?

A wave of mob violence continues to roll across India—beatings and lynchings that appear to be related to conspiracy theories circulating on WhatsApp. In the most recent episode last Sunday, five people were lynched by a mob who believed they were child kidnappers. As CJR has described, one problem with trying to stop the spread of misinformation on the service is that it is encrypted end-to-end, so neither WhatsApp nor its parent company Facebook can see the messages being distributed.

It’s like trying to stop conspiracy theories being spread by people calling each other on the phone. Are there ways to stop such things? Yes, but the solution could turn out to be worse than the problem.

The Indian government, however, doesn’t see it that way. The country’s information ministry sent a strongly-worded letter to WhatsApp this week, saying it “cannot evade accountability and responsibility” for the abuse on its platform. The government also ordered the company to “take immediate action to end this menace.” In a response, WhatsApp executives argued that they can’t solve the problem alone, and that false news, misinformation and the spread of hoaxes “are issues best tackled collectively by government, civil society and technology companies working together.”

WhatsApp said it is “horrified by these acts of violence,” and that it has taken a series of steps recently to try to cut down on misinformation, including giving WhatsApp group administrators more power over who gets to send messages. The company also said it will give up to $50,000 to researchers to study the problem. But is this enough? Nikhil Pahwa doesn’t think so. The publisher of a site called Medianama, Pahwa wrote about some of the steps he thinks WhatsApp should take:

“Change #1: Users can make messages either public (media) or private (P2P message). The default setting for all messages should be private. This will impact virality on the platform, but that’s a price it will have to pay for bringing in accountability. This will create a level of friction while forwarding: [users] will be frustrated when they cannot forward certain messages.”

Pahwa also argued that WhatsApp could make it easier for users to flag certain messages as misinformation or hoaxes, and they could then be reviewed by WhatsApp moderators the same way spam is. Other users responding to his post said it should be easy enough to delete these messages not just in a few accounts but anywhere they were shared across the network. A proposal from Pahwa that suggested every public message should have a unique ID tagged to its creator, however, got some pushback:

Much like other Internet-related issues, WhatsApp’s deadly rumor mill is not an easy problem to solve. The anonymity and encryption of WhatsApp are two features that make the app so appealing for many, in particular for dissidents and others who want to communicate without fear of being identified. And yet, those same features also enable or empower trolls and bad actors to misuse the platform for their own purposes. How do you stop one without also crippling the other? Meanwhile, some believe that in this case, India itself is more to blame for the misinformation problem than WhatsApp.

Quartz sale to Japanese company doesn’t give media outlets much to cheer about

There are a couple of different ways to look at the acquisition of Quartz, which announced early Monday morning that parent company Atlantic Media is selling the site to a Japanese financial information provider called Uzabase. On the one hand, for a media startup to get between $75 million and $110 million after only six years in business is probably cause for some celebration, given an industry environment in which even giants like BuzzFeed are missing revenue targets, and one-time superstars like Mashable are selling themselves for a fraction of their previous value (The Atlantic magazine was sold last year for an undisclosed price to Emerson Collective, which is controlled by billionaire Laurene Powell Jobs).

That said, however, the Quartz deal doesn’t give media insiders much to celebrate from a financial point of view. Based on figures from a Uzabase slide presentation about the acquisition, even the higher end of the proposed sale price (which is based on certain subscription targets being met over the next five years) amounts to about 2.5 times Quartz’s projected revenues for this year, and less than four times last year’s revenue. That’s not quite at the low end of prior deals, but neither is it at the high end, which was set by Axel Springer’s acquisition of Business Insider in 2015, for six times projected revenues.

To be fair, the Business Insider deal is widely seen as an anomaly. The German media giant had reportedly gotten board approval to spend as much as $1 billion on an acquisition of the Financial Times, but when the newspaper was bought by Japan’s Nikkei instead, Springer was left with a bag of cash and a hunger to expand. The muted price for Quartz could also be a result of what appears to be a revenue decline last year of about 10 percent (to $27 million), as well as a significant loss of $8 million.

Another reason the Quartz deal is likely to spark only muted celebration in broader media circles is that Atlantic Media has been shopping the site around to potential acquirers off and on since 2015, and it wound up being sold to a little-known Japanese media startup not much older than itself. Uzabase was founded in 2008 by two former investment bankers from UBS. Their original mission was to build a financial information service that could compete with Bloomberg, a service now known as Speeda, and more recently Uzabase launched a news curation/aggregation app called NewsPicks.

Although the name may not be that familiar to a lot of North American users, the company says NewsPicks has more than 64,000 users who pay $15 a month for a premium version of the app, which allows members to share and comment on news articles. That’s an income stream of almost $1 million a month, something many media companies would no doubt like to have coming in as digital advertising wanes. And Uzabase itself appears to have a strong business: The company went public in 2016 and the stock price has climbed sharply since then, giving it a market cap of almost $1 billion.

Much like the Nikkei deal for the Financial Times, the Uzabase/Quartz acquisition suggests there is a continuing appetite from Asian media entities—particularly financially-oriented ones—for outlets that have a presence in English-speaking markets. But based on the terms of the Quartz sale, no one in the media industry should get their hopes up about that translating into a massive windfall.