Blog posts for CJR

March 12: Apple announced on March 12 that it has acquired Texture for an undisclosed sum. Often called “the Netflix of magazines,” Texture gives readers access to over 200 popular magazines through its app and website for a single monthly fee. It was originally called Next Issue Media when it launched in 2012, and had raised $130 million in venture funding before the acquisition. Said Apple executive Eddy Cue:

“We’re excited Texture will join Apple, along with an impressive catalog of magazines from many of the world’s leading publishers. We are committed to quality journalism from trusted sources and allowing magazines to keep producing beautifully designed and engaging stories for users.”

In an interview at the South by Southwest conference following the news, Cue said that Apple would be integrating Texture into Apple News, and that the company is committed to curating the news to remove fake news. Part of the goal of Apple News and acquiring Texture, he said, is to avoid “a lot of the issues” happening in the media today, such as the social spread of inaccurate information.


March 12: The European Union released the final report from its High Level Expert Group on Fake News, entitled “A Multi-Dimensional Approach to Disinformation,” on March 12. Several of the experts involved in fact-checking and tracking disinformation, including Claire Wardle of First Draft and Alexios Mantzarlis of the International Fact-Checking Network, summed up the main points of the report in a Medium post, which said the report’s contributions include:

“Important definitional work rejecting the use of the phrase ‘fake news’; an emphasis on freedom of expression as a fundamental right; a clear rejection of any attempt to censor content; a call for efforts to counter interference in elections; a commitment by tech platforms to share data; calls for investment in media and information literacy and comprehensive evaluations of these efforts; as well as cross-border research into the scale and impact of disinformation.”

Among other things, the group notes that at a time when many governments are trying to pass laws aimed at stamping out fake news, this is not the right approach. “Many political bodies seem to believe that the solution to online disinformation is one simple ‘fake news’ law away, [but] the report clearly spells out that it is not. It urges the need for caution and is sceptical particularly of any regulation of content.”


March 11: Joshua Geltzer, executive director of Georgetown Law’s Institute for Constitutional Advocacy and Protection and former senior director for counterterrorism at the National Security Council, writes in Wired that the Russian trolls who tried to manipulate the 2016 election didn’t abuse Facebook or Twitter; they simply used those platforms in the way that they were designed to be used:

“For example, the type of polarizing ads that Facebook admits Russia’s Internet Research Agency purchased get rewarded by Facebook’s undisclosed algorithm for provoking user engagement. And Facebook aggressively markets the micro-targeting that Russia utilized to pit Americans against each other on divisive social and political issues. Russia didn’t abuse Facebook—it simply used Facebook.”

Geltzer says the major web platforms need to do a much better job of removing or blocking malicious actors who try to use their systems for nefarious purposes, and he also says that Facebook, Google and Twitter need to be much more transparent about their algorithms and how they operate. That kind of openness, he says, “could yield crowd-sourced solutions rather than leaving remedies to a tiny set of engineers, lawyers, and policy officials employed by the companies themselves.”


March 10: Sociologist Zeynep Tufekci wrote in an essay published in the New York Times on March 10 about experiments she performed on YouTube during the 2016 election, in which she noticed that no matter what kind of political content she searched for, the recommended videos were always more extreme and inflammatory, whether politically or socially. This is a vicious circle, she writes:

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Tufekci mentions research done by former YouTube engineer Guillaume Chaslot, who worked on the video platform’s recommendation algorithm and spoke to CJR recently about his conclusions. Like Tufekci, he found that the videos being recommended on the site were overwhelmingly contentious and inflammatory, including many that promoted conspiracy theories, because that kind of content makes people click and spend more time on the site, and that serves Google’s business interests.


March 9: NewsWhip, an analytics company that measures social-media activity, looked at its data and came up with a list of the news reporters who got the most engagement on Facebook in February. Number one was Ryan Shattuck, of the satirical news site The Onion. Number two was Jonah Urich, who works for Truth Examiner, a left-wing site known for posting sensationalized political news. The Daily Wire, another hyper-partisan political news site, also took several spots in the top 10. As NewsWhip described it:

“Beyond the Onion, the top authors were primarily from hyper-partisan sources like the Daily Wire, Truth Examiner, Breitbart, Washington Press, and several small but politically-charged sites. Horrifyingly enough, two authors from fake news sites featured. An author from the fake news site Your Newswire was towards the top of our list, ranking in at #12. Baxter Dmitry wrote 81 articles in February, driving more than 1.7 million Facebook interactions.”

Facebook has said it plans to change its algorithm so that more “high quality” news shows up in the News Feed, but that could be easier said than done. The company said it would rank news sources based in part on whether they drive engagement and discussion, and what NewsWhip’s data reinforces is that the most engaging content is often fake, or at least highly sensationalized.


March 9: Most of the attention around fake news has focused on Facebook and YouTube, but other apps and services can also play a role in spreading misinformation, as Wired points out in a March 9 piece on the use of Facebook-owned messaging app WhatsApp in Brazil. Use of the app is apparently complicating the country’s attempts to deal with an outbreak of yellow fever, because of false reports about vaccinations:

“In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and proliferated fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform.”

According to Wired, among the conspiracy theories circulating about the vaccination program are an audio message from a woman claiming to be a doctor, warning that the vaccine is dangerous, and a fake-news story connecting the death of a university student to the vaccine. As similar reports about the impact of Facebook in countries like Myanmar have shown, social media-driven conspiracy theories may be merely annoying in the US, but in other parts of the world they can actually endanger people’s lives.


March 8: Renee DiResta, a researcher with New Knowledge and a Mozilla fellow specializing in misinformation, argues that by using Facebook to spread fake news during the 2016 election, the “Russian troll factory” known as the Internet Research Agency was duplicating a strategy initially developed by ISIS, which used digital platforms and social-media methods to spread its message.

“The online battle against ISIS was the first skirmish in the Information War, and the earliest indication that the tools for growing and reaching an audience could be gamed to manufacture a crowd. Starting in 2014, ISIS systematically leveraged technology, operating much like a top-tier digital marketing team. Vanity Fair called them ‘The World’s Deadliest Tech Startup,’ cataloging the way that they used almost every social app imaginable to communicate and share propaganda.”

Most of the major platforms made half-hearted attempts to get rid of this kind of content, but they were largely unsuccessful. What this showed, DiResta writes, was that the social platforms could be gamed in order to spread political messages, and that the same kinds of targeting techniques that worked for advertising could be turned to political use. And among those who were also learning this lesson, it seems, were some disinformation architects on a troll farm in Russia.
