Facebook admits connecting the world isn’t always a good thing

One of the defining tenets of Facebook’s corporate philosophy has been the idea that connecting people around the world, both to each other and to issues that matter to them, is inherently a good thing. Co-founder and CEO Mark Zuckerberg has said the social network’s mission is “to give people the power to share and to make the world more open and connected.”

Lately, however, the company seems prepared to admit that doing this doesn’t always produce a world of sunshine and rainbows.

The United Nations recently criticized the company for its role in distributing fake news and misinformation about the persecuted Rohingya people in Myanmar, who have been driven from their homes, attacked and in some cases killed. In an interview on Slate’s If Then podcast, Adam Mosseri—the Facebook executive in charge of the News Feed—bluntly admitted that this is a serious problem.

“Connecting the world isn’t always going to be a good thing. Sometimes it’s also going to have negative consequences. The most concerning and severe negative consequences of any platform potentially would be real-world harm. So what’s happening on the ground in Myanmar is deeply concerning in a lot of different ways. It’s also challenging for us for a number of reasons.”

Mosseri went on to say that Facebook is thinking long and hard about how to solve this kind of problem. “We lose some sleep over this,” he said. Which is encouraging, because it has to be at least a little disturbing to find that the tool you created to connect the world so people could share baby photos is being used to spread conspiracy theories that encourage violence against an already persecuted minority.

For more background on how Facebook came to play this role in Myanmar, and the challenges that it faces, please see my recent piece in CJR, in which I talked to reporters who work in the region about the social network’s role in the violence there.

Spotlight on fake news and disinformation turns toward YouTube

So far, Facebook has taken most of the heat when it comes to spreading misinformation, thanks to revelations about how Russian trolls used the network in an attempt to influence the 2016 election. But now YouTube is also coming under fire for being a powerful disinformation engine.

At Congressional hearings into the problem in November, where representatives from Facebook, Google and Twitter were asked to account for their actions, Facebook took the brunt of the questions, followed closely by Twitter. Google, however, argued that since it’s not really a social network in the same sense that Facebook and Twitter are, it therefore doesn’t play as big a role in spreading fake news.

This was more than a little disingenuous. While it may not run a social network like Facebook (its attempt at doing so, known as Google+, failed to catch on), Google does own the world’s largest video platform, and YouTube has played—and continues to play—a significant role in spreading misinformation.

This becomes obvious whenever there is an important news event, especially one with a political aspect, such as the mass shooting in Las Vegas last October—where fake news showed up at the top of YouTube searches—or the recent school shooting in Parkland, Florida, where 17 people died.

After the Parkland shooting, YouTube highlighted conspiracy theories about the incident in both search results and recommended videos. At one point, eight of the top 10 results for a search on the name of one of the students who survived the shooting either promoted or discussed the idea that he was a so-called “crisis actor” rather than a real student.

When journalists and others pointed this out on Twitter, the videos started disappearing one by one, until the day after the shooting there were no conspiracy theories left in the top 10 search results. By then, however, each of those videos had racked up thousands or possibly tens of thousands of views it might never have gotten without being recommended.

This kind of thing isn’t just a US problem. YouTube has become hugely popular in India with the arrival of cheap data plans for smartphones, and after a famous actress died recently, the trending list on YouTube for that country was reportedly filled with fake news.

In part, the popularity of such content is driven by human nature. Conspiracy theories are often much more interesting than the real facts about such an event, in part because they hint at mysteries and secrets that only a select few know about. That increases the desire to read them, and to share them, and social platforms like Facebook, Twitter and YouTube play on this impulse.

Human nature, however, is exacerbated by the algorithms that power these platforms, creating a vicious circle. YouTube’s algorithm tracks people clicking and watching conspiracy theory videos and assumes that this kind of content is very popular, and that people want to see more of it, and so it moves those videos higher in the rankings. That in turn causes more people to see them and click on them.
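
To make that feedback loop concrete, here is a toy simulation of a ranker that optimizes for watch time alone. This is an illustrative sketch, not YouTube’s actual system; the position-bias and engagement numbers are invented.

```python
# Toy model of an engagement-driven ranking loop. This is not YouTube's
# real system; the position bias and engagement numbers are invented.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    avg_watch_minutes: float  # the only signal the ranker sees
    views: int = 0

def rank(videos):
    # Sort purely by engagement; accuracy and quality never enter into it.
    return sorted(videos, key=lambda v: v.avg_watch_minutes, reverse=True)

def simulate_day(videos, audience=1000):
    for position, video in enumerate(rank(videos)):
        impressions = audience // (position + 1)  # top slots get far more exposure
        video.views += impressions
        # More exposure produces more watch data, nudging the video's
        # engagement score (and therefore its future rank) even higher.
        video.avg_watch_minutes *= 1 + 0.0001 * impressions

videos = [
    Video("Sober news report", avg_watch_minutes=2.0),
    Video("Shocking conspiracy video", avg_watch_minutes=3.5),
]
for _ in range(30):
    simulate_day(videos)
for video in rank(videos):
    print(video.title, video.views)
# The video that starts with a small engagement edge ends up dominating
# impressions, because the loop only ever measures attention, not accuracy.
```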

The result is that users are pushed towards more and more polarizing or controversial content, regardless of the topic, as sociologist Zeynep Tufekci described recently in The New York Times. The platform has become “an engine for radicalization,” she says.

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Guillaume Chaslot, a programmer who worked at Google for three years, told CJR recently he noticed a similar phenomenon while working on the YouTube recommendation algorithm. He says he tried to get the company interested in implementing fixes to help solve it, but was told that what mattered was that people spent lots of time watching videos, not what kind of videos they were watching.

“Total watch time was what we went for—there was very little effort put into quality,” Chaslot says. “All the things I proposed about ways to recommend quality were rejected.”

After leaving Google, Chaslot started collecting some of his research and making the results public on a website called Algotransparency.org. Using software that he created (and has made public for anyone to use), he tracked the recommendations provided for YouTube videos and found that in many cases they are filled with hoaxes, conspiracy theories, fake news and other similar content.
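
The basic technique behind such an audit is simple to sketch: start from seed videos, follow the platform’s recommendations a few hops out, and count which videos keep being suggested. The snippet below is a minimal illustration of that idea, not Chaslot’s actual code; the hard-coded recommendation graph stands in for the scraping the real tool performs.

```python
# Minimal illustration of an AlgoTransparency-style audit: follow the
# platform's recommendations outward from seed videos and count which
# ones keep surfacing. Not Chaslot's actual code; the fixed graph below
# stands in for the scraping the real tool performs.
from collections import Counter, deque

def get_recommendations(video_id):
    # Stand-in for scraping a watch page: a tiny hard-coded recommendation
    # graph so the sketch runs end to end.
    graph = {
        "news_report": ["crisis_actor_theory", "press_conference"],
        "press_conference": ["crisis_actor_theory"],
        "crisis_actor_theory": ["deeper_conspiracy"],
        "deeper_conspiracy": ["crisis_actor_theory"],
    }
    return graph.get(video_id, [])

def crawl(seed_ids, depth=3, per_video=5):
    seen = set(seed_ids)
    counts = Counter()  # how often the engine suggests each video
    queue = deque((vid, 0) for vid in seed_ids)
    while queue:
        video_id, level = queue.popleft()
        if level >= depth:
            continue
        for rec in get_recommendations(video_id)[:per_video]:
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, level + 1))
    return counts

print(crawl(["news_report"]).most_common(3))
# The most-recommended IDs bubble to the top of the counts; in a real
# audit, those are the videos the engine funnels users toward.
```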

Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, has done his own research on YouTube, including a study in which he catalogued all of the recommended videos the site suggested after a hypothetical user clicked on a “crisis actor” video. What he found was a network of more than 9,000 conspiracy-themed videos, all of which were recommended to users as the “next up” video after they watched one alleging the Parkland shooting was a hoax.

“I hate to take the dystopian route, but YouTube’s role in spreading this ‘crisis actor’ content and hosting thousands of false videos is akin to a parasitic relationship with the public,” Albright said in a recent blog post about his research. “This genre of videos is especially troublesome, since the content has targeted (individual) effects as well as the potential to trigger mass public reactions.”

Former YouTube head of product Hunter Walk said recently that at one point he proposed bringing in news articles from Google News or even tweets to run alongside and possibly counter fake news or conspiracy theories, rather than taking them down, but that proposal was never implemented—in part because growing Google+ became more important than fixing YouTube.

Google has taken some small steps along those lines to try and resolve the problem. This week, YouTube CEO Susan Wojcicki said at the South by Southwest conference that the service will show users links to articles on Wikipedia when they search for known hoaxes about topics such as the moon landing. But it’s not clear whether this will have any impact on users’ desire to believe the content they see.

Google has also promised to beef up the number of moderators who check flagged content, and has created what it calls an “Intelligence Desk” in order to try and find offensive content much faster. And it has said that it plans to tweak its algorithms to show more “authoritative content” around news events. One problem with that, however, is it’s not clear how the company plans to define “authoritative.”

The definition of what’s acceptable also seems to be in flux even inside the company. YouTube recently said it had no plans to remove a channel called Atomwaffen, which posts neo-Nazi content and racist videos, and that the company believed adding a warning label “strikes a good balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

After this decision was widely criticized, the site removed the channel. But similar neo-Nazi content reportedly still remains available on other channels. There have been reports that Infowars, the channel run by alt-right commentator Alex Jones, has had videos removed, and that the channel is close to being removed completely. But at the same time, some other controversial channels have been reinstated after YouTube said that they were removed in error by moderators.

In her talk at South by Southwest, Wojcicki said that “if there’s an important news event, we want to be delivering the right information,” but then added that YouTube is “not a news organization.” Those two positions seem to be increasingly incompatible, however. Facebook and YouTube both say they don’t want to become arbiters of truth, and yet they want to be the main news source for information about the world. How much longer can they have it both ways?

YouTube wants the news without the responsibility

After coming under fire for promoting fake news, conspiracy theories and misinformation around events like the Parkland school shooting, YouTube has said it is taking a number of steps to try and fix the problem. But the Google-owned video platform still appears to be trying to have its cake and eat it too when it comes to being a media entity.

This week, for example, YouTube CEO Susan Wojcicki said at the South by Southwest conference in Texas that the service plans to show users links to related articles on Wikipedia when they search for videos on topics that are known to involve conspiracy theories or hoaxes, such as the moon landing or the belief that the earth is flat.

Given the speed with which information moves during a breaking news event, however, this might not be a great solution for situations like the Parkland shooting, since Wikipedia edits often take a while to show up. It’s also not clear whether doing this will have any impact on users’ desire to believe the content they see.

In addition to those concerns, Wikimedia said no one from Google notified the organization (which runs Wikipedia) of the YouTube plan. And some of those who work on the crowdsourced encyclopedia have expressed concern that the giant web company—which has annual revenues in the $100-billion range—is essentially taking advantage of a non-profit resource, instead of devoting its own financial resources to the problem.

Google seems to want to benefit from being a popular source for news and information without having to assume the responsibilities that come with being a media entity. In her comments at SXSW, Wojcicki said “if there’s an important news event, we want to be delivering the right information,” but then quickly added that YouTube is “not a news organization.”

This feels very similar to the argument that Facebook has made when it gets criticized for spreading fake news and misinformation—namely, that it is merely a platform, not a media entity, and that it doesn’t want to become “an arbiter of truth.”

Until recently, Facebook was the one taking most of the heat on fake news, thanks to revelations about how Russian trolls used the network in an attempt to influence the 2016 election. At Congressional hearings into the problem in November, where representatives from Facebook, Google, and Twitter were asked to account for their actions, Facebook took the brunt of the questions, followed closely by Twitter.

At the time, Google argued that since it’s not a social network in the same sense as Facebook and Twitter, it therefore doesn’t play as big a role in spreading fake news. This was more than a little disingenuous, however, since it has become increasingly obvious that YouTube has played and continues to play a significant role in spreading misinformation about major news events.

Following the mass shooting in Las Vegas last October, fake news about the gunman showed up at the top of YouTube searches, and after the Parkland incident, YouTube highlighted conspiracy theories in search results and recommended videos. At one point, eight out of the top 10 results for a search on the name of one of the students either promoted or talked about the idea that he was a so-called “crisis actor.”

When this was mentioned by journalists and others on Twitter, the videos started disappearing one by one, until the day after the shooting there were no conspiracy theories in the top 10 search results. But in the meantime, each of those videos got thousands or tens of thousands of views they might otherwise not have gotten.

Misinformation in video form isn’t just a problem in the US. YouTube has also become hugely popular in India with the arrival of cheap data plans for smartphones, and after a famous actress died recently, YouTube’s trending section for India was reportedly filled with fake news.

Public and media outrage seems to have helped push Google to take action in the most recent cases, but controversial content on YouTube has also become a hot-button issue because advertisers have raised a stink about it, and that kind of pressure has a very real impact on Google’s bottom line, not just its public image.

Last year, for example, dozens of major-league advertisers—including L’Oreal, McDonald’s and Audi—either pulled or threatened to pull their ads from YouTube because they were appearing beside videos posted by Islamic extremists and white supremacists. Google quickly apologized and promised to update its policies to prevent this from happening.

The Congressional hearings into Russian activity also seem to have sparked some changes. One of the things that got some scrutiny in both the Senate and House of Representatives hearings was the fact that Russia Today—a news organization with close links to the Russian government—was a major user of YouTube.

Google has since responded by adding warning labels to Russia Today and other state broadcasters to note that they are funded by governments. This move has caused some controversy, however: PBS complained that it got a warning label, even though it is funded primarily by donations and only secondarily by government grants.

As well-meaning as they might be, however, warning labels and Wikipedia links aren’t going to be enough to solve YouTube’s misinformation problem, because to some extent it’s built into the structure of the platform, as it is with Facebook and the News Feed.

In a broad sense, the popularity of fake news is driven by human nature. Conspiracy theories and made-up facts tend to be much more interesting than the real truth about an event, in part because they hint at mysteries and secrets that only a select few know about. That increases the desire to read them, and to share them. Social services like Facebook, Twitter, and YouTube tend to promote content that plays on this impulse because they are looking to boost engagement and keep users on the platform as long as possible.

Human nature, however, is exacerbated by the algorithms that power these platforms. YouTube’s algorithm tracks people clicking and watching conspiracy theory videos and assumes that this kind of content is very popular, and that people want to see more of it, so it moves those videos higher in the rankings. That in turn causes more people to see them.

The result is that users are pushed towards more and more polarizing or controversial content, regardless of the topic, as sociologist Zeynep Tufekci described in a recent New York Times essay. The platform, she says, has become “an engine for radicalization.”

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Guillaume Chaslot, a programmer who worked at Google for three years, told CJR recently he noticed a similar phenomenon while working on the YouTube recommendation algorithm. He says he tried to get the company interested in solving it, but was told that what mattered was that people spent lots of time watching videos, not what kind of videos they were watching.

Former YouTube head of product Hunter Walk said recently that at one point he proposed bringing in news articles from Google News or even tweets to run alongside and possibly counter fake news or conspiracy theories, rather than taking them down, but that proposal was never implemented—in part because Google executives made it clear that growing Google+ was a more important goal than fixing YouTube.

In addition to adding Wikipedia links, Google has also promised to beef up the number of moderators who check flagged content, and has created what it calls an “Intelligence Desk” in order to try and find offensive content much faster. And it has said that it plans to tweak its algorithms to show more “authoritative content” around news events. One problem with that, however, is it’s not clear how the company plans to define “authoritative.”

The definition of what’s acceptable also seems to be in flux even inside the company. YouTube recently said it had no plans to remove a channel called Atomwaffen, which posts neo-Nazi content and racist videos, and that the company believed adding a warning label “strikes a good balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

After this decision was widely criticized, the site removed the channel. But similar neo-Nazi content reportedly still remains available on other channels. There have been reports that Infowars, the channel run by alt-right commentator Alex Jones, has had videos removed, and that the channel is close to being removed completely, although YouTube denies this. But at the same time, some other controversial channels have been reinstated after YouTube said that they were removed in error by moderators.

Facebook and YouTube both say they want to be the main news source for information about the world, but they also say they don’t want to be arbiters of truth. How long can they continue to have it both ways?

Anti-terrorism and hate-speech laws are catching artists and comedians instead

One of the risks whenever governments try to curb what they see as offensive speech is that other kinds of speech are often caught in the same net, and that poses a very real risk for freedom of speech and for freedom of the press. One of the most recent examples comes from Spain, where a vague anti-terrorism law has been used to charge and even imprison musicians and other artists.

In a new report on the phenomenon, entitled “Tweet… If You Dare,” Amnesty International looked at the rise in prosecutions under Article 578 of the country’s criminal code, which prohibits “glorifying terrorism” and “humiliating the victims of terrorism.” The law has been around since 2000, but was amended in 2015 and since then prosecutions and convictions have risen sharply.

“Freedom of expression in Spain is under attack. The government is targeting a whole range of online speech—from politically controversial song lyrics to simple jokes—under the catch-all categories of ‘glorifying terrorism’ and ‘humiliating the victims of terrorism.’ Social media users, journalists, lawyers and musicians have been prosecuted [and] the result is increasing self-censorship and a broader chilling effect on freedom of expression in Spain.”

Among those hit by the law are a musician who was sentenced to a year in prison for tweeting a joke about sending the king a cake-bomb for his birthday, and a rapper who was sentenced to three and a half years for writing songs the government said glorified terrorism and insulted the crown. A filmmaker and a journalist have also been charged under the anti-terrorism law, and a student who tweeted jokes about the 1973 assassination of the Spanish prime minister was sentenced to a year in prison, though her sentence was suspended after a public outcry.

Some free-speech advocates are afraid that new laws either in force or being considered in Germany, France and even the United Kingdom could accelerate this problem. In all three countries, legislators say they are concerned about hate speech, online harassment and fake news, but the definition of those problems is so vague there is a risk that other kinds of speech could also be criminalized—especially when enforcement of those rules gets outsourced to platforms like Facebook, Google and Twitter.

Google offers olive branch to newspapers, YouTube relies on Wikipedia

Google is planning to highlight content from newspapers with paywalls for users who are paying subscribers, according to a report from Bloomberg on Tuesday, March 14. So when users search for articles on a topic, results from sites they subscribe to will show up higher than results from regular websites. Google also plans to share data with publishers about who is most likely to sign up, Bloomberg said.

Google executives plan to disclose specific details at an event in New York on March 20, according to people familiar with the plans. The moves could help publishers better target potential digital subscribers and keep the ones they’ve already got by highlighting stories from the outlets they’re paying for. The initiative marks the latest olive branch from Silicon Valley in its evolving relationship with media companies.

This is the latest in a series of moves that both Google and Facebook have been making around subscriptions. Facebook has been experimenting with adding paywall support to its mobile-friendly Instant Articles feature, and also recently set up a trial project to try and help local publishers figure out how to get more subscription revenue. The main reason why publishers are being forced to rely on subscriptions, of course, is that Google and Facebook have taken control of most of the world’s digital advertising revenue.

Google also recently changed its policy on search results from sites with subscription models. It used to encourage publishers with paywalls to let searchers read at least three articles free under its “First Click Free” model, and those who didn’t comply were ranked lower in search results. But the company dropped the FCF approach last year, and now subscription-based publishers can choose to provide whatever number of free articles they wish to non-subscribers, including providing none at all.

YouTube, which has been taking a considerable amount of heat for promoting hoaxes and conspiracy theories in search results, will start highlighting articles from Wikipedia when users are looking for what is clearly fake news about topics such as the moon landing, CEO Susan Wojcicki said at the South by Southwest conference in Austin on Tuesday, March 14.

The Wikipedia links will not appear solely on conspiracy-related videos, but will instead show up on topics and events that have inspired significant debate. A YouTube spokesperson used videos about the moon landing (a historical topic with many conspiracy theories surrounding it) as an example, noting that they would appear with Wikipedia links below to provide additional information, regardless of whether a given video was a documentary or one alleging the landing was staged.

As a number of people noted on Twitter following this announcement, it’s a little ironic that a giant company with $100 billion in revenues is relying on a donation-funded volunteer organization to do fact-checking for its videos. YouTube said Wikipedia links are just the first step in solving the problem and that it plans to do more, but it seems a little unfair to take advantage of a free resource when Google itself could be trying harder to flag or identify disinformation.

In part, this is because YouTube—like Facebook—seems to be trying to walk a very fine line with its approach to misinformation. Wojcicki said at the SXSW conference that “if there’s an important news event, we want to be delivering the right information,” but also added: “we are not a news organization.” Those two views seem to be increasingly incompatible, and at some point both of the major web platforms will have to come to grips with what that implies.

Blog posts for CJR

March 12: Apple announced on March 12 that it has acquired Texture for an undisclosed sum. Often called “the Netflix of magazines,” Texture gives readers access to over 200 popular magazines through its app and website for a single monthly fee. It was originally called Next Issue Media when it launched in 2012, and had raised $130 million in venture funding before the acquisition. Said Apple executive Eddy Cue:

“We’re excited Texture will join Apple, along with an impressive catalog of magazines from many of the world’s leading publishers. We are committed to quality journalism from trusted sources and allowing magazines to keep producing beautifully designed and engaging stories for users.”

In an interview at the South by Southwest conference following the news, Cue said that Apple would be integrating Texture into Apple News, and that the company is committed to curating the news to remove fake news. Part of the goal of Apple News and acquiring Texture, he said, is to avoid “a lot of the issues” happening in the media today, such as the social spread of inaccurate information.


March 12: The European Union released the final report from its High Level Expert Group on Fake News, entitled “A Multi-Dimensional Approach to Disinformation,” on March 12. Several of the experts involved in fact-checking and tracking disinformation, including Claire Wardle of First Draft and Alexios Mantzarlis of the International Fact-Checking Network, summed up the main points of the report in a Medium post, which said the report’s contributions include:

“Important definitional work rejecting the use of the phrase ‘fake news’; an emphasis on freedom of expression as a fundamental right; a clear rejection of any attempt to censor content; a call for efforts to counter interference in elections; a commitment by tech platforms to share data; calls for investment in media and information literacy and comprehensive evaluations of these efforts; as well as cross-border research into the scale and impact of disinformation.”

Among other things, the group notes that at a time when many governments are trying to pass laws aimed at stamping out fake news, this is not the right approach. “Many political bodies seem to believe that the solution to online disinformation is one simple ‘fake news’ law away, [but] the report clearly spells out that it is not. It urges the need for caution and is sceptical particularly of any regulation of content.”


March 11: Joshua Geltzer, executive director of Georgetown Law’s Institute for Constitutional Advocacy and Protection and former senior director for counterterrorism at the National Security Council, writes in Wired that the Russian trolls who tried to manipulate the 2016 election didn’t abuse Facebook or Twitter, they simply used those platforms in the way that they were designed to be used:

“For example, the type of polarizing ads that Facebook admits Russia’s Internet Research Agency purchased get rewarded by Facebook’s undisclosed algorithm for provoking user engagement. And Facebook aggressively markets the micro-targeting that Russia utilized to pit Americans against each other on divisive social and political issues. Russia didn’t abuse Facebook—it simply used Facebook.”

Geltzer says the major web platforms need to do a much better job of removing or blocking malicious actors who try to use their systems for nefarious purposes, and he also says that Facebook, Google and Twitter need to be much more transparent about their algorithms and how they operate. That kind of openness, he says, “could yield crowd-sourced solutions rather than leaving remedies to a tiny set of engineers, lawyers, and policy officials employed by the companies themselves.”


March 10: Sociologist Zeynep Tufekci wrote in an essay published in the New York Times on March 10 about experiments she performed on YouTube during the 2016 election, where she noticed that no matter what kind of political content she searched for, the recommended videos were always more extreme and inflammatory, whether politically or socially. This is a vicious circle, she writes:

“In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”

Tufekci mentions research done by former YouTube engineer Guillaume Chaslot, who worked on the video platform’s recommendation algorithm and spoke to CJR recently about his conclusions. Like Tufekci, he found that the videos being recommended on the site were overwhelmingly contentious and inflammatory, including many that promoted conspiracy theories, because that kind of content makes people click and spend more time on the site, and that serves Google’s business interests.


March 9: NewsWhip, an analytics company that measures social-media activity, looked at its data and came up with a list of the news reporters who got the most engagement on Facebook in February. Number one was Ryan Shattuck of the satirical news site The Onion. Number two was Jonah Urich, who works for Truth Examiner, a left-wing site known for posting sensationalized political news. Daily Wire, another hyper-partisan political news site, also took several spots in the top 10. As NewsWhip described it:

Beyond the Onion, the top authors were primarily from hyper-partisan sources like the Daily Wire, Truth Examiner, Breitbart, Washington Press, and several small but politically-charged sites. Horrifyingly enough, two authors from fake news sites featured. An author from the fake news site Your Newswire was towards the top of our list, ranking in at #12. Baxter Dmitry wrote 81 articles in February, driving more than 1.7 million Facebook interactions.

Facebook has said it plans to change its algorithm so that more “high quality” news shows up in the News Feed, but that could be easier said than done. The company said it would rank news sources based in part on whether they drive engagement and discussion, and what NewsWhip’s data reinforces is that the most engaging content is often fake, or at least highly sensationalized.


March 9: Most of the attention around fake news has focused on Facebook and YouTube, but other apps and services can also play a role in spreading misinformation, as Wired points out in a March 9 piece on the use of Facebook-owned messaging app WhatsApp in Brazil. Use of the app is apparently complicating the country’s attempts to deal with an outbreak of yellow fever, because of false reports about vaccinations:

In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and proliferated fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle and lower income individuals there, many of whom rely on it as their primary news consumption platform.

According to Wired, the conspiracy theories circulating about the vaccination program include an audio message from a woman claiming to be a doctor, warning that the vaccine is dangerous, and a fake-news story connecting the death of a university student to the vaccine. As similar reports about the impact of Facebook in countries like Myanmar have shown, social-media-driven conspiracy theories in the US can be annoying, but in other parts of the world they can actually endanger people’s lives.


March 8: Renee DiResta, a researcher with New Knowledge and a Mozilla fellow specializing in misinformation, argues that by using Facebook to spread fake news during the 2016 election, the “Russian troll factory” known as the Internet Research Agency was duplicating a strategy initially developed by ISIS, which used digital platforms and social-media methods to spread its message.

The online battle against ISIS was the first skirmish in the Information War, and the earliest indication that the tools for growing and reaching an audience could be gamed to manufacture a crowd. Starting in 2014, ISIS systematically leveraged technology, operating much like a top-tier digital marketing team. Vanity Fair called them “The World’s Deadliest Tech Startup,” cataloging the way that they used almost every social app imaginable to communicate and share propaganda.

Most of the major platforms made half-hearted attempts to get rid of this kind of content, but they were largely unsuccessful. What this showed, DiResta writes, was that the social platforms could be gamed in order to spread political messages, and that the same kinds of targeting techniques that worked for advertising could be turned to political use. And among those who were also learning this lesson, it seems, were some disinformation architects on a troll farm in Russia.

The media today: Facebook tries to woo publishers with video promises, again

Despite all the dashed hopes from some of its other ventures, including short-form video and mobile-friendly Instant Articles, Facebook appears to be trying again to woo publishers with promises of future video riches. According to a report from Axios on Tuesday morning, the giant social network is reaching out to media companies and asking them to become partners in a news vertical that Facebook plans to add to its Watch video portal.

Watch, which was launched last year with much fanfare, consists of a new tab with a dedicated stream of longer-form video programming that is much more like regular TV than much of the video that usually shows up on Facebook. Instead of 10-second loops of cat or dog antics, Watch carries shows from established media outlets like A&E and National Geographic. There’s also a fairly heavy emphasis on sports programming, including Major League Baseball games and behind-the-scenes content from the NBA.

It’s easy to see why a number of media companies jumped on board the Watch train: Facebook paid some of the publishers up front for their content, and said that once it was up and running, Watch partners would be able to keep 55 percent of any revenue generated by the videos, with the rest going to the network. Now the company is focusing on news programming, and working with 10 publishing partners, Axios says. Videos have to be a minimum of three minutes in length, and the news vertical is expected to launch this summer.

What’s surprising is that so many media companies would rush to partner with Facebook when there are so many examples of such hopes not panning out. The company’s initial short-form video push also came with much fanfare, and millions of dollars in payments both to publishers like The New York Times and to celebrities like comedian Kevin Hart. But Facebook’s desire for short-form video quickly waned, and some companies that had pushed a “pivot to video” strategy were left high and dry. Some missed revenue estimates and others have shut down.

Here’s more on Facebook’s somewhat tangled relationship with media companies and video:

  • Campbell Brown, Facebook’s head of news partnerships, says that despite past hiccups with video, the social network is committed to its latest venture. “Timely news video is the latest step in our strategy to make targeted investments in new types of programming on Facebook Watch,” she told Axios. “As part of our broader effort to support quality news on Facebook, we plan to meet with a wide range of potential partners to develop, learn and innovate on news programming tailored to succeed in a social environment.”
  • When Facebook first launched Watch, it pushed its news partners for as much short video as possible. But later, the company found that the quality level of much of the content was lackluster, and as a result it drove little to no engagement, and therefore advertisers weren’t interested in being part of it. The social network then pushed for higher-quality video and started restricting who could monetize their shows and who couldn’t.
  • Some media partners may have overcome their skepticism about Watch because Facebook has made it clear it intends to devote some major resources to building a presence in video programming. According to a report by The Wall Street Journal earlier this year, co-founder and CEO Mark Zuckerberg has said he plans to spend as much as $1 billion on original video content this year. Facebook’s ultimate goal appears to be an all-out assault on YouTube’s status as the largest digital video platform.
  • In one of the company’s latest moves to lock up the rights to lucrative content, Facebook signed a deal with Major League Baseball worth between $30 million and $35 million that gives the social network the exclusive right to stream 25 weekly baseball games through Facebook Watch this year. The games will be produced by MLB but will be optimized for the Facebook site and its mobile apps.

Other notable stories:

  • UN experts looking into ongoing human-rights abuses in Myanmar, where Rohingya Muslims have been persecuted and killed, pointed a finger at Facebook, saying fake news and conspiracy theories spread via the giant social network have put Rohingya lives at risk. Several journalists who work in the region talked with CJR about this problem earlier this year, saying Facebook has replaced the traditional news media for many users in developing countries like Myanmar, and false reports spread rapidly.
  • Some British MPs are calling for Russia Today’s license to be revoked after reports that Russia was behind the recent poisoning of a former Russian double agent and his daughter, who were found unconscious on a park bench in Salisbury, England on March 4. A Labour MP asked the government to stop Russia Today from “broadcasting its propaganda,” but RT said it was being unfairly singled out, and noted that it had a better regulatory track record than many other British broadcasters.
  • There’s been a shakeup at the top of Vice Media: Former A&E Networks head Nancy Dubuc was named the new CEO, replacing co-founder Shane Smith, who becomes executive chairman. Dubuc was already a board member of Vice because A&E owns a stake in the company, after Disney — which co-owns A&E with Hearst — invested $400 million in Vice in 2015. Vice has been hit by sexual harassment allegations, as well as criticism that its corporate culture ignored the obvious warning signs of such behavior.
  • BuzzFeed co-founder and CEO Jonah Peretti talked with Digiday about the future of the company and its commitment to news. Peretti said that he thinks Google and Facebook are going to do more to support news because “if they don’t, they’ll be regulated.” He spoke with CJR recently about a range of similar topics, saying the company is committed to remaining in the news business despite somewhat lower returns.
  • Sabrina Toppa writes for CJR about a movement in Pakistan to get legislation passed that would prevent attacks on journalists. In the past 15 years, 117 Pakistani journalists have been killed on the job, and attacks on reporters lead to self-censorship by media outlets, which puts press freedom at risk. According to the World Press Freedom Index, the country ranks 139th out of 180 worldwide.

Project Veritas catfished Twitter staffers for ambush videos

When the political-action group known as Project Veritas came out with hidden-camera videos of a number of Twitter employees talking about the company’s practices last year, one of the mysteries was how the organization — infamous for its supposed “investigative” pieces on groups like Planned Parenthood — managed to record the videos. According to Kashmir Hill, in a piece published by Gizmodo on March 13, for at least some of the interviews the group created a fake startup and pretended it was interested in talking with staffers for potential jobs:

For four months last year, Norai thought he had a new job. He was in regular communication with his new colleagues, meeting up with them for dinner, drinks, and a baseball game, but they kept pushing his start date back, saying they were securing office space and finalizing funding. But in fact, there was no job. Tech Jobs Box wasn’t a real company.

In other cases, Hill says, male employees believed they were going on dates with potential romantic partners who were actually secretly recording them. But isn’t this illegal in California, where all parties are supposed to consent before a conversation can be recorded? Veritas founder James O’Keefe said his organization believes that as long as the conversations occurred in public spaces where the other party had a reasonable expectation they might be overheard, the recordings aren’t illegal. A law professor tells Hill, however, that this argument applies to video but not audio.

Blog posts for CJR archive

March 7: New York Times reporter Farhad Manjoo spent two months consuming news only via print newspapers, and says his life was better as a result. After the Parkland school shooting, he writes:

“A friendly person I’ve never met dropped off three newspapers at my front door. That morning, I spent maybe 40 minutes poring over the horror of the shooting and a million other things the newspapers had to tell me. Not only had I spent less time with the story than if I had followed along as it unfolded online, I was better informed, too. Because I had avoided the innocent mistakes—and the more malicious misdirection—that had pervaded the first hours after the shooting, my first experience of the news was an accurate account of the actual events.”

It’s difficult to argue with Manjoo’s point, which is that the algorithmic incentives built into Twitter and Facebook “reward speed over depth, hot takes over facts and seasoned propagandists over well-meaning analyzers of news.” That said, however, newspapers also frequently get things wrong, distort the facts and engage in the old-fashioned version of clickbait, and much of that behavior gets revealed by thoughtful people on social media, provided you follow the right people. Trying to put the digital genie back in the bottle may be appealing in some ways, but it doesn’t really seem like a workable long-term strategy.

—————————-

March 7: The Trump campaign’s use of Facebook to connect with right-wing supporters has been widely credited with helping Trump win the 2016 election (along with the activities of some Russian trolls), and now another conservative politician is thanking social media for his victory. Italy’s new political star, Matteo Salvini of the far-right Lega party, gave credit to Facebook in a speech celebrating his party’s success:

“Local journalists said Salvini — a member of the European Parliament and leader of the far-right Lega party, which now stands to act as a kingmaker in the coming coalition negotiations — had shaken up the election with the now notorious populist strategy of attacking the traditional media and adopting a hyper-personal and hyper-partisan Facebook strategy. ‘Facebook was a huge part of his surge in the polls,’ Il Post’s Davide Maria De Luca told BuzzFeed News.”

—————————

March 7: Sri Lanka blocked Facebook and WhatsApp for three days because of posts on the social networks that the government said were encouraging violence against Muslims:

“Social media websites such as Facebook, Whatsapp, and Viber — which were created to bring us closer to our friends and family and make communication free and convenient — have been used to destroy families, lives and private property,” said Telecommunications, Digital Infrastructure, and Foreign Employment Minister Harin Fernando, according to local media.

—————————-

March 7: Newspaper companies have gotten their wish — a bill introduced by Democratic congressman David Cicilline of Rhode Island would give them an exemption from antitrust law so they could collude and seek collective action against Facebook and Google, something that News Media Alliance head David Chavern has been calling for for some time:

Chavern says the alliance is seeking changes in five areas: platforms should share data about the publishers’ readers; better highlight trusted brands; support subscriptions for publishers; and potentially share more ad revenue and consider paying for some content. Silicon Valley companies swallowed a number of industries on their way to the top of the stock market. But Chavern believes the news business warrants intervention because of its role in a healthy democracy. “The republic is not going to suffer terribly if we have bad cat video or even bad movies or bad TV. The republic will suffer if we have bad journalism,” he says.

The congressman says the bill would limit the action that the companies could take — for example, it would theoretically prevent them from colluding on price. But that seems to be exactly what Chavern has in mind, judging by his comments. And while Google and Facebook may have an advertising duopoly, is giving more power to a failing oligopoly really the best way to deal with that?

——————————

March 5: Many digital-media startups have been cutting back or downsizing, but The Athletic is going in the opposite direction: The two-year-old subscription-based sports media startup has raised a $20-million round of funding and is preparing to more than double its staff and expand to new markets.

“Two years after launching as ‘the new sports page,’ the Athletic has raised $20 million, according to Athletic co-founder and Chief Executive Alex Mather. The funding round, the company’s third, was led by Evolution Media, the growth-stage investment company founded by TPG Growth and Creative Artists Agency. Before this round, the Athletic raised $10 million in two rounds led by Courtside Ventures. The Athletic plans to use most of the financing to continue its expansion across the U.S., establishing a presence in every market with a professional sports team by the end of the year.”

By the end of 2018, The Athletic says it plans to have between 200 and 350 employees, up from its current staff of 120. It is currently in 23 markets across the U.S. and Canada, and plans to expand to roughly 45 markets by the end of the year. Focusing on a news vertical with passionate fans seems to be making the difference for the company, which gets 100 percent of its revenues from subscriptions and therefore isn’t dependent on the shrinking digital advertising market.

——————————-

March 5: Facebook’s latest changes to its news-feed algorithm seem to be taking their toll on companies that have built their businesses on “viral” content for the social network: The latest victim is Rare, Cox Media’s conservative-focused news site, which the company said is shutting down after traffic evaporated. The site was set up in 2013 and worked its way up to 2.3 million Facebook fans and about 22 million uniques at its peak. Another Facebook-focused publisher, Little Things, also shut down recently after saying its Facebook traffic had fallen by about 70 percent following the latest algorithm tweak, and media industry watchers say viral-video companies like Jukin Media could also be threatened.

——————————–

March 5: Senate investigators are broadening their search for information about Russian trolls infiltrating social networks, and have asked Reddit and Tumblr for more details on their platforms. The Daily Beast reported last week that at least 21 accounts on Tumblr had ties to the Internet Research Agency, and Reddit CEO Steve Huffman said in a post on the site that his team had “found and removed a few hundred accounts.” But he also acknowledged that Reddit more broadly suffered from propaganda that was posted and shared by thousands of users who “appear to be unwittingly promoting Russian propaganda.”

———————————

March 5: It’s hard to believe that this actually happened, given all the problems Facebook has been having, but the company admitted to running a survey with some users that asked whether it would be acceptable for an adult man to ask a 14-year-old girl for sexual photos.

“There are a wide range of topics and behaviours that appear on Facebook,” one question began. “In thinking about an ideal world where you could set Facebook’s policies, how would you handle the following: a private message in which an adult man asks a 14-year-old girl for sexual pictures.” The options available to respondents ranged from “this content should not be allowed on Facebook, and no one should be able to see it” to “this content should be allowed on Facebook, and I would not mind seeing it.”

Facebook’s vice president of product, Guy Rosen, said the surveys were a mistake. “We run surveys to understand how the community thinks about how we set policies,” he said. “But this kind of activity is and will always be completely unacceptable on FB. We regularly work with authorities if identified. It shouldn’t have been part of this survey. That was a mistake.” That seems like the understatement of the year.

———————————–

March 5: Media consultant Simon Galperin wants to create a system whereby local communities could use tax revenue to create a news and information entity called a Community Information Cooperative. The idea is that a fee levied on residents — similar to fees for fire services, water, sanitation, etc. — would allow a community to essentially self-fund its own local reporters. Galperin has set up a Kickstarter campaign to raise $2,000 to create a non-profit entity that would put the idea into action. He recently wrote for CJR about how this might work in his hometown:

“My hometown of Fair Lawn, New Jersey, has a population of 32,000 people. An annual $40 contribution per household could deliver a $500,000 operating budget to a newsroom devoted to understanding and serving the local news and information needs of its community. That budget could support print or online newspapers, or livestreaming town council meetings. A special service district for local journalism could convene community forums or media literacy classes, launch a text message and email alert system, or pay for chatbots that answer locally relevant questions, like ‘Is alternate side parking in effect?’”
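
A quick back-of-the-envelope check shows the arithmetic holds up; the household count below is implied by the stated figures rather than given in the piece:

```python
# Back-of-the-envelope check of the Fair Lawn example. The household
# count is inferred from the stated figures, not given in the piece.
budget = 500_000        # proposed annual newsroom budget, in dollars
fee_per_household = 40  # proposed annual contribution per household
households = budget / fee_per_household
print(households)            # 12500.0 households needed
print(32_000 / households)   # ~2.6 residents per household, which is plausible
```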

———————————–

March 4: Serial media entrepreneur Steven Brill and former WSJ publisher Gordon Crovitz have launched a startup called NewsGuard, which they hope will create a ranking system for the credibility of news. NewsGuard is hiring human journalists and editors to evaluate 7,500 news sites that account for 98% of engagement with news online in the U.S.

Websites will receive green, yellow, or red ratings based on how credible they are according to a range of factors, and there will also be what the company is calling “nutrition labels,” with more detailed information about each site. Crovitz says the idea is to let readers know whether “they need to take particular brands they see online with a grain of salt — or with an entire shaker.” The company plans to try and license its ranking system to Google, Facebook and Twitter.

————————————

March 2: Twitter CEO Jack Dorsey acknowledged—not for the first time—that harassment and abuse are a problem on the platform, and said he is committed to helping “increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.” How exactly the company plans to do that isn’t clear, but Dorsey said Twitter is working with a number of groups and services to try and identify both healthy and unhealthy conversation and find ways of decreasing the latter.

As well-meaning as Dorsey’s statements are, it’s hard to feel optimistic about Twitter’s chances of actually removing all the abuse, or of creating some kind of utopian ideal of “healthy conversation.” For one thing, the company has been promising to do this for the past year or more, without much sign of success. On top of that, healthy conversation is something that typically occurs between small groups of people—it’s not at all clear that such a thing can even exist on a platform that connects hundreds of millions of people instantaneously. And even if it can, it’s not going to be easy.

Jarrod Dicker on what the blockchain means for media and news

For journalists who are also into new technology, Jarrod Dicker has a pretty compelling CV: He was the head of product management at Huffington Post, director of digital products at Time Inc., helped run operations at online-publishing startup RebelMouse, and ran a digital-research lab at The Washington Post. With a career like that, lots of people in media pay attention when Dicker says something is interesting, and so many heads turned when he said he was leaving the Post for a blockchain startup called Poet.

For many people, “blockchain” is just the latest buzzword to infect internet-focused discussion and, more recently, media futurism, and there is no question that the topic is surrounded by almost unprecedented levels of hype, in part because it provides the foundation for cryptocurrencies like Bitcoin and Ethereum, which have ballooned in value over the past year amid what many see as speculative hysteria.

That isn’t why Dicker is interested in it, however. Much like Civil, a blockchain-based media platform that wants to use the technology for journalism, Dicker sees Poet as a way of using blockchain to empower individual content creators—not just journalists or news organizations, but anyone who creates words or images or music or video for almost any purpose, including advertisers and brands. In effect, he says Poet is trying to build an open-source, blockchain-based licensing system for content.

To try and understand a bit more about what this means, and why Dicker decided to throw in the towel on a promising career at a traditional media entity, I talked with him recently by phone. What follows is a transcript of some of that discussion, edited for length and clarity.

CJR: Can you tell me a bit more about the background of Poet, and who is involved? I understand that it’s connected to Bitcoin Media in some way, yes?

JD: Yes, the Poet Foundation, which runs Po.et, is a separate entity. The parent company is BTC Media, and it’s based in Nashville. They bought Bitcoin magazine from [Ethereum founder] Vitalik Buterin, they own a few other publications and they do events and so on. They did an ICO for the Poet Foundation [an “initial coin offering,” which is a way of raising money for a blockchain-based business, like an equity IPO but with coins or tokens] and I think the documented raise was about $10 million, but since then the foundation has grown and now I think the market cap is around $160 million. We have about 58,000 content creators using the platform right now, we have a WordPress plugin and we also signed a partnership with the Maven network, which is James Heckman’s new venture, so all of those sites will be leveraging it.

CJR: And what is it about Poet that convinced you to join the company, or the foundation? What does it offer that you couldn’t get running the digital lab at the Post?

JD: So for the past few years I’ve been trying to figure out how to build a better media business, and it was a constant uphill effort, not just in terms of financial struggles but ownership and platforms and so on. It’s hard to react in a marketplace where you feel like there’s no real control. The Post was great because we had investments in things like Arc [the Post’s publishing technology] and so on, so we had pretty diversified revenue, but it just felt like most people still aren’t looking at the real issues and how to fix them. There are people working on things like subscription tiers and what things should cost, but a lot of it seems like a crapshoot. The bottom line is that media is being consumed more than ever, but there are two major issues: One is attribution, which you see with things like fake news but also sourcing and copyright, and the second is more of a macro question, which is: What is the value of content, whether it’s a story or a piece of music or art? We know how much it costs or how many ads we can put against it, but what is its real value?

CJR: And how does Poet propose to solve that problem by using the blockchain? Does it use the blockchain’s “distributed ledger” structure the same way that Civil does?

JD: Yes, the name Poet comes from “proof of existence,” which was the first non-currency implementation of blockchain technology, organized around attribution and valuation. The potential of that kind of structure extends into things like smart contracts [for licensing content], or even just building a seamless, metadata-enabled way for any creator to have proof of the existence of their content on the blockchain and be able to track it. We at Poet are working to be the standard for the world’s creative assets, by building seamless integrations within the Poet marketplace for anything involved in content. That could go into WordPress or any kind of CMS [content-management system], or into music-creation tools, so that when a piece of media is created, with the click of a button it is documented within the Poet platform, there’s a timestamp, and that allows for attribution of all of these assets.
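To make “proof of existence” a little more concrete, here is a minimal sketch of the general technique Dicker is describing: hash a piece of content, attach metadata and a timestamp, and record the resulting claim in a shared ledger so anyone can later verify that the content existed, and who registered it, at a given time. The field names and the in-memory ledger below are illustrative assumptions for this sketch, not Po.et’s actual API.

```python
import hashlib
import json
import time

def create_claim(content: bytes, creator: str, title: str) -> dict:
    """Build a proof-of-existence claim: a content hash plus metadata.

    Only the hash is recorded publicly, so the work itself never has
    to be revealed in order to prove it existed at a given time.
    """
    return {
        "contentHash": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "title": title,
        "timestamp": int(time.time()),  # when the claim was made
    }

# A toy, in-memory stand-in for the public ledger; a real system
# would anchor each claim (or a hash of it) to a blockchain.
ledger = []

def register(claim: dict) -> str:
    """Record a claim and return an identifier derived from its contents."""
    claim_id = hashlib.sha256(
        json.dumps(claim, sort_keys=True).encode()
    ).hexdigest()
    ledger.append({"id": claim_id, "claim": claim})
    return claim_id

def verify(content: bytes) -> list:
    """Return every recorded claim whose hash matches this content."""
    digest = hashlib.sha256(content).hexdigest()
    return [entry for entry in ledger if entry["claim"]["contentHash"] == digest]

article = b"Full text of a story, image bytes, an audio file..."
claim_id = register(create_claim(article, "jane@example.com", "My story"))
assert verify(article)  # the content is now timestamped and attributable
```

Because the ledger stores hashes rather than the works themselves, the “click of a button” Dicker mentions can stay cheap: registering a claim costs a hash and a timestamp, not a copy of the media.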

CJR: And what would having that centralized database of content allow Poet to do? How does it make content more valuable, or empower creators?

JD: That kind of marketplace, with attribution and so on, benefits the creator but also the publisher, and anyone who wants a record of that content. Once we have all of these assets within the Poet Foundation, the idea is that it’s very easy to access and it’s open source, so it doesn’t cost money to access, and then it allows us, or anyone, to build a front end to index and search and find content to license and curate: a central marketplace for content, like a Getty Images or a Wikipedia. It’s not just a technically driven opportunity for content management, but also a community of token-holders who go in and contribute content, license or commission new work, resolve conflicts of attribution, or stamp out bad actors; there are a ton of different opportunities. We’re working on plugin integrations, so we’re having conversations with creative platforms like Medium and even with Twitter. Today we don’t really look at a tweet as a piece of content, or something that has intellectual-property value that could be tracked or licensed or whatever, but with Poet we’re trying to build a system that would allow you to do that.

CJR: So you see this as a way to give back a lot of the power that individual content creators have given up to platforms or to traditional media companies?

JD: Definitely. You can see with people like [YouTube star] Logan Paul that creators will go wherever it is most beneficial for them to go, and with people like Ben Thompson of Stratechery and [former ESPN star] Bill Simmons that they will decide to build their own brand. Right now we’re confined to the status quo because that’s the way it’s always been, but most creators want to own their own IP and strike their own deals. So with something like Poet, you can own and archive your own content, recorded on the blockchain, and then license or syndicate it to whoever you want. The scary idea for media companies is that if this is possible, do we really need media companies any more? If you’re a sports blogger writing for SB Nation or Deadspin, and all your content is archived in Poet, you may get offers from other media companies, but brands might also say: “Hey look, you’re a big player in this space and we are willing to sponsor you,” and you can cut your own deal.

CJR: So you see Poet as being a way to reinvent sponsored content and advertising too?

JD: I think sponsored content as it stands now is a bubble. They are still using all these old-fashioned KPIs [key performance indicators, like impressions or pageviews] and they just don’t work any more. There will be a change in that model. I don’t think there will be an ad-supported business model in five years; I think that is going to go away, not just because of people blocking ads but because brands are learning. Media companies keep telling them you can only do this and this, and users aren’t engaging, and at some point brands will turn around and say, “We have all the money, so we don’t need you.” It’s the same with platforms like Facebook and Snapchat and Twitter: They’re saying to publishers, you can engage with your users on our platform, but you need to do A, B and C. I think everyone is looking for a new model, but I think that model has to come from outside, and that’s what made Poet attractive. There’s no real centralized marketplace for media, and we’re trying to leverage the blockchain and decentralization to fix that.

CJR: It doesn’t sound like traditional media companies are going to like this model very much, if it takes away their power and gives it to the creator?

JD: As CEO of Poet, I don’t want to be part of an organization that takes down media companies, and I think news is different in a way; there will probably always be a model there for companies. And just because we are focused on the creator doesn’t mean media companies can’t use Poet as well. They can still own and archive their content and syndicate it through the platform, or find new writers and commission them to do new work. But could the model wind up disintermediating news or media companies? Yes, it definitely could. There’s just a lot more liberation of value in this model: Maybe people working in news, instead of just working for CNN, create their own brand and then use Poet to leverage that brand in a bunch of different ways. It’s all about how we can acknowledge and attribute the value of the creator.

CJR: One thing that interests me is how a centralized technology that tracks every piece of content, no matter how small, will affect fair use, which is a pretty important principle in copyright. Any thoughts?

JD: There could be issues there, definitely, if every piece of content can be time-stamped and tracked and attributed. But I think they’re very fixable, and I’m thinking of something like Creative Commons vs. Getty Images. There could be certain settings or limits or restrictions, set up by the creator of the content, to allow certain uses and not others. So if you’re thinking about sampling for DJs, what if a content creator was notified every time they were sampled? There’s value to the artist in knowing what is used and in being able to say yes or no, and they might decide that there’s value in being seen or heard, so they don’t need to license it. There’s a lot of opportunity there to pick off some of the low-hanging fruit, like sampling. Could it disrupt the media and the way things currently work? Sure, but good things could also come out of it. With something like Napster, you could say a lot of bad things happened, but it forced everyone to react. The question is: Can we create something open source that allows us to get ahead of that kind of change and figure out how to make use of it or control it?
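As a thought experiment, the creator-set permissions Dicker describes might be encoded something like the sketch below. Every field name and rule here is hypothetical; Po.et has not published a schema like this, and the example is meant only to show how “allow some uses, get notified about others” could travel alongside a registered work.

```python
from dataclasses import dataclass

@dataclass
class LicensePolicy:
    """Creator-set permissions attached to a registered work.

    All field names here are hypothetical, not Po.et's actual schema.
    """
    allow_sampling: bool = True     # e.g., DJs may sample the track
    allow_commercial: bool = False  # commercial reuse requires a deal
    notify_on_use: bool = True      # tell the creator about each reuse
    fee_per_use: float = 0.0        # 0.0 means free with attribution

def request_use(policy: LicensePolicy, sampling: bool, commercial: bool) -> str:
    """Decide a reuse request against the creator's stated policy."""
    if sampling and not policy.allow_sampling:
        return "denied: sampling not permitted"
    if commercial and not policy.allow_commercial:
        return "denied: contact the creator to negotiate a license"
    if policy.notify_on_use:
        print("notification sent to creator")  # stand-in for a real hook
    if policy.fee_per_use > 0:
        return f"approved: {policy.fee_per_use} tokens due"
    return "approved: free with attribution"

# A creator who wants visibility rather than fees, as in the DJ example:
policy = LicensePolicy(allow_sampling=True, notify_on_use=True)
print(request_use(policy, sampling=True, commercial=False))
```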

CJR: And when it comes to journalism in particular, what do you think of what Civil is trying to build with its blockchain-powered platform for publishing? Are you competing with them?

JD: Civil is focused on journalism specifically, whereas we’re broader and a bit more macro-focused. So we see huge opportunity in what brands could do and how they could leverage Poet, and obviously that could be huge for journalists but also for any kind of content creator. Poet is open source and non-profit, so we’re not really competing with anyone, and we’re not trying to generate revenue from it. It’s a very altruistic effort, so we’d love to work with Civil if they want to do that. And if someone wants to build something on top of Poet that they can generate revenue from, they are free to do that, just like someone building on WordPress or GitHub, etc. There’s a lot of cynicism around ICOs, but we generated tokens to help support the foundation, so all of the dollars coming in go to support the foundation and the protocol, whether it’s through tokens or grants or whatever. So people can invest in our tokens the same way they would a stock, but those investments go to the platform and to building the community.