Facebook rolls out another News Feed change aimed at increasing trust

Facebook announced on Tuesday it is expanding a recent test that showed users more information about the articles in their News Feed and the media entities that publish them, in the hope that doing so will make it easier for people to determine who is trustworthy and who isn’t. The test started in October in a number of US markets, and the company says it is now rolling the feature out to everyone in the US, as well as adding more sources of information. The idea, according to Facebook, is to “provide more context for people so they can decide for themselves what to read, trust and share.”

The new features are a small part of what the tech giant has been doing to try to fix what is widely viewed as a “fake news” problem, one that exploded into public view after Russian trolls were shown to have manipulated the network to try to influence the 2016 election. The company has said it wants to cut down on news in the feed, as well as ensure that whatever news remains is “high quality.” But will these kinds of tweaks have any impact on Facebook’s role in spreading misinformation? That seems unlikely. Trust is a very slippery concept when it comes to news, as multiple studies have shown—people tend to believe and share the news that confirms their existing preconceptions.

Facebook maintains that its research, as well as that of unnamed “academic and industry partners,” shows certain types of information help users evaluate credibility and determine whether to trust a source. So it is adding contextual links when an article is shared, including links to related articles on the same topic and stats on how often the article has been shared and where. It will also include a link to the publisher’s Wikipedia page if there is one (and indicate if there isn’t), which is something YouTube also recently said it is doing to add context to videos about conspiracy theories.

In addition to those elements, Facebook says it also plans to add two new features, one that shows other recent stories published by the same outlet, and a module that shows whether a user’s friends have shared the article in question. The company is also starting a test to see whether users find it easier to gauge an article’s credibility if they get more information about the author: When they see an article in Facebook’s mobile-friendly Instant Articles format, some users will be able to click the author’s name and get additional info, including a description from Wikipedia if there is one. Whether any of these new features actually reduce the amount of questionable news shared on Facebook remains to be seen.

Here’s more on Facebook and its news and trust problems:

  • Today in irony: While the social network says it wants to increase the trust people have in what they see in their News Feed, it is facing a trust crisis of its own, thanks to the news that personal information on 50 million users was acquired by a data firm with ties to the Trump campaign. Facebook recently updated its privacy settings in an attempt to show that it cares about the issue, and has taken pains to point out that the source of the data leak was plugged several years ago.
  • An ultimatum: Indonesia has said it is prepared to shut down access to Facebook if there is any evidence the privacy of Indonesian users has been compromised. “If I have to shut them down, then I will do it,” Communications Minister Rudiantara told Bloomberg in an interview on Friday in the Indonesian capital of Jakarta, after pointing out the country had earlier blocked access to the messaging app Telegram. “I did it. I have no hesitation to do it again.”
  • Power move: As part of its attempts to atone for the Cambridge Analytica fiasco, Facebook recently said it is shutting off the ability of third-party data brokers to target users on the platform directly through what are called Partner Categories. But long-time digital ad exec and publisher John Battelle argues that this is really consolidating Facebook’s power over that kind of targeting.
  • Fake news to blame? A study by researchers at Ohio State appears to show that belief in “fake news” may have affected the 2016 election, something that has been the subject of much debate. According to a Washington Post article on the research, about 4 percent of Democratic voters who supported Barack Obama in 2012 were persuaded not to vote for Hillary Clinton by hoax news stories, including reports that she was ill and that she had approved weapons sales to ISIS.
  • Probe launched: The attorney general of Missouri has announced that he is launching a probe into Facebook’s use of personal data following the Cambridge Analytica leak. Josh Hawley said he is asking the social network to disclose every time it has shared user information with a political campaign, as well as how much those campaigns paid Facebook for the data, and whether users were notified.

Other notable stories:

  • During a shooting incident at YouTube’s headquarters in San Bruno, the Twitter account of a YouTube product manager was apparently hijacked and used to tweet fake news reports about the event, according to The Verge. After the hack was pointed out by a number of journalists, Twitter CEO Jack Dorsey said he was looking into it, and the fake tweets quickly disappeared.
  • The Environmental Protection Agency tried to limit press access to a briefing by EPA head Scott Pruitt, but the move backfired thanks to journalists at Fox News. The agency reportedly told a TV crew from Fox about the briefing but didn’t tell the other major networks, at which point Fox let its competitors know and agreed to share reporting on the event.
  • The Wall Street Journal reports that 94-year-old billionaire media mogul Sumner Redstone, the controlling shareholder of Viacom and CBS, won’t have much of a say in the proposed merger of the two companies because his voting power has been reduced. He also now reportedly communicates using an iPad with pre-programmed responses such as “Yes,” “No,” and “F*** you.”
  • Joe Pompeo writes at Vanity Fair about what some see as a culture war taking place in the New York Times newsroom, thanks in part to growing numbers of young employees. “I’ve been feeling a lot lately like the newsroom is split into roughly the old-guard category, and the young and ‘woke’ category, and it’s easy to feel that the former group doesn’t take into account how much the future of the paper is predicated on the talent contained in the latter one,” one staffer told the magazine.
  • The Reporters Committee for Freedom of the Press has released a report that looks at incidents in the US in the past year that threatened press freedom, based on the first annual assessment of data from the Press Freedom Tracker, an index that records attacks on journalists and the media. Out of 122 incidents logged by the tracker, almost half occurred at protests.


Mark Zuckerberg wants you to know he cares, just like he did last time

Whenever Mark Zuckerberg talks about something that has gone wrong at Facebook—which happens rather frequently—he almost always comes off as sincerely concerned and apologetic, and his latest interview with Ezra Klein of Vox Media is no exception to this rule. But anyone who has been following Facebook for any length of time probably feels an overwhelming sense of déjà vu, because it all sounds very familiar: We screwed up, we’re sorry, we didn’t know, we will fix it. And please keep using Facebook.

We’re in the middle of a lot of issues, and I certainly think we could’ve done a better job so far. I’m optimistic that we’re going to address a lot of those challenges, and that we’ll get through this, and that when you look back five years from now, 10 years from now, people will look at the net effect of being able to connect online and have a voice and share what matters to them as just a massively positive thing in the world.

To be fair, no one has ever run a globe-spanning social network with over two billion users before, so perhaps we should forgive Mark for not being that good at it. Still, it seems disingenuous to have spent 14 years building a company that now takes in $40 billion a year in revenue while claiming it never occurred to anyone that such a giant social network—especially one powered by surveillance of its users—could become a tool for deception or evil of various kinds. Which is effectively what Mark wants us to believe.

I think the basic point that you’re getting at is that we’re really idealistic. When we started, we thought about how good it would be if people could connect, if everyone had a voice. Frankly, we didn’t spend enough time investing in, or thinking through, some of the downside uses of the tools. So for the first 10 years of the company, everyone was just focused on the positive. I think now people are appropriately focused on some of the risks and downsides as well.

What this means in practice is that Facebook has been doing its best to ignore the repeated warnings from researchers such as Danah Boyd and Zeynep Tufekci about the dangers inherent in Facebook’s structure and business model. And why wouldn’t it? Some of those concerns go straight to the heart of how the company makes the billions of dollars a year investors have come to rely on.

Tellingly enough, one of the points during the interview where Zuckerberg seems to become genuinely peeved is when Klein mentions Apple CEO Tim Cook’s criticisms of the company’s advertising-based model. The Facebook CEO rejects the idea that “if you’re not paying that somehow we can’t care about you,” calling it “extremely glib” and “not at all aligned with the truth.” And he suggests that consumers should question comments made by companies that he says “work hard to charge you more” for their services, as opposed to someone like him, who is trying to provide something for free to as many people as possible.

There are other interesting moments, such as when Zuckerberg says Facebook is considering a court-style model for deciding what speech should be allowed. “You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech,” he says. A sensible idea, or a frightening glimpse of a potential future in which Facebook is a global censor? As usual with Facebook, it’s a little bit of both.