Facebook’s funding of local journalism is problematic

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

On Wednesday, Facebook announced the first round of grant recipients for what the social-media giant is calling its Facebook Journalism Project Community Network. The 23 media outlets that will receive the money—between $5,000 and $25,000 per newsroom—were chosen by Facebook’s partner, the Lenfest Institute, a non-profit entity set up by former cable magnate Gerry Lenfest in part to finance the continued operation of the Philadelphia Inquirer and Philadelphia Daily News. Facebook said in a news release about the grant program that the winners “include a fresh approach to business sustainability through community-funded journalism, and expansion of successful storytelling events shown to increase reader revenue.”

Being a small, community-focused media outlet has never been easy, but it has gotten increasingly difficult of late, as the print advertising business has plummeted and digital advertising has been squeezed. So it’s not surprising that startups and hyperlocal players like the ones chosen to receive Facebook’s largesse would celebrate their victory, since the company’s funding will presumably allow them to do things they otherwise couldn’t—including, perhaps, keep the lights on. But there is an elephant in the room: namely, the fact that Facebook is one of the main reasons the media industry is in such desperate straits in the first place, since it controls a significant share of the ad market, and the attention of billions of daily users.

The almost two dozen media entities that are getting the Facebook grants include a number of prominent players in community-based journalism: Spaceship Media, which organizes events that bring disparate groups together to discuss difficult topics; the education-focused outlet Chalkbeat; and the Tyler Loop from Texas, which got money to expand its live storytelling events. There’s Block Club Chicago, a member of the blockchain-powered journalism platform Civil, and a project called 100 Days in Appalachia. But somewhat surprisingly, the recipients also include a number of much larger, more traditional media companies, including the Los Angeles Times—which is getting money to fund community forums—as well as Newsday, which is owned by the family of Cablevision founder Charles Dolan, and The Salt Lake Tribune.


Journalists have to walk a fine line, says disinformation expert Whitney Phillips


One of the most challenging problems of the digital information age is how to report on disinformation without pouring gasoline on the fire in the process. While working with the New York-based group Data & Society, media analyst Whitney Phillips (now an assistant professor of communications at Syracuse University) wrote a comprehensive report on this challenge, entitled The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators. We thought this topic was worth exploring in more depth, so we asked Prof. Phillips to join us on our Galley discussion platform for an interview, which took place over multiple days.

The idea that journalists can exacerbate problems merely by doing their jobs is somewhat more widely accepted now, thanks in part to the work of Prof. Phillips and Joan Donovan, who runs the Technology and Social Change Research Project at Harvard’s Shorenstein Center (and also did an interview with CJR on Galley recently). After the recent mass shooting in Christchurch, New Zealand, a number of media outlets chose not to focus on the shooter, and didn’t publish or link to his “manifesto.” In some cases, news outlets didn’t even use his name in the stories they wrote, which is a big change from even a few years ago. But Phillips says there is more to be done.

“I’ve been considering these questions for the better part of a decade and I still find them vexing,” she says. There are some basic guidelines that are comparatively clear, including efforts to avoid publicizing anything that hasn’t yet met the tipping point—which is reached when a topic moves beyond a discrete online community and becomes a subject of broader discussion. Obviously, a mass shooting will cross that point immediately, but that doesn’t mean reporters should report everything about the incident. Of particular concern, says Phillips, are ways of framing the story that “aggrandize the shooter/antagonist, or otherwise incentivize future shooters/antagonists.”

Some news outlets have argued that they need to report on the personal details and background of extremists like the Christchurch shooter because we need to understand how they were radicalized. But while this kind of understanding might help in some cases, Phillips says it is going to fail in others, because “radicalization is a choice, and changing people’s minds about the things they actively choose is a long-term, up-close-and-personal, complicated ground game, not something you can solve by waving a newspaper article at someone.” Writing in detail about how they were radicalized might be seen by like-minded extremists as a reward rather than punishment.

It’s true that in some cases “sunlight disinfects”—in other words, exposing wrongdoers can cause them to lose their power. But Phillips notes that in other cases it can function as a hydroponic grow light, “and it’s simply not possible to know what the long term effect of reporting will be. By then, it might be too late to intervene, because what ended up growing turned out to be poison.” Currently, journalists and even academics tend to focus almost exclusively on white supremacists and violent manipulators. But why? “At what point did we internalize the idea that attackers and liars and racists are the most interesting and important parts of a story?” she asks.

The point, Phillips says, is that if the goal is to undermine a violent ideology like white supremacy, you don’t do that by only talking about white supremacists. “That keeps them right where they want to be, which is central to the narrative.” What we should be doing is showing the effects of white supremacy. Many people only know about racism as an abstraction, says Phillips. “But it’s not an abstraction. It’s bleeding bodies. It’s screaming babies. It’s synagogues and mosques on lockdown. Those stories need telling.” Better to spend more time reporting those kinds of details, rather than another profile that amplifies the messaging “of some violent asshole whose actions tell us everything we need to know.”

Part of the problem with fighting misinformation is that we all believe things that turn out to be wrong, whether it’s about our bad habits or our personal relationships. What this shows, she says, “is that well intentioned interventions, outfitted with true and important facts, often go unheeded, and can actually compel a person to double down and feel even more convinced that they’re right and everybody else is wrong.” That’s why well-intentioned fact-checking efforts can have a boomerang effect and actually entrench a false belief in some cases. And on top of that, studies have shown that repeating a message, even while debunking it, can reinforce the message and paradoxically make it seem more believable.

“Efforts to fact check hoaxes and other polluted information operate under the assumption that objective truth is a magic bullet [which] goes right into readers’ brains, without any filter, without any resistance, and fills in the holes that bad information leaves behind,” says Phillips. According to this theory, the problem of disinformation can be solved by handing out facts. But that’s not how human nature works. “When something ugly emerges from the depths, you simply cannot throw facts at it and expect anything transformative to happen—most basically because there is, across and between groups, no agreement about what the facts even are.”

There are even more complicating factors, Phillips says. According to one study of “fake news,” almost 15 percent of users shared false or misleading stories even though they knew they were untrue. And in many cases people do this because they want to send a message about who they are or what they believe, in order to show that they are part of a specific group. “Media literacy discussions within journalism and academia tend to presume good faith in these kinds of cases, and proceed from there,” she says. “But people don’t always operate under good faith. In my line of work in particular, bad faith arguments and actions are everywhere.”

Getting to the bottom of the Seth Rich conspiracy theory


It was one of the first prominent “fake news” conspiracy theories to metastasize from Internet rumor all the way to the White House: In the summer of 2016, stories began to circulate in various online forums that Seth Rich, a fairly low-level Democratic National Committee staffer who was shot and killed in July of that year, wasn’t the victim of a botched robbery at all, but had actually been assassinated by a contract killer working for Hillary Clinton. Rich, the theory went, was actually the secret source who had leaked DNC emails to WikiLeaks—a theory that WikiLeaks founder Julian Assange appeared to lend credence to when he offered a $20,000 reward for information leading to the identity of Rich’s killer or killers. “Our sources take risks,” he said.

As these theories were being spread by Reddit users, denizens of 4chan forums, and even Fox News hosts like Sean Hannity, suspicion arose that there were shadowy forces trying to promote the loony-sounding conspiracy. But it wasn’t clear who exactly these forces were, or what their intentions were. On Tuesday, Yahoo News investigative reporter Michael Isikoff announced that he had tracked down the original source of the theory: a fake report concocted by the Russian foreign intelligence agency SVR (short for Sluzhba vneshney razvedki Rossiyskoy Federatsii), a successor to the KGB. The phony “bulletin,” designed to look like an authentic intelligence report, was released just three days after Rich’s death, Isikoff writes.

The idea that the Rich conspiracy theory was distributed by agents acting on behalf of the Russian government is not a new one. When information started to come out about the activities of the so-called Internet Research Agency during the 2016 election—which engaged in a sustained campaign of disinformation and outright propaganda on Facebook and other platforms—the Seth Rich assassination theory turned out to be one of the many pieces of fakery the IRA distributed as a way of destabilizing the campaign. But the IRA was a privately run, arm’s-length entity (albeit one run by a close associate of Russian president Vladimir Putin). Until Isikoff’s report, it was not clear that this conspiracy theory originated from Russian intelligence itself.


Facebook and the private group problem


Anyone who has been paying attention over the past year is probably well aware that Facebook has a problem with misinformation, but a number of recent events have highlighted an issue that could be even more problematic for the company and its users: namely, harassment and various forms of abusive conduct in private groups. In the latest incident, ProPublica reported on Monday that members of a Facebook group frequented by current and former Customs and Border Protection agents joked about the deaths of migrants, talked about throwing burritos at members of Congress who were visiting a detention facility in Texas, and posted a drawing of Rep. Alexandria Ocasio-Cortez engaged in oral sex with a migrant. According to ProPublica, the group—which has 9,500 members—is called “I’m 10-15,” after the code that CBP agents use when they have migrants in custody.

It’s not clear whether the administrator of the Facebook group made any effort to restrict membership to current or former Customs and Border Protection agents, so it’s probably not fair to conclude that the views expressed in the group are indicative of how a majority of the CBP feels about immigrants. But that didn’t stop many commentators on Twitter and elsewhere—including a number of members of Congress—from calling for an investigation into attitudes at the agency. “It’s clear there is a pervasive culture of dehumanization of immigrants at CBP,” Democratic presidential candidate Kamala Harris said on Twitter, while her fellow candidate Ocasio-Cortez said “This isn’t about ‘a few bad eggs.’ This is a violent culture,” and described CBP as “a rogue agency.” If agents are threatening violence against members of Congress, Ocasio-Cortez said, “how do you think they’re treating caged children+families?”

In response to the ProPublica story, a spokesman for the CBP said that the agency has initiated an investigation into the “disturbing social media activity,” and Border Patrol Chief Carla Provost said the posts are “completely inappropriate and contrary to the honor and integrity I see—and expect—from our agents.” But the Customs and Border Protection group is only the latest in a series of such examples that have come to light recently. A recent investigation by Reveal, the digital publishing arm of the Center for Investigative Reporting, found that hundreds of current and retired law enforcement officers from across the US are members of extremist groups on Facebook, including those that espouse racist, misogynistic, and anti-government views.
