Facebook’s disinformation problem is harder than it looks

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

The fact that Facebook can distribute dangerous amounts of misinformation around the world in the blink of an eye is not a new problem, but the social network’s ability to do so got more than the usual amount of attention during the past week. President Joe Biden told reporters during a White House scrum that Facebook was “killing people” by spreading disinformation, hoaxes, and conspiracy theories about COVID-19, and in particular about the efficacy of various vaccines. As Jon Allsop reported in the CJR newsletter on Wednesday, Biden backtracked somewhat on his original statement after some pushback from the company and others: Facebook said that the country needed to “move past the finger pointing” when it comes to COVID disinformation, and that it takes action against such content when it sees it. Biden responded that his point was simply that Facebook has enabled a small group of about a dozen accounts to spread disinformation that might be causing people to avoid getting vaccinated, and that this could result in an increase in deaths.

Biden appears to have gotten his information about this “disinformation dozen” from a group called the Center for Countering Digital Hate, which came out recently with research showing that the bulk of the disinformation around COVID-19 and vaccines appears to come from a handful of accounts. The implication of the president’s comment is that all Facebook has to do is get rid of a few bad apples, and the COVID disinformation problem will be solved. As Farhad Manjoo of the New York Times put it, however, Biden “reduced the complex scourge of runaway vaccine hesitancy into a cartoonishly simple matter of product design: If only Facebook would hit its Quit Killing People button, America would be healed again.” While Biden’s comments may make for a great TV news hit, solving a problem like disinformation at the scale of something like Facebook is much harder than he makes it sound, in part because it involves far more than just a dozen bad accounts. And even the definition of what qualifies as disinformation when it comes to COVID has changed over time.

As Jon Allsop described yesterday, part of the problem is that media outlets like Fox News seem to feel no compunction about spreading “fake news” about the virus in return for the attention of their viewers. That’s not a problem Facebook can fix, nor will ridding the social network of all hoaxes about COVID or vaccines make much of a dent in the influence of Fox’s hysteria — which information researcher Yochai Benkler of Harvard’s Berkman Klein Center for Internet and Society has argued was much more influential during the 2016 election than any social-media network. But even that’s just the tip of the disinformation iceberg. One of the most prominent sources of COVID and vaccine disinformation is a sitting US member of Congress: Marjorie Taylor Greene from Georgia. Another, Robert F. Kennedy Jr., is a member of one of the most famous political families in US history, and his anti-vaccination conspiracy theories put him near the top of the Center for Countering Digital Hate’s “disinformation dozen” list. What is Facebook supposed to do about their repeated misstatements?

Continue reading “Facebook’s disinformation problem is harder than it looks”

New details on the friction Trump caused inside Facebook

Note: This was originally published as the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer

Donald Trump’s election in 2016 created a significant amount of turmoil for Facebook, including accusations of improper data stewardship involving Cambridge Analytica, and a number of awkward appearances before Congressional committees, where founder and chief executive Mark Zuckerberg was questioned about the social network’s role in spreading disinformation related to everything from the 2016 election to the January 6 attack on the US Capitol building. According to a new book by two New York Times reporters, Cecilia Kang and Sheera Frenkel, the fallout from these events didn’t just cause external problems. It also reportedly created a rift between the Facebook CEO and his second-in-command, Sheryl Sandberg, the company’s chief operating officer and a former Google executive, who was hired in part for her Washington connections. “Mark Zuckerberg and Sheryl Sandberg’s Partnership Did Not Survive Trump,” said the Times headline on an excerpt from the book, which is entitled “An Ugly Truth: Inside Facebook’s Battle for Domination.”

In particular, the book alleges that Zuckerberg took control of almost all matters related to Trump, including how to handle his posting of hate speech and disinformation, matters that would previously have been handled by Sandberg — and decisions she reportedly disagreed with, but didn’t want to bring up with the Facebook founder. The company, not surprisingly, denies any and all reports of a rift between the two most powerful people at the top of the company. “This book tells a false narrative based on selective interviews, many from disgruntled individuals, and cherry-picked facts,” Dani Lever, a Facebook spokesperson, told Insider in a statement. “The fault lines that the authors depict between Mark and Sheryl and the people who work with them do not exist. All of Mark’s direct reports work closely with Sheryl and hers with Mark. Sheryl’s role at the company has not changed.”

The alleged friction between Zuckerberg and his second-in-command isn’t the only turmoil the company is dealing with as a result of its handling of Trump, according to the book. Frenkel and Kang report that there is a significant amount of dissent within the ranks of the company’s employees as well, especially over the social network’s failure to act quickly to stop the flow of disinformation from the president’s account. Kang told NPR’s Fresh Air podcast that one of the most fascinating things about doing the reporting for the book — which the authors said involved more than 400 interviews — was talking to employees who “kept trying to raise the alarm, saying ‘This is a problem. We are spreading misinformation. We are letting the president spread misinformation and it’s being amplified by our own algorithms. Our systems aren’t working the way we predicted and we should do something.'”

Continue reading “New details on the friction Trump caused inside Facebook”

Facebook launches Bulletin, its would-be Substack killer

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Three months ago, Facebook announced that it planned to offer a platform for writers and journalists to publish subscription newsletters, a product very similar to that offered by Substack, the venture-funded startup that has helped make subscription email newsletters a hot topic in journalistic circles over the past year. Last week, Facebook officially launched its new platform, known as Bulletin, along with a slate of high-profile writers, including author Malcolm Gladwell and sports reporter Erin Andrews. A blog post from Campbell Brown, the company’s VP of global news partnerships, and Anthea Watson Strong, product manager for news, said that Facebook has partnered with “a small, diverse group of voices,” some of whom are up-and-coming writers looking to find and build their audience, while others already have “a long history of work and a sizable following.”

In addition to Gladwell and Andrews, the content creators who have partnered with the company so far include Jessica Yellin, a former White House correspondent for CNN; Ron Claiborne, a former ABC News correspondent; Mitch Albom, sportswriter and author of such books as Tuesdays With Morrie; and Tyler Cowen, a high-profile economist and founder of the blog Marginal Revolution. Mark Zuckerberg, Facebook’s chief executive, said in a Facebook Live audio session held as part of the Bulletin launch that he also hopes to convince local journalists to use the platform in the future. “Part of what I think we can try to do here is make a real investment in local news,” he said.

How the company will decide which local journalists to include was not disclosed, but Facebook said earlier this year that it intends to spend $5 million “to support local journalists interested in starting or continuing their work on our new platform for independent writers.” The company opened up an application process at the time, and said successful applicants would be paid a multi-year licensing fee, and receive other monetization tools and services, but would have to commit to engaging with their audience “through Facebook tools such as Groups, live discussions, and other features.”

Continue reading “Facebook launches Bulletin, its would-be Substack killer”

My mother, Linda Miles Ingram

My mother was a woman with many hidden depths. She often came off as flighty or shallow, I think, because of her love of beautiful clothes or her fondness for acting, or her taste for perhaps a bit more wine than was really necessary, but she had a core of steel (which she got from her mother Ruth), and that allowed her to take on challenges that would have scared off lesser mortals — including setting off for the Seychelles islands in her retirement years, to help my father beat back the jungle around a would-be B&B, where she learned how to cook fruit bat, among other things (which involves throwing them against the wall to tenderize them, apparently).

After growing up in Toronto in relative luxury on South Drive, with her younger sister Kathy and little brother John, doing all the up-and-coming Toronto society things like debutante balls and being raised largely by nuns, Linda fell in love with a young man she met as part of the theatre group at the University of Western Ontario — as she told the story, she would often go back to his apartment and do the dishes while he called his fiancée, who eventually fell by the wayside, defeated by the charms of this blonde bombshell with the big vocabulary and the cat’s-eye glasses.

Although her family might have preferred to see her marry a doctor or lawyer, Linda decided to marry a penniless farm boy from Saskatchewan who had just joined the Royal Canadian Air Force as a fighter pilot. Despite — or perhaps because of — their differences, they became an inseparable team, he the director telling everyone where to stand and what to say (or which country and province to move to next) and she the young ingenue, playing the role of Air Force officer’s wife, party hostess, mother, and later grandmother, aunt, and walking encyclopedia.

Continue reading “My mother, Linda Miles Ingram”

Germany’s “flying train” from 1902

The movie clip above, from the Museum of Modern Art, may look like something from an H.G. Wells-style science fiction movie made at the turn of the century, but it is actually a clip of a functioning suspended railway in Germany built in the late 1800s. Originally called the Einschienige Hängebahn System Eugen Langen (the Eugen Langen Monorail Overhead Conveyor System), it is now known as the Wuppertaler Schwebebahn or Wuppertal Suspension Railway. Not only that, but it is still running — although the cars have been upgraded multiple times since it was built. It moves 25 million passengers every year.

Facebook Oversight Board punts on Donald Trump ban

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

For several months, those who follow politics and those who are more interested in social technology have both been waiting with bated breath for a decision on whether Donald Trump would continue to be banned from posting to his account on Facebook. The former president’s account was blocked after the social network decided he used the site to foment violence, resulting in the attack on the US Capitol on January 6th, and then the decision was sent to the company’s so-called Oversight Board for review. The board — a theoretically arm’s length group of former journalists, academics, and legal scholars, including the former Prime Minister of Denmark, the former editor-in-chief of The Guardian, and a former US federal circuit-court judge — handed down its ruling on Wednesday. The board decided that Facebook was right to have banned Trump from the network for fomenting violence, but said the company had no official policy on its books that allowed it to ban him (or anyone else) permanently, and told Facebook to come up with one if it wanted to do this in the future. This appeared to please some people partially, but almost no one completely.

For some critics, including the so-called Real Facebook Oversight Board — a group that includes former CIA officer and former Facebook advisor Yael Eisenstat, Facebook venture investor Roger McNamee, and crusading Philippine journalist Maria Ressa — the board’s decision on Trump just reinforces the fiction that the Oversight Board has any power whatsoever over the company. Although Facebook has gone to great lengths to create a structure that puts the board at arm’s length and theoretically gives it autonomy, the board was still created by Facebook and is funded by Facebook. It also has a fairly limited remit, in the sense that it can make decisions about whether content (or accounts) should be removed or not, but it has no ability to question or influence any of Facebook’s business decisions, the way its algorithms function, how its advertising strategy works, and so on. The board may have advised Facebook that it should have a policy about how to deal with government leaders who incite violence, but if the company decides not to create one, or not to implement it, the board can do nothing.

On a broader level, some critics argue that all of the attention being paid to the Oversight Board and its Trump decision — not to mention the references to it being Facebook’s Supreme Court — play into the company’s desire to make it seem like a worthwhile or even pioneering exercise in corporate governance. For some at least, it is more like a sideshow, or a Potemkin village: it looks nice from the outside, but behind the facade it’s just a two-dimensional representation of governance, held up by sticks. Shira Ovide of the New York Times wrote: “Facebook is not a representative democracy with branches of government that keep a check on one another. It is a castle ruled by an all-powerful king who has invited billions of people inside to mingle — but only if they abide by opaque, ever-changing rules that are often applied by a fleet of mostly lower-wage workers.”

Continue reading “Facebook Oversight Board punts on Donald Trump ban”

Did the Facebook Oversight Board drop the ball on Trump?

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, after months of deliberation, the Facebook Oversight Board — the theoretically independent body that adjudicates cases in which content has been removed by the social network — released its decision on the banning of Donald Trump. The former president’s account was blocked permanently following the attack on the US Capitol building on January 6th, after Facebook decided he was using it to incite violence. It sent this decision to the Oversight Board for review, and the board announced its ruling last week — a two-part decision that upheld the ban on Trump as justified, but also noted that Facebook doesn’t have a policy that allows it to ban accounts indefinitely. So the board gave the company six months to either come up with such a policy, or impose a time limit on the Trump ban. Some observers saw this as the Oversight Board “punting” on the choice of whether to ban the former president, rather than making a hard and fast decision, while others argued that paying so much attention to the rulings of a quasi-independent body created by the company it is supposed to oversee meant giving the board (and Facebook) too much credit, and was a distraction from the company’s real problems.

Is the Oversight Board a valid mechanism for making these kinds of content decisions, or is Facebook just trying to pass the buck and avoid responsibility? Did the board’s ruling on Trump’s ban reinforce this latter idea, or is the board actively fighting to ensure that it’s not just a rubber stamp for Facebook’s decisions? To answer these and other related questions, we’re talking this week with a number of experts on Galley, CJR’s digital discussion platform, including Nathalie Maréchal, a policy analyst at Ranking Digital Rights; Kate Klonick, a law professor at St. John’s in New York who has been following the Oversight Board since its inception, and Evelyn Douek, a lecturer at Harvard Law School and an affiliate at the Berkman Klein Center for Internet and Society. Maréchal, for one, falls firmly into the camp that believes the Oversight Board is mostly a distraction from the important questions about Facebook and its control over speech and privacy. “The more time and energy we all spend obsessing over content moderation, the less time and energy we have scrutinizing the Big Tech Companies’ business models and working to pass legislation that actually protects us from these companies’ voraciousness,” she said.

The most important thing that the government could do to rein in Facebook’s power, Maréchal says, is “pass federal privacy legislation that includes data minimization (they can only collect data that they actually need to perform the service requested by the user), purpose limitation (they can only use data for the specific purpose they collected it for — ie, not for advertising), and a robust enforcement mechanism that includes penalties that go beyond fines, since we know that fines are just a slap-on-the-wrist cost of doing business.” Klonick, for her part, is a little more willing to see the Oversight Board as a potential positive influence on Facebook — although not a solution to all of the company’s myriad problems, by any means. And Klonick knows the board more intimately than most. In 2019, she convinced the company to give her access to those inside Facebook who were responsible for creating the board and developing its mandate and governance structure, and she later wrote about this process for The New Yorker and others.

Continue reading “Did the Facebook Oversight Board drop the ball on Trump?”

Social networks accused of censoring Palestinian content

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Violence between Israel and Palestine has been going on for decades, but the conflict has flared up in recent weeks, in part because of the forced eviction of Palestinians who live in Jerusalem on land claimed by Israel, and attacks on Muslims near the Al-Aqsa mosque toward the end of the holy month of Ramadan. But as Palestinians and their supporters have shared images and posts about the violence on Facebook, Twitter, and Instagram, some have noticed their content suddenly disappearing, or their posts being flagged for breaches of the platforms’ terms of use when no such breach had occurred. In some cases, their accounts have been suspended, including the Twitter account of Palestinian-American writer Mariam Barghouti, who had been posting photos and videos of the violence in Jerusalem. Twitter later restored Barghouti’s account, and apologized for the suspension, saying it was done by mistake.

Some of those who have been covering such issues for years don’t think these kinds of things are a mistake — they believe social networks are deliberately censoring Palestinian content. In a recent panel discussion on Al Jazeera’s show The Stream, Marwa Fatafta of the human-rights advocacy group AccessNow said this is not a new problem, but it has recently gotten worse. “Activists and journalists and users of social media have been decrying this kind of censorship for years,” she said. “But I’ve been writing about this topic for a long time, and I have not seen anything of this scale. It’s so brazen and so incredible, it’s beyond censorship — it’s digital repression. They are actively suppressing the narrative of Palestinians or those who are documenting these war crimes.”

On Monday, AccessNow did a Twitter thread about censorship involving Palestinian content on Facebook, Twitter, Instagram, and TikTok. The group said it has received “hundreds of reports that social platforms are suppressing Palestinian protest hashtags, blocking livestreams, and removing posts and accounts.” Ameer Al-Khatahtbeh, who runs a magazine for millennials called Muslim, says he has documented 12,000 acts of censorship on Instagram alone in the past several weeks.

Continue reading “Social networks accused of censoring Palestinian content”

Facebook and the dilemma of coordinated inauthentic behavior

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Yesterday, Facebook released a report on what it calls “influence operations” on its platform, which are defined as “coordinated efforts to manipulate or corrupt public debate for a strategic goal.” By this, the company seems to mean primarily the kinds of activity that Americans heard about during the 2016 election, from entities like the Russian “troll farm” known as the Internet Research Agency, which used fake accounts to spread disinformation about the election and to just generally cause chaos. Facebook says in this “threat report” that it has uncovered evidence of disinformation campaigns in more than 50 countries since 2017, and it breaks down some of the details of 150 of these operations over that period. In addition to noting that Russia is still the leading player in this kind of campaign (at least in the ones that Facebook knows about), the company describes how dealing with these kinds of threats has become much more complex since 2016.

One of the big challenges is defining what qualifies as “coordinated inauthentic behavior.” Although Facebook doesn’t really deal with this in its report, much of what happens on the service (and other similar platforms) would fit that description, including much of the advertising that is the company’s bread and butter. In private groups devoted to everything from politics to fitness and beauty products, there are likely plenty of posts that could be described as “coordinated efforts to manipulate public debate for a strategic goal,” albeit not the kind that rise to the level of a Russian troll farm.

Influence operations can take a number of forms, Facebook says, “from covert campaigns that rely on fake identities to overt, state-controlled media efforts that use authentic and influential voices to promote messages that may or may not be false.” For example, during the 2016 election, the Internet Research Agency created groups devoted to Black Lives Matter and other topics that were filled with authentic posts from real users who were committed to the cause. In one case mentioned in Facebook’s threat report, a US marketing firm working for clients such as the PAC Turning Point USA “recruited a staff of teenagers to pose as unaffiliated voters” and comment on various pages and accounts. As researchers like Shannon McGregor of the University of North Carolina note, “enlisting supporters in coordinated social media efforts is actually a routine campaign practice.”

Continue reading “Facebook and the dilemma of coordinated inauthentic behavior”