Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.
As part of its ongoing efforts to deal with the spread of misinformation on its platform, Twitter is experimenting with adding colored labels that would appear directly beneath inaccurate statements posted by politicians and other public figures, according to a leaked demo of new features recently obtained by NBC News. The labels would contain fact-checks written either by Twitter’s own fact-checkers, or by journalists and other users who agreed to participate in a kind of “community reports” crowdsourcing effort.
Twitter later confirmed to the network that the leaked demo (which was accessed via a publicly available website) is one possible version of a new policy aimed at curbing the spread of misinformation. The original NBC News report said the new features would roll out on March 5, but the story was later updated to say Twitter doesn’t have plans to roll out the labels — or any other misinformation features — on any specific date. Whether such a feature would actually help curb misinformation even if it were rolled out, meanwhile, is still an open question.
After NBC ran its report, communications researcher Rebekah Tromble of George Washington University posted a tweet that said: “Please, @Twitter, do NOT do this. Do NOT add this massive flag. As @shannimcg and I argued, Facebook already made this mistake. This will only increase the circulation of false info.” Tromble included a link to a research paper on social-network design that she and a colleague wrote last year, which described how Facebook tried a similar type of misinformation flag and later scrapped the feature out of concern that the flags would backfire and make the problem worse. Research, the company said, appeared to show that flagging something as inaccurate in some cases made users believe it more strongly rather than less, a phenomenon that has come to be known as the “backfire effect.” A Facebook researcher responded, however, that subsequent research shows the backfire effect is not as strong as it was once believed to be.
Researcher Brendan Nyhan of Dartmouth College, who wrote one of the first papers describing the backfire effect in 2010, has more recently said that further studies have shown that “while backfire may occur in some cases, the evidence now suggests it is rare rather than the norm. Generally debunking can make people’s beliefs in specific claims more accurate.” However, in an email to CJR, Tromble said there are still risks to including such misinformation flags, among them that they will make the misinformation more obvious and thereby encourage others to share it. “The use of obtrusive flags is likely to draw more attention to the tweet,” she said. “In some instances, people might share the tweet in order to condemn the misinformation, but even with good intent, it still increases the reach of the misinformation.”
There’s also reason for concern about such labels — not to mention Facebook’s entire third-party fact-checking program — because of a phenomenon known as the “implied truth effect.” In a nutshell, some research has found that when specific pieces of information are labeled as false, users assume that everything else on the service has also been fact-checked and verified as true, which of course is never the case. Twitter has said that if the colored flags are used, any tweets identified in that way would be made less visible by the network’s filtering algorithms. But it is unclear how this would work, Tromble says, and “algorithmic efforts to reduce visibility could, at least in theory, be offset by attention drawn by the prominence of the flag itself.”
Evelyn Douek, a student at Harvard Law School and a fellow at the Berkman Klein Center, was somewhat more optimistic about Twitter’s proposal. She said on Twitter that services like Twitter and Facebook “need to get out of the Take-down/Leave up paradigm of content moderation, so thinking like this excites me.” However, the key to whether it works, she said in an email to CJR, is whether Twitter is transparent about the implementation and the results of the experiment.
“The devil will be in the details,” Douek said. Issues like the backfire effect and the implied-truth effect, she said, don’t mean that “we should give up on creative thinking outside the take-down/leave up paradigm that dominates so much of the conversation about content moderation. I’m still optimistic that we can find solutions that aren’t so binary.” Whether Twitter ever releases anything like the demo NBC found, of course, remains to be seen.
Here’s more on misinformation and social networks:
- No virus scams: Facebook CEO Mark Zuckerberg says he is “focused on making sure everyone can access credible and accurate information” about the coronavirus, COVID-19. Users who search for the term will see a pop-up that directs them to the World Health Organization or their local health authority for the latest information, and the company is giving the WHO as many free ads as it needs to distribute information. Facebook is also taking steps to curb misinformation, Zuckerberg said, by removing false claims and conspiracy theories that are flagged by health organizations, and by blocking users from running ads that try to exploit the situation by offering cures, etc.
- Zero tolerance: Twitter also said it is expanding a “dedicated search prompt” feature it launched in January when the virus first started to become a global issue. The company said it will ensure that “when you come to the service for information about COVID-19, you are met with credible, authoritative content at the top of your search experience.” The network said it will be “monitoring the conversation” to make sure that it has all the possible keywords and spellings of the virus covered, and added that its safety and security team would be taking a “zero-tolerance approach to platform manipulation and any other attempts to abuse our service at this critical juncture.”
- Not WhatsApp: Twitter and Facebook may be cracking down on the spread of misinformation about the virus, but according to a Washington Post report, fake cures and other coronavirus conspiracy theories are flooding Facebook-owned WhatsApp, leaving governments and users with a “sense of panic.” The Post says people in Nigeria, Singapore, Brazil, Pakistan, and other countries “say they’ve seen a flood of misinformation on WhatsApp about the number of people affected by coronavirus, the way the illness is transmitted and the availability of treatments. The messages and voice memos have instilled fear, troubled businesses and created public health headaches for governments.”
Other notable stories:
- Sam Thielman and Ishaan Jhaveri write for CJR about a project the Tow Center at Columbia is working on that is tracking all the TV ads that presidential candidate Mike Bloomberg has bought since he started his campaign. So far, the project has accounted for about $9 million of the more than $258 million Bloomberg is spending on television advertising through Assembly, his Manhattan advertising firm. Tow researchers have done this “by compiling invoices sent to Assembly by the Sinclair Broadcasting Group, the largest station operator in the US.” Invoices, orders, and signed statements about candidacy are all submitted to the Federal Communications Commission’s website, which maintains a public inspection file for every licensed broadcaster.
- According to the Washington Post, MSNBC host Chris Matthews, 74, had a conversation with the network about retiring in early 2019, before the election cycle shifted into high gear. He made it known that he wanted to start getting home before 9 p.m. to have dinner with his wife, according to a person familiar with the discussions. In the end, however, Matthews remained in his Hardball slot and made plans to retire after the election, but a series of embarrassments in recent weeks changed all that and put Matthews in the position of having to jump before he was pushed, the Post says.
- American Media, owner of at least a dozen fitness and celebrity magazines, continued a wave of layoffs Monday, cutting at least 23 positions, according to former employees and documents obtained by Business Insider. Editorial staffers at nearly all levels were affected, including associate editors, digital writers, an executive editor, a video producer and reporter, and other staff. A New York Post report said nearly all of Radar’s digital staffers were laid off, giving rise to speculation the site could be closed entirely, but an American Media spokesman denied that Radar Online was going dark.
- Politico Europe is expanding its UK team after identifying a “massive opportunity for policy-focused coverage” post-Brexit. Four new reporting and editing roles are being created to bolster the London-based editorial team, who will move to a larger office in April, according to the Press Gazette. The new roles are expected to be advertised this month, with more editorial and commercial vacancies set to open up later this year. Politico Europe currently has about ten journalists based in London.
- Vanity Fair has a feature about the New York Daily News, and how staffers say cuts by various owners have left the paper a shadow of its former self. “Misery had become so commonplace,” the piece says, that news of cost-cutting hedge fund Alden Global’s stake in the paper’s parent was barely a blip on the radar. “There’s nobody left to lay off, and no parts to strip from what’s left of the paper. All that’s left is turning out the lights,” said one staffer. “Just a decade ago we were major players in the national media. Money was no object and we could go the distance on anything,” said another. “Now we have a skeleton crew that races around to prop up a once great newspaper.”
- A jury on Wednesday found that former PBS host Tavis Smiley violated the morals clause of his contract by having affairs with multiple subordinates. Smiley sued PBS in February 2018, alleging that the broadcaster had used a sham investigation as a pretext to cancel his show. PBS countersued, claiming that Smiley’s sexual conduct constituted a breach of their contract. The jury began deliberating on Monday, and issued its verdict in favor of PBS on Wednesday morning. PBS will be awarded at least $1.5 million.
- A report from Bloomberg says that Google-owned video service YouTube has become “the platform of choice for reporters facing one of the harshest media crackdowns in Pakistan’s 72-year history,” including Syed Talat Hussain, a former journalist with the country’s largest broadcaster, who quit after he was told his programs were too critical of the army and the government. “Hunting down dissidents and demonizing critics as traitors was always part of the media landscape, but the scale, audacity and scope of it we see now remains unprecedented,” said Hussain.
- A couple is facing charges after authorities in Susquehanna County say they set fire to a building they owned that housed their family-owned weekly newspaper. Charles and Rita Ficarro, 69 and 64, were both charged in connection with the September fire at the building that housed the Susquehanna County Transcript in Susquehanna. According to reports, Charles purposely set a fire in a first-floor closet, and Rita deposited a $35,000 check from their insurance company. The building, which also contained some vacant apartments, had housed the newspaper for years. The paper is still in circulation, but the building has since been demolished.