In September of 2014, Tom Maxwell moved with his family into a large, historic home in Hillsborough, North Carolina. With its affordable rent and lush surroundings, it seemed too good to be true. Nine months later, they broke their lease, loaded up the truck, and ran away as fast as they could from the spirits and apparitions that had tortured them. Only afterward would Maxwell learn about the 300 years of bad mojo that had piled up in the house they called Nannie.
“The old and sturdy house, set on rolling pastureland alongside a placid river, appeared safe and calm. It was not. Nannie, and the land around her, was thoroughly haunted. In less than a year we would break the lease, perform a binding ritual, and leave.
As the nature and intensity of the hauntings increased, an elongate man appeared downstairs, almost two-dimensional in his flatness. He would peep at you from around corners or through doorways, just inside your peripheral vision. When you looked at him, he would flash a toothy smile, flatten into the wall and vanish. Scratches appeared on Brooke’s back several times, before my eyes, as we showered.
A hooded thing with long, thin arms began standing over Brooke as she slept. We discussed the possibility of night-hag syndrome, a particularly unpleasant type of sleep paralysis. Whatever it was, it was recurring and utterly terrifying. We had a list of nicknames for our tormentors: Smokey, Spaghetti Arms, The Spook Parade, Bonnet Lady, Smiley, Buckskin Man, Kitchen Lady, The Upstairs Thing.”
Note: This was originally published as the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer.
Last week, Facebook whistleblower Frances Haugen testified before a Senate subcommittee about the company’s propensity for disregarding its own research into the harms done by its content algorithms, particularly among young girls who use Instagram, its photo-sharing site. One of the solutions that Haugen recommended is something a number of other Facebook critics have also proposed over the past several years: regulatory oversight that would impose standards of behavior on the social network (and presumably on other social networks such as Twitter and YouTube) in an attempt to minimize their various harms. “Right now, the only people in the world who are trained to … understand what’s happening inside of Facebook, are people who grew up inside of Facebook or Pinterest or another social media company,” Haugen told the Senate subcommittee. She said the company’s profit motive was so strong that Facebook would not change unless it was subjected to pressure from a government regulator. “Until incentives change at Facebook, we should not expect Facebook to change,” she said. “We need action from Congress.”
There are plenty of critics of this idea, but there’s also one somewhat surprising supporter: Facebook. In a March 2019 op-ed in the Washington Post, Mark Zuckerberg, the chief executive of Facebook, argued that government regulation is necessary and that he welcomes it: “Every day, we make decisions about what speech is harmful, what constitutes political advertising, and how to prevent sophisticated cyberattacks,” he wrote. “But if we were starting from scratch, we wouldn’t ask companies to make these judgments alone. I believe we need a more active role for governments and regulators.” Among other things, Zuckerberg said he agreed with the need for a data protection law similar to Europe’s General Data Protection Regulation. “I believe it would be good for the Internet if more countries adopted regulation such as GDPR as a common framework,” he wrote.
Nick Clegg, Facebook’s vice president for global affairs, reiterated this line of argument in interviews following Haugen’s 60 Minutes interview. The algorithms the company uses “should be held to account, if necessary by regulation so that people can match what our systems say they’re supposed to do from what actually happens,” Clegg said on CNN. He also said the company is open to amending Section 230 of the Communications Decency Act, which protects platforms from liability for what their users post. “We’re not saying this is a substitution of our own responsibilities,” Clegg told NBC, “but there are a whole bunch of things that only regulators and lawmakers can do. I don’t think anyone wants a private company to adjudicate on these difficult trade-offs between free expression on one hand and moderating or removing content on the other. Only lawmakers can create a digital regulator.”
Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.
Several weeks ago, the Wall Street Journal published a series of six investigative news stories about Facebook, alleging a pattern of questionable behavior on the part of both the social network and its photo-sharing service, Instagram. One alleged that changes to the Facebook news feed algorithm, which were purportedly designed to improve the news-reading experience, actually had the opposite effect, and “turned it into an angrier place.” Another said that the company knew about the negative effects its Instagram service was having on the mental health of young girls, because researchers working at Facebook had repeatedly mentioned it during briefings with senior executives, but Facebook took little or no action. Other Journal stories revealed a little-known feature that allowed celebrities to avoid responsibility for breaching Facebook’s rules, and claimed that the company knew its services were being used by drug cartels and human trafficking networks, but routinely failed to do anything to stop it (Facebook responded that the stories are inaccurate and that it cares deeply about the effect its products have on users, including young girls).
The Journal reports were all based on what the paper called “an extensive array of internal company communications” given to it by a whistleblower, a former Facebook staffer who copied the documents before quitting the company out of disagreement with its behavior. On Sunday, the whistleblower revealed herself on 60 Minutes to be Frances Haugen, a former product manager at Facebook who has also worked for Google, Pinterest, Yelp, and a number of other technology companies. On Tuesday, Haugen testified before the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security about the potential dangers of Instagram for young users (Haugen also posted her testimony to her personal website). In both her 60 Minutes interview and her congressional testimony, Haugen made the same central point: that Facebook knew about the dangers of the recommendation algorithms that power it and Instagram, but chose to do nothing. It knew about these dangers, Haugen said, because the company’s own researchers had mentioned them repeatedly in multiple research papers.
“The company’s leadership knows how to make Facebook and Instagram safer but won’t make the necessary changes because they have put their astronomical profits before people,” Haugen said during her testimony to the Senate committee. The bottom line, she said, is that Congress must take action, and she compared Facebook to other industries that wound up being regulated by the government in order to protect consumers from harm, such as tobacco companies and car makers. One of the big challenges with Facebook, she argued, is that legislators don’t have any idea how the company’s products work, because it is so reluctant to either share data from its own internal research or provide data to outside scientists. “This inability to see into Facebook’s actual systems and confirm they work as communicated is like the Department of Transportation regulating cars by only watching them drive down the highway,” Haugen told the Senate committee.