YouTube says it wants to fix things, but it feels too little and too late

Most of the attention around “fake news” and misinformation so far has tended to focus on Facebook, in part because of its enormous size, and because of the role that Russian trolls and data-driven targeting by organizations like Cambridge Analytica may (or may not) have played in the US election. But YouTube is also a gigantic misinformation machine, as CJR has pointed out a number of times, and yet it seems to be almost incapable of dealing with the fallout from the machinery it has created.

A piece in the latest issue of BusinessWeek is entitled “YouTube’s Plan to Clean Up the Mess That Made It Rich,” but there doesn’t seem to be any real plan per se, or if there is, the article doesn’t describe it in any detail. It appears to consist of hiring more moderators to police content, and/or working on artificial intelligence as a way of flagging the worst offenders—in other words, more or less the same solution that Facebook has offered when it gets criticized for similar things.

And just as Facebook did when CEO Mark Zuckerberg admitted in an interview that the company simply didn’t think about the negative aspects of its technology for the first 10 years of its existence, YouTube would like us to believe that most of these problems came as a complete surprise. Just the growing pains of a hyperactive and rapidly expanding toddler, in other words:

In interviews at the San Bruno complex, YouTube executives often resorted to a civic metaphor: YouTube is like a small town that’s grown so large, so fast, that its municipal systems—its zoning laws, courts, and sanitation crews, if you will—have failed to keep pace. “We’ve gone from being a small village to being a city that requires proper infrastructure,” Kyncl says. “That’s what we’ve been building.”

The only problem with that kind of argument, either from Facebook or Google, is that hundreds of thousands of smart people have been building this machinery for more than a decade now. These are not country bumpkins in a small town somewhere. To assume no one ever suggested or thought about the potential negative aspects of these networks defies belief. The only other explanation is that those concerns simply weren’t seen as being important enough, or at least not as important as growth itself.

Former YouTube executive Hunter Walk tells BusinessWeek that resources were gradually taken away from trying to improve the environment on the network. And former YouTube engineer Guillaume Chaslot tells the magazine the same thing he told CJR for a piece on “computational propaganda,” which is that suggestions about ways to keep the recommendation engine from promoting conspiracy theories and fake news were rejected, in favor of a single-minded focus on growth and engagement.

Is the tide turning? Perhaps. But even as YouTube and Facebook say they are committed to solving these problems, their revenue continues to grow at eye-popping rates—analysts estimate YouTube’s revenues are in the $22-billion range, and Facebook’s revenues climbed by almost 50 percent in the latest quarter to $12 billion. In other words, even the high-profile issues Facebook is having with the fallout from the Cambridge Analytica data leak don’t seem to be having much impact on the bottom line. What incentive is there to attack any of these problems when the overall business is going so well?

British MPs say they may try to compel Zuckerberg to testify

Britain failed in its attempts to get Facebook CEO Mark Zuckerberg to come and testify before a committee looking into the problem of fake news and user privacy, but it seems the British parliament hasn’t given up quite yet. Damian Collins, chair of the parliamentary Digital, Culture, Media and Sport Committee, suggested in a news release following the hearing that Zuckerberg could be compelled to testify in Britain if he enters the UK on his way to hearings being held by the European Parliament. Collins said:

We believe that, given the large number of outstanding questions for Facebook to answer, Mark Zuckerberg should still appear in front of the Committee. We note, in particular, reports that he intends to travel to Europe in May to give evidence to the European Parliament. As an American citizen living in California, Mr Zuckerberg does not normally come under the jurisdiction of the UK Parliament, but he will the next time he enters the country. We hope that he will respond positively to our request, but if not the Committee will resolve to issue a formal summons for him to appear.

Collins also tried to get a dig in right out of the gate in his questioning of Facebook chief technology officer Mike Schroepfer on Thursday, by asking him how much he planned to spend on his next car and the square footage of his house (Schroepfer said he didn’t know the answer to either question).

This was similar to the tactic used by US senator Dick Durbin when Zuckerberg appeared before Congress earlier this month — Durbin asked whether Zuckerberg could tell the committee what hotel he was staying at, and Zuckerberg said he would rather not. The implication, of course, was that there are certain kinds of private information that even Facebook execs don’t want to reveal.

Apart from these attempts at theatrics, the British hearing amounted to almost five hours of Schroepfer avoiding most of the questions asked of him about the Cambridge Analytica data leak, and of MPs taking the company to task for its cavalier approach to user privacy. According to a statement from the committee, most of the MPs found Schroepfer’s responses to be “unsatisfactory,” and he failed to answer a number of crucial questions, including:

  • Whether Facebook knew about the Cambridge Analytica data harvesting when it gave evidence to the committee on February 8
  • How much money Facebook made from “dark ads,” and whether it keeps any archive or record of them
  • Why an individual adjusting their privacy settings cannot completely block all categories of ads
  • How many developers had access to user data between 2011 and 2014, before Facebook changed its platform policies
  • Why Facebook moved one and a half billion accounts from Facebook Ireland to Facebook Inc a month before GDPR came into force
  • Why Facebook collects and uses the data of individuals who are not on Facebook
  • What changes Facebook plans to make ahead of GDPR to become fully compliant

In addition to not answering a number of key questions, Schroepfer admitted that Facebook did not read the terms and conditions of the app that harvested the personal data of more than 87 million users, data that Cambridge Analytica wound up using for targeted political advertising during the US election. And he said he regretted that Facebook sent a letter threatening The Guardian before its exposé ran, though he said he understood the letter was intended “just to correct the facts.”

Schroepfer did provide at least one positive note for MPs: The Facebook executive promised that the company will make sure that only verified accounts will be allowed to place political ads on its platform, and that all of those ads will be vetted in time for elections in England and Northern Ireland next year. Also, Schroepfer said that users will be able to view all of the promotions paid for by a campaign, not just those targeted at them.

Do people really want to watch a Netflix show about BuzzFeed journalists?

Netflix announced on Wednesday that it is rolling out a new short-form series called “Follow This,” which will profile writers and editors who work at BuzzFeed News and the stories they are working on, in 15-minute segments. As an example, a promo for the series features BuzzFeed reporter Scaachi Koul talking about a story she is working on related to ASMR (autonomous sensory meridian response), a somewhat bizarre Internet subculture of people who create and consume videos consisting solely of soothing noises designed to trigger a feeling of mild euphoria.

It’s a classic kind of BuzzFeed story, and the clip does its best to make the process of reporting interesting to non-journalists, with short cut-scenes of people typing on their laptops, or monitors with interesting-looking things on them. But do ordinary people really want to watch journalists at work? Obviously most journalists would like to think the answer is yes, but it’s not clear whether that’s actually true.

Whenever a movie like Spotlight or The Post comes out and gets a good response at the box office, journalists cheer in part because it validates what they do, and even in some cases makes it seem mildly interesting. But it often does this by leaving out all the hard work, and focusing on tropes like the chain-smoking reporter who meets his sources in dark alleys, or the crusty editor with the heart of gold.

It’s easy to see why BuzzFeed would jump at a Netflix series—it could give the site a higher profile, and promote some of its writers. And it’s easy to see why the streaming service would be interested: Netflix has a desperate need for more and more content, and Follow This is a good way to experiment with the 15-minute format (which Facebook Watch and YouTube also have in their sights). But is there any real demand for this kind of content, apart from journalists and their friends?

https://twitter.com/desertgardens/status/989144949697347586

It’s true that BuzzFeed has produced some success stories from its own internal short-form video experiments, including former writer Matt Bellassai, who gained a following for his Whine About It series, in which he complained about things while drinking wine in the BuzzFeed newsroom, and later left the site to pursue a career as a comedian. But that seemed more like a happy accident.

Journalist friends have argued the time may be ripe for this kind of behind-the-scenes series, now that the media and journalism itself are under fire from the president, and people are theoretically more interested in protecting it. And perhaps BuzzFeed News can manage to tap into some of that with this series. Or it might join TMZ Live—a behind-the-scenes show about the celebrity news site and its reporting—as something that exists for a very tiny niche market. And maybe that’s as it should be.

Facebook pulls back the curtain on what kinds of speech it tolerates

Last year, The Guardian published leaked documents it said were internal Facebook rule books on how and when to moderate inappropriate content. The guidelines caused significant controversy because they allowed threats of violence against women, children and various ethnic groups to remain on the site, as long as the threats were not too specific; harassment of white men, however, was not tolerated because they were considered a “protected group.” The leak sparked an ongoing debate over the way Facebook decides which kinds of speech it will censor and which it won’t.

On Tuesday, the giant social network finally gave in to pressure from critics and published the community standards guidelines it says it uses to make most of its content decisions, with categories ranging from “violence and criminal behavior” to “integrity and authenticity.” The company said in a post introducing the rules that it generally errs on the side of allowing content, even when some find it objectionable “unless removing that content can prevent a specific harm.” Facebook also said that it often allows content that technically violates its standards “if we feel that it is newsworthy, significant, or important to the public interest.”

Some of the company’s rules are fairly straightforward, such as not allowing people to sell drugs or firearms. But much of what the social network is trying to do amounts to pinning Jell-O to the wall, especially when it comes to censoring speech around violence. The blog post says that Facebook considers “the language, context and details” in order to determine when content represents a “credible threat to public or personal safety.” But drawing those kinds of sharp lines is incredibly difficult, especially given the billions of posts that Facebook gets every day, which explains why the company gets so much criticism from users.

In an attempt to address some of those complaints, Facebook also announced it is introducing an official appeal process that will allow users to protest the removal of content or blocking of accounts. Until now, anyone who had content removed had to try and reach a support person via email to a general Facebook account, or through posts on social media. But Facebook says it is rolling out an official process that will allow users to request a review of the decision and get a response within 24 hours. Appeals will start being allowed for content involving nudity, hate speech and graphic violence, with other content types added later.
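
As described, the appeal flow is simple enough to sketch. The following is only a schematic of the process as reported above; every name and structure in it is invented for illustration, and it is in no way Facebook’s actual system:

```python
# Schematic sketch of the appeal flow as described above; names and
# structure are hypothetical, not Facebook's actual implementation.
from dataclasses import dataclass

# Appeals start with these content types, with others added later.
APPEALABLE = {"nudity", "hate_speech", "graphic_violence"}

@dataclass
class Takedown:
    post_id: str
    category: str
    appealed: bool = False

def request_review(takedown: Takedown) -> str:
    """User-initiated appeal: a human re-reviews the decision, and the
    user is supposed to get a response within 24 hours."""
    if takedown.category not in APPEALABLE:
        return "appeals for this content type are not yet available"
    takedown.appealed = True
    return "queued for human re-review; response due within 24 hours"
```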

Facebook’s new transparency around such issues is admirable, but it still raises troubling questions about how much power the social network has over the speech and behavior of billions of people. The First Amendment technically only applies to government action, but when an entity of Facebook’s size and influence decides to ban or censor content, it has almost as much impact as if a government did it.

Here are some links to more information on Facebook’s latest moves:

  • Facebook has experts: Monika Bickert, Facebook’s VP of Global Policy Management, describes how community standards decisions are made: “We have people in 11 offices around the world, including subject matter experts on issues such as hate speech, child safety and terrorism. Many of us have worked on the issues of expression and safety long before coming to Facebook.” Bickert says that as a criminal prosecutor she worked on everything from child safety to counterterrorism, and other members of the team include a former rape crisis counsellor, a human-rights lawyer and an academic who studies hate speech.
  • Not enough: Malkia Cyril, a Black Lives Matter activist and executive director of the Center for Media Justice, was part of a group of civil-rights organizations that pushed Facebook to make its moderation system less racially biased. She tells The Washington Post that the company’s latest moves don’t go far enough in dealing with white supremacy and hate on the social network. “This is just a drop in the bucket,” she says. “What’s needed now is an independent audit to ensure that the basic civil rights of users are protected.”
  • Protected but still sensitive: As Wired magazine points out, Facebook doesn’t have to remove any of the offensive or disturbing content on its network if it doesn’t want to, thanks to Section 230 of the Communications Decency Act, which protects online services such as Google, Twitter and Facebook from legal consequences for the actions of their users or the content they post. But all of the major platforms have been trying to boost their efforts at removing the worst of the material they host, in part to try and stave off potential regulation.
  • The advisory team: As part of Facebook’s attempts to be more transparent about how it makes such decisions, the company allowed a number of journalists to sit in on one of the social network’s weekly community standards meetings, where the team of advisers decides what content meets the guidelines and what doesn’t. HuffPost says the attendees included people “who specialize in public policy, legal matters, product development and communication,” and said there was very little mention of what other large platforms such as Google do when it comes to removing offensive or disturbing content.

Other notable stories:

  • After a number of anti-gay posts were found on the blog that she mothballed last year following similar allegations, MSNBC host Joy Reid claims the posts in question were the result of hackers infiltrating the Internet Archive, which is the only place her blog is still available (the Archive is an ongoing attempt to preserve a copy of as many websites as possible). The Archive, however, says that after an investigation of the claims it could find no evidence that the blog was tampered with.
  • CJR’s Alexandra Neason writes about a group of high-school students who were frustrated by the limitations of the Freedom of Information Act, and so decided to write their own bill — known as the Cold Case Records Collection Act — to make it easier to get documents related to civil rights-era crimes from the FBI and other agencies, without having them tied up in red tape or redacted to the point where they’re unusable.
  • Google is rolling out its new subscription tool, which it calls Subscribe with Google, and its first launch partner is the McClatchy newspaper chain. The search giant says that its new tool allows people to subscribe to newspapers and other online publications with just two clicks, at which point Google highlights results from those publications in search results for those users who sign up. McClatchy plans to implement the tool on all 30 of its local newspaper sites, according to Digiday.
  • In a fundraising email sent to his supporters, Donald Trump says that he won’t be attending the annual White House Correspondents’ Dinner because he says he doesn’t want to be “stuck in a room with a bunch of fake news liberals who hate me.” Instead, the president said he will be holding a rally in Michigan “to spend my evening with my favorite deplorables who love our movement and love America.”
  • In a Rolling Stone magazine feature, Ben Wofford writes about how Sinclair Broadcast Group is trying to build what amounts to a national network of hundreds of conservative-leaning, Fox News-style TV stations in small and medium-sized towns across the country, and how the Trump administration is making it easier for the company to do that. “Everything the FCC has done is custom-built for the business plan of one company, and that’s Sinclair,” one FCC commissioner told the magazine.

Zuckerberg is trying hard to get out in front of the regulatory wave

Like blind men trying to describe an elephant, everyone describing the Zuckerberg 2018 Apology Tour seems to have found whatever they wanted to find. Some members of Congress clearly believe they confronted the arrogant young billionaire and asked him the tough questions, while many observers—especially those in Silicon Valley—saw Congress demonstrating its ignorance of how Facebook works on even a basic level, proving itself completely unprepared to handle the problematic aspects of a giant social network married to a one-of-a-kind surveillance engine.

And what did Mark Zuckerberg see? It’s impossible to know for sure, but it’s likely that what he saw was a very clear show of power by Congress. Some of the questions may have been infantile, and some of the grandstanding amounted to a sideshow (as it so often does), but the hearings sent an obvious message: namely, that Congress thinks Facebook is up to something—even if it’s not too sure what it is exactly—and it is willing to consider legislation to clean things up.

In other words, if Zuckerberg wants to avoid another packed-house grilling in Washington, he is going to have to get out in front of this whole regulation thing, and that means figuring out how to surf that wave rather than getting smashed into the rocks by it. Regardless of what’s involved, it’s likely to be a lot better than Facebook being broken up, which is the real nightmare scenario.

Front-running the idea of regulation was clearly part of Zuckerberg’s agenda going into the hearings, because he took the surprising step of bringing up the idea himself in interviews, admitting that the giant social network might need to be regulated. That allowed him to be seen as leading the discussion of potential regulation rather than being dragged into it kicking and screaming, and he continued this approach during both the Senate and the House committee meetings.

When it comes to what kind of regulation he favors, however, Zuckerberg was considerably more wishy-washy. He told a Reuters reporter before the hearings that while the company plans to comply with Europe’s General Data Protection Regulation (GDPR), it won’t implement those same rules for users elsewhere. But in subsequent interviews and in Congress, he said Facebook does plan to extend GDPR-like protections outside Europe, then hedged on what exactly that would involve.

All of this suggests the Facebook CEO is going to try and game the Congressional regulatory process in much the same way Russian trolls and Trump-connected data brokers gamed Facebook’s rules. All Zuckerberg has to do is give the impression that he is moving ahead on implementing the same things legislators might want—more privacy controls, or even full data portability—while avoiding the things he doesn’t want, like allowing users to block Facebook from tracking them.

Whether the young billionaire in the dark-blue suit can thread that particular needle successfully, however, remains to be seen. Stay tuned!

Will Congress accept Mark Zuckerberg’s apology? Should we?

It’s showtime for Mark Zuckerberg. The Facebook co-founder and CEO appears before Congress on Wednesday, testifying before the House Committee on Energy and Commerce about the recent data leak involving the personal information of more than 87 million Facebook users, whose data was used by Cambridge Analytica to target them with advertising and misinformation during the 2016 election.

Of course, we already know what Zuckerberg plans to say, not just because Congress released the text of his prepared statement on Monday, but because (as more than one person has pointed out) we have been down this road with the Facebook CEO so many times that it’s easy to lose track of the exact number. In some ways, Facebook’s entire history is a series of privacy-related mishaps and screwups, followed by a sincere and heartfelt apology from Zuckerberg and other Facebook executives.

In a recent piece for Wired magazine, sociologist Zeynep Tufekci calls it “Zuckerberg’s 14-year apology tour.” She lists the highlights of the company’s on-again, off-again interest in users’ privacy, starting with the 2006 controversy over the introduction of the News Feed, which many saw as a privacy disaster. Then in 2007 it was “Beacon,” which tracked people’s purchases and in many cases made them public without their consent. And so on. Zuckerberg’s prepared remarks for Wednesday are in the same vein:

It’s clear now that we didn’t do enough to prevent these tools from being used for harm as well. That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.

The only real difference this time around is that Zuckerberg isn’t just apologizing to Facebook users in a blog post, he is testifying before Congress. And what he’s apologizing for isn’t just a few loose information policies or the fact that other users can see your purchases; he’s admitting that Facebook wasn’t prepared for the idea that its users’ data might be used to target election ads, or that Russian trolls would hijack the platform to try and swing the results of the presidential election. Does a “my bad” really cut it here?

What’s more than a little frustrating is that Zuckerberg’s apology statement suggests the company was just too darn naive, and too focused on all the good that Facebook can do in people’s lives. This might seem admirable, if it weren’t for the fact that literally dozens of researchers like Tufekci and others have been pointing out the potential dangers for years, complete with tangible examples. Not to mention that there is a long history of negative outcomes associated with other platforms that Facebook could have learned from.

Congress may ask some hard questions on Wednesday (or members might just use the occasion for some personal grandstanding, as many did in the previous sessions in November) and Mark Zuckerberg may even convince both them and us that he is sincerely repentant. But how many times can we watch the same show without figuring out that little is going to change, because to change would require a completely different business model? Fool me once, shame on you — fool me 15 times, and maybe shame isn’t even the right word to be using at this point.

If you like living in the middle of nowhere, you can get a great house really cheap

I find it endlessly fascinating how much amazingly cheap real estate there is if you look outside the major centres in North America. I would have assumed by now that the Internet would have enabled enough people to live anywhere and that house prices would have evened out, but that doesn’t seem to be the case. Look at some of the prices for these amazing homes on Old House Dreams:

  • A 1910 home in Winfield, Kansas — four bedrooms and 2,700 square feet. Cost? $35,000. Does it need a little work? Sure. But still, you can’t even buy a half-decent car for $35,000.
  • A six-bedroom Queen Anne-style home with over 4,000 square feet of space, in beautiful shape, in Altmar, New York. Cost? Just $107,000.
  • A five-bedroom, four-bathroom Civil War-era house with over 2,500 square feet of space on a four-acre piece of land in York, Pennsylvania. Cost? Only $195,000.
  • Five bedrooms and almost 3,800 square feet of space in this extensively renovated 1903 Victorian beauty in Boykins, VA. Cost? Just $179,000.

The list goes on and on. It’s sad to see people paying massive sums to live in tiny little houses in major cities when they could have a beautiful home like this on a huge piece of land out in the country. Admittedly, not everyone likes living in small towns, but how bad could it be? There are lots of health and personal benefits to living outside of major cities that would probably be worth the tradeoff. Obviously not everyone can work from anywhere, but with more and more jobs being done on the Internet, it’s probably getting more common.
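
For what it’s worth, here is a quick back-of-the-envelope comparison of those listings by price per square foot. It’s a minimal sketch: the home figures come straight from the list above, but the big-city benchmark of $800 per square foot is purely an illustrative assumption, not a quoted statistic.

```python
# Rough price-per-square-foot math for the listings above. The home
# figures come from the list; the $800/sq ft big-city benchmark is a
# made-up illustrative number, not a quoted statistic.
listings = [
    ("Winfield, KS (1910)", 35_000, 2_700),
    ("Altmar, NY (Queen Anne)", 107_000, 4_000),
    ("York, PA (Civil War era)", 195_000, 2_500),
    ("Boykins, VA (1903 Victorian)", 179_000, 3_800),
]

BIG_CITY_PER_SQFT = 800  # hypothetical major-city price per square foot

for name, price, sqft in listings:
    per_sqft = price / sqft
    print(f"{name}: ${per_sqft:.0f}/sq ft, about "
          f"{BIG_CITY_PER_SQFT / per_sqft:.0f}x cheaper than a "
          f"${BIG_CITY_PER_SQFT}/sq ft market")
```

Even if the benchmark is off by half, the gap is still an order of magnitude.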

The island of Capri, Axel Munthe and the Marchesa Luisa Casati

I love a good Internet rabbit hole as much as the next person (probably more), and I came across a great one recently while searching for information on the island of Capri in Italy. Since some friends and I were planning a trip there, I was looking up some of the sights to see, including the Villa San Michele, which was built in the early 1900s by the Swedish doctor and author Axel Munthe, at one time physician to Queen Victoria of Sweden.

The Wikipedia entry mentioned in passing that when he ran short of money, Munthe had to rent the villa “unwillingly” to the Marchesa Luisa Casati. Why unwillingly? So I looked up the Marchesa, who was described as “a muse and patron of the arts” and a legendary figure. According to her entry in Wikipedia, the Marchesa was known for “eccentricities that delighted European society for three decades” including her penchant for parading around with two cheetahs on a leash and “wearing live snakes as jewellery.”

From there, a Google search found an excerpt from a book that mentions her dispute with Axel Munthe over the villa. He apparently decided not to rent to her after learning about her behavior, but she came anyway and stayed for several months and drove him mad with her requests. Munthe designed the villa to be as open to the air as possible, but the Marchesa — who “was dressing herself entirely in black that summer” — ordered black curtains for every window. Guests often arrived to find her reclining naked on a black rug.

A New Yorker article says: “She blackened her eyes with kohl, powdered her skin a fungal white, and dyed her hair to resemble a corona of flames; her mouth was a lurid gash. Her totem animal was the snake. Her contemporaries couldn’t decide if she was a vampire, a bird of paradise, an androgyne, a goddess, an enigma, or a common lunatic. Her clothes were esoteric and memorable—i.e., the suit of armor pierced with hundreds of electric arrows; the iridescent necklace of live snakes; the headdress of peacock tail feathers accessorized with chicken’s blood.”

She also invited a wide range of guests to the villa, including some of the gay and lesbian artists who hung out on Capri at that time, and people like the Baron Jacques d’Adelsward-Fersen, described as a “self-styled diabolist” who liked to smoke opium with the Marchesa. A separate entry from the book describes her later setting up residence in Paris with her cheetah Anaxagoras and a pet cobra named Agamemnon, and mentions that after Anaxagoras passed away she had him replaced with a stuffed black panther that had a clockwork mechanism inside that made its eyes flash and the tail swing back and forth.

Not a happy ending to this story, unfortunately — Wikipedia says the Marchesa built up debts of more than $20 million (equal to about $200 million today) and had to sell off her possessions. She moved to a one-bedroom flat in London and died there of a heart attack in 1957, at the age of 76. The New Yorker says she “spent her last days in a cheap bed-sit, casting spells on her enemies and compiling three volumes of a strange journal… Poor, and increasingly addled by gin and drugs.” According to Wikipedia, she was buried “wearing her black and leopard skin finery and a pair of false eyelashes,” along with one of her stuffed Pekingese dogs.

This special privacy feature is only available to Facebook executives

One thing about Facebook is that it’s very difficult to get rid of things once they’re on the social network. Even if you delete your profile, it is only hidden for several days in case you change your mind, and even after that your data can remain on Facebook’s servers for up to three months. In the past, Facebook has even kept status updates that users typed into the post box but never actually posted.

Until recently, however, if you happened to be a senior executive at the company, you had access to a special feature—one that allowed you not only to automatically delete your past instant messages, but also to remove them from the inboxes of anyone who received them. For many, that looks like a double standard over who gets to erase their tracks on the site, and when.

TechCrunch first reported on this phenomenon on Thursday, April 5, with a story that described how several people who got access to their message history through Facebook’s “Download Your Information” tool noticed that their messages to CEO Mark Zuckerberg were in the archive, but his replies had disappeared—and not just recent ones, but replies going all the way back to 2010. According to TechCrunch, when asked about the deletion, a Facebook spokesperson gave the following explanation:

After Sony Pictures’ emails were hacked in 2014 we made a number of changes to protect our executives’ communications. These included limiting the retention period for Mark’s messages in Messenger. We did so in full compliance with our legal obligations.

The company apparently never told the users involved that those messages would be removed from their inboxes, however. And while Facebook introduced a feature in 2016 that allows users of a special encrypted version of Messenger to set a timer that auto-deletes their instant messages after a certain period, those users can’t remove messages from the inboxes of the people who received them, and the feature doesn’t apply retroactively to messages dating back to 2010.
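
To make the distinction concrete, here is a purely illustrative sketch of the two deletion models being described: the timer-based auto-delete that secret conversations offer everyone, versus the retroactive recall that executives got. All the names are hypothetical, and this is in no way Facebook’s actual implementation.

```python
# Purely illustrative sketch of the two deletion models described
# above. All names are hypothetical; this is not Facebook's actual
# implementation.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    text: str
    sent_at: float
    expires_at: float | None = None  # set by a secret-conversation timer

@dataclass
class Inbox:
    messages: list[Message] = field(default_factory=list)

    def visible(self, now: float) -> list[Message]:
        # Timer-based deletion: a message disappears for everyone once
        # its timer runs out, but only messages with a timer are affected.
        return [m for m in self.messages
                if m.expires_at is None or m.expires_at > now]

def recall_from_recipients(inboxes: list[Inbox], sender: str) -> None:
    # The executive-only model: retroactively strip one sender's
    # messages out of other people's inboxes, with no timer and no
    # notice to the recipients.
    for inbox in inboxes:
        inbox.messages = [m for m in inbox.messages if m.sender != sender]
```

The difference is that the first model is symmetric and opt-in per message, while the second silently rewrites other people’s archives.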

After questioning from TechCrunch and a number of other outlets about the feature, Facebook said that it would stop allowing executives to delete messages from other people’s inboxes while it works on the roll-out of a similar deletion feature for all users. A spokesman told BuzzFeed:

People using our secret message feature in the encrypted version of Messenger have the ability to set a timer and have their messages automatically deleted. We will now be making a broader delete message feature available. This may take some time. And until this feature is ready, we will no longer be deleting executives’ messages. We should have done this sooner — and we’re sorry that we did not.

While it might be a small feature designed to limit the potential liability of Facebook executives if their messages were ever hacked, for many it looks like a case of Facebook allowing its senior staff to do something normal users aren’t allowed to do. And it seems especially egregious that the company gave Zuckerberg and others the power to remove their own personal information while it was distributing massive quantities of user data to entities like Cambridge Analytica.

Could we build the Facebook-era equivalent of public broadcasting?

As Mark Zuckerberg continues his 2018 apology tour by admitting that Cambridge Analytica may have illicitly acquired personal data on as many as 87 million Facebook users, instead of the previous estimate of 50 million, the chorus of voices saying we need to reject the social network (the #DeleteFacebook movement) grows louder. In a New York Times opinion piece published on Wednesday, however, Columbia law professor and author Tim Wu recommends a different course—that we build an alternative to Facebook, or possibly multiple alternatives. Fixing it isn’t an option, he says:

Every business has its founding DNA. Real corporate change is rare, especially when the same leaders remain in charge. In Facebook’s case, we are not speaking of a few missteps here and there, the misbehavior of a few aberrant employees. The problems are central and structural, the predicted consequences of its business model. From the day it first sought revenue, Facebook prioritized growth over any other possible goal.

The solution, Wu says, is to figure out how to replace the giant social network with other models that aren’t predicated on massive ad-driven surveillance of users. And what would that look like? Wu says it could be a network that provides the same social connection and media sharing functions as Facebook, but users would pay a small fee instead of having their data harvested for advertising. Or, he suggests that it might be possible to create a non-profit that could do something similar, an entity that wouldn’t be driven by the need to sell its targeting abilities to brands.
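
To get a rough sense of how small such a fee might need to be, here is a back-of-the-envelope sketch using the approximate figures cited elsewhere in this newsletter (about $40 billion in annual revenue and roughly two billion users). These are ballpark numbers of mine, not Wu’s:

```python
# Back-of-the-envelope: what would users need to pay to replace
# Facebook's ad revenue? Both inputs are rough approximations drawn
# from figures mentioned elsewhere in this piece.
ANNUAL_REVENUE = 40e9  # ~$40 billion in yearly revenue
USERS = 2e9            # ~2 billion people on the platform

per_user_per_year = ANNUAL_REVENUE / USERS
per_user_per_month = per_user_per_year / 12

print(f"~${per_user_per_year:.0f} per user per year")    # ~$20
print(f"~${per_user_per_month:.2f} per user per month")  # ~$1.67
```

A couple of dollars a month sounds trivial, but averages hide a lot: much of that revenue comes from North American and European users, so a flat global fee understates what those users would actually have to pay.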

In his piece, Wu says Facebook suffers from the same problem that journalist Walter Lippmann complained about in 1959 with respect to television, namely that it was ultimately “the creature, the servant and indeed the prostitute of merchandizing.” Those kinds of sentiments led to the creation of the American public broadcasting system. Would it be possible to build the equivalent for the Facebook era? It’s an intriguing idea. Could public funding, donations and other mechanisms be used to support something like PBS, but for social networking?

https://twitter.com/superwuster/status/979828300565622784

The biggest hurdle, as Wu notes, is the network effect Facebook now enjoys by having some two billion people attached to its platform. If you want to remain connected to friends and family members, you almost have to be on it, because everyone else is. Would an alternative be as attractive, even if it didn’t harvest your data, especially if it required a monthly fee? For some, perhaps. But for enough people to make it practical? It seems unlikely. But then, public broadcasting probably seemed like a moonshot in its day too, and somehow it happened.

One crucial step required for that future to work would be regulations requiring some form of data portability or federation between Facebook and these alternative networks, to lower the barrier to people moving from one to the other. Ironically, data-protection regulations implemented in the wake of Facebook’s data leak could actually make doing this harder rather than easier. Which would be a shame. As Wu puts it: “If today’s privacy scandals lead us merely to install Facebook as a regulated monopolist, insulated from competition, we will have failed completely.”

Facebook rolls out another News Feed change aimed at increasing trust

Facebook announced on Tuesday it is expanding a recent test that showed users more information about the articles in their News Feed and the media entities that publish them, in the hope that doing so will make it easier for people to determine who is trustworthy and who isn’t. The test started in October in a number of US markets, and the company says it is now rolling the feature out to everyone in the US, as well as adding more sources of information. The idea, according to Facebook, is to “provide more context for people so they can decide for themselves what to read, trust and share.”

The new features are a small part of what the tech giant has been doing to try and fix what is widely viewed as a “fake news” problem, one that exploded into public view after Russian trolls were shown to have manipulated the network in an attempt to influence the 2016 election. The company has said it wants to cut down on news in the feed, as well as to ensure that what news remains is “high quality.” But will these kinds of tweaks have any impact on Facebook’s role in spreading misinformation? That seems unlikely. Trust is a very slippery concept when it comes to news, as multiple studies have shown—people tend to believe and share the news that confirms their existing preconceptions.

Facebook maintains that its research, as well as that of unnamed “academic and industry partners,” shows certain types of information help users evaluate credibility and determine whether to trust a source. So it is adding contextual links when an article is shared, including links to related articles on the same topic and stats on how often the article has been shared and where. It will also include a link to the publisher’s Wikipedia page if there is one (and indicate if there isn’t), something YouTube also recently said it is doing to add context to videos about conspiracy theories.

In addition to those elements, Facebook says it also plans to add two new features, one that shows other recent stories published by the same outlet, and a module that shows whether a user’s friends have shared the article in question. The company is also starting a test to see whether users find it easier to gauge an article’s credibility if they get more information about the author: When they see an article in Facebook’s mobile-friendly Instant Articles format, some users will be able to click the author’s name and get additional info, including a description from Wikipedia if there is one. Whether any of these new features actually reduce the amount of questionable news shared on Facebook remains to be seen.
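
Pulled together, the context Facebook describes attaching to a shared article might look something like the sketch below. Every field name here is invented for illustration; this is not Facebook’s actual schema.

```python
# Hypothetical sketch of the per-article context described above.
# Field names are invented for illustration; this is not Facebook's
# real data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArticleContext:
    publisher: str
    publisher_wikipedia_url: Optional[str]  # None is itself a signal
    related_articles: list[str]             # other coverage of the topic
    share_count: int                        # how often it has been shared
    shared_in_regions: list[str]            # where it has been shared
    recent_from_publisher: list[str]        # other recent stories
    shared_by_friends: list[str]            # which friends shared it
    author_wikipedia_url: Optional[str]     # Instant Articles author test
```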

Here’s more on Facebook and its news and trust problems:

  • Today in irony: While the social network says it wants to increase the trust people have in what they see in their News Feed, it is facing a trust crisis of its own, thanks to the news that personal information on 50 million users was acquired by a data firm with ties to the Trump campaign. Facebook recently updated its privacy settings in an attempt to show that it cares about the issue, and has taken pains to point out that the source of the data leak was plugged several years ago.
  • An ultimatum: Indonesia has said it is prepared to shut down access to Facebook if there is any evidence the privacy of Indonesian users has been compromised. “If I have to shut them down, then I will do it,” Communications Minister Rudiantara told Bloomberg in an interview on Friday in the Indonesian capital of Jakarta, after pointing out the country had earlier blocked access to the messaging app, Telegram. “I did it. I have no hesitation to do it again.”
  • Power move: As part of its attempts to atone for the Cambridge Analytica fiasco, Facebook recently said it is shutting off the ability of third-party data brokers to target users on the platform directly through what are called Partner Categories. But long-time digital ad exec and publisher John Battelle argues that this move really consolidates Facebook’s power over that kind of targeting.
  • Fake news to blame? A study by researchers at Ohio State appears to show that belief in “fake news” may have affected the 2016 election, something that has been the subject of much debate. According to a Washington Post article on the research, about 4 percent of Democratic voters who supported Barack Obama in 2012 were persuaded not to vote for Hillary Clinton by hoax news stories, including reports that she was ill and that she had approved weapons sales to ISIS.
  • Probe launched: The attorney general of Missouri has announced that he is launching a probe into Facebook’s use of personal data following the Cambridge Analytica leak. Josh Hawley said he is asking the social network to disclose every time it has shared user information with a political campaign, as well as how much those campaigns paid Facebook for the data, and whether users were notified.

Other notable stories:

  • During a shooting incident at YouTube’s headquarters in San Bruno, the Twitter account of a YouTube product manager was apparently hijacked and used to tweet fake news reports about the event, according to The Verge. After the hack was pointed out by a number of journalists, Twitter CEO Jack Dorsey said he was looking into it, and the fake tweets quickly disappeared.
  • The Environmental Protection Agency tried to limit press access to a briefing by EPA head Scott Pruitt, but the move backfired thanks to journalists at Fox News. The agency reportedly told a TV crew from Fox about the briefing but didn’t tell the other major networks, at which point Fox let its competitors know and agreed to share reporting on the event.
  • The Wall Street Journal reports that 94-year-old billionaire media mogul Sumner Redstone, the longtime controlling shareholder and chairman emeritus of Viacom and CBS, won’t have much of a say in the proposed merger of the two companies because his voting power has been reduced. He also now reportedly communicates using an iPad with pre-programmed responses such as “Yes,” “No,” and “F*** you.”
  • Joe Pompeo writes at Vanity Fair about what some see as a culture war taking place in the New York Times newsroom, thanks in part to growing numbers of young employees. “I’ve been feeling a lot lately like the newsroom is split into roughly the old-guard category, and the young and ‘woke’ category, and it’s easy to feel that the former group doesn’t take into account how much the future of the paper is predicated on the talent contained in the latter one,” one staffer told the magazine.
  • The Reporters Committee for Freedom of the Press has released a report that looks at incidents in the US in the past year that threatened press freedom, based on the first annual assessment of data from the Press Freedom Tracker, an index that records attacks on journalists and the media. Out of 122 incidents logged by the tracker, almost half occurred at protests.


Mark Zuckerberg wants you to know he cares, just like he did last time

Whenever Mark Zuckerberg talks about something that has gone wrong at Facebook—which happens rather frequently—he almost always comes off as sincerely concerned and apologetic, and his latest interview with Ezra Klein of Vox Media is no exception to this rule. But anyone who has been following Facebook for any length of time probably feels an overwhelming sense of déjà vu, because it all sounds very familiar: We screwed up, we’re sorry, we didn’t know, we will fix it. And please keep using Facebook.

We’re in the middle of a lot of issues, and I certainly think we could’ve done a better job so far. I’m optimistic that we’re going to address a lot of those challenges, and that we’ll get through this, and that when you look back five years from now, 10 years from now, people will look at the net effect of being able to connect online and have a voice and share what matters to them as just a massively positive thing in the world.

To be fair, no one has ever run a globe-spanning social network with more than two billion users before, so perhaps we should forgive Mark for not being very good at it. But it still seems disingenuous to have spent 14 years building a company that now has $40 billion in revenue while claiming it never occurred to anyone that such a giant social network—especially one powered by surveillance of its users—could become a tool for deception or evil of various kinds. Which is effectively what Mark wants us to believe.

I think the basic point that you’re getting at is that we’re really idealistic. When we started, we thought about how good it would be if people could connect, if everyone had a voice. Frankly, we didn’t spend enough time investing in, or thinking through, some of the downside uses of the tools. So for the first 10 years of the company, everyone was just focused on the positive. I think now people are appropriately focused on some of the risks and downsides as well.

What this means in practice is that Facebook has been doing its best to ignore repeated warnings from researchers such as Danah Boyd and Zeynep Tufekci about the dangers inherent in Facebook’s structure and business model. And why wouldn’t it? Some of those concerns go straight to the heart of how the company makes the billions of dollars a year investors have come to rely on.

Tellingly enough, one of the points during the interview where Zuckerberg seems to become genuinely peeved is when Klein mentions Apple CEO Tim Cook’s criticisms of the company’s advertising-based model. The Facebook CEO rejects the idea that “if you’re not paying that somehow we can’t care about you,” calling it “extremely glib” and “not at all aligned with the truth.” And he suggests that consumers should question comments made by companies that he says “work hard to charge you more” for their services, as opposed to someone like him, who is trying to provide something for free to as many people as possible.

There are other interesting moments, such as when Zuckerberg says Facebook is considering a court-style model for deciding what speech should be allowed. “You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech,” he says. A sensible idea, or a frightening glimpse of a potential future in which Facebook is a global censor? As usual with Facebook, it’s a little bit of both.