Facebook gets blocked for a month by government in Papua New Guinea

Facebook has been forced to appear before both the US Congress and the British Parliament over its alleged role in spreading misinformation and its mishandling of user data, but neither country has even hinted that it might block the giant social network because of its negative effects. Papua New Guinea, however, has decided to do exactly that. The government of the country, which has a population of about eight million, has announced that it will block access to Facebook for a month.

In comments made to the country’s Post-Courier newspaper, Papua New Guinea’s communication minister, Sam Basil, suggested that a month-long period of Facebook-free existence would allow the government to investigate both the positive and the negative effects of using the social network. But the minister also appeared to suggest that part of what he has in mind is identifying bad actors who post false or offensive information on the platform so that they can be removed. Basil said:

The time will allow information to be collected to identify users that hide behind fake accounts, users that upload pornographic images, users that post false and misleading information on Facebook to be filtered and removed. This will allow genuine people with real identities to use the social network responsibly.

Basil also floated the idea that if the government’s investigation finds significant negative effects from Facebook, Papua New Guinea might choose to create its own online social network. “If there need be then we can gather our local applications developers to create a site that is more conducive for Papua New Guineans to communicate within the country and abroad as well,” he told the newspaper.

While some might cheer Papua New Guinea’s move, others are concerned about the impact that such a ban could have on the information diet of the country’s citizens. While only about 10 percent of the country has internet access, mobile phones have become hugely popular and Facebook is a source of news for many, since it is often a default application installed on new phones. Removing access can play havoc in such developing markets, as journalists who cover Cambodia have described to CJR.

Papua New Guinea isn’t the first country to block access to Facebook for an extended period of time. The government of India banned access to Facebook and a range of other social apps (including WhatsApp, Snapchat and Twitter) for a month in the Kashmir region last year because it was worried about what it called “antisocial” elements, including separatists, using them for nefarious purposes. Facebook has also been blocked for some time in North Korea, as well as China and Iran.

Here’s more on Facebook and its struggles in various countries:

  • Sri Lanka: The social network and several other platforms, including WhatsApp, were shut down briefly earlier this year during anti-Muslim riots in that country, because the government said it was concerned that Facebook and other services were being used to spread hate and fuel the rioting. Buddhist mobs attacked mosques and Muslim-owned businesses, setting fires and killing two people.
  • Myanmar: Facebook has also come under fire in the country formerly known as Burma for its role in distributing hate speech and misinformation directed at the Rohingya, which many activists and journalists believe has contributed to persecution of members of that Muslim ethnic group. Facebook said recently it is beefing up the resources it devotes to Myanmar in the hope of solving the problem.
  • Pakistan: The government shut down access to Facebook in Pakistan, as well as YouTube and Twitter, for 24 hours in November of last year, to try to prevent information from spreading about a crackdown on religious protesters in the capital, Islamabad, that led to rioting. The ban also extended to all national private news channels operating in the country.
  • North Korea: Supreme Leader Kim Jong-un moved to block Facebook and several other social-networking sites, including Twitter and YouTube, in 2016. Prior to that decision, sites like Facebook were available to foreign visitors as well as certain North Korean citizens with internet access, but the government chose to block them completely, in much the same way that China has.

Other notable stories:

  • Russian journalist Arkady Babchenko, an Army veteran who became an investigative journalist and a strong critic of the Russian government, was shot and killed at his home in Ukraine on Tuesday, according to multiple reports. More than a dozen Russian journalists have died under mysterious circumstances during the past decade.
  • The Committee to Protect Journalists says a Democratic Party lawsuit against WikiLeaks over the release of material from hacked computers during the 2016 election “goes against press freedom precedents going back to the Pentagon Papers and contains arguments that could make it more difficult for reporters to do their jobs,” according to a number of First Amendment experts.
  • On CJR’s latest podcast, Jessica Lessin — the founder of a subscription-only tech news site known as The Information — interviews Adam Mosseri, the man who until recently was in charge of the Facebook News Feed. In the discussion, which was taped during an event in San Francisco earlier this month that was co-hosted by The Information, Lessin asks Mosseri about how the giant social network views its relationship with the media, and what it is trying to do to improve it.
  • Journalists at one of Turkey’s few remaining independent news outlets describe to The Wall Street Journal what it’s like to be imprisoned for doing their jobs. More than a dozen reporters and editors were recently released on bail after being charged with supporting terrorism, in what press-freedom advocates say was an attempt by the government to muzzle the publication.
  • The Russian government has reportedly asked for Apple’s help in blocking usage of the secure messaging app Telegram in that country, after a ban on the service failed to have much effect. The Russian authorities banned the app after the company refused to provide the encryption keys it uses, which would have allowed the government to see user and message data.
  • A lawyer representing actor Morgan Freeman has sent a letter to CNN demanding that the news outlet take down a story that alleges multiple incidents of sexual harassment by the actor. According to Freeman’s lawyer, two of the sources CNN used have said they were not harassed and that the network misrepresented their stories. CNN says it stands by its reporting.

Correction: Russian journalist Arkady Babchenko is alive. He said Wednesday that he co-operated with Ukrainian authorities in faking his own death as a way of trapping a Russian-backed hit man who had been assigned to kill him.

Arrival of GDPR causes chaos and confusion, and we’re just getting started

For something that has been in the works for more than two years, the EU’s General Data Protection Regulation, or GDPR, seemed to take at least some people by surprise—including some publishers, it appears. When the new rules on how to handle user information went into effect on May 25th, a number of news sites responded by simply shutting off access to anyone who appeared to be coming from a European address.

Several of the papers belonging to the Tronc chain, for example, including the Los Angeles Times and Chicago Tribune, showed European Union visitors a message saying: “Unfortunately, our website is currently unavailable in most European countries. We are engaged on the issue and committed to looking at options that support our full range of digital offerings to the EU market. We continue to identify technical compliance solutions that will provide all readers with our award-winning journalism.”

https://twitter.com/passantino/status/999806966758162434

This response drew some criticism from European regulators, who clearly thought they provided more than enough notice for publishers to make adjustments to conform with the new rules. In an email to Bloomberg about some of the US sites that blocked European users by default, Andrea Jelinek—the head of the EU’s Data Protection Board, which is in charge of administering the GDPR—said that the new rules “didn’t just fall from heaven. Everyone had plenty of time to prepare.”

Other news sites such as USA Today responded to the new rules—which can result in multi-million-dollar fines for improper use of data—by removing some or all of the ad-related software that harvests information from users and tracks their behavior. According to one web engineer, the US version of the USA Today site was 5.5 megabytes in size and included more than 800 ad-related requests for information involving 188 different domains. The EU version was less than half a megabyte in size and contained no third-party content at all, meaning it not only didn’t track as much data but also loaded much faster.
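As a rough illustration of the scale of that difference, a quick calculation using the figures cited above (5.5 megabytes for the US version, under half a megabyte for the EU version; these are the engineer's reported numbers, not fresh measurements) shows the stripped-down page shed at least 90 percent of its weight:

```python
# Rough arithmetic on the page-weight figures cited above:
# 5.5 MB with ad trackers vs. under 0.5 MB without.
us_version_mb = 5.5   # US version of the USA Today site
eu_version_mb = 0.5   # stripped-down EU version (upper bound)

reduction = (us_version_mb - eu_version_mb) / us_version_mb
print(f"Page weight reduced by at least {reduction:.0%}")  # at least 91%
```

Since 0.5 MB is an upper bound on the EU version's size, the true reduction is at least that large, which helps explain the much faster load times.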

That may be good news for actual users, but the long-term picture isn’t good for publishers who rely on ad-related tracking systems for revenue. According to those who follow the digital ad market, ad exchanges used by many publishers saw an immediate drop of between 25 and 40 percent in demand for their ads following the introduction of the GDPR. And there is a fear that the changes could ultimately wind up further weakening media companies and increasing the dominance of giant platforms like Google and Facebook.

Amazon shows the downside of having an always-on microphone in your home

Critics of “smart assistant” devices such as the Amazon Echo and Google Home often talk about the potential risks of having a device in your house that is always listening to your conversations, and some dismiss this kind of criticism as fear-mongering or unjustified techno-panic. But Amazon has provided even skeptical users with an object lesson in the risks of this new kind of hardware: The company admitted on Thursday that one of its Echo units recorded a conversation and then sent that recording to someone on a user’s contact list.

How could this happen? According to the company, it was the result of a series of misunderstandings between the device and its owners. While the owners were having a conversation, the Echo misunderstood what they were saying and thought it heard the pre-programmed “wake word” — usually the name Alexa — followed by a command to send a message to a friend. The owners only became aware of what had happened when that friend told them he had received a copy of their conversation. Here’s how Amazon described it:

Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.
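The chain Amazon describes amounts to a short sequence of voice prompts, each of which happened to be satisfied by background speech. A purely illustrative sketch of that flow (the state names and structure here are my own, not Amazon's actual logic):

```python
# Illustrative sketch of the Echo mishap as a sequence of prompts,
# each one "answered" by background conversation. Not Amazon's code.
steps = [
    ("listening", "heard a word resembling the wake word 'Alexa'"),
    ("command",   "interpreted speech as a 'send message' request"),
    ("recipient", "asked 'To whom?'; matched speech to a contact name"),
    ("confirm",   "asked '[contact], right?'; heard speech as 'right'"),
    ("sent",      "recording of the conversation delivered to the contact"),
]

for state, event in steps:
    print(f"{state:>9}: {event}")
```

Framed this way, the failure is easier to see: four independent misrecognitions had to occur in sequence, but each one only had to clear a fairly low confidence bar.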

If this sounds wildly improbable to you, you’re not the only one. But the company insists that is what happened, and that this kind of occurrence is extremely rare. But is it? Vanity Fair writer Maya Kosoff describes how something similar almost happened to her, when her Alexa misunderstood audio from a TV show she was watching as a command to send a message and asked repeatedly “To whom?” Kosoff says she subsequently shut the device off. Others have reported similar events in which their device appeared to misunderstand background noise as a command.

https://twitter.com/bkurbs/status/1000055483800080384

Washington Post writer Geoffrey Fowler pointed out that the Amazon Echo doesn’t have a command to stop listening (although it and Google Home have buttons that can turn off the listening function). One way owners can make themselves aware of what the device is doing, Fowler says, is to go into the settings and see what has already been recorded, since the Echo keeps track of all the times it has been triggered and what it recorded as a result (this is how Amazon diagnosed what happened).

As these kinds of smart devices become more integrated into our lives, it seems obvious that this kind of incident will become more commonplace, even if Amazon says it is a rare accident. In at least one case already, alleged recordings by an Echo became an issue in a murder trial, when the prosecutor subpoenaed audio from the device in a man’s home in the hope that it might have heard something related to the alleged killing. The case was later dismissed, so the Echo recordings were never introduced.

How can journalists stop providing oxygen for trolls and extremists?

Are journalists partly to blame for the rise of the alt-right and the outcome of the 2016 election? A report from the research institute Data & Society looks at the ways in which journalists help to popularize extremist views, in some cases accidentally. The paper—written by Whitney Phillips, an assistant professor of communications at Syracuse University, and entitled The Oxygen of Amplification—argues that alt-right and other groups were aided and abetted by the media, which helped expose their ideas to new audiences. Says Phillips:

The takeaway for establishment journalists is stark, and starkly distressing: just by showing up for work and doing their jobs as assigned, journalists covering the far-right fringe – which subsumed everything from professional conspiracy theorists to pro-Trump social media shit-posters to actual Nazis – played directly into these groups’ public relations interests. In the process, this coverage added not just oxygen, but rocket fuel to an already-smoldering fire.

Many of the 50 journalists Phillips interviewed for the report agreed that their coverage of the alt-right and other extremist groups gave those groups more publicity, and in some sense may have helped create the movement itself. “Without journalists reporting on them, there’s no way they would have gotten the attention they did,” said HuffPost reporter Ashley Feinberg. “At this point we have built the world they told us existed.”

The mechanisms by which this happened are complex, as Phillips describes at some length in the report, and many of them are not easy to change because they are built into the very structure of journalism itself. But the author also suggests ways of mitigating the damage—steps that journalists can take to ensure that the coverage they are providing of such groups is not only justified, but reduces the amplification problem.

Continue reading “How can journalists stop providing oxygen for trolls and extremists?”

Zuckerberg gets grilled by EU parliament but provides few answers

Members of the European Parliament finally got the chance on Tuesday to quiz Facebook CEO Mark Zuckerberg about everything from the Cambridge Analytica data scandal to whether the network deliberately down-ranks content posted by conservatives. What they didn’t get a lot of, however, was anything approaching a substantive answer to those questions. In part, that was because of the unusual format of the meeting: Each of the 17 attendees—heads of the EU parliament’s main groups and committees—got to ask a question, and then, in the time remaining, Zuckerberg got to answer some or all of them.

As more than one observer pointed out going into the meeting, this format more or less ensured that the Facebook CEO would cherry-pick the questions he was either most interested in or most prepared to answer, and avoid the difficult ones. And that is exactly what happened. The meeting began with a statement by Zuckerberg similar to the one he made in front of the US Congress, about how he regretted not taking action sooner to stop Russian trolls and data leaks. That was followed by more than an hour of questions, ranging from whether Facebook should be broken up because of its monopoly power to when the company expects to be compliant with the new General Data Protection Regulation rules.

Nigel Farage, the former head of the right-wing UK Independence Party, asked the Facebook co-founder about the impact of recent algorithm changes, which the company says it introduced in an attempt to focus on interpersonal engagement and remove clickbait and other forms of low-quality news. “Since January this year, you’ve changed your algorithms, and it’s led to a substantial drop to views and engagements for those who’ve got right-of-centre political opinions,” said Farage. “On average, we’re down about 25% over the course of this year. What interests me is, who decides what is acceptable? Why is there no transparency?”

EU member of parliament Guy Verhofstadt noted that the Facebook CEO had “apologized three times already this year” for the behavior of his company. “Are you capable to fix it? There has to be clearly a problem,” he said. “The only way I can see to fix it is to have public regulation. It’s a bit like the banks in ’07, ’08: they said, ‘Oh, we’ll regulate ourselves,’ but they didn’t.” Verhofstadt also said he was skeptical of Facebook downplaying its market monopoly, and argued that pointing to Apple and Google as competitors was like a car manufacturer saying: “We don’t have a monopoly, you can take a train or a plane!”

When it came time to answer, however, Zuckerberg spent most of his time reiterating his opening statement about how the network is trying hard to improve its ability to detect bad behavior, and that he now believes the company’s automated systems can flag close to 99 percent of the terrorist-related content before users notice it. A number of European Parliament members shouted additional questions at the Facebook CEO after the official time had run out, but Zuckerberg dodged them by saying he would have his staff respond to the questioners later via email, in much the same way he did in Congress.

Here’s more on Facebook and Zuckerberg’s ongoing apology tour:

  • Business Insider has published the full text of Zuckerberg’s opening statement to the European Parliament, in which the Facebook CEO maintains that the company is “committed to Europe. Ireland is home to our European Headquarters. London is home to our biggest engineering team outside the United States; Paris is home to our artificial intelligence research lab; and we have data centers in Sweden, Ireland and Denmark, which will open in 2020.”
  • The broader backdrop to the European Union’s questioning of Zuckerberg and Facebook is the imminent introduction of the General Data Protection Regulation, an EU law that will restrict what platforms like Facebook can do with a user’s data. The rules allow regulators to fine companies up to 4 percent of their global revenue, which for Facebook would be about $1.6 billion.
  • The Facebook CEO may have testified before the European Parliament, but the British government is still waiting for Zuckerberg to accept their invitation to do the same in the UK. He has refused to attend a hearing into the Cambridge Analytica data leak, even after British politicians threatened to hit him with a formal summons that would take effect if he ever sets foot in the UK.
  • MP Damian Collins, the chairman of the UK’s Digital, Culture, Media and Sport committee, said in reaction to the hearing that “questions were blatantly dodged” by Zuckerberg on issues such as data sharing between WhatsApp and Facebook and the use of what are known as “shadow profiles” constructed for people who aren’t on the social network.
  • European Union regulators have threatened that if Facebook and other social platforms such as Twitter don’t prove they can do a better job of protecting users’ data and preventing foreign actors from meddling in elections, they could be hit with new regulations that would try to force them to do so. EU security commissioner Julian King has said either voluntary or mandatory protections need to be in place before the European Union elections in 2019.
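The potential fine mentioned above is simple to work out. Assuming annual global revenue of roughly $40 billion (approximately Facebook's reported 2017 total), the GDPR's 4 percent ceiling comes to about $1.6 billion:

```python
# GDPR allows fines of up to 4% of a company's global annual revenue.
# The revenue figure below is Facebook's reported 2017 total (approximate).
global_revenue_usd = 40.65e9
max_fine = 0.04 * global_revenue_usd
print(f"Maximum GDPR fine: ${max_fine / 1e9:.2f} billion")  # ≈ $1.63 billion
```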

Other notable stories:

  • Whitney Phillips, a researcher working with the Data & Society institute, has released a report looking at the amplification of extremism online that argues journalists covering members of the far-right fringe “played directly into these groups’ public relations interests. In the process, this coverage added not just oxygen, but rocket fuel to an already-smoldering fire.”
  • Mei Fong writes for CJR about the unprecedented rollback of journalistic freedom that has taken place in Southeast Asia over the past few years, including repeated legal and governmental attacks on the legitimacy of independent online news outlets such as Rappler in the Philippines, and the jailing of Reuters journalists Wa Lone and Kyaw Soe Oo in Myanmar.
  • Lesley Stahl of CBS News says that during a conversation with President Trump, she asked why he continually attacks the “fake news media,” and he said: “You know why I do it? I do it to discredit you all and demean you all, so when you write negative stories about me, no one will believe you.” Many journalists have suspected that was the reason, but now it is on the record.
  • Indian journalist Rana Ayyub writes in The New York Times about being targeted by a co-ordinated social media campaign aimed at harassing her, following her political reporting on the government of Narendra Modi. Ayyub said recent attacks have included falsely attributing to her a quotation supporting child rapists, and a pornographic video with her face digitally superimposed on one of the actors.
  • The International Consortium of Investigative Journalists has published what it calls West Africa Leaks, a series of documents detailing the use of offshore holding accounts and other methods to hide millions of dollars in payments to government agencies and corporations, evidence of what the group says is widespread tax evasion and corruption in the region.

A Q&A with Adam Mosseri about Facebook’s complicated relationship with the media

The man in charge of the Facebook News Feed, longtime senior executive Adam Mosseri, took questions about the company’s often acrimonious relationship with the media at a Columbia Journalism Review event in San Francisco on Thursday. Mosseri was interviewed by Jessica Lessin, who founded and runs The Information, a subscription-based technology news site. Facebook recently announced that Mosseri will be switching jobs, leaving the News Feed role to run Instagram, but he talked about the social network’s approach to news during an interview with Lessin and then took questions from the audience.

“I think it’s complicated,” Mosseri said when asked about the relationship that Facebook has with publishers and the media in general. “The news industry is going through a massive amount of change in a very short time. The Internet is changing how people consume not only news but all kinds of information, turning a lot of business models in the industry upside down, and we are part of that. We are trying to figure things out and we make mistakes but we are trying to do better. There are areas where we have really strong partnerships and areas where the relationships are incredibly antagonistic and everything in between.”

Mosseri addressed criticisms that when Facebook changes the way the News Feed algorithm works, it can often mean sudden and dramatic changes in the traffic publishers get. “There’s no way for me to guarantee stable distribution for any one publisher, it’s just not possible, even if we stopped changing how [algorithmic] ranking worked entirely,” he said. “People’s interests change, the news changes every day, there’s all sorts of competitive effects. So distribution is always going to be volatile.”

Continue reading “A Q&A with Adam Mosseri about Facebook’s complicated relationship with the media”

In India, the fake news problem isn’t Facebook, it’s WhatsApp

Ever since the news broke that a Russian troll factory used Facebook to spread misinformation during the 2016 US election, the social network has been a lightning rod for concern about fake news, hoaxes and conspiracy theories. But in many countries outside the US, the big problem isn’t what is spreading on Facebook, it’s what is being distributed via WhatsApp, the messaging software Facebook acquired in 2014 for $19 billion, which for many people provides a free alternative to text messaging. As The New York Times points out in a recent story:

More than any other social media or messaging app, WhatsApp was used in recent months by India’s political parties, religious activists and others to send messages and distribute news to Karnataka’s 49 million voters. While many messages were ordinary campaign missives, some were intended to inflame sectarian tensions and others were downright false, with no way to trace where they originated.

The Times quotes a youth leader for one Indian political party who says he used WhatsApp to keep in touch with 60 voters he was assigned to track, sending them critiques of the government but also a claim that 23 activists were killed by jihadists—a report that has been proven false—as well as a fake poll allegedly commissioned by the BBC that predicted a win for his party. In the days leading up to the recent election, the two leading parties said they had set up at least 50,000 WhatsApp groups to spread messages, including videos and fake news articles aimed at exploiting anti-Muslim sentiment.

According to a report from the Indian news site Financial Express, fabricated reports on WhatsApp of child abductions by immigrants have led to at least two attacks that resulted in innocent people being beaten by mobs and hanged. India is estimated to have more WhatsApp users than any other country, with about 200 million people using it at least once a month, out of a total of 1.5 billion monthly active global users, and the rate of adoption is still climbing, driven by the declining cost of smartphones and cellular data plans. The same phenomenon has been seen in Indonesia and Latin America.

Fact-checking groups working to debunk hoaxes and conspiracy theories in India say the spread of misinformation is increasingly happening on WhatsApp rather than Facebook or Twitter, according to a report in The Wall Street Journal. “More than 90 percent of the stuff we are debunking is on WhatsApp,” said Govindraj Ethiraj, a journalist and founder of a fact-checking group called Boom. And because WhatsApp allows for anonymous accounts and uses end-to-end encryption, it can be almost impossible to determine where a rumor or hoax originated or how it spread so widely.

Here’s more on the problem of fake news outside the US and WhatsApp’s role in it:

  • A recent Washington Post article says many political activists in India are concerned that the spread of fake news and hate speech on WhatsApp is affecting not just recent elections there but could impair the functioning of democratic society as a whole. “It is getting out of hand, and WhatsApp doesn’t know what to do about it,” said Nikhil Pahwa, a digital rights activist. “The difficulty with WhatsApp is that it’s impossible to know how this information is spreading.”
  • The New York Times describes how the app has been used to spread rumors about alleged Muslim mob violence, including one report that included a video of a purported attack on a Hindu woman, which turned out to be video of an unrelated lynching in Guatemala. Messages spread by political parties have said the Indian elections represent a “war of faith.”
  • According to one Indian news outlet, NDTV, riots erupted in December after the body of a boy was found floating in a pond in the Karnataka region and reports spread on WhatsApp and Twitter that his body had been mutilated in a variety of ways. The local police eventually released a forensic report noting that these reports were false, but the rumors continued.
  • On an Indian opinion site, an author and academic called for the government to consider regulating WhatsApp to prevent the spread of fake news and hate speech. The app has been blocked for short periods in both India and Brazil, where it has been criticized for not handing over data on users when ordered to do so. Facebook says it can’t provide data because the app is encrypted.

Other notable stories:

  • Tom Wolfe, the father of New Journalism, died on Monday at 88. His New York Times obituary called Wolfe “an innovative journalist and novelist whose technicolor, wildly punctuated prose brought to life the worlds of California surfers, car customizers, astronauts and Manhattan’s moneyed status-seekers.” In 2006 for CJR, Jack Shafer examined the legacy of Wolfe’s “The Electric Kool-Aid Acid Test.”
  • As Europe prepares for the introduction of the General Data Protection Regulation on May 25, the Reuters Institute looked at the use of third-party content (i.e., ads) and tracking cookies on European news sites. The report found that news sites in the UK have an average of 90 third-party tracking cookies on every page, along with third-party content drawn from as many as 17 different domains.
  • Google has said it is working on a bug that resulted in the BBC website coming up for the vast majority of searches for the term “news” in the UK, according to BuzzFeed, which said it notified the search engine company of the problem. At one point, the BBC appeared in every one of the top 50 results returned for that term, and in 97 of the top 100 results for the word “news.”
  • Facebook reported in a quarterly overview of its moderation efforts that it took some kind of action against about 1.5 billion accounts or pieces of posted content in the first three months of this year. The company said it took action on 837 million pieces of spam, shut down 583 million fake accounts, removed 2.5 million pieces of hate speech and 1.9 million pieces of terrorist propaganda, as well as 3.4 million pieces of graphic violence and 21 million examples of nudity and sexual activity.
  • The Pew Research Center for Journalism and Media looked at attitudes towards the media and trust in European countries, and found that in general, residents of countries in southern Europe tend to be much more skeptical of the media than people who live in northern countries such as Sweden and Germany. In France, only 28 percent of those surveyed said the media was very important.
  • CJR contributor Nicholas Diakopoulos writes about how machine-learning algorithms can be used to create compelling but totally fictional images, audio clips and even video that appear to be of real events and individuals. These so-called “deepfakes” are becoming much more feasible, Diakopoulos says, and journalists have to be aware of the technology so they can work to debunk them.

Our trip to the magical Amalfi Coast of Italy — April 2018

For the past few years, my wife Becky and I have made an annual trip to Italy, to the wonderful International Journalism Festival in Perugia, which is about two hours north of Rome in Umbria, not far from Assisi and a little east of Florence. The conference is in the old city of Perugia, most of which dates back to the 13th century — and the old city is built on top of an even older city, the ruins of an Etruscan capital dating back to the year 275 or so.

Every year, we take a few days either before the festival or afterwards to see some of Italy — one year it was Rome, then Florence, then Venice, then Sorrento and last year we hiked Cinque Terre. This year, some of our friends came along for the trip, and so we had nine people in all on a wonderful vacation to the Amalfi Coast.

Becky and I spent the week in Perugia, then took the train from Perugia to Rome on Saturday the 14th to meet up with Becky’s brother Dave and his wife Jenn, our friends Marc and Kris and another friend Sandra. All of them had gotten in from Canada that day so they were quite jet-lagged, but we met for dinner at a restaurant near the train station and then headed to our nearby Airbnb, which was in the neighborhood known as Monti, near the Basilica of Santa Maria Maggiore. The remainder of the group, our friends Barb and Lori, showed up the next day and had their own Airbnb.

We spent the day touring the Forum, the Palatine Hill and the Colosseum, where we had what they call the “belvedere” tour, which gives you access to the upper levels of the amphitheatre. And then we walked north along the Via dei Fori Imperiali — which happened to be closed to traffic — to the Pantheon, and then to the Piazza Navona. We stopped for some gelato at Grom, which makes some of the best gelato in Italy, and then headed to the Church of San Luigi dei Francesi to see the famous Caravaggio paintings that are displayed there.

Continue reading “Our trip to the magical Amalfi Coast of Italy — April 2018”

Facebook admits hundreds of apps vacuumed up user data

The Facebook apology tour continues: The company announced on Monday that it had found at least 200 other apps that had access to user data in the same way that the app behind the infamous Cambridge Analytica leak did. A VP at the social network said in a blog post that Facebook is currently trying to determine whether that data was misused, and whether the companies in question deleted it as they were supposed to when Facebook changed the rules around data access by apps in 2014.

If you’re wondering why it took the company four years to run this kind of audit, especially after multiple reports from individuals involved (like the whistleblower who revealed the Cambridge Analytica fiasco), you’re not the only one. Facebook CEO Mark Zuckerberg has said that he’s sorry it wasn’t done sooner, but hasn’t explained why the company didn’t do such an audit earlier.

It’s also not clear whether 200 is the final number of apps that have been suspended as part of this investigation, or whether there are more to come, and the company so far hasn’t identified any of the apps. According to the blog post:

To date thousands of apps have been investigated and around 200 have been suspended — pending a thorough investigation into whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015.

In the Cambridge Analytica case, a seemingly harmless personality quiz designed by researcher Aleksandr Kogan got personal information on more than 85 million users without notifying them, because of the way Facebook was configured at the time: apps had access not only to a user’s data but also to the personal data of all that user’s friends. In 2014, the company changed the rules so that apps could no longer get friend data, and it asked app developers to delete the data they had already collected.

Cambridge Analytica, however, apparently didn’t delete the data it got from Kogan. Instead, it used that information to create psychographic profiles of Facebook users based on their likes and other behavior, and then used those profiles to target advertising and other content at Facebook users on behalf of clients such as the Trump presidential campaign. The company has since gone bankrupt, but the key players behind it have reportedly created a similar company called Emerdata.