My mother, Linda Miles Ingram

My mother was a woman with many hidden depths. She often came off as flighty or shallow, I think, because of her love of beautiful clothes, her fondness for acting, or her taste for perhaps a bit more wine than was really necessary. But she had a core of steel (which she got from her mother, Ruth), and that allowed her to take on challenges that would have scared off lesser mortals — including setting off for the Seychelles islands in her retirement years to help my father beat back the jungle around a would-be BnB. There she learned, among other things, how to cook fruit bat (which apparently involves throwing them against the wall to tenderize them).

After growing up in relative luxury on South Drive in Toronto with her younger sister Kathy and little brother John — doing all the up-and-coming Toronto society things like debutante balls, and being raised largely by nuns — Linda fell in love with a young man she met through the theatre group at the University of Western Ontario. As she told the story, she would often go back to his apartment and do the dishes while he called his fiancée, who eventually fell by the wayside, defeated by the charms of this blonde bombshell with the big vocabulary and the cat's-eye glasses.

Although her family might have preferred to see her marry a doctor or lawyer, Linda decided to marry a penniless farm boy from Saskatchewan who had just joined the Royal Canadian Air Force as a fighter pilot. Despite — or perhaps because of — their differences, they became an inseparable team, he the director telling everyone where to stand and what to say (or which country and province to move to next) and she the young ingenue, playing the role of Air Force officer’s wife, party hostess, mother, and later grandmother, aunt, and walking encyclopedia.


Germany’s “flying train” from 1902

The movie clip above, from the Museum of Modern Art, may look like something from an H. G. Wells-style science-fiction movie made at the turn of the century, but it actually shows a functioning suspended railway built in Germany in the late 1800s. Originally called the Einschienige Hängebahn System Eugen Langen (the Eugen Langen Monorail Overhead Conveyor System), it is now known as the Wuppertaler Schwebebahn, or Wuppertal Suspension Railway. Not only that, but it is still running — although the cars have been upgraded multiple times since it was built — and it moves 25 million passengers every year.

Facebook Oversight Board punts on Donald Trump ban

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

For several months, those who follow politics and those who are more interested in social technology have both been waiting with bated breath for a decision on whether Donald Trump would continue to be banned from posting to his account on Facebook. The former president’s account was blocked after the social network decided he used the site to foment violence, resulting in the attack on the US Capitol on January 6th, and the decision was then sent to the company’s so-called Oversight Board for review. The board — a theoretically arm’s-length group of former journalists, academics, and legal scholars, including the former Prime Minister of Denmark, the former editor-in-chief of The Guardian, and a former US federal circuit-court judge — handed down its ruling on Wednesday. The board decided that Facebook was right to have banned Trump from the network for fomenting violence, but said the company had no official policy on its books that allowed it to ban him (or anyone else) permanently, and told Facebook to come up with one if it wanted to do this in the future. This appeared to please some people partially, but almost no one completely.

For some critics, including the so-called Real Facebook Oversight Board — a group that includes former CIA officer and former Facebook advisor Yael Eisenstat, Facebook venture investor Roger McNamee, and crusading Filipino journalist Maria Ressa — the board’s decision on Trump just reinforces the fiction that the Oversight Board has any power whatsoever over the company. Although Facebook has gone to great lengths to create a structure that puts the board at arm’s length and theoretically gives it autonomy, the board was still created by Facebook and is funded by Facebook. It also has a fairly limited remit: it can decide whether content (or accounts) should be removed, but it has no ability to question or influence Facebook’s business decisions, the way its algorithms function, how its advertising strategy works, and so on. The board may have advised Facebook that it should have a policy about how to deal with government leaders who incite violence, but if the company decides not to create one, or not to implement it, the board can do nothing.

On a broader level, some critics argue that all of the attention being paid to the Oversight Board and its Trump decision — not to mention the references to it being Facebook’s Supreme Court — plays into the company’s desire to make it seem like a worthwhile or even pioneering exercise in corporate governance. For some at least, it is more like a sideshow, or a Potemkin village: it looks nice from the outside, but behind the facade it’s just a two-dimensional representation of governance, held up by sticks. Shira Ovide of the New York Times wrote: “Facebook is not a representative democracy with branches of government that keep a check on one another. It is a castle ruled by an all-powerful king who has invited billions of people inside to mingle — but only if they abide by opaque, ever-changing rules that are often applied by a fleet of mostly lower-wage workers.”


Did the Facebook Oversight Board drop the ball on Trump?

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, after months of deliberation, the Facebook Oversight Board — the theoretically independent body that adjudicates cases in which content has been removed by the social network — released its decision on the banning of Donald Trump. The former president’s account was blocked permanently following the attack on the US Capitol building on January 6th, after Facebook decided he was using it to incite violence. It sent this decision to the Oversight Board for review, and the board announced its ruling last week — a two-part decision that upheld the ban on Trump as justified, but also noted that Facebook doesn’t have a policy that allows it to ban accounts indefinitely. So the board gave the company six months to either come up with such a policy, or impose a time limit on the Trump ban. Some observers saw this as the Oversight Board “punting” on the choice of whether to ban the former president, rather than making a hard and fast decision, while others argued that paying so much attention to the rulings of a quasi-independent body created by the company it is supposed to oversee meant giving the board (and Facebook) too much credit, and was a distraction from the company’s real problems.

Is the Oversight Board a valid mechanism for making these kinds of content decisions, or is Facebook just trying to pass the buck and avoid responsibility? Did the board’s ruling on Trump’s ban reinforce this latter idea, or is the board actively fighting to ensure that it’s not just a rubber stamp for Facebook’s decisions? To answer these and other related questions, we’re talking this week with a number of experts on Galley, CJR’s digital discussion platform, including Nathalie Maréchal, a policy analyst at Ranking Digital Rights; Kate Klonick, a law professor at St. John’s University in New York who has been following the Oversight Board since its inception; and Evelyn Douek, a lecturer at Harvard Law School and an affiliate at the Berkman Klein Center for Internet and Society. Maréchal, for one, falls firmly into the camp that believes the Oversight Board is mostly a distraction from the important questions about Facebook and its control over speech and privacy. “The more time and energy we all spend obsessing over content moderation, the less time and energy we have scrutinizing the Big Tech companies’ business models and working to pass legislation that actually protects us from these companies’ voraciousness,” she said.

The most important thing that the government could do to rein in Facebook’s power, Maréchal says, is “pass federal privacy legislation that includes data minimization (they can only collect data that they actually need to perform the service requested by the user), purpose limitation (they can only use data for the specific purpose they collected it for — i.e., not for advertising), and a robust enforcement mechanism that includes penalties that go beyond fines, since we know that fines are just a slap-on-the-wrist cost of doing business.” Klonick, for her part, is a little more willing to see the Oversight Board as a potential positive influence on Facebook — although not a solution to all of the company’s myriad problems, by any means. And Klonick knows the board more intimately than most. In 2019, she convinced the company to give her access to those inside Facebook who were responsible for creating the board and developing its mandate and governance structure, and she later wrote about this process for The New Yorker and others.


Social networks accused of censoring Palestinian content

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Violence between Israel and Palestine has been going on for decades, but the conflict has flared up in recent weeks, in part because of the forced eviction of Palestinians who live in Jerusalem on land claimed by Israel, and attacks on Muslims near the Al-Aqsa mosque toward the end of the holy month of Ramadan. But as Palestinians and their supporters have shared images and posts about the violence on Facebook, Twitter, and Instagram, some have noticed their content suddenly disappearing, or their posts being flagged for breaches of the platforms’ terms of use when no such breach had occurred. In some cases, their accounts have been suspended, including the Twitter account of Palestinian-American writer Mariam Barghouti, who had been posting photos and videos of the violence in Jerusalem. Twitter later restored Barghouti’s account, and apologized for the suspension, saying it was done by mistake.

Some of those who have been covering such issues for years don’t think these kinds of things are a mistake — they believe social networks are deliberately censoring Palestinian content. In a recent panel discussion on Al Jazeera’s show The Stream, Marwa Fatafta of the human-rights advocacy group AccessNow said this is not a new problem, but it has recently gotten worse. “Activists and journalists and users of social media have been decrying this kind of censorship for years,” she said. “But I’ve been writing about this topic for a long time, and I have not seen anything of this scale. It’s so brazen and so incredible, it’s beyond censorship — it’s digital repression. They are actively suppressing the narrative of Palestinians or those who are documenting these war crimes.”

On Monday, AccessNow did a Twitter thread about censorship involving Palestinian content on Facebook, Twitter, Instagram, and TikTok. The group said it has received “hundreds of reports that social platforms are suppressing Palestinian protest hashtags, blocking livestreams, and removing posts and accounts.” Ameer Al-Khatahtbeh, who runs a magazine for millennials called Muslim, says he has documented 12,000 acts of censorship on Instagram alone in the past several weeks.


Facebook and the dilemma of coordinated inauthentic behavior

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Yesterday, Facebook released a report on what it calls “influence operations” on its platform, which it defines as “coordinated efforts to manipulate or corrupt public debate for a strategic goal.” By this, the company seems to mean primarily the kinds of activity that Americans heard about during the 2016 election, from entities like the Russian “troll farm” known as the Internet Research Agency, which used fake accounts to spread disinformation about the election and to just generally cause chaos. Facebook says in this “threat report” that it has uncovered evidence of disinformation campaigns in more than 50 countries since 2017, and it breaks down some of the details of 150 of these operations over that period. In addition to noting that Russia is still the leading player in this kind of campaign (at least among the ones that Facebook knows about), the company describes how dealing with these kinds of threats has become much more complex since 2016.

One of the big challenges is defining what qualifies as “coordinated inauthentic behavior.” Although Facebook doesn’t really deal with this in its report, much of what happens on the service (and other similar platforms) would fit that description, including much of the advertising that is the company’s bread and butter. In private groups devoted to everything from politics to fitness and beauty products, there are likely plenty of posts that could be described as “coordinated efforts to manipulate public debate for a strategic goal,” albeit not the kind that rise to the level of a Russian troll farm.

Influence operations can take a number of forms, Facebook says, “from covert campaigns that rely on fake identities to overt, state-controlled media efforts that use authentic and influential voices to promote messages that may or may not be false.” For example, during the 2016 election, the Internet Research Agency created groups devoted to Black Lives Matter and other topics that were filled with authentic posts from real users who were committed to the cause. In one case mentioned in Facebook’s threat report, a US marketing firm working for clients such as the PAC Turning Point USA “recruited a staff of teenagers to pose as unaffiliated voters” and comment on various pages and accounts. As researchers like Shannon McGregor of UNC note, “enlisting supporters in coordinated social media efforts is actually a routine campaign practice.”


Andreessen Horowitz’s new media entity is an op-ed page

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

At the beginning of this year, an otherwise innocuous job ad — looking for an executive editor to oversee a site about technology — got more than its fair share of attention. Why? Because the entity that posted the ad wasn’t a traditional media company. The opening was for a job at Andreessen Horowitz, an influential venture capital firm in Silicon Valley that has developed a reputation for avoiding the traditional technology press, and this raised a number of questions. Was the proposed site another way to do an end run around the media industry, from a powerful investor who believes that some (if not all) traditional industries need to be disrupted by technology?

A former analyst at Andreessen Horowitz, Benedict Evans, famously described it as “a media company that monetizes through venture capital.” The firm’s assets under management — the stakes it holds in companies like Airbnb, Stripe, and Instacart — are worth about $16 billion. If such an organization really wanted to disrupt an industry like the media, it clearly has the power to do so.

Andreessen Horowitz may still have a master plan to overturn established media, but for now at least, members of the press can probably rest easy. Based on the launch of the firm’s new venture — which is simply called Future — the only thing that might be disrupted is the op-ed industry, and in particular the vast universe of opinions about technology. Sonal Chokshi, the former Wired senior editor turned Andreessen Horowitz editor-in-chief, told CJR the venture firm has no intention of trying to use its new offering to duplicate what journalists do. In other words, it will be focused on personal opinion rather than reported stories. “We’re not going to do what good reporters do, in terms of investigative journalism etc.,” she said. “Others are already doing a good job of that.”


My testimony before a Senate committee on copyright

I testified — virtually — before a sub-committee of the Senate yesterday (the Canadian one) about Bill S-225, which would create a new copyright scheme to help ailing newspapers get money from Facebook and Google. Here’s what I told them (if you want to watch a livestream of the testimony from me and others, including Jason Kee from Google, followed by questions from the senators, you can see that here).

Good evening, honourable members of the committee. I’d like to thank you all for having me here to talk about Bill S-225. I don’t want to take up too much of your time before answering your questions, but I want to give you a brief overview of why I think that this Bill, although directed at a very real and pressing problem, is fundamentally misguided in the way that it proposes to solve that problem.

The preamble to this Bill states several things that are true. Journalism is important in a free and democratic society, there are a number of excellent Canadian journalism organizations, and digital platforms have disrupted the advertising industry. But the preamble also says something that is not quite true, which is that these platforms “supply their sites with the journalistic work generated by traditional media.”


The challenges of content moderation in the global south

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

The difficulty of moderating the ocean of content that gets posted on social networks by billions of users every day was obvious even before Donald Trump’s presidential trolling forced Facebook and other platforms to block his accounts earlier this year. Trying to determine what constitutes genuine harassment or abuse vs. friendly banter, identifying images and videos that are inappropriate or harmful from the tens of millions uploaded every day, distinguishing between authentic political messages and professional trolling operations, and so on is hard enough just for English-speaking audiences in North America, but these challenges are compounded when different languages and cultural norms are involved. What sounds like innocuous phrasing when translated into English could be dangerous hate speech in another language or culture, and automated systems — and even human moderators — are often not good at making those distinctions.

On top of these kinds of social or technical hurdles, there are political ones as well. Countries with authoritarian regimes have become expert in navigating the terms of service for the major platforms, and using them to flag content they don’t agree with, and some countries have used problematic content such as “fake news” as an excuse to legislate the truth. How are the digital platforms handling these challenges? And what are the potential downsides of their failure to do so? To answer these and related questions, we convened a virtual panel discussion using CJR’s Galley platform, with a group of experts in content moderation and internet governance and policy around the world.

The group included Jillian York, the director of international freedom of expression for the Electronic Frontier Foundation; Michael Karanicolas, executive director of the Institute for Technology, Law, and Policy at UCLA; Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology; Jenny Domino, who leads the Internet Freedom Initiative for the Asia-Pacific region at the International Commission of Jurists; Sarah Roberts, an associate professor of information studies at UCLA; Rasha Abdulla, a professor of journalism at The American University in Cairo; Agustina Del Campo, director of the Center for Studies on Freedom of Expression and Access to Information at the University of Palermo in Argentina; and Tomiwa Ilori, a researcher at the Centre for Human Rights at the University of Pretoria in South Africa.


Donald Trump shuts down his blog, irked by low traffic

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

After former president Donald Trump was banned from both Facebook and Twitter for his role in spreading disinformation about the election and the January 6 attack on the US Capitol, his team of advisors started talking about a “new social platform” he would soon be launching, which they said would provide a direct conduit for his views, and restore him to his rightful place at the top of the social-media firmament. Trump advisor Jason Miller told Fox News in March that Trump would be returning to social media “in two or three months, with his own platform,” which Miller said would be “the hottest ticket in social media,” and would “completely redefine the game.” On May 4, the Trump website unveiled a new social feature, but it was more like a recapitulation of an old game than the definition of a new one: in sum, it was a blog, with short posts in Trump’s voice (although most were likely not written by him) and a series of buttons with which to share his comments on the social platforms where he could no longer post them himself.

Now, less than a month after this much-hyped launch, Trump has shut down the blog, according to a number of reports. The page formerly known as “From the Desk of Donald J. Trump” has been removed from the site and will not be returning to it in the future, Miller confirmed to CNBC on Wednesday. According to a report from the Washington Post, based on interviews with anonymous sources close to the Trump camp, the former president’s decision was driven by the relentless mocking the feature got from established media outlets and political commentators, combined with a significant lack of traffic and engagement. “Upset by reports from The Washington Post and other outlets highlighting its measly readership,” the paper reported on Wednesday, “Trump ordered his team Tuesday to put the blog out of its misery.”

In May, NBC News looked at data from a social-media analytics company called BuzzSumo and found that the Trump blog as a whole had attracted only about 200,000 engagements, including links and other social interactions (likes, shares, etc.) on Facebook, Twitter, Pinterest, and Reddit. Before he was banned from those and other platforms, a single tweet from the former president would often be liked or reshared hundreds of thousands of times within a matter of hours, thanks to his 88 million followers. The Post reported that on the final day of the blog’s existence, the Trump website got just 1,500 shares and comments on Facebook and Twitter.
