My mother, Linda Miles Ingram

My mother was a woman with many hidden depths. She often came off as flighty or shallow, I think, because of her love of beautiful clothes, her fondness for acting, or her taste for perhaps a bit more wine than was really necessary, but she had a core of steel (which she got from her mother Ruth), and that allowed her to take on challenges that would have scared off lesser mortals — including setting off for the Seychelles in her retirement years to help my father beat back the jungle around a would-be B&B, where she learned how to cook fruit bat, among other things (which involves throwing them against the wall to tenderize them, apparently).

After growing up in Toronto in relative luxury on South Drive, with her younger sister Kathy and little brother John, doing all the up-and-coming Toronto society things like debutante balls and being raised largely by nuns, Linda fell in love with a young man she met as part of the theatre group at the University of Western Ontario — as she told the story, she would often go back to his apartment and do the dishes while he called his fiancée, who eventually fell by the wayside, defeated by the charms of this blonde bombshell with the big vocabulary and the cat’s-eye glasses.

Although her family might have preferred to see her marry a doctor or lawyer, Linda decided to marry a penniless farm boy from Saskatchewan who had just joined the Royal Canadian Air Force as a fighter pilot. Despite — or perhaps because of — their differences, they became an inseparable team, he the director telling everyone where to stand and what to say (or which country and province to move to next) and she the young ingenue, playing the role of Air Force officer’s wife, party hostess, mother, and later grandmother, aunt, and walking encyclopedia.


Germany’s “flying train” from 1902

The movie clip above, from the Museum of Modern Art, may look like something from an H.G. Wells-style science fiction movie made at the turn of the century, but it is actually a clip of a functioning suspended railway in Germany built in the late 1800s. Originally called the Einschienige Hängebahn System Eugen Langen (the Eugen Langen Monorail Overhead Conveyor System), it is now known as the Wuppertaler Schwebebahn, or Wuppertal Suspension Railway. Not only that, but it is still running — although the cars have been upgraded multiple times since it was built — and it moves 25 million passengers every year.

Facebook Oversight Board punts on Donald Trump ban

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

For several months, those who follow politics and those who are more interested in social technology have both been waiting with bated breath for a decision on whether Donald Trump would continue to be banned from posting to his account on Facebook. The former president’s account was blocked after the social network decided he had used the site to foment violence, culminating in the attack on the US Capitol on January 6th; the decision was then sent to the company’s so-called Oversight Board for review. The board — a theoretically arm’s-length group of former journalists, academics, and legal scholars, including the former Prime Minister of Denmark, the former editor-in-chief of The Guardian, and a former US federal circuit-court judge — handed down its ruling on Wednesday. The board decided that Facebook was right to have banned Trump from the network for fomenting violence, but said the company had no official policy on its books that allowed it to ban him (or anyone else) permanently, and told Facebook to come up with one if it wanted to do this in the future. This appeared to please some people partially, but almost no one completely.

For some critics, including the so-called Real Facebook Oversight Board — a group that includes former CIA officer and former Facebook advisor Yael Eisenstat, Facebook venture investor Roger McNamee, and crusading Filipino journalist Maria Ressa — the board’s decision on Trump just reinforces the fiction that the Oversight Board has any power whatsoever over the company. Although Facebook has gone to great lengths to create a structure that puts the board at arm’s length and theoretically gives it autonomy, the board was still created by Facebook and is funded by Facebook. It also has a fairly limited remit, in the sense that it can make decisions about whether content (or accounts) should be removed or not, but it has no ability to question or influence any of Facebook’s business decisions — the way its algorithms function, how its advertising strategy works, and so on. The board may have advised Facebook that it should have a policy about how to deal with government leaders who incite violence, but if the company decides not to create one, or not to implement it, the board can do nothing.

On a broader level, some critics argue that all of the attention being paid to the Oversight Board and its Trump decision — not to mention the references to it being Facebook’s Supreme Court — plays into the company’s desire to make it seem like a worthwhile or even pioneering exercise in corporate governance. For some at least, it is more like a sideshow, or a Potemkin village: it looks nice from the outside, but behind the facade it’s just a two-dimensional representation of governance, held up by sticks. Shira Ovide of the New York Times wrote: “Facebook is not a representative democracy with branches of government that keep a check on one another. It is a castle ruled by an all-powerful king who has invited billions of people inside to mingle — but only if they abide by opaque, ever-changing rules that are often applied by a fleet of mostly lower-wage workers.”


Did the Facebook Oversight Board drop the ball on Trump?

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Last week, after months of deliberation, the Facebook Oversight Board — the theoretically independent body that adjudicates cases in which content has been removed by the social network — released its decision on the banning of Donald Trump. The former president’s account was blocked indefinitely following the attack on the US Capitol building on January 6th, after Facebook decided he was using it to incite violence. It sent this decision to the Oversight Board for review, and the board announced its ruling last week — a two-part decision that upheld the ban on Trump as justified, but also noted that Facebook doesn’t have a policy that allows it to ban accounts indefinitely. So the board gave the company six months to either come up with such a policy or impose a time limit on the Trump ban. Some observers saw this as the Oversight Board “punting” on the choice of whether to ban the former president, rather than making a hard-and-fast decision, while others argued that paying so much attention to the rulings of a quasi-independent body created by the company it is supposed to oversee meant giving the board (and Facebook) too much credit, and was a distraction from the company’s real problems.

Is the Oversight Board a valid mechanism for making these kinds of content decisions, or is Facebook just trying to pass the buck and avoid responsibility? Did the board’s ruling on Trump’s ban reinforce this latter idea, or is the board actively fighting to ensure that it’s not just a rubber stamp for Facebook’s decisions? To answer these and other related questions, we’re talking this week with a number of experts on Galley, CJR’s digital discussion platform, including Nathalie Maréchal, a policy analyst at Ranking Digital Rights; Kate Klonick, a law professor at St. John’s University in New York who has been following the Oversight Board since its inception; and Evelyn Douek, a lecturer at Harvard Law School and an affiliate at the Berkman Klein Center for Internet and Society. Maréchal, for one, falls firmly into the camp that believes the Oversight Board is mostly a distraction from the important questions about Facebook and its control over speech and privacy. “The more time and energy we all spend obsessing over content moderation, the less time and energy we have scrutinizing the Big Tech Companies’ business models and working to pass legislation that actually protects us from these companies’ voraciousness,” she said.

The most important thing that the government could do to rein in Facebook’s power, Maréchal says, is “pass federal privacy legislation that includes data minimization (they can only collect data that they actually need to perform the service requested by the user), purpose limitation (they can only use data for the specific purpose they collected it for — i.e., not for advertising), and a robust enforcement mechanism that includes penalties that go beyond fines, since we know that fines are just a slap-on-the-wrist cost of doing business.” Klonick, for her part, is a little more willing to see the Oversight Board as a potential positive influence on Facebook — although not a solution to all of the company’s myriad problems, by any means. And Klonick knows the board more intimately than most. In 2019, she convinced the company to give her access to those inside Facebook who were responsible for creating the board and developing its mandate and governance structure, and she later wrote about this process for The New Yorker and others.


Social networks accused of censoring Palestinian content

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Violence between Israel and Palestine has been going on for decades, but the conflict has flared up in recent weeks, in part because of the forced eviction of Palestinians who live in Jerusalem on land claimed by Israel, and attacks on Muslims near the Al-Aqsa mosque toward the end of the holy month of Ramadan. But as Palestinians and their supporters have shared images and posts about the violence on Facebook, Twitter, and Instagram, some have noticed their content suddenly disappearing, or their posts being flagged for breaches of the platforms’ terms of use when no such breach had occurred. In some cases, their accounts have been suspended, including the Twitter account of Palestinian-American writer Mariam Barghouti, who had been posting photos and videos of the violence in Jerusalem. Twitter later restored Barghouti’s account, and apologized for the suspension, saying it was done by mistake.

Some of those who have been covering such issues for years don’t think these kinds of things are a mistake — they believe social networks are deliberately censoring Palestinian content. In a recent panel discussion on Al Jazeera’s show The Stream, Marwa Fatafta of the human-rights advocacy group Access Now said this is not a new problem, but it has recently gotten worse. “Activists and journalists and users of social media have been decrying this kind of censorship for years,” she said. “But I’ve been writing about this topic for a long time, and I have not seen anything of this scale. It’s so brazen and so incredible, it’s beyond censorship — it’s digital repression. They are actively suppressing the narrative of Palestinians or those who are documenting these war crimes.”

On Monday, Access Now posted a Twitter thread about censorship involving Palestinian content on Facebook, Twitter, Instagram, and TikTok. The group said it has received “hundreds of reports that social platforms are suppressing Palestinian protest hashtags, blocking livestreams, and removing posts and accounts.” Ameer Al-Khatahtbeh, who runs a magazine for millennials called Muslim, says he has documented 12,000 acts of censorship on Instagram alone in the past several weeks.


Facebook and the dilemma of coordinated inauthentic behavior

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Yesterday, Facebook released a report on what it calls “influence operations” on its platform, which are defined as “coordinated efforts to manipulate or corrupt public debate for a strategic goal.” By this, the company seems to mean primarily the kinds of activity that Americans heard about during the 2016 election, from entities like the Russian “troll farm” known as the Internet Research Agency, which used fake accounts to spread disinformation about the election and to generally cause chaos. Facebook says in this “threat report” that it has uncovered evidence of disinformation campaigns in more than 50 countries since 2017, and it breaks down some of the details of 150 of these operations over that period. In addition to noting that Russia is still the leading player in this kind of campaign (at least the ones that Facebook knows about), the company describes how dealing with these kinds of threats has become much more complex since 2016.

One of the big challenges is defining what qualifies as “coordinated inauthentic behavior.” Although Facebook doesn’t really deal with this in its report, much of what happens on the service (and other similar platforms) would fit that description, including much of the advertising that is the company’s bread and butter. In private groups devoted to everything from politics to fitness and beauty products, there are likely plenty of posts that could be described as “coordinated efforts to manipulate public debate for a strategic goal,” albeit not the kind that rise to the level of a Russian troll farm.

Influence operations can take a number of forms, Facebook says, “from covert campaigns that rely on fake identities to overt, state-controlled media efforts that use authentic and influential voices to promote messages that may or may not be false.” For example, during the 2016 election, the Internet Research Agency created groups devoted to Black Lives Matter and other topics that were filled with authentic posts from real users who were committed to the cause. In one case mentioned in Facebook’s threat report, a US marketing firm working for clients such as the PAC Turning Point USA “recruited a staff of teenagers to pose as unaffiliated voters” and comment on various pages and accounts. As researchers like Shannon McGregor of UNC note, “enlisting supporters in coordinated social media efforts is actually a routine campaign practice.”


Andreessen Horowitz’s new media entity is an op-ed page

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

At the beginning of this year, an otherwise innocuous job ad — looking for an executive editor to oversee a site about technology — got more than its fair share of attention. Why? Because the entity that posted the ad wasn’t a traditional media company. The opening was for a job at Andreessen Horowitz, an influential venture capital firm in Silicon Valley that has developed a reputation for avoiding the traditional technology press, and this raised a number of questions. Was the proposed site another way to do an end run around the media industry, from a powerful investor who believes that some (if not all) traditional industries need to be disrupted by technology?

A former analyst at Andreessen Horowitz, Benedict Evans, famously described it as “a media company that monetizes through venture capital.” The firm’s assets under management — the stakes it holds in companies like Airbnb, Stripe, and Instacart — are worth about $16 billion. If such an organization really wanted to disrupt an industry like the media, it clearly has the power to do so.

Andreessen Horowitz may still have a master plan to overturn established media, but for now at least, members of the press can probably rest easy. Based on the launch of the firm’s new venture — which is simply called Future — the only thing that might be disrupted is the op-ed industry, and in particular the vast universe of opinions about technology. Sonal Chokshi, the former Wired senior editor turned Andreessen Horowitz editor-in-chief, told CJR the venture firm has no intention of trying to use its new offering to duplicate what journalists do. In other words, it will be focused on personal opinion rather than reported stories. “We’re not going to do what good reporters do, in terms of investigative journalism etc.,” she said. “Others are already doing a good job of that.”
