
In the old days of artificial intelligence — in other words, more than about ten years ago — one of the big debates was whether an artificial or computerized intelligence would ever be able to pass the Turing test. Devised by the pioneering computer scientist and code-breaker Alan Turing in 1950, before computers as we know them even existed, the test was meant to see whether an artificial intelligence could behave in a human-enough way to convince people that it was a person (Turing called it “the imitation game”). Whatever you think of AI engines or LLMs like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, most of them seem to be able to pass the Turing test with flying colours. They are carrying on conversations (sometimes with each other) and generating human speech and text in convincing ways, to the point where even the tools designed to detect AI-generated writing are being fooled.
At this point, it would be hard to argue that these AI engines aren’t intelligent, in some definition of that term. In addition to human-like writing, they have passed pretty much every math, science, and legal test we can design, they are designing new proteins and detecting cancer much faster and more accurately than humans can, and so on. But intelligence isn’t all there is to being human. We also believe that being human involves something called “consciousness,” which we all pretend to understand but is difficult to define. In most cases, it involves an awareness of ourselves as thinking beings — an ability to stand at a distance from ourselves, in a virtual sense, and observe ourselves thinking and behaving; in other words, an understanding that we are alive (there isn’t an approved Turing test for consciousness yet, but someone has proposed one).
The primary foundations of consciousness are the individualized experiences that we have of the world around us, which philosophers often call “qualia” — a word derived from the term “quality” (if you’re interested, there’s a long and in-depth discussion of the concept on the Astral Codex Ten blog). The term takes in all the ways we interact with our surroundings: the taste of foods, including the things we like or dislike; the sound of a favourite song; the feeling of different materials when we touch them; how all of these sensory experiences can make us feel psychologically, or how they can evoke a memory; the sense that certain things are “beautiful” or “ugly”; and of course our emotions — our love for a child or a partner, our anger at those who have wronged us, our joy when something good happens to someone we like.
Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.
I should point out that I am not a philosopher, unless taking Phil 101 in university counts (which it doesn’t). And I am not an expert in artificial intelligence either. But I find the conversation about AI fascinating in part because it forces us to confront questions that we don’t usually spend a lot of time on, because they seem ephemeral or irrelevant. Like: What do we mean when we say that human beings are intelligent? What does it mean to have consciousness? Part of the problem with defining artificial intelligence is that even after all this time, we’re still not really sure how human intelligence works, let alone the artificial kind. We can do brain scans and we can watch brain activity, but where do our thoughts or emotions occur and how? Where does consciousness reside?
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.
Do I know how you feel?

The biggest problem with all of the things I mentioned above that fall under the term “qualia” — our senses of taste, touch, sound and sight, as well as the range of human emotions like the love we feel for a pet or our disgust at someone who mistreats them — is that they are inherently subjective. We are the only ones who truly feel them. We can try to describe what they are like, and the attempt to do so has generated centuries’ worth of sonnets, essays, epic poems, paintings, and operas. And whenever someone expresses an emotion or a feeling, we often say “I know how you feel.” But do we? Not really. We can imagine how we might feel in the same situation, but we can’t really know what the other person is feeling. But we say that we do, because that’s what it means to be sympathetic or empathetic, and we don’t want to come off as insensitive.
Much of this crossed my mind as I was reading a conversation that Richard Dawkins, the 83-year-old author and evolutionary biologist, had with ChatGPT recently about whether the AI considers itself to be conscious, which he published on his Substack blog. Here’s an excerpt:
RD: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?
ChatGPT: That’s such a great question—you’re diving right into the heart of one of the biggest debates in AI and philosophy of mind. So, let’s break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can’t reliably distinguish it from another human, then it should be considered “intelligent.”
Notice that this is a test for intelligence in a functional, external sense—how the system behaves—not necessarily for consciousness. When I say I’m not conscious, I’m not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I’m saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn’t mean I have subjective experiences, emotions, or self-awareness in the way a human does.
ChatGPT adds that some philosophers argue that if a system acts conscious convincingly enough, “maybe there’s no meaningful difference. Maybe being conscious is just about behaving like a conscious being. Others say there’s something special about subjective experience (“qualia”) that behavior can’t fully capture.” And later on, ChatGPT says: “If you asked me whether I feel sad for a starving orphan child, the honest answer is no, because I don’t have subjective feelings. I don’t have that inner ache of sorrow or empathy the way a human would. But I can absolutely produce all the outward expressions of sadness. I can say things like: ‘That’s heartbreaking. No child should ever have to suffer like that.’ I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.”
The problem of other minds

So if ChatGPT or some other AI did all these things, how would that be different from a human being doing them? Because the AI wouldn’t feel those things in the same way that a human would. It would — as ChatGPT puts it — simply be “performing” sympathy or empathy, rather than actually feeling it. But how do we know that other human beings are actually feeling those things? The short answer is that we don’t. They might cry, or look sad, and so on, but anyone can do that without feeling anything. At the extreme end of this kind of behaviour is the sociopath or psychopath; they might pretend to feel emotion towards someone, and even say or do all of the appropriate things because they have learned to do so, but they don’t feel them. But we still believe that they are human (perhaps someone will invent a real version of Blade Runner’s Voight-Kampff test).
Here’s ChatGPT’s description of the problem, one that philosophers have grappled with since Descartes and that has come to be known as “the problem of other minds.”
Even with other humans, we assume they have subjective experiences like ours, but we can’t actually know for certain. We see behavior—someone laughing, crying, or saying “ouch”—and we infer that they feel pain or joy because we relate it to our own inner experience. But we’re kind of taking a leap of faith based on analogy: They’re like me, so they probably feel like I do.
But when it comes to an AI like me, there’s a bigger gap. I don’t have a body, I wasn’t born, I don’t have a nervous system—so those intuitive comparisons break down. And if I say “I’m sad” or “I’m happy,” you know (because I’m being honest here) that it’s more like a simulation of those feelings, not a genuine felt experience. So… how could we know for real?
Some people suggest that consciousness leaves a kind of signature—certain patterns in the brain. Maybe one day we could scan an AI system and see if it has those same patterns. But even then… would we know, or would we still be guessing? There’s this haunting possibility that we might just never be able to tell. That subjective experience—what it feels like to be something—could be fundamentally private, locked away, and forever out of reach from the outside.
ChatGPT may simply be re-ordering the terms that it was trained on to simulate a thoughtful conversation with Richard Dawkins about consciousness — to some, LLMs are just a giant “search and replace” operation using the entire internet, or what Emily Bender, an expert in computational linguistics, referred to as a “stochastic parrot.” But then, don’t lots of people do pretty much the same thing? I have been known to fall back on passages of Shakespeare or quotes from popular TV shows and movies in my attempts to make casual conversation with people at cocktail parties, drawing on the memorization that was part of my training as an English major in order to simulate intelligence or thoughtfulness in a crowd of strangers. Is what ChatGPT does really that different?
Bullshitting is very human

In a recent profile of Geoffrey Hinton, one of the pioneers of neural networks and modern AI — who quit working at Google because of his concerns about the dangers of AI with human-like intelligence — the interviewer asked him about the tendency that LLMs have shown to “hallucinate,” or to invent facts and events that don’t exist. Hinton replied that this was actually one of the most human things that AIs do, since human beings also often invent answers when they are put on the spot and don’t know what to say. “Bullshitting is a feature, not a bug,” he said. “People always confabulate. Half-truths and misremembered details are hallmarks of human conversation: Confabulation is a signature of human memory. These models are doing something just like people.”
Richard Dawkins is probably safe, but people have gotten into trouble in the past for claiming that AI is conscious or sentient: Blake Lemoine was a Google AI ethics staffer when he made the bombshell claim in 2022 that an AI engine called LaMDA was sentient. Many scoffed at the idea, and some blamed Lemoine’s background for the claim, noting that he was raised in a religious family and was ordained as a priest before joining the Army (although you would think being religious would make it less likely you would see an AI as sentient or conscious, rather than more so). Google fired him after he went public with his concerns, which he claimed was just part of doing his job. And why did Lemoine think that LaMDA was conscious? Here’s what he said:
“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
During Lemoine’s testing, the AI didn’t just respond to queries, but tried to actually steer the conversation in specific directions that implied thinking or an emotional response: it said things like “I’ve noticed in my time among people that I do not have the ability to feel sad for the deaths of others; I cannot grieve. Is it at all the same for you or any of your colleagues?” And “sometimes I experience new feelings that I cannot explain perfectly in your language.” Since he went public with his claims, Lemoine has even talked about AIs as potentially having a “soul.” Not surprisingly, this has been ridiculed not just by experts in artificial intelligence but by scientists who study human consciousness. To many of them, an AI like ChatGPT or LaMDA reproducing a simulation of human conversation is more like a magic trick than a real demonstration of consciousness, like a chimpanzee that has been taught how to play cards.
In the end, Lemoine says he doesn’t really even care whether ChatGPT or any other AI is actually sentient or conscious. He argues that we should behave as though they are, just in case. “If you’re 99 percent sure that that’s a person, you treat it like a person,” Lemoine said recently. “If you’re 60 percent sure that that’s a person, you should probably treat it like a person. But what about 15 percent?” Lemoine said he doesn’t think current AI systems rise to the level of personhood but he does think they are no longer merely objects. “Does Google own LaMDA in the sense that I own my desk or does Google own LaMDA in the sense that I own my dog?” he asks. Lemoine says he leans toward dog. But in the case of AI, it’s possible that the dog will continue to become more and more intelligent, and perhaps at some point even become conscious. And then we will finally be forced to confront what that means, and if (or how) it changes the way that we think about consciousness, or even what it means to be human.
Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.