
In the old days of artificial intelligence — in other words, more than about ten years ago — one of the big debates was whether an artificial or computerized intelligence would ever be able to pass the Turing test. Proposed by the pioneering computer scientist and code-breaker Alan Turing in 1950, before computers as we know them even existed, the test was designed to see whether an artificial intelligence could behave in a human-enough way to convince people that it was a person (Turing called it “the imitation game”). Whatever you think of AI engines or LLMs like OpenAI’s ChatGPT and Anthropic’s Claude and Google’s Gemini, most of them seem to be able to pass the Turing test with flying colours. They are carrying on conversations (sometimes with each other) and generating human speech and text in convincing ways, to the point where even tools designed to detect AI-generated writing are being fooled.
At this point, it would be hard to argue that these AI engines aren’t intelligent, by some definition of that term. In addition to producing human-like writing, they have passed pretty much every math, science, and legal test we can design, they are designing new proteins and detecting cancer faster and more accurately than humans can, and so on. But intelligence isn’t all there is to being human. We also believe that being human involves something called “consciousness,” which we all pretend to understand but which is difficult to define. In most cases, it involves an awareness of ourselves as thinking beings — an ability to stand at a distance from ourselves, in a virtual sense, and observe ourselves thinking and behaving; in other words, an understanding that we are alive (there isn’t an accepted Turing test for consciousness yet, although someone has proposed one).
The primary foundations of consciousness are the individualized experiences we have of the world around us, which philosophers often call “qualia” — a word that shares a Latin root with “quality” (if you’re interested, there’s a long and in-depth discussion of the concept on the Astral Codex Ten blog). The term covers all the ways we interact with our surroundings: the taste of foods, including the ones we like or dislike; the sound of a favourite song; the feeling of different materials when we touch them; how all of these sensory experiences can affect us psychologically, or evoke a memory; the sense that certain things are “beautiful” or “ugly”; and of course our emotions — our love for a child or a partner, our anger at those who have wronged us, our joy when something good happens to someone we like.
Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.