Is AI going to save us or kill us? Even the experts don’t agree

Like many, I’ve been fascinated by the speed with which artificial intelligence has taken over the spotlight as the technology that everyone is either excited by, confused by, or terrified by (or possibly all three). Part of that, I think, has to do with how quickly the group of things we call AI has been evolving — it’s hard to believe that the term AI was mostly confined to academic circles as recently as 2022, when OpenAI released ChatGPT into the wild. Then came visual AI engines like DALL-E and Midjourney, which generated some hilarious images, and early video generators produced clips like the widely lampooned one of an AI version of Will Smith trying to eat spaghetti, which is alternately laughable and creepy, in a way that only AI art seems to be. ChatGPT and other AI engines based on large language models routinely produced nonsensical results — or “hallucinations,” as some call them — in which they simply made things up out of thin air.

Within a matter of months, however, those same AI chatbots were producing high-quality transcriptions and summaries, and the AI photo and video engines were generating incredibly lifelike pictures of things that don’t exist, and videos of people and animals that are virtually indistinguishable from the real thing. I recently took a test that Scott Alexander of Astral Codex Ten sent to his newsletter readers, which presented them with pictures and asked which ones were generated by AI and which by humans, and I have zero confidence that I got any of them right. ChatGPT’s various iterations, meanwhile, have not only aced the Turing test (which gauges whether an AI can convincingly mimic a human) but also the LSAT and a number of other exams. It’s true that AI engines like Google’s have told people to do stupid things like eat rocks, but the speed with which their output has become almost indistinguishable from human content is staggering.

I should mention up front that I am well aware of the controversy over where AI engines get all the information they use to generate video and photos and text — the argument that their scraping or indexing of books and news articles is theft, and that they should either pay for it or be prevented from using it. If I were an artist whose name has become a prompt for generating images that look like my work, I might think differently. But for me, the act of indexing content (as I’ve argued for the Columbia Journalism Review) is not that different from what a search engine like Google does, which I believe should qualify as fair use under the law (and has in previous cases such as the Google Books case and the Perfect 10 case). Whether the Supreme Court agrees with me remains to be seen, of course, but that is my belief. I’m not going to argue about that here, however, because it is a separate question from the one I’m interested in exploring right now.

Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.

Doom or utopia?

The question I’m interested in is whether AI is inherently dangerous, in the sense that it is likely to develop something approaching or even surpassing human intelligence (known as AGI, or artificial general intelligence), at which point it could cause humanity harm or even exterminate us. There are already large camps of “AI doomers” who believe this will almost certainly be the case, as well as those who think we should press ahead anyway because supersmart AI will solve all of humanity’s problems and usher us into a utopia, a group often called “accelerationists.” What makes this interesting, to me at least, is not that some people think it is bad and others think it is great, but that even the top scientists in the field — the godfathers of modern AI — can’t seem to agree on whether their creation is going to be the biggest boon to mankind since the discovery of fire, or whether it is likely to go rogue and kill us all.

Geoff Hinton, for example, is widely viewed as one of the founders of modern AI: he helped pioneer the neural networks and techniques such as “backpropagation” that are used in most of the AI engines of today, including work for which he and fellow laureate John Hopfield were just awarded the Nobel Prize in physics. One of his grad students, Ilya Sutskever, went on to co-found OpenAI and has since left to start his own company focused on developing AI safely. Hinton — who said in accepting the Nobel that he was proud one of his students (Sutskever) “fired Sam Altman” — quit working at Google because he wanted to speak freely about the dangers of AI. He has said in interviews with the New Yorker and others that he came to believe AI models such as ChatGPT (GPT stands for “generative pre-trained transformer”) were developing human-like intelligence faster than he expected. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English,” he told MIT Technology Review.

Neural networks have long been considered inefficient learners, in part because it takes vast amounts of data and energy to train them, and they can be very slow to develop new skills, even simple ones like adding numbers. Brains, by contrast, seem to pick up new ideas and skills quickly, using a fraction of the energy that neural networks require. But when it comes right down to it, Hinton isn’t convinced that humans have any built-in superiority — if anything, he thinks it’s the opposite. “Our brains have 100 trillion connections,” he told MIT Technology Review. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm.” AI critics like to say the technology is just glorified autocomplete, he told the New Yorker, but “in order to be good at predicting the next word, you have to understand what’s being said.”

Building the butterfly

Hinton believes that AI engines such as GPT-4 can comprehend the meanings of words and ideas, and that it’s therefore just a matter of time before they make the jump to reasoning independently. “It’s analogous to how a caterpillar turns into a butterfly,” he told the New Yorker. “In the chrysalis, you turn the caterpillar into soup—and from this soup you build the butterfly.” We tend to prize reason over intuition, but Hinton argues that we are more intuitive than we acknowledge. “For years, symbolic-AI people said our true nature is, we’re reasoning machines,” he said. “I think that’s just nonsense. Our true nature is, we’re analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them.”

As for the so-called “hallucinations” that AI bots sometimes come out with (Hinton prefers the term “confabulations,” which is more accurate in a psychological sense), critics argue that they show LLMs have no true understanding of what they say. But Hinton argues that this is a feature, not a bug, and that hallucinating or confabulating is among the most human attributes an AI can have. “People always confabulate,” he told the New Yorker — half-truths and invented details are a normal part of human conversation. “Confabulation is a signature of human memory,” he said. “These models are doing something just like people.” Hinton suspects that skepticism about AI’s potential is often motivated by an unjustified faith in human exceptionalism. In other words, there’s nothing all that different or special about the human mind, which means it should be relatively simple for an advanced AI engine to duplicate it — and then surpass it.

That’s Geoff Hinton’s view. But one of his fellow groundbreaking AI researchers, Yann LeCun — an NYU professor and chief AI scientist at Meta who has been a friend and colleague of Hinton’s for more than 40 years — believes almost the exact opposite. LeCun recently told the Wall Street Journal that warnings about the technology’s potential for existential peril are “complete B.S.” LeCun, who shared the 2018 Turing Award — one of the top prizes in computer science — with Hinton and Yoshua Bengio, thinks that today’s AI models are barely even as smart as an animal, let alone a human being, and that the risk of them soon becoming all-powerful superintelligences is negligible. Hinton has said that he thinks ChatGPT has shown signs of something approaching consciousness, but LeCun says that before we get too worried about the risks of superintelligence, “we need to have a hint of a design for a system smarter than a house cat,” as he put it on X.

Thinking or just talking?

Today’s AI models are really just predicting the next word in a text, he says, but they’re so good at this that they fool us into thinking they’re intelligent. That, along with their vast capacity for remembering things, can make it seem as though they are reasoning, LeCun told the Journal, when in fact they’re merely regurgitating information they’ve been trained on. “We are used to the idea that people or entities that can express themselves, or manipulate language, are smart — but that’s not true,” says LeCun. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.” Part of LeCun’s skepticism stems from his belief that AI researchers are going in the wrong direction, and that true intelligence and reasoning ability won’t come from simply making models bigger, feeding them more data, or giving them more processors. He believes AI software needs to learn the way a baby does, building up a model of the world, of how human beings operate within it, and of what the rules are.
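
To give a sense of what “just predicting the next word” means in its most stripped-down form, here is a toy sketch of my own (not LeCun’s, and nothing like a real LLM): it counts which word follows which in a scrap of text, then generates new text by sampling each next word in proportion to those counts. Actual large language models replace the counting table with a neural network over billions of parameters and sub-word tokens, but the underlying task, guessing what comes next, has the same shape.

```python
# A toy next-word predictor (illustrative only): count word-to-word transitions
# in a tiny corpus, then generate by sampling the next word from those counts.
import random
from collections import Counter, defaultdict

text = ("the cat sat on the mat and the dog sat on the rug "
        "and the cat saw the dog and the dog saw the cat").split()

# Build a table of next-word counts for every word in the corpus
next_words = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    next_words[current][nxt] += 1

def generate(start, length=10):
    """Repeatedly sample a plausible next word, starting from `start`."""
    word, output = start, [start]
    for _ in range(length):
        counts = next_words.get(word)
        if not counts:
            break  # no known continuation for this word
        word = random.choices(list(counts), weights=list(counts.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat saw the"
```

Even this crude counting table can produce grammatical-looking fragments, which hints at why models trained on trillions of words can sound so fluent while, in LeCun’s view, understanding nothing at all.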

In addition to the back-and-forth between the godfathers of AI over whether it is going to kill us all, another thing that fascinates me about this discussion is that even some AI experts don’t know how large language models do what they do. MIT Technology Review describes how two scientists left one of their experiments running for days instead of hours and found that the AI had somehow figured out how to add two numbers. They found that in certain cases, LLMs could fail repeatedly to learn a task and then all of a sudden just get it, as if a lightbulb had switched on, a phenomenon they called “grokking.” This isn’t how deep learning is supposed to work, and yet somehow it did. And the recipes for how to do this are still more alchemy than chemistry: “We figured out certain incantations at midnight after mixing up some ingredients,” as one researcher put it.
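
For the curious, here is a rough sketch of the kind of experiment the grokking work describes, written as my own toy reconstruction rather than the researchers’ actual code: a tiny network is trained on modular addition with some of the input pairs held out, and train and test accuracy are logged as it runs. In the published experiments, test accuracy sometimes leaps from near-chance to near-perfect long after the network has memorized its training set; whether this toy version shows the same jump depends entirely on arbitrary choices here, like the modulus, the learning rate, the weight decay, and how long you let it run.

```python
# Toy grokking-style setup (illustrative, not the original experiment):
# learn (a + b) mod P from one-hot inputs, holding out half the pairs as a test set.
import numpy as np

rng = np.random.default_rng(0)
P, H = 23, 128                 # small prime modulus, hidden-layer width
LR, WD, STEPS = 0.1, 1e-4, 50_000

# Every (a, b) pair, encoded as two concatenated one-hot vectors
pairs = np.array([(a, b) for a in range(P) for b in range(P)])
X = np.zeros((len(pairs), 2 * P))
X[np.arange(len(pairs)), pairs[:, 0]] = 1
X[np.arange(len(pairs)), P + pairs[:, 1]] = 1
y = (pairs[:, 0] + pairs[:, 1]) % P

# Half the pairs are never seen during training
idx = rng.permutation(len(pairs))
train, test = idx[: len(idx) // 2], idx[len(idx) // 2:]

# One-hidden-layer network trained with full-batch gradient descent + weight decay
W1 = rng.normal(0, 0.1, (2 * P, H))
W2 = rng.normal(0, 0.1, (H, P))

def forward(x):
    h = np.maximum(x @ W1, 0)          # ReLU hidden layer
    return h, h @ W2                   # hidden activations, class logits

def accuracy(split):
    _, logits = forward(X[split])
    return (logits.argmax(1) == y[split]).mean()

for step in range(STEPS):
    h, logits = forward(X[train])
    probs = np.exp(logits - logits.max(1, keepdims=True))
    probs /= probs.sum(1, keepdims=True)
    probs[np.arange(len(train)), y[train]] -= 1     # softmax cross-entropy gradient
    probs /= len(train)
    gW2 = h.T @ probs + WD * W2
    gh = probs @ W2.T
    gh[h <= 0] = 0                                  # backprop through the ReLU
    gW1 = X[train].T @ gh + WD * W1
    W1 -= LR * gW1
    W2 -= LR * gW2
    if step % 5000 == 0:
        print(f"step {step:6d}  train {accuracy(train):.2f}  test {accuracy(test):.2f}")
```

The point of logging both numbers is that memorization and generalization can come apart: training accuracy can sit at 100 percent for a long stretch while the held-out pairs remain a mystery to the network, which is exactly the gap that made the sudden “getting it” so surprising.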

Perhaps it’s not that surprising, then, that even AI experts don’t ultimately know whether it will save us or kill us. Anyone who has seen the movie Oppenheimer, or read any of the books about the Manhattan Project (a popular thing to read at safety-conscious Anthropic, according to the Times), knows that even as scientists were developing the first atomic bomb — which was arguably as complicated as developing a neural network, if not more so — some of them feared that the detonation might set off a runaway reaction that would ignite the earth’s atmosphere and bring an end to life as we know it. That obviously didn’t happen. Whether AI research starts a chain reaction that ends with killer robots from Cyberdyne Systems is also unknown — even by those who have devoted their lives to this kind of research. Which is somehow fascinating and frightening at the same time.

Got any thoughts or comments? Feel free to leave them here, post them on Substack or Ghost, or reach me on Twitter, Threads, Bluesky, or Mastodon. And thanks for being a reader.
