Like many, I’ve been fascinated by the speed with which artificial intelligence has taken over the spotlight as the technology that everyone is either excited by, confused by, or terrified by (or possibly all three). Part of that, I think, has to do with the speed at which the group of things we call AI has been evolving. It’s hard to believe that the term AI was mostly restricted to academic circles as recently as 2022, when OpenAI’s ChatGPT was released into the wild. Then came visual AI engines like DALL-E and Midjourney, which generated some hilarious photographs and video clips, like the widely lampooned video of an AI version of Will Smith trying to eat spaghetti, which is alternately laughable and creepy, in a way that only AI art seems to be. ChatGPT and other AI engines based on large language models routinely generated nonsensical results, or “hallucinations,” as some call them, in which they simply made things up out of thin air.
Within a matter of months, however, those same AI chatbots were producing high-quality transcriptions and summaries, and the AI photo and video engines were generating incredibly lifelike pictures of things that don’t exist, along with videos of people and animals that are virtually indistinguishable from the real thing. I recently took a test that Scott Alexander of Astral Codex Ten sent to his newsletter readers, which presented them with pictures and asked which ones were generated by AI and which by humans, and I have zero confidence that I got any of them right. ChatGPT’s various iterations, meanwhile, have not only aced the Turing test (which measures whether an AI can convincingly mimic a human) but also the LSAT and a number of other tests. It’s true that AI engines like Google’s have told people to do stupid things like eat rocks, but the speed with which their output has become almost indistinguishable from human content is staggering.
I should mention up front that I am well aware of the controversy over where AI engines get all the information they use to generate video and photos and text: the idea that their scraping or indexing of books and news articles is theft, and that they should either pay for it or be prevented from using it. If I were an artist whose name had become a prompt for generating images that look like his work, I might think differently. But for me, the act of indexing content (as I’ve argued for the Columbia Journalism Review) is not that different from what a search engine like Google does, which I believe should qualify as fair use under the law (and has in previous cases such as the Google Books case and the Perfect 10 case). Whether the Supreme Court agrees with me remains to be seen, of course, but that is my belief. I’m not going to argue the point here, however, because it is a separate question from the one I’m interested in exploring right now.
Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.
Continue reading “Is AI going to save us or kill us? Even the experts don’t agree”