(Written originally for CJR) When OpenAI, an artificial intelligence startup, released its ChatGPT tool in November, it seemed like little more than a toy—an automated chat engine that could spit out intelligent-sounding responses on a wide range of topics for the amusement of you and your friends. In many ways, it didn’t seem much more sophisticated than previous experiments with AI-powered chat software, such as the infamous Microsoft bot Tay—which launched in 2016 and quickly turned from a novelty act into a racism scandal before being shut down—or even Eliza, the first automated chat program, introduced way back in 1966. Since November, however, ChatGPT and an assortment of nascent counterparts have sparked a debate not only over the extent to which we should trust this kind of emerging technology, but also over how close we are to what experts call “Artificial General Intelligence,” or AGI, which, they warn, could transform society in ways we don’t yet understand. Bill Gates, the billionaire cofounder of Microsoft, wrote recently that artificial intelligence is “as revolutionary as mobile phones and the Internet.”
The new wave of AI chatbots has already been blamed for a host of errors and hoaxes that have spread around the internet, as well as for at least one death: La Libre, a Belgian newspaper, reported that a man died by suicide after talking with a chat program called Chai; based on statements from the man’s widow and chat logs, the software appears to have encouraged him to kill himself. (Motherboard wrote that when a reporter tried the app, which uses an AI engine based on an open-source version of ChatGPT, it offered “different methods of suicide with very little prompting.”) When Pranav Dixit, a reporter at BuzzFeed, used FreedomGPT—another program built on an open-source version of ChatGPT, which, according to its creator, has no guardrails around sensitive topics—the chatbot “praised Hitler, wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city’s homeless crisis, [and] used the n-word.”
The Washington Post has reported, meanwhile, that the original ChatGPT invented a sexual harassment scandal involving Jonathan Turley, a law professor at George Washington University, after a lawyer in California asked the program to generate a list of academics facing sexual harassment allegations. The software cited a Post article from 2018, but no such article exists, and Turley said that he has never been accused of harassing a student. When the Post asked the same question of Microsoft’s Bing, which is powered by GPT-4 (the engine behind ChatGPT), it repeated the false claim about Turley, citing an op-ed that Turley published in USA Today about being falsely accused by ChatGPT. In a similar vein, ChatGPT recently claimed that an Australian politician had served prison time for bribery, which was also untrue; the politician, a mayor in regional Australia, has threatened to sue OpenAI for defamation, in what would reportedly be the first such case against an AI bot anywhere.