In November, OpenAI, a company that develops artificial-intelligence software, released ChatGPT, a program that allows users to ask conversational-style questions and receive essay-style answers. It soon became clear that, unlike with some earlier chat-software programs, this one could, in a matter of seconds, generate content that was both readable and reasonably intelligent. Unsurprisingly, this caused consternation among humans who get paid to generate content that is readable and intelligent. Their concerns are reasonable: companies that make money creating such content may well see AI-powered tools as an opportunity to cut costs and increase profits, two things that companies that make money from content like to do.
AI in the media is, more broadly, having a moment. Around the same time that ChatGPT launched, CNET, a technology news site, quietly started publishing articles that were written with the help of artificial intelligence, as Futurism reported last month. A disclaimer on the site assured readers that all of the articles were checked by human editors, but as Futurism later reported, many of the CNET pieces written by the AI software not only contained errors but were, in some cases, plagiarized. After these reports came out, Red Ventures—the private equity–backed marketing company that owns CNET and a number of other online publications, including Lonely Planet and Healthline—told staff that it was pausing the use of the AI software, which it said had been developed in-house.
Even as CNET pressed pause, other media companies announced plans to expand their use of AI. The Arena Group, which publishes Sports Illustrated among other magazines, is now using AI to generate articles and story ideas, according to the Wall Street Journal; Ross Levinsohn, Arena’s CEO and a former publisher of the Los Angeles Times, said the company doesn’t plan to replace journalists with AI but to use it to “support content workflows, video creation, newsletters, sponsored content and marketing campaigns.” BuzzFeed, meanwhile, said that it plans to use OpenAI’s software to develop quizzes and personalize content for readers. After that news broke, BuzzFeed’s stock more than doubled in price, a move “reminiscent of the crypto and blockchain craze five years ago when shares of a company would surge when it announced a potential partnership or entry into the popular sector,” Bloomberg’s Alicia Diaz and Gerry Smith wrote. Jonah Peretti, BuzzFeed’s CEO, told staff that the use of AI was not about “workplace reduction,” according to a spokesperson quoted by the Journal. (The Journal also reported that BuzzFeed “remains focused on human-generated journalism” in its newsroom.)
The use of AI software to create journalism didn’t begin with the rise of ChatGPT. The Associated Press has been using AI to write corporate earnings reports since 2015, because such reports are often so formulaic that they don’t require human input. (Incidentally, the AP also recently asked ChatGPT to write the president’s State of the Union speech in the style of various historical figures, including Shakespeare, Aristotle, and Mahatma Gandhi. Cleopatra: “Let us continue to work together, to strive for a better future, and to build a stronger, more prosperous Egypt.”) And Yahoo and several other content publishers have been using similar AI-powered tools for several years to generate game summaries and corporate reports.
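To get a sense of just how formulaic that kind of coverage is, here is a toy sketch of template-driven earnings writing in Python. The company, figures, and wording are invented purely for illustration; this is not the AP’s actual system, just a demonstration of why no human is strictly required once the numbers arrive in structured form.

```python
# A hypothetical sketch of template-driven earnings coverage -- not the AP's
# actual pipeline. Given a few structured figures, it fills in a boilerplate
# paragraph of the kind readers see in automated earnings stories.
def earnings_report(company, quarter, revenue_m, prior_revenue_m, eps):
    change = (revenue_m - prior_revenue_m) / prior_revenue_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} said {quarter} revenue {direction} {abs(change):.1f} percent "
        f"to ${revenue_m:.0f} million from ${prior_revenue_m:.0f} million a year "
        f"earlier, with earnings of ${eps:.2f} per share."
    )

# Invented figures, purely for illustration.
print(earnings_report("Acme Corp.", "fourth-quarter", 412, 388, 1.07))
```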
The practice may not be as new as some of the commentary around it would have you believe, but the popularity of ChatGPT, and the quality of its output, have led to a renewed debate about its potential impact on journalism. Jack Shafer, a media columnist at Politico, is relatively sanguine about the potential of AI-powered content software to improve journalists’ work. Journalism “doesn’t exist to give reporters and editors a paycheck,” Shafer wrote. “It exists to serve readers. If AI helps newsrooms better serve readers, they should welcome its arrival.” That will be difficult if the technology also leads to widespread job losses, however. Max Read, a former editor at Gawker, wrote recently in his newsletter that “any story you hear about using AI is [fundamentally] a story about labor automation,” whether that involves adding tools that could help journalists do more with less or replacing humans completely.
Both paths, Read wrote, “suck, in my opinion.” Indeed, those who fear the ChatGPTization of journalism don’t see the problem merely as one of labor rights. Kevin Roose, of the New York Times, described AI-generated content as “pink slime” journalism on a recent episode of the Hard Fork podcast with Casey Newton, using a term that more often refers to low-quality meat products. The term “pink slime” has been used to describe low-quality journalism before, as Priyanjana Bengani has documented exhaustively for CJR; by using it to refer to AI-powered content, Roose and others seem to mean journalism that simulates human-created content without offering the real thing.
Experts have said that the biggest flaw in a “large language model” like ChatGPT is that, while it is capable of mimicking human writing, it has no real understanding of what it is writing about, and so it frequently inserts errors and flights of fancy that some have referred to as “hallucinations.” Colin Fraser, a data scientist at Meta, has written that the central quality of this type of model is that “they are incurable, constant, shameless bullshitters. Every single one of them. It’s a feature, not a bug.” Gary Marcus, a professor emeritus of psychology and neural science at New York University, has likened this kind of software to “a giant autocomplete machine.”
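Marcus’s “autocomplete” framing is easy to demonstrate at toy scale. The sketch below builds a minuscule bigram model over a made-up corpus—nothing remotely like a real LLM, which predicts tokens with a neural network trained on vast amounts of text—but it shows the core idea: the program only knows which word tends to follow which, and it will cheerfully stitch together sentences that were never in its source material and that may not be true.

```python
# Toy illustration of the "giant autocomplete machine" idea: a bigram model
# that only predicts which word tends to follow the previous one. It has no
# notion of truth, so it happily recombines its inputs into new claims.
import random
from collections import defaultdict

corpus = (
    "the company reported strong earnings . "
    "the company reported weak earnings . "
    "the team reported a decisive win ."
).split()

# Count which word follows which in the (invented) corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def autocomplete(word, length=6):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the company reported a decisive win ."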
Newton wrote in a recent edition of his Platformer newsletter that some of what ChatGPT and similar software will be used for probably isn’t worth journalists worrying about. “If you run a men’s health site, there are only so many ways to tell your readers to eat right and get regular exercise,” Newton said. He wrote in a different edition of the newsletter, however, that these tools could also potentially be used to generate reams of plausible-sounding misinformation. Dave Karpf, a professor of internet politics at George Washington University, wrote that the furor over ChatGPT reminds him of the hysteria around “content farms” in 2009 and 2010, when various companies paid writers tiny sums of money to generate content based on popular search terms, then monetized those articles through ads. As Karpf notes, the phenomenon appeared to spell disaster for journalism, but it was ultimately short-circuited when Google changed its search algorithm to downrank “low quality” content. (“Relying on platform monopolists to protect the public interest isn’t a great way to run a civilization,” Karpf wrote, “but it’s better than nothing.”)
Unfortunately, in this case, Google isn’t casting a skeptical eye toward AI-generated content—it is planning to get into the business itself: this week, it unveiled a new chat-based model called “Bard.” (Shakespeare obviously wasn’t busy enough writing the State of the Union.) Nor is it just Google: Microsoft is also getting into the AI software game, having recently invested ten billion dollars for a stake in OpenAI, the creator of ChatGPT. This raises the possibility that search engines—which already provide answers to simple questions, such as What is the score in the Maple Leafs game?—could offer more sophisticated content without having to link to anything, potentially weakening online publishers that are already struggling. (Then again, Bard made a factual error in its first public demo.)
While there are some obvious reasons to be concerned about the impact of AI software on journalism, it seems a little early to say definitively whether it is bad or good. ChatGPT seems to agree: When I asked it to describe its impact on the media industry recently, it both-sidesed the question in fine journalistic style. “ChatGPT has the potential to impact the media industry in a number of ways [because] it can generate human-like text, potentially reducing the need for human writers,” it wrote. “But it may also lead to job loss and ethical concerns.”
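For the curious, the same question can be put to the model programmatically. The exchange quoted above was simply typed into the chat interface, but a roughly equivalent request through the API—assuming the OpenAI Python client and an API key in your environment—looks something like this sketch.

```python
# A rough equivalent of asking ChatGPT the same question through OpenAI's API.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Describe the potential impact of ChatGPT on the media industry.",
    }],
)
print(response.choices[0].message.content)
```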
Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.