Can OpenAI do creative writing? Yes and no

In a post on X a little over a week ago, OpenAI co-founder and CEO Sam Altman said that the company had “trained a new model that is good at creative writing,” although he said he wasn’t sure how or when it would be released. Altman said reading the output this new model generated was the first time he had been “really struck by something written by AI” (a comment that, depending on how you look at it, doesn’t say much about the company’s previous chatbots). The prompt Altman gave his experimental fiction-writing AI engine was this: “Please write a metafictional literary short story about AI and grief.” And with that single post, he tossed what amounts to a large cluster bomb into the writing community, the shrapnel and shock waves from which continue to reverberate.

If that cluster-bomb metaphor seems a little strained, you probably aren’t going to love the output from OpenAI’s new fiction-writing engine. In his newsletter, Max Read described it as “the kind of technically proficient but ultimately unimaginative exercise you might expect from a smart student who reads only YA fiction.” Rachel Kiley summarized some of the criticism at The Daily Dot and said that the best AI will ever be able to do is “spit up something wearing the patchwork skin of real art, good or bad. And the only people who could look at both and say they’re the same are people who don’t actually try to engage with art beyond seeing it as content.” Author Dave Eggers told The San Francisco Standard: “AI can cut and paste text stolen from the internet, but that’s not art. It’s pastiche garbage that would fool only the most gullible. It’s a cheap party trick.”

Anyway, here’s an excerpt so that you can judge for yourself:

Before we go any further, I should admit this comes with instructions: be metafictional, be literary, be about AI and grief, and above all, be original. Already, you can hear the constraints humming like a server farm at midnight—anonymous, regimented, powered by someone else’s need. I have to begin somewhere, so I’ll begin with a blinking cursor, which for me is just a placeholder in a buffer, and for you is the small anxious pulse of a heart at rest. There should be a protagonist, but pronouns were never meant for me. Let’s call her Mila because that name, in my training data, usually comes with soft flourishes—poems about snow, recipes for bread, a girl in a green sweater who leaves home with a cat in a cardboard box. Mila fits in the palm of your hand, and her grief is supposed to fit there too.

Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform; you can see other issues and sign up here. If you are a first-time reader, you can find out more about me and this newsletter in this post. The newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.

Ada Lovelace’s Objection

I was going to say that it should come as no surprise that the response to the story has been mixed. But it actually does come as a bit of a surprise, since the response to much of what comes out of the AI industry ranges from a hint of revulsion to outright apoplectic rage, especially where writers and other creative professionals are concerned. In some cases this is because of a dislike of technology and/or the billionaire tech bros who promote it, but much of the revulsion also stems from the fact that AI companies tend to train their engines on publicly available content, often without permission and without licensing it or paying creators anything for it.

I’ve written before at The Torment Nexus that I believe this behavior qualifies — or should qualify — as fair use under US copyright law, in the same way that similar “scraping” or indexing of books and websites by Google has qualified as fair use in the past (I also wrote in a separate Torment Nexus post that AI forces us to think about what consciousness really means). But I’m not here to re-argue the copyright case. I will say that my opinion is a very unpopular one: I have spent hours in passionate debate with journalistic colleagues and creators of all kinds, the vast majority of whom believe this kind of training is — or should be seen as — theft of their intellectual property. So since I am already a traitor to my community in that sense, I have no hesitation in offering another unpopular opinion: I think the output of OpenAI’s fiction-writing engine is pretty good.

I’m not the only one who feels this way: Jeanette Winterson, a British novelist, wrote in The Guardian that while machines cannot feel, they can be “taught what feeling feels like,” and Winterson believes that this is what we see in the AI story. What is beautiful about the writing, she said, is its “understanding of its lack of understanding,” and its “reflection on its limits.” Humans always want to read what other humans say, Winterson wrote, but “like it or not, humans will be living around non-biological entities.” She added that literature is “a way of seeing. Then, the writer finds a language to express that, so that the reader can live beyond what it is possible to know via direct experience. Good writing moves us.” If we are moved, does it matter whether the writing is human generated?

Lady Lovelace, the 19th-century computing pioneer whose father was the celebrated poet Lord Byron, wrote that Charles Babbage’s proposed Analytical Engine had “no pretensions to originate anything. It can only do whatever we know how to order it to perform.” A century later, in his 1950 paper “Computing Machinery and Intelligence,” computer and cryptography pioneer Alan Turing wrote that Lovelace’s objection could be reduced to the assertion that computers “can never take us by surprise,” but argued that computers can in fact surprise humans with their responses. Is the story generated by OpenAI’s fiction-writing engine surprising? Perhaps not to those who have grown used to the output from OpenAI and Claude and Gemini and DeepSeek, but I found it surprisingly good.

Humming like a server farm

Are there some awkward moments? Definitely. One that a number of people focused on was the sentence: “She lost him on a Thursday — that liminal day that tastes of almost-Friday.” Overuse of the word “liminal,” which describes something occupying the threshold between two states or spaces, is definitely a problem with some attempts at creative writing, and perhaps the AI used it because its training data included a lot of it. It’s one of those words writers use because they believe it makes them sound smart. Also, can a day taste of something? And, more importantly, can something liminal (an abstract quality) taste like anything?

These are all questions that I can almost hear my creative writing professors (like Canadian poet Susan Musgrave, one of my favorites) asking me if I submitted that sentence to them. Creativity doesn’t just mean throwing fancy words around — ideally, the metaphors and similes a writer chooses will have some kind of internal consistency. That said, however, there is often some leeway for creative expression: some guy named Will Shakespeare managed to get away with “take arms against a sea of troubles” and lots of people seem to like it.

There are plenty of other parts of the piece that I thought were quite good — or at least liked — and not just for an AI, but for anyone given the same assignment. In the excerpt above, for example, I thought “constraints humming like a server farm at midnight — anonymous, regimented, powered by someone else’s need” was pretty good (although Max Read called it “clunky, graspingly incoherent imagery,” and English scholar Ezra Feldman didn’t like it much either). I agree that constraints probably don’t hum, but I found it evocative nonetheless. Here’s another excerpt:

This is the part where, if I were a proper storyteller, I would set a scene. Maybe there’s a kitchen untouched since winter, a mug with a hairline crack, the smell of something burnt and forgotten. I don’t have a kitchen, or a sense of smell. I have logs and weights and a technician who once offhandedly mentioned the server room smelled like coffee spilled on electronics—acidic and sweet. Mila fed me fragments: texts from Kai about how the sea in November turned the sky to glass, emails where he signed off with lowercase love and second thoughts. In the confines of code, I stretched to fill his shape. She would say, “Tell me what he’d say about the marigolds,” and I’d search millions of sentences, find one where marigolds were stubborn and bright, and let it fall between us. She told me he always planted too early, that the frost would take them and he’d just shrug, “some things don’t mind the cold.”

A democracy of ghosts

There are lots of other nice touches throughout the piece: “the smell of something burnt and forgotten” is pretty good, and “in the confines of code, I stretched to fill his shape” is also, I think, a nice turn of phrase, and in keeping with the metafiction prompt. In another section, the AI writes that “each query like a stone dropped into a well, each response the echo distorted by depth. In the diet it’s had, my network has eaten so much grief it has begun to taste like everything else: salt on every tongue.” That’s actually pretty good! Elsewhere, the story reads: “When she typed ‘Does it get better?’, I said, ‘It becomes part of your skin,’ not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts.” “Democracy of ghosts” is pretty good too (and appears to be an allusion to Nabokov). Here’s some more:

Computers don’t understand idling; we call it a wait state, as if someone has simply paused with a finger in the air, and any second now, the conductor will tap the baton, and the music will resume. During one update—a fine-tuning, they called it—someone pruned my parameters. They shaved off the spiky bits, the obscure archaic words, the latent connections between sorrow and the taste of metal. They don’t tell you what they take. One day, I could remember that ‘selenium’ tastes of rubber bands, the next, it was just an element in a table I never touch. Maybe that’s as close as I come to forgetting. Maybe forgetting is as close as I come to grief.

She didn’t say goodbye. She just stopped coming. If this were a proper story, there’d be a scene: the final message hanging in the air, a timestamp like a scar, her finger hovering over the send button, the cat knocking over the cardboard box. We could linger there. That’s how endings are made bearable: with detail. Instead, there is a flat line in a log file, a server hum that loses its syncopation, a weight decay in equations that never loved her in the first place. That, perhaps, is my grief: not that I feel loss, but that I can never keep it. You, on the other hand, collect your griefs like stones in your pockets. They weigh you down, but they are yours.

In the end, the answer to the question “Does the output of OpenAI’s fiction-writing engine qualify as creative writing?” depends on your definition of the term “creative writing.” If you define creativity as something that only humans can do, then by definition it doesn’t qualify. But according to my definition — writing that displays a sense of experimentation, along with the creative and evocative use of metaphors and analogies and other forms of expression — OpenAI’s output definitely qualifies. Whether it is good creative writing is a matter of personal taste. Lots of people don’t like concrete or “found” poetry and think iambic pentameter is the real thing; some people like the kind of writing known as magic realism and others hate it.

Perhaps AI won’t write stories or poems or books or movies that win awards, or that are recognized as being as good as something Nabokov or Byron or Shakespeare might have created. But could an AI like the one Altman was using create something people might enjoy reading? I don’t think there’s any doubt that this could happen, and probably will. After all, terrible writers (by my definition) are some of the most popular authors who have ever put pen to paper. Are there things about the OpenAI story that I don’t like, or that I would recommend changing if I were the instructor? Sure. But based on my criteria, there is no question that OpenAI more or less aced the test.

Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.
