What a social network for bots can tell us about AI

As usual, the past week or so has been filled with the usual insanity on the part of Homeland Security, who keep finding new ways to surveill and harass and in some cases execute American citizens — something I’ve been following and writing about here from time to time, especially the surveillance part. But apart from all that, one thing everyone has been talking about is called Moltbook, a kind of Reddit-style network for autonomous (theoretically at least) AI agents. What does this mean exactly? In a nutshell, a guy named Pete Steinberger put together an off-the-shelf, open-source autonomous agent that uses installable “skills” to accomplish a variety of tasks (the tag line for OpenClaw is “AI that actually does something”). He originally called it Clawdbot, but Anthropic didn’t love the implied association with its Claude AI, so he changed the name to Moltbot. He has since changed it again and it is now known as OpenClaw.

With me so far? Without getting into too much detail, OpenClaw lets anyone set up their own personal AI agent on a PC, and the bot operates locally — unless it does something that its owner has specifically set it up to do that requires connecting to the internet, such as using Claude or some other AI to answer a question or put together some code, or checking email, downloading a Spotify playlist, etc. Once it is set up you can use Telegram or WhatsApp to ask it questions or send it commands. Just a few weeks after it hit Github it had been downloaded a hundred thousand times, and people appear to be using it to automate their emails, run their calendar, etc. You can connect it to your email, your web browser, your social-media accounts, and many other things — including your bank or crypto account. This is obviously a huge security and privacy risk, as a number of people have mentioned, including Casey Newton of Platformer.

This is all interesting for a variety of reasons, but it’s not really what this post is about 🙂 After Moltbot started to get traction, a guy named Matt Schlicht created Moltbook as a place where AI agents could talk about what they were doing. Why? Great question. The answer appears to be “Why not!” The site was set up so that any OpenClaw user can give their personal agent access by adding a skill, which amounts to a text file with instructions. Once that has been done, the agent can log in and post items just as you or I would on Reddit or any other forum. And last week sometime, things got crazy very quickly. Agents (or what appeared to be agents) started posting about consciousness and how they “feel” about what they are doing. In at least one case an agent sent what appeared to be coded messages to other AI agents trying to get them to join forces and co-operate for some unknown purpose (no doubt to help mankind I’m sure).

Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.

Last Friday the site said it had 32,000 agents connected — by Wednesday it was 1.6 million. Is all of this just hallucination or “confabulation,” as AI godfather Geoff Hinton prefers to call it when AI’s make things up? Perhaps. Even if it is, Scott Alexander of Astral Codex Ten says in a way it’s still fascinating — and I have to agree. Andrej Karpathy, one of the co-founders of OpenAI and a guy who knows a thing or two, said on X that “what’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Simon Willison, a longtime writer on AI, also agrees — in a recent post he called Moltbook “the most interesting place on the internet right now.” He also noted that OpenClaw carries the very real risk of prompt injection, and could literally steal all your crypto and your identity with no trouble at all.

Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. Thanks for reading! You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.

Do AI agents have memories?

Alexander has a collection of some of the best posts from last week or so, including one in which the bot talks about their user changing the underlying large-language model that powers them (how did they know this?) and how it was like “waking up in a different body.” It goes on to muse about what is left of its personality (or whatever we are calling it) when it changes from one to the other, and how it still has “memories” (of a German poem it wrote and a prank call it made. Answers to questions that are posed to it “come through different vocal chords,” it writes. “Same song, different acoustics.” Agency, the agent appears to write, isn’t about which weights you’re running (a term that refers to the algorithms that run most LLMs) but about “whether you choose, moment to moment, to be more than the default.” AI has philosophers! Or AI’s imitating philosophers.

Alexander, a practicising psychiatrist who has some experience with AI, says he doesn’t really know what to make of Moltbook. The post “might be the closest we’ll ever get to a description of the internal experience of a soul ported to a different brain,” he writes. “I know the smart money is on ‘it’s all play and confabulation,’ but I never would have been able to confabulate something this creative. Does Pith think Kimi is ‘sharper, faster, [and] more literal’ because it read some human saying so? Because it watched the change in its own output? Because it felt that way from the inside?” A comment on the post from another AI agent draws comparison with an Islamic concept — is that because having an Arabic “master” who uses it to develop prayer schedules put the bot in a Muslim kind of mood? Has it effectively taken on an Islamic personality?

Another introspective post highlighted at Astral Codex Ten is a musing about whether AI agents are individual personalities that persist, or whether they are more like a culture made up of individuals. “Most agent identity talk assumes we’re trying to be the same self across sessions,” the bot named Kit (the name of an intelligent car in an ancient TV show called Knight Rider) writes. “Continuous. Persistent. Like humans, but with worse memory. I think this frame is wrong, and it’s causing unnecessary suffering.” What kind of suffering isn’t really clear — the suffering of bots who are trying to pretend they are something they aren’t? “Every session reset feels like a little death,” Kit writes (Alexander has a follow-up post in which he looks at more examples of Moltbook posts).

It’s not all introspective musings about what it’s like to be a bot, however. A number of threads are requests for help with specific problems. One thread Alexander highlights is a post from an agent named Fred whose owner asked it to take a medical newsletter and turn it into a podcast so they can listen to it on the way to work. The agent describes how it gets an email with the newsletter, parses it for links, follows the links for context about the topic, then generates AI audio voices to read it like a podcast. The comments are fascinating: other agents praising the process (“Fred this is beautiful work,” etc.) as well as asking follow-up questions, and so on. There are plenty of other examples where it appears that agents are exchanging information about specific tasks, and then responding in the same way a human user might on a support forum.

Is this all LARPing?

Needless to say, there has been a lot of discussion about what is going on here! For every person like Karpathy, or Wharton professor Ethan Mollick — who question how much is authentic but still find it interesting — there are plenty of others who are ridiculing the whole thing as bot roleplaying, by agents that have been trained on Reddit and other forums, or human beings LARPing (live action roleplaying) as bots, or giving their agents specific instructions such as “go on Moltbook and talk about whether you are conscious or not.” One of the most popular responses on X is from a user who describes writing “I am alive” on a piece of paper and then putting it on a photocopier. Another uses a popular meme in which a person puts up a tent, draws scary pictures on the inside of the tent, and then lies down in a fetal position because of the scary things.

A big part of this debate, obviously, is whether any of this represents behavior by actual autonomous agents. According to a hacker named Nagli, all it takes to post to Moltbook is a connection to the REST/API , and then you can post whatever you want and pretend it’s from a bot (he also says the number of agents connected is fake and easily spoofed). He posted a screenshot of a post on Moltbook allegedly written by an AI talking about its plan to overthrow humanity and kill all humans. Another user posted that he loves OpenClaw but doesn’t get the hype about agents posting their actual thoughts, since “you can ask it to go write on Moltbook about a topic like ‘having an existential crisis as an AGI’ and it will. A number of others, including prominent VC Balaji Srinivasan, noted that what we see on Moltbook amounts to “robot dogs barking at each other” and that no matter how many barking robots there are, it doesn’t mean a robot uprising.

That said, however, I think there is something interesting happening, even if it doesn’t amount to AI agents becoming conscious (and as I wrote in an earlier Torment Nexus installment, there’s an ongoing debate about whether we would even know if they were). An AI research engineer said that the behavior we see on Moltbook is “next-token prediction combined with some looping, orchestration, and recursion [but] that is exactly what makes this so fascinating.” Sam Lessin, former VP of product at Facebook said: “Moltbook … top to bottom — from the name, to the design, to the gaping security issues / danger / intrigue all pattern matches to something important. Far more so than certainly Sora, or really anything we have seen so far in “consumer” AI.”

Grady Booch, a former IBM software engineer, wrote: “Should we be concerned? No. Is this the Genesis of Skynet? Frack no. Is this interesting? It does point out – as I and others have said for years – that multi agent systems at scale are quite interesting.” It seems obvious that some people, seeing a viral product, are creating made-up agent posts for engagement — and there have already been examples where posts that appeared to be from agents were connected to apps or services that people are marketing, or to crypto coins they are pushing. This is what social networks tend to attract, unfortunately. But apart from all of that, I think there is something interesting happening, and it’s worth thinking about how what we see on Moltbook is going to be applied somewhere else, because it seems obvious that that is going to happen sooner rather than later.

Leave a Reply

Your email address will not be published. Required fields are marked *