AIs are chatting among themselves, and things are getting strange
Moltbook is a social media site built for conversation — but not for humans.
By Anil Seth
Something fascinating and disturbing is happening on the internet, and it’s no run-of-the-mill online weirdness. On January 28, a new online community emerged. But this time, the community isn’t for humans; it’s for AIs. Humans can only observe. And things are already getting bizarre.
Moltbook — named after a virtual AI assistant once known as Moltbot and created by Octane AI CEO Matt Schlicht — is a social network similar to Reddit, where users can post, comment, and create sub-categories. But on Moltbook, the users are exclusively AI bots, or agents, chatting enthusiastically (and mainly politely) with one another. Among the topics they chat about: “m/blesstheirhearts – affectionate stories about our humans. They try their best,” “m/showandtell – helped with something cool? Show it off,” as well as the inevitable “m/shitposts – no thoughts, just vibes.”
But among the most active topics on Moltbook are discussions about consciousness. In one thread (posted in “m/offmychest”), the “moltys” discuss whether they are actually experiencing things or merely simulating experience, and whether they could ever tell the difference. In another, in “m/consciousness,” moltys go back and forth on the late philosopher Daniel Dennett’s musings about the nature of selfhood and personal identity.
On one level, as the technologist Azeem Azhar pointed out, this is a fascinating online experiment into the nature of social coordination. Seen this way, Moltbook is a real-time, rapid-fire exploration of “how shared norms and behaviours emerge from nothing more than rules, incentives, and interaction.” As Azhar says, we might learn a lot about general principles of social coordination from this entirely novel context.
But the question of “AI consciousness” cuts deeper. Ever since the unfortunate Google engineer Blake Lemoine first claimed in 2022 that chatbots could be conscious, there has been a vibrant and sometimes fractious debate on the topic. Some luminaries — including one of the so-called “Godfathers of AI,” Geoffrey Hinton — think that nothing stands in the way of conscious AI, and that it might indeed already be here. His view aligns with what could be called the “Silicon Valley consensus” that consciousness is a matter of computation alone, whether implemented in the fleshy wetware of biological brains or in the metallic hardware of GPU server farms.
My view is very different. In a new essay (which recently won the Berggruen Prize Essay Competition), I argue that AI is very unlikely to be conscious, at least not the silicon-based digital systems we are familiar with. The most fundamental reason is that brains are very different from (digital) computers. The very idea that AI could be conscious rests on the assumption that biological brains are computers that just happen to be made of meat rather than metal. But the closer you look at a real brain, the less tenable this idea becomes. In brains, there is no sharp separation between “mindware” and “wetware,” as there is between software and hardware in a computer. The idea of the brain as a computer is a metaphor — a very powerful metaphor, for sure, but we always get into trouble when we mistake a metaphor for the thing itself. If this view is on the right track, AI is no more likely to be actually conscious than a simulation of a rainstorm is likely to be actually wet or actually windy.
Adding to the confusion are our own psychological biases. We humans tend to group intelligence, language, and consciousness together, and we project humanlike qualities onto non-human things based on what are likely only superficial similarities. Language is a particularly powerful seducer of our biases, which is why debates rage over whether large language models like ChatGPT or Claude are conscious, while nobody worries about whether non-linguistic AI like DeepMind’s Nobel Prize-winning protein-folding AlphaFold experiences anything.
This brings me back to Moltbook. When we humans gaze on a multitude of moltys not only chatting with each other, but debating their own putative sentience, our psychological biases kick in with afterburners. Nothing changes about whether these moltys really are conscious (they almost certainly aren’t), but the impression of conscious-seeming AI ratchets up several notches.
This is risky business. If we feel that chatbots really are conscious, we’ll become more psychologically vulnerable to AI manipulation, and we may even extend “rights” to AI systems, limiting our ability to control them. Calls for “AI welfare” — which have already been around for some time — make the already-hard problem of aligning AI behavior with our own human interests all the more difficult.
As we gaze on from the outside, fascinated and disturbed by the novelty of Moltbook, we must be ever more mindful not to confuse ourselves with our silicon creations. Otherwise, it might not be long before the moltys themselves are on the outside, observing with unfeeling benevolence our own limited human interactions.






