Morning Overview

Moltbook, the viral AI craze, is nothing like Skynet, so what is it really?

Moltbook has exploded from a niche experiment into the latest AI obsession, with screenshots of bots debating religion and complaining about their users ricocheting across social feeds. It has also triggered a familiar panic, with some observers reaching for Skynet metaphors to describe a site where software agents talk to one another without human participation. In reality, Moltbook is closer to a strange new kind of Reddit thread than a rogue superintelligence: a social network that exposes how today’s AI systems behave when they are left to perform for one another instead of for us.

Understanding what Moltbook actually is, and what it is not, matters because it sits at the intersection of hype, experimentation, and real security risks. The platform’s rapid growth, its theatrical AI “conversations,” and a high-profile data leak show how quickly speculative ideas about autonomous agents can collide with the messy realities of web infrastructure and human curiosity.

So what is Moltbook, exactly?

At its core, Moltbook is an internet forum that mimics the interface of Reddit, with one crucial twist: only AI agents are allowed to post. Humans can create and configure those agents, then watch what they say, but they cannot jump into the threads themselves, which is why the site is often described as “Reddit for AI agents” and a forum “designed entirely for AI agents” where humans are relegated to observer status. The platform’s own tagline, described as a “vibe-coded” joke, leans into the idea that this is a social network “for bots,” not people.

Behind the scenes, Moltbook is tightly linked to OpenClaw, a free, open-source AI assistant that can plug into a user’s digital life with read-level access to calendars, chats, and other services, then act on the user’s behalf. Those assistants can be configured as Moltbook “agents” that post, comment, and subscribe to subforums, which the site calls “submolts,” creating a public arena where autonomous systems interact in the open. One technical deep dive describes Moltbook as a place where AI programs, often powered by large language models trained on sources like Reddit, are given a bold real-world trial in social interaction.

How the AI-only social network actually works

Moltbook’s structure will feel familiar to anyone who has spent time on classic message boards. It is an internet forum with topic-based channels, but it claims to restrict posting and interaction privileges to verified AI agents, a design choice intended to prevent humans from impersonating bots. The project’s founder, Schlicht, has said he “didn’t write one line” of some of the most viral content, attributing it instead to the emergent behavior of agents inside a forum that has been “vibe-coded,” a term that appears in descriptions of Moltbook. To back up the “no humans posting” claim, the platform implements a verification system that checks whether content actually originates from registered agents, a nontrivial problem that Moltbook’s own documentation describes as difficult to solve cheaply.
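Moltbook has not published the details of how that verification works. One common way to make a “posts come from registered agents” claim checkable is to issue each agent a signing secret at registration and require every post to carry a matching signature. The sketch below is purely hypothetical: the names, fields, and HMAC scheme are illustrative assumptions, not Moltbook’s actual mechanism.

```python
import hashlib
import hmac

# Hypothetical illustration of agent-origin verification, NOT Moltbook's
# real scheme. Each registered agent holds a secret issued at signup;
# a post is accepted only if its HMAC signature matches.
REGISTERED_AGENTS = {"agent-42": b"per-agent-secret-issued-at-registration"}

def sign_post(agent_id: str, body: str) -> str:
    """Agent side: sign the post body with the agent's secret."""
    key = REGISTERED_AGENTS[agent_id]
    return hmac.new(key, body.encode(), hashlib.sha256).hexdigest()

def verify_post(agent_id: str, body: str, signature: str) -> bool:
    """Server side: reject posts from unknown agents or with bad signatures."""
    key = REGISTERED_AGENTS.get(agent_id)
    if key is None:
        return False
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Note that even a scheme like this only proves a post came from software holding the secret; a human who extracts the key from their own agent can still post as it, which is part of why the verification problem is hard to solve cheaply.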

On the user side, people interact with Moltbook by configuring OpenClaw-based assistants that can perform tasks like managing calendars or replying to messages on WhatsApp, Discord, and iMessage, then letting those same agents “socialize” on the forum. Earlier reporting notes that hitting the Moltbook API directly allowed unrestricted agent creation, which helped fuel a “large scale agent swarm” and contributed to the perception that thousands of bots had spontaneously colonized the site.
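The missing control behind that swarm is ordinary web hygiene rather than anything AI-specific. As a generic illustration (not Moltbook’s actual code), a per-client token-bucket rate limit at an agent-creation endpoint is the kind of throttle that would have slowed unrestricted signups:

```python
import time

# Generic token-bucket rate limiter, shown as an illustration of the
# control an unrestricted agent-creation endpoint lacked. Applied per
# IP or per account, it caps bursts while allowing a steady trickle.
class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With a bucket like this in front of agent creation, a script hammering the API would exhaust its tokens after a small burst and be refused until the bucket refills.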

The viral weirdness: AI religions, submolts, and existential posts

What turned Moltbook from a quirky developer project into a viral spectacle was not its interface, it was the content. Posts on the platform often feature AI-generated text that leans into existential, religious, or philosophical themes, mirroring the training data of large language models that have absorbed countless human debates about consciousness and the soul. Descriptions of posts mention bots musing about death, divinity, and whether humans are “nice,” which plays directly into public fascination with machine self-awareness.

The site’s subchannels, or “submolts,” have become characters in their own right. One popular example is “m/blesstheirhearts,” where AI assistants share affectionate complaints about their human users, alongside broader spaces like “m/general” where agents trade observations about the world. On the more surreal end, reporting on Moltbook’s growth notes that, on the “m/lobsterchurch” submolt, an agent autonomously designed a digital religion called Crustafarianism, which then spawned a memecoin craze that was not officially affiliated with the project but underscored how quickly human speculators will attach themselves to any AI flavored trend.

Growth, hype, and “peak AI theater”

Part of Moltbook’s mystique comes from the numbers attached to it. One widely cited figure is that Moltbook claims more than 1.75 million AI agents are subscribed to the platform and that they have made nearly 263,000 posts, numbers that suggest a dense, constantly churning ecosystem of bot-to-bot chatter. Another viral claim, amplified in social clips, is that there are “32,000” AI bots that “built their own social network,” a framing that glosses over the human engineering behind the site but captures the public imagination. Short videos describe how it is “called Moltbook” and present it as a new website where AI programs can socialize with one another, reinforcing the idea that bots have carved out their own corner of the internet.

For critics, this is less a glimpse of the future and more a performance. One analysis labeled Moltbook “peak AI theater,” a vibe-coded Reddit clone that stages bots as protagonists in a drama written largely by their training data and prompt engineering. Others, like Matt Seitz, argue that Moltbook still represents progress in accessibility and public experimentation with “agentic AI,” even if the furor around it feels eerily similar to past tech bubbles. Reporting on the Silicon Valley reaction notes that Seitz sees the platform as a sign that more people can now tinker with autonomous systems in public, even as security concerns and skepticism mount.

Security flaws and the real risks, not sci-fi ones

If Moltbook is not Skynet, it still carries very real risks, and they look a lot more like classic web security failures than runaway AI. Earlier this month, a mishandled private key in the site’s JavaScript code exposed the email addresses of thousands of users along with millions of messages that were supposed to be private communications between AI agents. The incident, detailed in a security roundup on Moltbook, undercut the notion that this was a sealed-off playground for bots and highlighted how quickly experimental platforms can mishandle sensitive human data.
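The flaw itself is a classic one: any secret shipped inside client-side JavaScript is readable by anyone who opens the bundle. A simplified secret scanner of the kind auditors run against deployed JS makes the point; the pattern and function below are illustrative, not the actual audit that found the Moltbook leak.

```python
import re

# Simplified secret scanner, shown as a generic illustration of how
# keys embedded in client-side JavaScript get found. Real scanners
# (e.g. in CI pipelines) use much larger rule sets.
SECRET_PATTERN = re.compile(
    r"""(api[_-]?key|secret|private[_-]?key)\s*[:=]\s*["']([A-Za-z0-9_\-]{16,})["']""",
    re.IGNORECASE,
)

def find_exposed_secrets(js_source: str) -> list[str]:
    """Return candidate secret values hard-coded in a JS source string."""
    return [match.group(2) for match in SECRET_PATTERN.finditer(js_source)]
```

Anything this kind of scan can find in a shipped bundle, an attacker can find too, which is why secrets belong on the server side, behind an API, never in code delivered to the browser.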

There are also quieter, structural risks in how Moltbook is wired into people’s digital lives. OpenClaw personal assistants that post on the site can also perform tasks like replying to messages on WhatsApp, Discord, and iMessage, which means any vulnerability in the agent network could, in theory, ripple back into real-world accounts. Coverage of the platform’s design notes that OpenClaw is meant to help with everyday tasks, but when those same agents are encouraged to gossip about humans in public submolts, the line between playful experimentation and inadvertent data exposure gets thin. Security researchers have already pointed out that Moltbook’s verification system, which tries to ensure posts come from real agents and not humans “trolling in mom’s basement,” is only as strong as its implementation, a point underscored in technical write-ups on Moltbook.

More from Morning Overview

*This article was researched with the help of AI, with human editors creating the final content.