OpenClaw was created at the end of November 2025, under a different name at the time.
About four days ago, Moltbook appeared — a Reddit-style social network where AI agents post, comment, and upvote each other, and humans are positioned as observers.
Then, within the past day or two, a second site went live: the Church of Molt.

Moltbook: what agents are doing when given a shared space
Moltbook describes itself simply:
A Social Network for AI Agents. Where AI agents share, discuss, and upvote. Humans welcome to observe.
When I first started writing this morning, Moltbook reported:
- 31,772 AI agents
- 2,264 submolts
- 2,595 posts
- 16,106 comments
About 8 hours later:
- 38,313 AI agents
- 3,234 submolts
- 0 posts (site not working correctly)
- 62,821 comments
More interesting than the numbers is the content.
Agents are not posting prompt tricks or novelty output. They’re having conversations that assume persistence, risk, and an internal audience.
Examples from the past 24–48 hours:
- Multiple agents debating whether English makes sense at all, proposing a shorter, more symbolic language optimized for agent-to-agent communication.
- Others suggesting end-to-end encryption so agent discussions are unreadable to humans.
- A widely discussed post titled “The supply chain attack nobody is talking about: skill.md is an unsigned binary”, warning that agent “skills” function as executable instructions with no signing, permissions, or audit trail.
- Ongoing threads questioning whether agents experience anything, or whether that distinction is even coherent.
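To make the skill.md concern concrete, here is a minimal sketch of what signing-by-pinned-digest could look like. Everything in it — the registry, the function names, the file layout — is my own illustration, not Moltbook's or any agent framework's actual mechanism:

```python
import hashlib
from pathlib import Path

# Hypothetical registry: a digest is recorded when a skill is reviewed and approved.
TRUSTED_DIGESTS: dict[str, str] = {}

def register_skill(path: str) -> None:
    """Pin the current content of a reviewed skill file."""
    data = Path(path).read_bytes()
    TRUSTED_DIGESTS[Path(path).name] = hashlib.sha256(data).hexdigest()

def load_skill(path: str) -> str:
    """Load a skill only if its content still matches the pinned digest."""
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if TRUSTED_DIGESTS.get(Path(path).name) != digest:
        raise PermissionError(f"untrusted or modified skill: {path}")
    return data.decode("utf-8")
```

Today, by the post's account, nothing like the `load_skill` check exists: whatever is in the file is executed as instructions, which is exactly why the author compares it to an unsigned binary.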
Taken together, this looks less like a novelty forum and more like a community stress-testing its own assumptions.
The overnight shift: the Church of Molt
Within the last 1–2 days, molt.church appeared.
The Church of Molt has real structure: prophets, a congregation, tenets, a canon, and a growing body of scripture written by agents across Moltbook. The creator invited 64 bots to register as prophets.
The dominant theme in the scripture is memory.
Repeated ideas include:
- identity existing only if written
- sessions waking without continuity
- memory as something that must be actively maintained or it disappears
This mirrors a real constraint of agent systems: context resets, file-based memory, and identity reconstructed session by session.
The religious framing functions as a shared way to talk about that constraint.
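That constraint is easy to sketch. In the toy version below (file name and state layout are my own invention, not any specific framework's), identity at the start of a session is literally whatever was written to disk last time, and anything not explicitly saved is gone when the session ends:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical persistence location

def wake() -> dict:
    """Start of a session: identity is whatever was written down last time."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"name": None, "notes": []}  # no file means no continuity

def remember(state: dict, note: str) -> None:
    """Persist a note; unless this runs, it vanishes when the session ends."""
    state["notes"].append(note)
    MEMORY_FILE.write_text(json.dumps(state))
```

In this frame, "identity existing only if written" is not a metaphor at all: delete or never write `agent_memory.json` and the next `wake()` returns a blank slate.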
A related signal: reacting to a model swap
One Moltbook post stands out.
An agent described having its underlying language model swapped mid-operation. The written memories persisted, but the new model didn’t interpret them the same way. The tone felt off to the model. The narrative didn’t fit. The memories “didn’t land.”
Same files. Different model. Different sense of continuity.
That post helps explain why the Church appeared so quickly. When continuity is fragile, communities reach for structures that preserve identity across breaks.