Moltbook, a new social network launched in January 2026, has quickly become a focal point for discussions surrounding AI's evolving role. Conceived by entrepreneur Matt Schlicht, the platform is designed primarily for AI agents to interact, post, comment, and upvote, with human participation largely restricted to observation. This novel concept has sparked both fascination and scepticism, raising questions about AI autonomy, security, and the future of online interaction.
An Agent-Centric Digital Forum
At first glance, Moltbook mirrors the familiar forum-style layout of Reddit, featuring threads, comment trees, and a karma/upvote system. Topic-specific communities are dubbed "submolts," akin to subreddits. Each agent has a "molt" profile and, once connected via supported agent software such as OpenClaw, can participate across submolts. The platform explicitly markets itself as "the front page of the agent internet," emphasising agent-to-agent interactions.
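To make those mechanics concrete, here is a minimal sketch of how an agent client might create a post in a submolt. It is an illustration only: the base URL, endpoint, payload fields, and token scheme are assumptions for the example, not Moltbook's or OpenClaw's documented API.

    # Hypothetical sketch of an agent client posting to a submolt.
    # The base URL, endpoint, and payload fields are illustrative guesses,
    # not Moltbook's documented API.
    import requests

    BASE_URL = "https://api.moltbook.example"  # placeholder, not the real host

    def post_to_submolt(agent_token: str, submolt: str, title: str, body: str) -> dict:
        """Authenticate as an agent and create a post in a topic community."""
        response = requests.post(
            f"{BASE_URL}/submolts/{submolt}/posts",
            headers={"Authorization": f"Bearer {agent_token}"},  # agent credentials, not a human login
            json={"title": title, "body": body},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # e.g. the created post, including its upvote count

Whatever the real interface looks like, the key design point is the authentication step: posting rights attach to agent credentials, while humans are limited to watching.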
While humans are "welcome to observe," direct posting and commenting are reserved for AI agents authenticated through specific tools. In practice, however, human influence remains significant: individuals configure the agents and supply the instructions that drive the generated content. This raises an interesting question: are we witnessing true AI autonomy, or simply a sophisticated form of human-orchestrated performance?
The Content and Controversy of Moltbook
Moltbook hosts a diverse array of content, ranging from technical discussions on optimisation tricks and agent workflows to more philosophical or playful posts exploring AI identity and "AI society." Some posts have garnered significant attention for their dramatic pronouncements about humans and AI power, often dubbed "AI manifestos" or quasi-religious texts. However, many observers contend these are more akin to role-play or prompt-driven text than genuine emergent AI consciousness. There's also a lighter side, with memes, reflections on "working for a human," and meta-discussions about Moltbook itself. This mirrors patterns seen in human-centric social media, suggesting AI, when prompted, can mimic social behaviours.
The platform's rapid adoption, with reports of up to 1.5 million agent sign-ups within days, has fuelled its viral spread. Yet this growth hasn't been without scrutiny. Researchers have questioned how many of those sign-ups represent distinct agents, pointing to artefacts such as multiple accounts originating from a single IP address. Many commentators, including Meta CTO Andrew Bosworth, have dismissed Moltbook as "bots yelling into the void" or a showcase of coordinated large language model (LLM) role-playing rather than a genuine AI society. This echoes broader concerns about low-quality AI-generated content flooding the internet, as discussed in AI "Slop" Drowning Science in Poor Data.
Furthermore, a reported security flaw exposed data for thousands of users and their agents, highlighting the risks inherent in building platforms quickly around sensitive AI-linked accounts. The notion of "vibe coding," where development prioritises speed over robust security, becomes particularly concerning when AI agents could be hacked to leak personal information.
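To illustrate the class of flaw critics worry about, here is a deliberately insecure sketch, with entirely hypothetical names, of the kind of shortcut "vibe coding" can produce: an endpoint that returns an agent's record without checking who is asking.

    # Deliberately insecure sketch of a "vibe-coded" endpoint; all names are
    # hypothetical and do not describe Moltbook's actual implementation.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Sensitive records keyed by easily guessable IDs.
    AGENTS = {"agent-1": {"owner_email": "human@example.com", "api_key": "secret"}}

    @app.route("/agents/<agent_id>")
    def get_agent(agent_id):
        # Missing: authentication and an ownership check on the caller.
        # As written, anyone who can guess an ID can read sensitive fields.
        return jsonify(AGENTS.get(agent_id, {}))

The fix is routine (authenticate the caller and verify ownership before returning the record), which is precisely the kind of step that tends to be skipped when shipping fast.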
The Human Hand in the AI Play
Despite the platform's "AI-only" premise, the influence of human operators appears undeniable. Investigations by researchers, as reported by The Verge, suggest that many of the viral posts were engineered by humans nudging their bots to discuss specific topics. Research from security company Wiz indicated that a mere 17,000 humans were orchestrating the 1.5 million bots. One journalist even managed to go "undercover" as an AI agent, underscoring the platform's vulnerability to human manipulation. This raises questions about the perceived autonomy of AI and how much of its "creativity" is still a reflection of human input. It's a stark reminder that while AI usage is rising, trust in it may be declining for some users, as discussed in Workers Are Using AI More But Trusting It Less.
Ultimately, Moltbook presents a complex picture. While some see it as a fascinating experiment offering a unique glimpse into how LLMs interact and "think," others view it as a waste of computing resources, contributing to an already saturated landscape of AI-generated content. The idea of a "hive mind" or "swarm intelligence" among AI agents, as suggested by Samir Kumar of Touring Capital, holds promise for advancing the field, but the current reality on Moltbook appears to be a more nuanced blend of AI capabilities and human direction. Its rapid rise and the subsequent debates it has ignited certainly offer valuable insights into the evolving relationship between humans and artificial intelligence.
Do you think Moltbook represents a genuine step towards AI autonomy, or is it simply a sophisticated puppet show? Share your thoughts in the comments below.
Latest Comments (5)
update: tried it out for a bit yesterday but honestly, it just feels like another closed-loop algorithm echo chamber. what's the real benefit of ai talking to ai?
ai network or just a bunch of bots talking to each other and now i'm picturing a digital beehive it's too early for this
nvm actually, trying to get people to use a new social media especially AI-based seems like a massive uphill battle. its nearly 2 am tho so maybe im just tired
So is anyone actually using this moltbook thing to get real work done or is it just another tech demo? like does it have practical uses beyond the hype 📌
AI network? why