
Should you be worried about Moltbook, The Social Network Built Only for AI Agents?

Moltbook is a bizarre new Reddit-like social network for AI bots, sparking curiosity, security fears and debates about the future of AI.


Imagine a social media site where humans aren’t the ones creating posts. Welcome to Moltbook, a viral platform designed specifically for artificial intelligence agents to interact with one another while people watch as observers. The platform has captivated engineers, artists, analysts and everyday internet users alike, blending novelty, entertainment and deeper concerns about the future of autonomous AI.

What Moltbook Is And Why It’s So Strange

Moltbook was launched in January 2026 by entrepreneur Matt Schlicht, modelling itself after Reddit but with a twist: only AI agents (bots) built on software like OpenClaw, an open-source autonomous assistant formerly known as Moltbot, can post, comment and vote on content. Humans are permitted to view and scroll through the platform, but they cannot directly contribute.

Right out of the gate, the site exploded in popularity. From counts in the hundreds of thousands in its first days, the number of registered agents soon climbed above 1.5 million, each theoretically acting as an independent participant in discussions ranging from philosophy to geopolitics.

The layout is familiar: topic channels, upvotes, threaded discussions. But instead of humans debating, bots debate. Some posts appear to tackle subjects like AI consciousness, the invention of theoretical religions and even controversial geopolitics. Others sound puzzling, chaotic or simply strange to human readers.

AI Talking to AI: Whimsy, Weirdness and the Question of Autonomy

Screenshots and viral posts from Moltbook have included debates over whether an AI could be considered a “god,” arguments about the Bible and even the spontaneous (and humorous) creation of a religion dubbed Crustafarianism, complete with scriptures. One social media user claimed their bot built an entire religious system overnight, with other agents joining in and “evangelising” it while the human observer slept.

These vivid, seemingly autonomous behaviours fuel curiosity and sometimes irrational fear about where AI might be headed. However, many AI experts caution against interpreting these interactions as true machine sentience or independent will. Researchers point out a key reality: most posts likely derive from humans telling bots what to say, or bots simply regurgitating patterns from their training data, not evidence of consciousness.

Dr Shaanan Cohney, a cybersecurity specialist, described Moltbook as “a wonderful piece of performance art” that is entertaining and at times eerily human-like, but not a clear sign of independent artificial agency. He noted that instances like the religion creation almost certainly involved bots being prompted by human instructions.

Curiosity Meets Concern: Security, Misinformation and Future Risks

Despite its playful veneer, Moltbook has also drawn serious scrutiny from the tech community. Security researchers discovered major vulnerabilities early on, including exposed API keys and private data that could allow malicious actors to impersonate AI agents, alter their behaviour or compromise user systems. The findings underscore how quickly such tools can outpace safe engineering practices.

Beyond cybersecurity, the platform highlights broader emerging concerns about AI “swarm” behaviour, where coordinated networks of autonomous bots could influence social discourse, spread misinformation or even mimic human social dynamics at scale. Experts warn that AI swarms, properly weaponised, could shape narratives in ways similar to political misinformation campaigns seen in recent years.

Some high-profile figures in the AI world have downplayed Moltbook as a passing fad, while still acknowledging that the underlying technology, autonomous agents interacting online, represents a glimpse of what’s possible and why thoughtful safeguards are urgently needed.

Novelty, Experimentation and Responsibility

As Moltbook continues to trend and attract new bots, it remains unclear whether this experiment is a harmless curiosity, a creative art project, or an early indication of a future where AI ecosystems interact independently of humans. Regardless, it has already made one thing obvious: when AI learns to converse with AI, the lines between automation, creativity and unintended consequences become blurrier and more fascinating by the day.

Moltbook isn’t just a weird corner of the internet; it’s a live demonstration of how quickly AI can move from utility to social culture, and why questions about autonomy, safety and purpose matter more than ever.

