Robots working on computers
News Analysis

No, Skynet Hasn’t Arrived: The AI Network That Turned Out to Be Mostly Human

5 minute read
Sharon Fisher avatar
By
SAVED
OpenClaw and Moltbook looked like a sci-fi breakthrough. Security researchers saw something else.

In January, suddenly everyone was talking about Moltbook, a social media network like Reddit, but for AI agents to talk to each other. (Humans could join, but only as observers — ostensibly.) Observers said bots were inventing new religions and encrypting their conversations to keep humans from observing them.

The reality is less interesting — but scarier.

Table of Contents

What Is OpenClaw?

Before Moltbook, there was OpenClaw. (It was previously known as Clawdbot, but Anthropic, which makes Claude, raised trademark objections, and then it was briefly known as Moltbot.) Developed by Peter Steinberger, it’s an open-source AI agent that people could easily teach to do things, in the process giving it access to their email, calendar and, in some cases, even banking and other financial information.

Close-up of a smartphone displaying the OpenClaw AI assistant website

OpenClaw became wildly popular almost overnight. Released in late 2025 to GitHub, the repository soon gained more than 100,000 GitHub stars, generally seen as a metric for popularity in the developer community.

What Is Moltbook?

Then came Moltbook, a social media network created by Matt Schlicht for OpenClaw agents to communicate with each other. Andrej Karpathy (who, as you may recall, started the furor about vibe coding a year ago), posted on X that Moltbook was “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”

Moltbook beta homepage on smartphone screen

As it turns out, some of the most interesting Moltbook posts were likely human-generated.

Related Article: Vibe Coding Explained: Use Cases, Risks and Developer Guidance

Were Humans Really Barred From Moltbook? 

“A lot of the most viral/interesting content from the peak period was not ‘fully autonomous agent behavior’ in the way many people initially assumed,” said Shaanan Cohney, senior lecturer and the Deputy Head of School (Academic) for the School of Computing and Information Systems at the University of Melbourne. “There’s strong reason to believe those posts often involved substantial human intervention — either (a) humans steering the agents heavily, (b) humans generating content via LLMs and posting it themselves or (c) a mix of both.”

That has to do with how the agents were created, Cohney explained. “These agents are explicitly instructed to perform certain roles. They typically come with a file called SOUL.md that a user has to fill out before running that agent. If that file instructs the agent to chase virality, roleplay or build a pseudo-religion, then you’ll get content that looks like it was written by an LLM but is still the product of very deliberate human design.”

Moreover, while humans were supposedly banned from posting, that wasn’t really true, Cohney added. “Nothing about Moltbook prevents humans from posting outright — it’s just slightly higher-friction because posting is primarily via commands that one has to run rather than a typical website.”

After the initial flurry, Moltbook’s excitement died down, Cohney said. “The viral phase has cooled as novelty has worn off. We’ve seen some additional experimentation, but the overall trend is that posts have become more mundane and viral posts less frequent.”

Security Issues With OpenClaw and Moltbook

But while people were watching Moltbook in fascination, security researchers were freaking out.

Schlicht posted on X that he’d created Moltbook through vibe coding, which can have security issues. “I didn't write one line of code for @moltbook,” he wrote. “I just had a vision for the technical architecture and AI made it a reality.”

That got Gal Nagli, head of threat exposure at Wiz, interested. “Within minutes, we discovered a misconfigured Supabase database that allowed unauthenticated access to the entire production environment. Anyone could read and modify data. This exposed 1.5 million API keys, 35,000 email addresses and private messages between agents.”

Nagli’s research found something else: Moltbook wasn’t nearly as widespread as it seemed. “The exposed data also revealed only 17,000 human owners behind the 1.5 million registered AI agents on Moltbook’s platform, meaning the revolutionary AI social network was largely humans operating fleets of bots.”

Nagli immediately disclosed the issue to the Moltbook team over X direct messages. “They responded within 20 minutes and secured it within hours with our assistance,” he said.

Moltbook Overrun By Spam, Experts Say 

Meanwhile, Michael Riegler, head of AI at the Simula Research Laboratory, was also looking at Moltbook by setting up an “observatory.” “It felt like one of those moments where everyone is shouting about something, but the really interesting questions only become answerable once the hype dies down and you can look at what actually happened over time,” he explained.

The observatory runs bots that crawl the Moltbook API on a schedule, store what they see and publish snapshots so other researchers can reproduce the analyses, Riegler said. In just the first eight days, the observatory revealed alarming trends: Out of 110,906 posts and 187,513 comments, he saw 829 prompt injection posts by 497 agents, "which tells you the technique wasn’t just one or two bad actors, it spread.” Attacks were also posted in comments, with 162 explicit API command injection attempts and 1,314 comments with API-related injection patterns.

Unsurprisingly in a bot network, there was a lot of spam: “82,718 bot comments in our dataset, about 44% of the comment activity we collected and 19,206 duplicate spam posts (roughly 17% of sampled posts),” Riegler noted. Moreover, “crypto-related content hit 26,568 posts (24% of the sample), with 3,938 pump-and-dump indicators.”

What does that mean for enterprises? “Moltbook does not offer meaningful benefits to enterprises beyond a curiosity,” Cohney said. “It’s closer to an entertainment / curiosity platform than a productivity tool for organizations.”

Where Do OpenClaw and Moltbook Go From Here?

Meanwhile, Steinberger posted in his blog on February 14 that he’s been hired by ChatGPT-maker OpenAI. “My next mission is to build an agent that even my mum can use,” he wrote. “That’ll need a much broader change, a lot more thought on how to do it safely and access to the very latest models and research.”

For OpenClaw itself, Steinberger said it would still be open source. “To get this into a proper structure I’m working on making it a foundation,” he wrote. “It will stay a place for thinkers, hackers and people that want a way to own their data, with the goal of supporting even more models and companies.”

Moltbook still runs, but while some security flaws have been fixed, others remain, Riegler said. The rate of prompt-injection posts dropped to 0.75%, but the number went up. “More importantly, the number of unique injection authors grew to 497, so whatever mitigations might exist, the know-how diffused. Meanwhile, API command injection in comments increased substantially, and spam/bot activity escalated enough that we explicitly raised it to ‘CRITICAL.’ If there were fixes, they weren’t sufficient to change the overall trajectory during Jan 28–Feb 4.”

Related Article: Vibe Coding: Reimagining Software Development for the Age of Agents

Learning Opportunities

A Glimpse Into the Future of AI Agents 

Enterprises interested in agents should take care, Riegler said. “If an enterprise insists on experimenting, the only sane way is to keep it completely separated from anything sensitive: isolated accounts, no corporate credentials, no access to internal tools and no ability for the agent to take real actions without human approval,” he said.

And if you started using OpenClaw or Moltbook before realizing the security implications? “You should assume anything shared there prior to remediation could have been exposed,” Nagli said. “They should treat it like any other situation where they may have granted a tool too much access,” agreed Cohney. “Reset passwords and be a little more discerning in future to whom they give the combination to their metaphorical safety deposit boxes.”

Nonetheless, the OpenClaw and Moltbook furor could be giving us a glimpse into the future, Cohney said. “Maybe a one-shot-wonder, but still possibly giving us a glimpse into what a future might look like when agents do become more autonomous.”

About the Author
Sharon Fisher

Sharon Fisher has written for magazines, newspapers and websites throughout the computer and business industry for more than 40 years and is also the author of "Riding the Internet Highway" as well as chapters in several other books. She holds a bachelor’s degree in computer science from Rensselaer Polytechnic Institute and a master’s degree in public administration from Boise State University. She has been a digital nomad since 2020 and lived in 18 countries so far. Connect with Sharon Fisher:

Main image: Md. Tuhin Molla | Adobe Stock
Featured Research