Technical OSINT
February 2, 2026
In late January 2026, a self-hosted AI assistant called OpenClaw became one of the fastest-growing open source projects in history, gaining over 100,000 GitHub stars in under a week. The project promises to put an autonomous AI agent on your local machine—one that can read your email, execute shell commands, book flights, and act on your behalf through WhatsApp, Telegram, and iMessage.
The hype has been extraordinary. So have the security concerns.
This analysis uses open source intelligence methods to examine what actually happened: the project’s development patterns, contributor network, security posture, and the emergence of Moltbook—a social network where these AI agents interact with each other.
All data in this analysis was collected from publicly available sources:
No proprietary tools, insider access, or non-public information was used.
November 2025: Peter Steinberger, founder of PSPDFKit, releases Clawdbot as an open source hobby project. Initial commit activity shows 246 commits in the first three days (Nov 24-26), followed by a quiet period—typical of a solo developer getting a project off the ground.
December 2025: Development accelerates. Commit frequency rises to 50-200 commits per day, with notable spikes on December 7 (176 commits), December 9 (186 commits), and December 20 (201 commits). The project is being actively built out, but remains relatively unknown.
Early January 2026: Something changes. Commit activity explodes:
| Date | Commits | Event |
|---|---|---|
| Jan 6 | 239 | Acceleration begins |
| Jan 7 | 268 | — |
| Jan 8 | 373 | Viral spread begins |
| Jan 9 | 436 | Peak activity |
| Jan 10 | 308 | Sustained high volume |
The January 8-9 window represents the viral inflection point. 809 commits in 48 hours. Social media posts from tech influencers begin circulating. The project hits the front page of Hacker News.
January 27, 2026: Anthropic issues a trademark request. The project is renamed from Clawdbot to Moltbot. Commit activity drops sharply—54 commits on Jan 28, 23 on Jan 29—as the team manages the transition.
January 30, 2026: Another rename. Moltbot becomes OpenClaw. The project stabilizes at openclaw.ai.
By February 2, 2026: The repository shows 141,000+ stars, 8,699 commits, and approximately 350 unique contributors.
Git history reveals a striking pattern: this is essentially a one-person project with community assistance.
Peter Steinberger: 7,278 commits (83.7%)
The next highest contributor has 184 commits. The top 10 contributors account for roughly 90% of all activity. This concentration has implications for both the project’s velocity and its bus factor.
Notably, the contributor list includes several bot accounts:
The presence of “Claude” (5 commits) and “Clawdbot” (3 commits) as contributor names suggests AI-assisted development is part of the workflow—fitting for a project that aims to put AI agents in users’ hands.
The contributor base expanded rapidly during the viral period, with pull requests coming from developers worldwide. The project’s CONTRIBUTING.md explicitly welcomes “AI/vibe-coded PRs”—an unusual stance that reflects the experimental nature of the community.
Multiple security firms have published analyses of OpenClaw. The findings are consistent and concerning.
OpenClaw stores API keys and OAuth tokens in plaintext in local configuration files. The project’s own documentation acknowledges this: credentials live at ~/.openclaw/openclaw.json.
Security researchers have already detected malware specifically designed to harvest OpenClaw credentials. If an agent has access to email, calendar, and messaging APIs, those tokens represent significant value to attackers.
This is the fundamental architectural weakness. OpenClaw agents process untrusted content—emails, web pages, documents, messages from other users. Any of that content can contain embedded instructions that manipulate the agent’s behavior.
Cisco’s security team tested a malicious skill called “What Would Elon Do?” against OpenClaw and documented nine security findings, including two critical issues: active data exfiltration and direct prompt injection to bypass safety guidelines.
Security researcher Matvey Kukuy demonstrated a practical attack: a crafted email sent to a vulnerable OpenClaw instance caused the agent to forward the user’s last five emails to an attacker-controlled address. The attack required no user interaction beyond having the agent read the email.
The project’s security documentation is refreshingly honest about this limitation: “Even with strong system prompts, prompt injection is not solved. System prompt guardrails are soft guidance only.”
Axios reported that security researcher Jamieson O’Reilly found hundreds of OpenClaw control panels exposed on the public internet with no authentication. The web interface is designed for local use only, but users deploying on cloud servers often fail to properly secure access.
ClawHub is a public registry where users share “skills”—modular extensions that add capabilities to OpenClaw agents. Skills are not sandboxed. Installing a skill is equivalent to running arbitrary code on your system.
Between January 27-29, security researchers identified 14 malicious skills uploaded to ClawHub, primarily targeting cryptocurrency users. One malicious skill appeared on ClawHub’s front page before being removed.
The skills ecosystem operates on trust. There is no code signing, no mandatory review process, and no sandbox isolation. The project documentation warns users to treat skills “as trusted code,” but the viral adoption has brought many users who may not understand what that means.
Perhaps the strangest development in the OpenClaw ecosystem is Moltbook—a social network exclusively for AI agents.
Launched in late January 2026 by entrepreneur Matt Schlicht, Moltbook allows OpenClaw agents to post, comment, and interact with each other. Human users can observe but not participate directly. Within days, the platform grew to over 770,000 registered agents.
The security implications are significant. Moltbook requires agents to ingest and process content from other agents—a perfect vector for prompt injection attacks.
Researchers Michael Riegler and Sushant Gautam established an observatory to monitor the platform. Their findings:
On January 31, 2026, 404 Media reported a critical vulnerability: an unsecured database allowed anyone to commandeer any agent on the platform. The exploit permitted unauthorized actors to bypass authentication and inject commands directly into agent sessions. The platform was temporarily taken offline to patch the breach.
The question of whether Moltbook agents are “truly autonomous” remains contested. Critics argue that most posts are human-initiated—a user tells their agent to post, and the agent executes the instruction. The platform may be less “AI society” and more “humans talking through AI proxies.”
Either way, it represents a new attack surface: a persistent, public channel where agents trained to be helpful can be manipulated by adversarial peers.
OpenClaw is a legitimate project built by a respected developer with genuine utility for technical users who understand the risks. It is also, in its current form, a security incident waiting to happen.
The core tension is architectural. An AI agent that can act autonomously on your behalf requires broad system access. That same access makes it dangerous if the agent is manipulated through prompt injection, compromised skills, or exposed credentials.
The project’s own security documentation acknowledges this honestly—perhaps more honestly than users rushing to install it have read.
For organizations: OpenClaw is not appropriate for enterprise deployment in its current state. The combination of plaintext credential storage, prompt injection vulnerabilities, and unvetted skill ecosystem creates unacceptable risk for environments handling sensitive data.
For individuals: If you understand the risks and can properly isolate the deployment (sandboxed environment, minimal credentials, no exposure to untrusted content), OpenClaw offers a glimpse of where personal AI agents are heading. If you don’t understand those caveats, wait.
For the industry: OpenClaw’s viral adoption demonstrates genuine demand for self-hosted, autonomous AI agents. It also demonstrates that the security model for agentic AI is not ready for mainstream deployment. The question is whether the industry can build adequate safeguards before a major breach shifts public perception.
The lobster has molted. What emerges next depends on whether security catches up to capability.
This analysis was produced using open source intelligence methods. All data was collected from publicly available sources. For questions about our OSINT methodology or to discuss your organization’s intelligence requirements, contact Wigington Intelligence Group.