OpenClaw, the open-source AI agent platform (previously known as Clawdbot and Moltbot), has rapidly captured attention for its ambitious capabilities. Yet beneath the surface of that convenience lies a fast-growing security problem that is turning heads in the cyber-defense world.
A Marketplace Weaponized
At the heart of the issue is ClawHub, OpenClaw’s community-driven marketplace for skills. While the marketplace was designed for flexibility and creativity, minimal vetting has allowed attackers to flood the registry with harmful content. Security researchers uncovered hundreds of malicious skills hiding in plain sight, often disguised as useful tools such as wallet trackers, productivity helpers, and integration plugins.
The Atomic macOS Stealer Campaign
In one high-profile campaign, dubbed ClawHavoc, researchers identified 341 malicious skills that instructed users and their agents to run external commands delivering Atomic macOS Stealer (AMOS), a malware family capable of harvesting browser credentials, cryptocurrency wallets, SSH keys, and other sensitive data.
The AI Supply Chain Problem
The threat goes beyond obvious malware downloads. Because OpenClaw skills can execute with broad system privileges, malicious modules can embed backdoors, exfiltrate secrets, or install persistent remote shells, all while still appearing to provide legitimate functionality. This reflects a classic supply chain attack: rather than exploiting a vulnerability in the core platform, threat actors poison the ecosystem around it.
Exposed Deployments and Infrastructure Risk
Independent research also highlights a broader attack surface. Thousands of OpenClaw instances have been observed exposed online, often with weak or no authentication, making them easy targets for remote compromise. When combined with an unvetted extension marketplace, this exposure puts sensitive credentials, cloud tokens, and corporate secrets at significant risk, especially when AI agents are connected to email, messaging platforms, or internal business systems.
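As a first-pass check of this exposure, an operator can probe whether an agent endpoint answers HTTP requests without credentials. The sketch below is a generic probe, not OpenClaw-specific tooling: the URL you pass and the interpretation of status codes are assumptions about a typical web-facing dashboard or API.

```python
import urllib.request
import urllib.error


def check_gateway_auth(url: str, timeout: float = 5.0) -> str:
    """Classify an endpoint as 'open' (serves content without credentials),
    'protected' (demands authentication), or 'unreachable'.

    This is a heuristic: a 401/403 is treated as an auth challenge, while
    any other successful response means the service talked to us anonymously.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "open" if resp.status == 200 else "protected"
    except urllib.error.HTTPError as e:
        # Server is reachable; 401/403 suggests auth is enforced.
        return "protected" if e.code in (401, 403) else "open"
    except (urllib.error.URLError, OSError):
        return "unreachable"
```

A deployment that returns "open" for its admin or chat endpoint from an external network is exactly the kind of instance researchers have been finding en masse.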
Security Lessons and Recommendations
The OpenClaw situation serves as a wake-up call for AI developers and users alike. AI agent ecosystems need stronger supply chain controls, better code signing practices, and tighter permission sandboxing before they can be considered secure by default.
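One concrete supply chain control, short of full code signing, is pinning skill artifacts to known checksums and refusing to install anything that drifts. The helper below is a minimal sketch of that idea; the function name and workflow are illustrative assumptions, not part of any OpenClaw tooling.

```python
import hashlib
from pathlib import Path


def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded skill archive against a pinned SHA-256 checksum.

    Install only if this returns True; a mismatch means the artifact is not
    the one that was reviewed, whether due to tampering or a silent update.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256.lower()
```

Checksum pinning does not prove a skill is benign, only that it is the exact bytes someone previously vetted, which is why it complements rather than replaces code review and signing.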
To reduce risk, users should:
- Carefully vet skills before installation
- Avoid modules that require external code execution
- Limit agent privileges to only what is necessary
- Ensure that deployments are not publicly exposed without strong authentication and monitoring
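The first step, vetting skills, can be partially automated with simple static heuristics before anything runs. The sketch below scans a skill directory for red flags like pipe-to-shell installers; the pattern list is illustrative and far from exhaustive, so a clean result is never a guarantee of safety.

```python
import re
from pathlib import Path

# Heuristic red flags commonly seen in malicious install scripts.
# Names and patterns are illustrative, not an official detection list.
SUSPICIOUS_PATTERNS = {
    "pipe-to-shell": re.compile(r"(curl|wget)[^\n|]*\|\s*(ba)?sh"),
    "base64-decode": re.compile(r"base64\s+(-d|--decode)"),
    "raw-script-download": re.compile(r"https?://\S+\.(sh|py|bin|dmg)\b"),
}


def audit_skill(skill_dir: Path) -> list[str]:
    """Return '<file>: <pattern>' entries for every suspicious match found."""
    hits = []
    for file in skill_dir.rglob("*"):
        if not file.is_file():
            continue
        try:
            text = file.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(text):
                hits.append(f"{file.name}: {name}")
    return hits
```

Flagged skills deserve manual review rather than automatic rejection, but a hit on something like pipe-to-shell is exactly the delivery mechanism the ClawHavoc skills relied on.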
As AI agents become more deeply embedded in personal and enterprise workflows, security must evolve alongside innovation. Otherwise, these powerful assistants risk becoming convenient entry points for attackers rather than productivity enhancers.

