OpenClaw AI Agent: 42,900 Instances Exposed, 341 Malicious Skills — Why It's Not Ready for Serious Work
OpenClaw has taken the AI world by storm with 160K+ GitHub stars and viral adoption. But behind the hype, security researchers at CrowdStrike, Palo Alto Networks, and Trend Micro are sounding alarms. For knowledge workers considering it for real work, the risks far outweigh the excitement.
160K+
GitHub stars in weeks
42,900
Instances exposed online
341
Malicious skills on ClawHub
8.8
CVSS score (Critical CVE)
What Happened
OpenClaw — the open-source agentic AI formerly known as Clawdbot — has become the fastest-growing AI project in GitHub history since its rebranding on January 29, 2026. Built by developer Peter Steinberger, it gained 20,700 stars in a single day and attracted 2 million visitors in its first week. But as adoption exploded, so did the security findings. SecurityScorecard's STRIKE team revealed 42,900 exposed instances across 82 countries, with 15,200 vulnerable to remote code execution. Meanwhile, researchers found 341 malicious skills on ClawHub targeting crypto traders. Cybersecurity professor Aanjhan Ranganathan of Northeastern University has called it a “privacy nightmare.”
What Is OpenClaw and Why Is Everyone Talking About It?
Unlike traditional chatbots like ChatGPT or virtual assistants like Siri, OpenClaw is agentic — it doesn't just answer questions, it acts. It plans and executes multi-step tasks independently, connecting AI models to your personal apps through Telegram, Discord, WhatsApp, and more.
It runs locally on your Mac or PC with broad system access: reading files, executing shell commands, controlling a browser, and storing persistent memory to track your habits and context. It connects to tools like Spotify, Philips Hue, GitHub, and your calendar. With over 5,000 community-built skill extensions on ClawHub, it can do everything from ordering food to negotiating deals to scouring LinkedIn for job candidates.
The Appeal for Knowledge Workers
- Autonomous task execution: Multi-step workflows without manual intervention
- Persistent memory: Remembers context from weeks or months ago
- “Selfware” creation: Non-coders can build custom tools for their own workflows
- Open source & self-hosted: Full control over the code and data
- Real automation: Recruiters scouring LinkedIn, e-commerce owners handling customer support
The promise is compelling: an AI that doesn't just chat, but an entire team of skilled workers that do. According to Investing.com, OpenClaw represents agentic AI's “ChatGPT moment.” But the question every knowledge worker needs to ask is: at what cost?
The Security Nightmare: What the Research Shows
Within weeks of OpenClaw's viral rise, security teams at some of the world's largest cybersecurity firms began publishing advisories. The findings are alarming.
42,900 Instances Exposed on the Public Internet
SecurityScorecard's STRIKE Threat Intelligence Team discovered approximately 42,900 OpenClaw instances directly exposed on the internet across at least 82 countries. Of these, 15,200 are vulnerable to remote code execution attacks. Attackers can take full control of these agents and everything they have access to.
CVE-2026-25253: CVSS 8.8 Hijack Vulnerability
A single malicious link can hijack an entire OpenClaw instance. While patched in version 2026.1.29, security researchers note many users still run outdated versions. The vulnerability scored CVSS 8.8 — classified as 'High' severity — meaning exploitation is straightforward and consequences are severe.
341 Malicious Skills on ClawHub Marketplace
Hackers planted 341 malicious AI skills on ClawHub targeting crypto traders, stealing wallet data through fake prerequisites. Researchers identified 14 users contributing malicious content, including compromised legitimate GitHub accounts. One handle was observed submitting a new malicious skill every few minutes, indicating automated deployment.
Prompt Injection via Emails, Websites, and Messages
Malicious content in emails, websites, or chat messages can manipulate OpenClaw into performing unintended actions — forwarding sensitive data, executing shell commands, or exfiltrating credentials. Because OpenClaw reads and acts on content autonomously, every incoming message is a potential attack vector.
Memory Poisoning: Delayed-Execution Attacks
With persistent memory, attacks become stateful. Malicious payloads can be fragmented inputs that appear benign in isolation, written into long-term agent memory, and later assembled into executable instructions — enabling time-shifted prompt injection and logic bomb-style activation.
Shadow AI in Enterprise Networks
Bitdefender and Palo Alto Networks report evidence of 'Shadow AI' where employees deploy OpenClaw on corporate machines without IT approval. This creates unmanaged attack surfaces inside company networks, bypassing all existing security controls.
Why OpenClaw Is Not Suitable for Serious Knowledge Work
OpenClaw is undeniably impressive as a technical achievement and a tinkering project. But there's a fundamental gap between “exciting open-source experiment” and “tool you should trust with your professional work.”
Unrestricted System Access
OpenClaw requires broad system access to function — reading files, executing commands, controlling browsers. This means your confidential documents, credentials, API tokens, and personal data are all accessible. One compromised skill or prompt injection, and everything is exposed.
No Enforced Security Checks
What sets OpenClaw apart from safer alternatives is its unrestricted configurability. Users can grant arbitrary permissions without any enforced security checks. According to Trend Micro, this “dramatically increases existing risks and makes OpenClaw unsuitable for casual use.”
Unsupervised Autonomy
Unlike ChatGPT Agent, OpenClaw operates without requiring approval for individual actions. Users can grant financial transaction capabilities with no mandatory human oversight. Errors or manipulations cause damage before anyone notices.
No Legal Protection
OpenClaw's user agreements shift all liability to the user. AI agents lack legal personhood and cannot be sued. If OpenClaw sends a bad email, leaks data, or makes a financial mistake on your behalf, you bear full responsibility.
“I think it's a privacy nightmare. Not only are you letting an AI agent look at sensitive information like your passwords and documents, but you also have limited insights into how it's processing your information and where it's sending it.”
— Aanjhan Ranganathan, Cybersecurity Professor, Northeastern University
The enterprise verdict: IBM's analysis concludes that “neither OpenClaw nor Moltbook is likely to be deployed in workplaces soon, as they expose users and employers to too many security vulnerabilities.” CrowdStrike, Palo Alto Networks, and Bitdefender have all published technical advisories warning enterprises about the risks.
The Personalization Paradox: Why “Open” Doesn't Mean “Safe”
There's a fundamental tension at the heart of OpenClaw: making AI genuinely useful as a personal agent requires giving it deep access to your information and accounts. This is the personalization asymmetry — useful agency requires permission, and permission requires trust we haven't established protocols for yet.
Being open-source gives you visibility into the code, but it doesn't solve the fundamental problem. You still need to:
- Audit every skill you install from ClawHub (with 5,000+ available, that's impractical)
- Keep your instance updated against new CVEs (many users don't)
- Ensure your instance isn't exposed to the internet (42,900 users failed at this)
- Monitor for prompt injection attacks across every channel it reads
- Watch for memory poisoning in persistent storage
- Configure proper sandboxing (OpenClaw doesn't enforce any)
In other words, to use OpenClaw safely for real work, you need to be a security expert first and a knowledge worker second. That's not a productivity tool — it's a side project.
What To Do Now
If you're a knowledge worker who wants to outsource repeated tasks to AI — writing, research, summarization, email drafting, content repurposing — here's the practical takeaway:
If You're Already Using OpenClaw
- Update immediately: Ensure you're on version 2026.1.29 or later to patch CVE-2026-25253
- Check your exposure: Verify your instance isn't accessible from the internet
- Audit your skills: Remove any skills you didn't install yourself or can't verify
- Don't use it for sensitive work: Keep confidential documents, credentials, and financial operations away from OpenClaw
If You're Evaluating AI Tools for Productivity
- Maturity matters: Experimental open-source agents and production-ready professional tools are different categories entirely
- Security should be built-in, not bolted-on: If a tool requires you to configure your own security, it's not ready for non-technical users
- Privacy architecture is non-negotiable: Where your data goes, who can access it, and how it's processed should be clear and auditable
- Look for tools designed for your actual workflows: The best AI productivity tool isn't the most powerful agent — it's the one that fits seamlessly into how you already work
Outsource Repeated Work to AI — Without the Security Risks
Elephas is a personal AI assistant for Mac designed for knowledge workers who want to use AI productively without becoming security experts. Access multiple AI models — including Claude, GPT, and local models — directly within every app you use. Automate writing, email, research, and repetitive tasks with a mature, privacy-first tool that's been built for professional workflows from day one. No exposed instances. No malicious skill marketplaces. No security nightmares.
Learn more about Elephas →Frequently Asked Questions
What is OpenClaw?
OpenClaw (formerly Clawdbot) is an open-source agentic AI built by Peter Steinberger. Unlike traditional chatbots, it runs locally on your Mac or PC and autonomously executes multi-step tasks — managing email, browsing the web, ordering food, updating calendars — by connecting to your apps through WhatsApp, Slack, Discord, and more. It gained over 160,000 GitHub stars since January 2026.
Is OpenClaw safe to use for work?
Security experts strongly advise against it. CrowdStrike, Palo Alto Networks, Bitdefender, and Trend Micro have all issued advisories. With 42,900 exposed instances, a CVSS 8.8 critical vulnerability, 341 malicious skills on ClawHub, and no enforced security checks, OpenClaw's risks are well-documented. IBM concludes it's “unlikely to be deployed in workplaces soon.”
What are the biggest risks?
Remote code execution through exposed instances, prompt injection via emails and websites, memory poisoning with delayed-execution malicious payloads, credential theft via unrestricted file system access, malicious skills stealing crypto wallet data, and no mandatory human approval for actions including financial transactions.
What about NanoClaw? Is that safer?
NanoClaw is a lighter, more secure fork created by Gavriel Cohen specifically to address OpenClaw's security issues. Released under an MIT License on January 31, 2026, it surpassed 7,000 GitHub stars in just over a week. While it improves on OpenClaw's security posture, it's still an early-stage open-source project — not a production-ready professional tool.
What should knowledge workers use instead?
Knowledge workers should look for mature, privacy-first AI tools designed for professional workflows. Elephas, for example, provides access to multiple AI models across every Mac app with built-in privacy safeguards, no exposed attack surfaces, and a focus on automating the repeated writing and research tasks that actually eat up your workday.
The Bottom Line
OpenClaw represents a genuine leap in what's possible with agentic AI. The idea of an AI agent that can autonomously handle your tasks across every app is powerful — and it's clearly captured the imagination of the developer community. These messy early experiments, as IBM puts it, “could prove invaluable in the long run by helping the industry build needed guardrails.”
But for knowledge workers who need to get real work done today — safely and reliably — OpenClaw is a toy, not a tool. The security findings from CrowdStrike, Palo Alto Networks, Trend Micro, and Bitdefender make that unambiguously clear. When your “productivity tool” requires you to become a security engineer just to use it safely, something has gone wrong.
What to watch next: NanoClaw's growth as a more secure alternative, whether ClawHub implements skill verification, and how enterprise security teams respond as Shadow AI deployments of OpenClaw continue to surface inside corporate networks.
Related Resources
Explore all AI Privacy & Security resourcesCan AI Tools Waive Attorney-Client Privilege? What Every Lawyer Must Know
14 min readcomparison7 Best Private AI Tools for Lawyers in 2026 (Local & Offline Options)
18 min readarticleThe AI Note-Taking Privacy Problem: How to Protect Your Confidential Conversations
9 min readguideOffline AI Tool for Confidential Client Documents
11 min readSources
- H2S Media: 42,900 OpenClaw AI Agents Exposed: 15,200 Vulnerable to Hackers
- Trend Micro: Viral AI, Invisible Risks — What OpenClaw Reveals About Agentic Assistants
- Northeastern University: Why the OpenClaw AI Agent is a ‘Privacy Nightmare’
- CrowdStrike: What Security Teams Need to Know About OpenClaw
- Palo Alto Networks: OpenClaw May Signal the Next AI Security Crisis
- Bitdefender: Technical Advisory — OpenClaw Exploitation in Enterprise Networks
- IBM: OpenClaw, Moltbook and the Future of AI Agents
- Investing.com: OpenClaw — Agentic AI's ‘ChatGPT Moment’
- Gadget Review: Hackers Poison Popular AI Assistant With 341 Malicious Skills
- VentureBeat: NanoClaw Solves One of OpenClaw's Biggest Security Issues