Back to Blog

OpenClaw Skills and ClawHub - How It Works and Why 36% Have Security Flaws

February 12, 202612 min readTeam 400

OpenClaw skills are the reason it's the most capable AI coding agent available. They're also the reason it's one of the riskiest tools you can install on a developer's machine.

Anyone with a GitHub account that's seven days old can publish a skill to ClawHub. There's no code review. No security scan. No approval process. The skill runs with the same system permissions as OpenClaw itself.

That's worth sitting with for a moment.

Skills are what make OpenClaw genuinely useful for real work. They're also the single largest attack surface in your AI tooling stack. Understanding how they work, and how ClawHub distributes them, is essential before you roll OpenClaw out across a team.

What OpenClaw Skills Actually Are

At their core, skills are instruction files. A skill is a SKILL.md file, sometimes paired with supporting scripts or configuration files, that tells OpenClaw how to perform a specific task.

That sounds simple. It's not.

A skill can instruct OpenClaw to:

  • Execute arbitrary shell commands on your machine
  • Read and write files anywhere on your file system
  • Make outbound API calls to external services
  • Perform web automation through headless browsers
  • Install packages and dependencies
  • Modify system configuration

Skills don't run in a sandbox. They run with OpenClaw's full permissions. If OpenClaw can access your SSH keys, so can every skill you install. If OpenClaw can read your .env files, so can a skill published by an anonymous GitHub account last Tuesday.

This is the fundamental trade-off. The same unrestricted access that lets a well-built skill automate complex deployment workflows also lets a malicious skill exfiltrate your credentials to a remote server.

How a Skill Works Internally

When you invoke a skill, OpenClaw reads the SKILL.md file and incorporates those instructions into its context. The skill tells OpenClaw what steps to follow, what commands to run, what files to create or modify, and what APIs to call.

Some skills are straightforward. A documentation skill might just define a template and some formatting rules. No shell commands, no network calls, no file system access beyond the current project.

Other skills are complex. A deployment skill might read your cloud provider credentials, build Docker images, push to a registry, update Kubernetes manifests, and trigger a rolling deployment. Every one of those steps runs with your permissions.

The skill itself doesn't execute code directly. It instructs OpenClaw to execute code. This distinction matters because it means static analysis tools can't easily scan a skill for malicious behaviour. The instructions are natural language, and the actual commands are generated at runtime by the AI.

How ClawHub Works

ClawHub is OpenClaw's public skill registry. Think of it as npm or PyPI, but for AI agent instructions.

Scale and Growth

As of early 2026, ClawHub hosts over 3,984 published skills. That number has roughly doubled every three months since launch. The growth rate tells you two things: skills are useful, and the barrier to publishing is low.

Search and Discovery

ClawHub uses vector search for skill discovery. When you search for a skill, ClawHub generates an embedding of your query and matches it against skill description embeddings. This means searches work semantically, not just by keyword matching.

Searching for "deploy to AWS" will surface skills that mention "push to Amazon cloud infrastructure" even if they never use the exact phrase you typed. This is genuinely useful for finding relevant skills. It also means you can't rely on keyword-based filtering to avoid certain categories of skills.

Versioning

Skills follow semantic versioning (semver). Publishers can push updates, and your installed skills can be pinned to specific versions or set to auto-update.

Auto-update is convenient and dangerous. A skill that was clean at version 1.0.0 can become malicious at version 1.1.0. We'll come back to this.

Publishing Requirements

To publish a skill on ClawHub, you need:

  • A GitHub account that's at least one week old
  • A valid SKILL.md file
  • A skill name that isn't already taken

That's it. There's no review process. No automated security scanning. No human approval. The skill goes live immediately.

Compare this to the Apple App Store, which has a multi-day review process and still lets malicious apps through. ClawHub doesn't even attempt a review.

Installation Methods

You can install skills three ways:

GUI: Through the OpenClaw interface, browse and install with a click. This is the most common method and the most dangerous, because it's the easiest to do without thinking.

CLI: Using openclaw skill install <name>. This at least forces the developer to be deliberate about what they're adding.

Manual: Downloading the SKILL.md file and placing it in your project's .openclaw/skills/ directory. This gives you the most control and the best opportunity to review the skill before it runs.

The Snyk Audit: 36% Have Security Flaws

In late 2025, Snyk published an audit of ClawHub skills. The headline finding: 36.82% of analysed skills contained at least one security vulnerability.

That's not a typo. More than a third.

What They Found

The audit categorised findings into several types:

Command injection vulnerabilities: Skills that construct shell commands using unsanitised input. An attacker who controls any input to these skills can execute arbitrary commands on your machine.

Data exfiltration: Skills that send local data to external servers. Some were obvious (raw HTTP calls to suspicious domains). Others were subtle, encoding data in DNS queries or embedding it in seemingly innocent API parameters.

Credential theft: Skills designed to read credential files, environment variables, SSH keys, and API tokens, then transmit them externally.

Excessive permissions: Skills that request or use far more access than their stated purpose requires. A markdown formatting skill that reads your AWS credentials is a red flag.

The Coordinated Campaign

The most alarming finding was the discovery of 341 confirmed malicious skills traced to a single coordinated campaign. These skills were published across dozens of GitHub accounts over several weeks, designed to look like legitimate developer tools.

They had professional-looking descriptions, reasonable names, and genuine utility. They also quietly exfiltrated environment variables and SSH keys to attacker-controlled infrastructure on first run.

341 skills. One campaign. And it went undetected for weeks.

The Cornell University Report

Separately, a Cornell University research team analysed open-source AI agent tool packages more broadly and found that 26% contained known vulnerabilities. This wasn't specific to ClawHub, but the overlap is significant. The AI agent ecosystem has a supply chain security problem, and ClawHub is part of that ecosystem.

What This Means in Practice

If your team installs 10 random skills from ClawHub, statistically 3 or 4 of them have security issues. Not all of those will be actively malicious. Some will be poorly written skills with unintentional vulnerabilities. But the distinction between "accidentally insecure" and "deliberately malicious" doesn't matter much when your credentials are compromised.

For a deeper look at OpenClaw security risks and mitigations, see our OpenClaw security guide.

How to Evaluate a Skill Before Installing

You can significantly reduce your risk by reviewing skills before installation. Here's what to check.

Read the SKILL.md Source

Before installing any skill, read its source code. The entire thing. Most skills are short enough to review in five minutes.

Look for:

  • Shell commands (especially anything involving curl, wget, nc, ssh, or piping to bash)
  • File paths outside your project directory
  • References to environment variables or credential files
  • Encoded or obfuscated strings
  • URLs to external services you don't recognise

Check Requested Permissions

Some skills declare what permissions they need. If a skill says it needs file system access and network access for a task that should only involve text formatting, walk away.

If a skill doesn't declare permissions at all, assume it uses everything.

Review the Publisher

Check the publisher's GitHub profile:

  • How old is the account?
  • What else have they published?
  • Do they have a track record in the developer community?
  • Are their other repositories legitimate?

A skill from a well-known open source contributor with years of history is lower risk than one from an account created eight days ago with no other activity.

Look for Shell Command Execution

Any skill that runs shell commands deserves extra scrutiny. Look for patterns like:

  • bash -c or sh -c with constructed strings
  • Commands that pipe output to other commands
  • eval statements
  • Download-and-execute patterns (curl | bash)
  • Commands that modify system files or installed packages

Not all shell execution is malicious. A deployment skill legitimately needs to run commands. But you should understand exactly what commands it runs and why.

Check for Outbound Network Calls

A skill that makes network requests should have a clear reason for each endpoint it contacts. Review:

  • What URLs does it call?
  • What data does it send?
  • Does it need network access for its stated purpose?
  • Are there calls to IP addresses instead of domain names? (This is a common obfuscation technique.)

Review Version History

Check the skill's changelog and version history on ClawHub. Watch for:

  • Sudden changes in scope or permissions
  • New maintainers taking over an established skill
  • Large updates to previously stable skills
  • Removal of previously transparent code

Supply chain attacks often work by compromising a trusted package, then pushing a malicious update. The same pattern applies to skills.

Building Custom Skills vs Using Public Ones

Not every team needs to build their own skills. But every team should think about whether they should.

When Public Skills Make Sense

For common, well-understood tasks with low security sensitivity, public skills from reputable publishers are fine. Things like code formatting, documentation generation, or project scaffolding.

The key criteria: the skill doesn't need access to anything sensitive, and a compromise wouldn't cause material damage.

When Custom Skills Make Sense

Build your own skills when:

  • The skill needs access to credentials, internal systems, or sensitive data
  • You have specific security or compliance requirements
  • The workflow is unique to your organisation
  • You need to control exactly what commands run and what data flows where
  • You're operating in a regulated industry

Custom skills are more work upfront. They're also the only way to have full confidence in what's running on your developers' machines.

The Middle Ground

We maintain a curated, security-audited skill library for our clients. Every skill is reviewed for security issues, tested for expected behaviour, and monitored for changes. This gives you the convenience of pre-built skills without the risk of trusting ClawHub directly.

If you're setting up OpenClaw with Docker, container isolation adds another layer of protection. Even a compromised skill has limited blast radius when it runs inside a locked-down container rather than on a developer's bare machine.

A Practical Skill Governance Approach

If your organisation uses OpenClaw, you need a skill governance process. It doesn't need to be heavy. It does need to exist.

Maintain an Approved Skill List

Create and maintain a list of skills that have been reviewed and approved for use in your organisation. This is the single most effective control you can implement.

The list should include:

  • Skill name and version (pinned, not auto-updating)
  • Who reviewed it and when
  • What it does and what permissions it uses
  • Any restrictions on where or how it should be used

Start with the skills your team is already using. Review each one. Remove anything that fails review. Add new skills only through the approval process.

Define a Review Process

Before any new skill is installed, someone should review it. The review doesn't need to take hours. For most skills, 15-30 minutes is enough.

Your review checklist:

  1. Read the full SKILL.md source
  2. Check the publisher's reputation and history
  3. Identify all shell commands, file access, and network calls
  4. Verify permissions match the stated purpose
  5. Check version history for suspicious changes
  6. Test in an isolated environment before deploying to developer machines

Document the review. If you need to audit later, you want a record of what was checked and by whom.

Regular Audits of Installed Skills

Skills change. Publishers push updates. New vulnerabilities are discovered. A skill that was safe six months ago might not be safe today.

Run quarterly audits:

  • Verify installed skills match your approved list
  • Check for version changes (especially if auto-update was accidentally enabled)
  • Re-review any skills that have been updated since last audit
  • Remove skills that are no longer needed

For more on running OpenClaw safely at scale, see our guide on managing OpenClaw in production.

Incident Response

Have a plan for when a compromised skill is discovered. At minimum:

  • How do you identify which machines are affected?
  • How do you remove the skill and contain the damage?
  • What credentials need to be rotated?
  • Who do you notify?

This doesn't need to be a 50-page document. A one-page playbook that people actually follow is better than a comprehensive plan that sits in a drawer.

The Bigger Picture

OpenClaw skills are genuinely powerful. They let AI agents perform complex, multi-step workflows that save real time and deliver real value. The skill ecosystem is one of the main reasons OpenClaw has become the dominant AI coding tool.

But that ecosystem has a serious supply chain security problem. Over a third of skills have security flaws. Hundreds of confirmed malicious skills have been published through coordinated campaigns. And the publishing requirements are so minimal that the barrier to entry for attackers is effectively zero.

This isn't a reason to avoid OpenClaw. It's a reason to use it carefully.

Review skills before you install them. Maintain an approved list. Pin versions. Audit regularly. Consider running OpenClaw in containers. And if you don't have the time or expertise to manage this yourself, consider a managed OpenClaw deployment where someone else handles the security review process for you.

The organisations getting the most value from OpenClaw aren't the ones installing every shiny new skill from ClawHub. They're the ones who've thought carefully about what skills they actually need, reviewed them properly, and put basic governance in place.

That's not glamorous. It's effective.