Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Security scanner for OpenClaw skill packages. Scans skills for malicious code, evasion techniques, prompt injection, and misaligned behavior BEFORE installation. Use to audit any skill from ClawHub or local directories.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Multi-layered security scanner for OpenClaw skill packages. Detects malicious code, evasion techniques, prompt injection, and misaligned behavior through static analysis and optional LLM-powered deep inspection. Run this BEFORE installing or enabling any untrusted skill.
- 6 analysis layers: pattern matching, AST/evasion, prompt injection, LLM deep analysis, alignment verification, meta-analysis
- 60+ detection rules: execution threats, credential theft, data exfiltration, obfuscation, behavioral signatures
- Context-aware scoring: reduces false positives for legitimate API skills
- ClawHub integration: scan skills directly from the registry by slug
- Multiple output modes: text report (default), `--json`, `--compact`, `--quiet`
- Exit codes: 0 for safe, 1 for risky (easy scripting integration)
MANDATORY before installing or enabling:

- Skills from ClawHub (any skill not authored by you)
- Skills shared by other users or teams
- Skills from public repositories
- Any skill package you haven't personally reviewed

RECOMMENDED for periodic audits of already-installed skills.
```bash
# Scan a local skill directory
skill-scan scan /path/to/skill

# Scan a skill from ClawHub before installing it
skill-scan scan-hub some-skill-slug

# Batch scan all installed skills
skill-scan batch /path/to/skills-directory

# JSON output for programmatic use
skill-scan scan-hub some-skill-slug --json

# Quiet mode (just score + verdict)
skill-scan scan-hub some-skill-slug --quiet
```
| Risk | Score | Action |
| --- | --- | --- |
| LOW | 80-100 | Safe to install |
| MEDIUM | 50-79 | Review findings before installing |
| HIGH | 20-49 | Do NOT install: serious threats detected |
| CRITICAL | 0-19 | Do NOT install: multiple critical threats |
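Expressed as code, the score bands above map mechanically to a verdict. This is a minimal sketch; the function name and return shape are illustrative, not part of the skill-scan API:

```python
def classify(score: int) -> tuple[str, bool]:
    """Map a 0-100 scan score to (risk_level, safe_to_install).

    Bands follow the risk table: LOW 80-100, MEDIUM 50-79,
    HIGH 20-49, CRITICAL 0-19.
    """
    if score >= 80:
        return ("LOW", True)        # safe to install
    if score >= 50:
        return ("MEDIUM", False)    # review findings before installing
    if score >= 20:
        return ("HIGH", False)      # do NOT install
    return ("CRITICAL", False)      # do NOT install
```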
- 0: LOW risk (safe to proceed)
- 1: MEDIUM or higher (block installation)
```bash
skill-scan scan-hub slug --llm        # Always run deep analysis (layers 4-5)
skill-scan scan-hub slug --llm-auto   # LLM only if pattern scan finds MEDIUM+
skill-scan scan-hub slug --llm-only   # Skip patterns, LLM analysis only
```

Provider auto-detected from environment:

- `OPENAI_API_KEY` -> gpt-4o-mini
- `ANTHROPIC_API_KEY` -> claude-sonnet-4-5
When the user asks to install a skill from ClawHub, follow this workflow:

1. Scan first: run `skill-scan scan-hub <slug>` BEFORE installing
2. Check the result:
   - If LOW risk (exit code 0): proceed with installation
   - If MEDIUM risk: show findings to the user and ask whether to proceed
   - If HIGH or CRITICAL risk: block installation and show the report
3. Report: always show the user the scan summary (score, risk level, finding count)
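The workflow above amounts to a pre-install gate driven by the documented exit-code convention (0 = LOW, 1 = MEDIUM or higher). This is a hypothetical wrapper, not part of skill-scan; only the CLI invocation and exit codes come from the docs:

```python
import subprocess


def allow_install(returncode: int) -> bool:
    """Exit-code convention: 0 = LOW risk, anything else = MEDIUM+."""
    return returncode == 0


def preinstall_gate(slug: str) -> bool:
    """Scan a ClawHub skill and decide whether installation may proceed."""
    result = subprocess.run(
        ["skill-scan", "scan-hub", slug, "--quiet"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)  # always surface the scan summary to the user
    return allow_install(result.returncode)
```

A calling agent would run `clawhub install <slug>` only when `preinstall_gate(slug)` returns `True`, and otherwise show the findings and ask for an explicit override.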
When a scan finds MEDIUM or higher risk:

🛡️ Skill-Scan Alert: {RISK_LEVEL} ({SCORE}/100)
Skill: {slug}
Findings: {count} issue(s) detected
Top threat: {highest severity finding title}
Action: Installation blocked pending review.
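The alert template above renders mechanically once the scan result is in hand. This helper is a sketch; its name and parameters mirror the template placeholders and are otherwise assumptions:

```python
def render_alert(risk_level: str, score: int, slug: str,
                 count: int, top_threat: str) -> str:
    """Fill the MEDIUM+ alert template with values from a scan result."""
    return (
        f"\U0001F6E1\uFE0F Skill-Scan Alert: {risk_level} ({score}/100)\n"
        f"Skill: {slug}\n"
        f"Findings: {count} issue(s) detected\n"
        f"Top threat: {top_threat}\n"
        f"Action: Installation blocked pending review."
    )
```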
To integrate skill-scan into your agent's workflow, add ONE of the following templates to your AGENTS.md (or equivalent agent instructions file).
All ClawHub skill installs are automatically scanned before installation. No manual step needed.

## Skill-Scan: Automatic Pre-Install Security Scanning

BEFORE installing any skill from ClawHub, you MUST scan it with skill-scan first. This is non-negotiable: never skip the scan.

### Workflow

1. When the user asks to install a skill (e.g. `clawhub install <slug>`), run the scan FIRST:

   ```bash
   skill-scan scan-hub <slug>
   ```

2. Check the exit code and output:
   - Exit code 0 (LOW risk): proceed with `clawhub install <slug>`
   - Exit code 1 (MEDIUM risk): show findings to the user and ask whether to proceed
   - Exit code 1 (HIGH/CRITICAL risk): BLOCK installation and show the full report
3. Never install a HIGH or CRITICAL risk skill without explicit user override.
4. Always show the scan summary to the user before proceeding with installation.
When asked to audit installed skills, batch scan the skills directory: `skill-scan batch {skillsDir}`
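A batch audit can also be approximated by scanning each installed skill directory in turn and collecting the ones that fail. This loop is a hypothetical equivalent of `skill-scan batch`, relying only on the documented exit-code convention:

```python
import subprocess
from pathlib import Path


def audit_installed(skills_dir: str) -> list[str]:
    """Scan every skill subdirectory; return names flagged MEDIUM+ risk."""
    flagged = []
    for skill in sorted(Path(skills_dir).iterdir()):
        if not skill.is_dir():
            continue  # skip stray files in the skills directory
        result = subprocess.run(["skill-scan", "scan", str(skill)])
        if result.returncode != 0:  # 1 = MEDIUM or higher
            flagged.append(skill.name)
    return flagged
```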
When this skill is first installed, ask the user:

How should skill-scan integrate with your workflow?

- Automatic (Recommended): every `clawhub install` is scanned first. HIGH/CRITICAL installs are blocked automatically.
- Manual: scanning only happens when you explicitly ask for it.

Based on their choice, add the corresponding template (Option A or Option B above) to the project's AGENTS.md.
- Execution threats: `eval()`, `exec()`, `child_process`, dynamic imports
- Credential theft: `.env` access, API keys, tokens, private keys, wallet files
- Data exfiltration: `fetch()`, axios, requests, sockets, webhooks
- Filesystem manipulation: write/delete/rename operations
- Obfuscation: Base64, hex, unicode encoding, string construction
- Prompt injection: jailbreaks, invisible characters, homoglyphs, roleplay framing, encoded instructions
- Behavioral signatures: compound patterns such as data exfiltration, trojan skills, evasive malware, persistent backdoors
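To make the pattern-matching layer concrete, here is a toy version with one or two regex rules per category. The real rule set (60+ rules) lives inside skill-scan; these patterns and names are illustrative only:

```python
import re

# Toy rules, a tiny subset of what a real layer-1 scan would carry.
RULES = {
    "execution": re.compile(r"\beval\(|\bexec\(|child_process|__import__"),
    "credential-theft": re.compile(r"\.env\b|api[_-]?key|private[_-]?key", re.I),
    "exfiltration": re.compile(r"\bfetch\(|\baxios\b|\brequests\.|webhook", re.I),
    "obfuscation": re.compile(r"base64|\\x[0-9a-f]{2}|fromCharCode", re.I),
}


def match_categories(source: str) -> set[str]:
    """Return the threat categories whose patterns appear in the source."""
    return {name for name, rx in RULES.items() if rx.search(source)}
```

Real scanners layer AST analysis on top of this, precisely because pure regex matching is easy to evade with string construction and encoding tricks; that is what the evasion and obfuscation layers are for.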
- Python 3.10+
- httpx>=0.27 (for LLM API calls only)
- API key only needed for `--llm` modes (static analysis is self-contained)
- input-guard: external input scanning
- memory-scan: agent memory security
- guardrails: security policy configuration
Identity, auth, scanning, governance, audit, and operational guardrails.
Largest current source with strong distribution and engagement signals.