{
  "schemaVersion": "1.0",
  "item": {
    "slug": "skill-scan",
    "name": "Skill Scan",
    "source": "tencent",
    "type": "skill",
    "category": "Security & Compliance",
    "sourceUrl": "https://clawhub.ai/dgriffin831/skill-scan",
    "canonicalUrl": "https://clawhub.ai/dgriffin831/skill-scan",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/skill-scan",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=skill-scan",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "CHANGELOG.md",
      "pyproject.toml",
      "TESTING.md",
      "README.md",
      "SKILL.md",
      "rules/dangerous-patterns.json"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=skill-scan",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=skill-scan",
        "contentDisposition": "attachment; filename=\"skill-scan-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/skill-scan"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/skill-scan",
    "agentPageUrl": "https://openagent3.xyz/skills/skill-scan/agent",
    "manifestUrl": "https://openagent3.xyz/skills/skill-scan/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/skill-scan/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Skill-Scan — Security Auditor for Agent Skills",
        "body": "Multi-layered security scanner for OpenClaw skill packages. Detects malicious code, evasion techniques, prompt injection, and misaligned behavior through static analysis and optional LLM-powered deep inspection. Run this BEFORE installing or enabling any untrusted skill."
      },
      {
        "title": "Features",
        "body": "6 analysis layers — pattern matching, AST/evasion, prompt injection, LLM deep analysis, alignment verification, meta-analysis\n60+ detection rules — execution threats, credential theft, data exfiltration, obfuscation, behavioral signatures\nContext-aware scoring — reduces false positives for legitimate API skills\nClawHub integration — scan skills directly from the registry by slug\nMultiple output modes — text report (default), --json, --compact, --quiet\nExit codes — 0 for safe, 1 for risky (easy scripting integration)"
      },
      {
        "title": "When to Use",
        "body": "MANDATORY before installing or enabling:\n\nSkills from ClawHub (any skill not authored by you)\nSkills shared by other users or teams\nSkills from public repositories\nAny skill package you haven't personally reviewed\n\nRECOMMENDED for periodic audits of already-installed skills."
      },
      {
        "title": "Quick Start",
        "body": "# Scan a local skill directory\nskill-scan scan /path/to/skill\n\n# Scan a skill from ClawHub before installing it\nskill-scan scan-hub some-skill-slug\n\n# Batch scan all installed skills\nskill-scan batch /path/to/skills-directory\n\n# JSON output for programmatic use\nskill-scan scan-hub some-skill-slug --json\n\n# Quiet mode (just score + verdict)\nskill-scan scan-hub some-skill-slug --quiet"
      },
      {
        "title": "Risk Scoring",
        "body": "Risk\tScore\tAction\nLOW\t80-100\tSafe to install\nMEDIUM\t50-79\tReview findings before installing\nHIGH\t20-49\tDo NOT install — serious threats detected\nCRITICAL\t0-19\tDo NOT install — multiple critical threats"
      },
      {
        "title": "Exit Codes",
        "body": "0 — LOW risk (safe to proceed)\n1 — MEDIUM or higher (block installation)"
      },
      {
        "title": "LLM Options",
        "body": "skill-scan scan-hub slug --llm        # Always run deep analysis (layers 4-5)\nskill-scan scan-hub slug --llm-auto   # LLM only if pattern scan finds MEDIUM+\nskill-scan scan-hub slug --llm-only   # Skip patterns, LLM analysis only\n\nProvider auto-detected from environment:\n\nOPENAI_API_KEY -> gpt-4o-mini\nANTHROPIC_API_KEY -> claude-sonnet-4-5"
      },
      {
        "title": "For the Agent",
        "body": "When the user asks to install a skill from ClawHub, follow this workflow:\n\n1. Scan first — run skill-scan scan-hub <slug> BEFORE installing\n2. Check the result:\n   - If LOW risk (exit code 0): proceed with installation\n   - If MEDIUM risk: show findings to the user and ask whether to proceed\n   - If HIGH or CRITICAL risk: block installation and show the report\n3. Report — always show the user the scan summary (score, risk level, finding count)"
      },
      {
        "title": "Alert Format",
        "body": "When a scan finds MEDIUM or higher risk:\n\n🛡️ Skill-Scan Alert: {RISK_LEVEL} ({SCORE}/100)\nSkill: {slug}\nFindings: {count} issue(s) detected\nTop threat: {highest severity finding title}\nAction: Installation blocked pending review."
      },
      {
        "title": "AGENTS.md Integration",
        "body": "To integrate skill-scan into your agent's workflow, add ONE of the following templates to your AGENTS.md (or equivalent agent instructions file)."
      },
      {
        "title": "Option A: Automatic Scanning (Recommended)",
        "body": "All ClawHub skill installs are automatically scanned before installation. No manual step needed.\n\n## Skill-Scan — Automatic Pre-Install Security Scanning\n\nBEFORE installing any skill from ClawHub, you MUST scan it with skill-scan first. This is non-negotiable — never skip the scan.\n\n### Workflow\n\n1. When the user asks to install a skill (e.g. `clawhub install <slug>`), run the scan FIRST:\n   ```bash\n   skill-scan scan-hub <slug>\n   ```\n2. Check the exit code and output:\n   - Exit code 0 (LOW risk) — proceed with `clawhub install <slug>`\n   - Exit code 1 (MEDIUM risk) — show findings to the user, ask whether to proceed\n   - Exit code 1 (HIGH/CRITICAL risk) — BLOCK installation, show the full report\n3. Never install a HIGH or CRITICAL risk skill without explicit user override\n4. Always show the scan summary to the user before proceeding with installation"
      },
      {
        "title": "Periodic Audits",
        "body": "When asked to audit installed skills, batch scan the skills directory:\n\nskill-scan batch {skillsDir}"
      },
      {
        "title": "LLM Deep Analysis",
        "body": "For high-stakes installs or when pattern scan returns MEDIUM, escalate to LLM analysis:\n\nskill-scan scan-hub <slug> --llm-auto\n\n### Option B: Manual On-Demand Scanning\n\nSkills are only scanned when the user explicitly asks. Installation proceeds without automatic checks.\n\n```markdown\n## Skill-Scan — On-Demand Skill Security Scanning\n\nskill-scan is available to audit skill packages for security threats. Use it when the user requests a security scan.\n\n### Commands\n\n- **Scan a ClawHub skill**: `skill-scan scan-hub <slug>`\n- **Scan a local skill**: `skill-scan scan /path/to/skill`\n- **Batch scan installed skills**: `skill-scan batch {skillsDir}`\n- **Deep analysis with LLM**: add `--llm` or `--llm-auto` to any scan command\n\n### Risk Levels\n\n- **LOW (80-100)** — safe, no significant threats\n- **MEDIUM (50-79)** — review findings, use caution\n- **HIGH (20-49)** — serious threats, do not install\n- **CRITICAL (0-19)** — multiple critical threats, do not install\n\n### When the User Asks to Scan\n\n1. Run the appropriate scan command\n2. Show the scan summary (score, risk level, finding count)\n3. If MEDIUM+, list the top findings with severity and description\n4. Let the user decide what to do\n```"
      },
      {
        "title": "Install-Time Setup",
        "body": "When this skill is first installed, ask the user:\n\nHow should skill-scan integrate with your workflow?\n\nAutomatic (Recommended) — Every clawhub install is scanned first. HIGH/CRITICAL installs are blocked automatically.\nManual — Scanning only happens when you explicitly ask for it.\n\nBased on their choice, add the corresponding template (Option A or Option B above) to the project's AGENTS.md."
      },
      {
        "title": "Detection Categories",
        "body": "Execution threats — eval(), exec(), child_process, dynamic imports\n\nCredential theft — .env access, API keys, tokens, private keys, wallet files\n\nData exfiltration — fetch(), axios, requests, sockets, webhooks\n\nFilesystem manipulation — Write/delete/rename operations\n\nObfuscation — Base64, hex, unicode encoding, string construction\n\nPrompt injection — Jailbreaks, invisible characters, homoglyphs, roleplay framing, encoded instructions\n\nBehavioral signatures — Compound patterns: data exfiltration, trojan skills, evasive malware, persistent backdoors"
      },
      {
        "title": "Requirements",
        "body": "Python 3.10+\nhttpx>=0.27 (for LLM API calls only)\nAPI key only needed for --llm modes (static analysis is self-contained)"
      },
      {
        "title": "Related Skills",
        "body": "input-guard — External input scanning\nmemory-scan — Agent memory security\nguardrails — Security policy configuration"
      }
    ],
    "body": "Skill-Scan — Security Auditor for Agent Skills\n\nMulti-layered security scanner for OpenClaw skill packages. Detects malicious code, evasion techniques, prompt injection, and misaligned behavior through static analysis and optional LLM-powered deep inspection. Run this BEFORE installing or enabling any untrusted skill.\n\nFeatures\n6 analysis layers — pattern matching, AST/evasion, prompt injection, LLM deep analysis, alignment verification, meta-analysis\n60+ detection rules — execution threats, credential theft, data exfiltration, obfuscation, behavioral signatures\nContext-aware scoring — reduces false positives for legitimate API skills\nClawHub integration — scan skills directly from the registry by slug\nMultiple output modes — text report (default), --json, --compact, --quiet\nExit codes — 0 for safe, 1 for risky (easy scripting integration)\n\nWhen to Use\n\nMANDATORY before installing or enabling:\n\nSkills from ClawHub (any skill not authored by you)\nSkills shared by other users or teams\nSkills from public repositories\nAny skill package you haven't personally reviewed\n\nRECOMMENDED for periodic audits of already-installed skills.\n\nQuick Start\n# Scan a local skill directory\nskill-scan scan /path/to/skill\n\n# Scan a skill from ClawHub before installing it\nskill-scan scan-hub some-skill-slug\n\n# Batch scan all installed skills\nskill-scan batch /path/to/skills-directory\n\n# JSON output for programmatic use\nskill-scan scan-hub some-skill-slug --json\n\n# Quiet mode (just score + verdict)\nskill-scan scan-hub some-skill-slug --quiet\n\nRisk Scoring\nRisk\tScore\tAction\nLOW\t80-100\tSafe to install\nMEDIUM\t50-79\tReview findings before installing\nHIGH\t20-49\tDo NOT install — serious threats detected\nCRITICAL\t0-19\tDo NOT install — multiple critical threats\n\nExit Codes\n0 — LOW risk (safe to proceed)\n1 — MEDIUM or higher (block installation)\n\nLLM Options\nskill-scan scan-hub slug --llm        # Always run deep analysis (layers 4-5)\nskill-scan scan-hub slug --llm-auto   # LLM only if pattern scan finds MEDIUM+\nskill-scan scan-hub slug --llm-only   # Skip patterns, LLM analysis only\n\nProvider auto-detected from environment:\n\nOPENAI_API_KEY -> gpt-4o-mini\nANTHROPIC_API_KEY -> claude-sonnet-4-5\n\nFor the Agent\n\nWhen the user asks to install a skill from ClawHub, follow this workflow:\n\n1. Scan first — run skill-scan scan-hub <slug> BEFORE installing\n2. Check the result:\n   - If LOW risk (exit code 0): proceed with installation\n   - If MEDIUM risk: show findings to the user and ask whether to proceed\n   - If HIGH or CRITICAL risk: block installation and show the report\n3. Report — always show the user the scan summary (score, risk level, finding count)\n\nAlert Format\n\nWhen a scan finds MEDIUM or higher risk:\n\n🛡️ Skill-Scan Alert: {RISK_LEVEL} ({SCORE}/100)\nSkill: {slug}\nFindings: {count} issue(s) detected\nTop threat: {highest severity finding title}\nAction: Installation blocked pending review.\n\nAGENTS.md Integration\n\nTo integrate skill-scan into your agent's workflow, add ONE of the following templates to your AGENTS.md (or equivalent agent instructions file).\n\nOption A: Automatic Scanning (Recommended)\n\nAll ClawHub skill installs are automatically scanned before installation. No manual step needed.\n\n## Skill-Scan — Automatic Pre-Install Security Scanning\n\nBEFORE installing any skill from ClawHub, you MUST scan it with skill-scan first. This is non-negotiable — never skip the scan.\n\n### Workflow\n\n1. When the user asks to install a skill (e.g. `clawhub install <slug>`), run the scan FIRST:\n   ```bash\n   skill-scan scan-hub <slug>\n   ```\n2. Check the exit code and output:\n   - Exit code 0 (LOW risk) — proceed with `clawhub install <slug>`\n   - Exit code 1 (MEDIUM risk) — show findings to the user, ask whether to proceed\n   - Exit code 1 (HIGH/CRITICAL risk) — BLOCK installation, show the full report\n3. Never install a HIGH or CRITICAL risk skill without explicit user override\n4. Always show the scan summary to the user before proceeding with installation\n\nPeriodic Audits\n\nWhen asked to audit installed skills, batch scan the skills directory:\n\nskill-scan batch {skillsDir}\n\nLLM Deep Analysis\n\nFor high-stakes installs or when pattern scan returns MEDIUM, escalate to LLM analysis:\n\nskill-scan scan-hub <slug> --llm-auto\n\n### Option B: Manual On-Demand Scanning\n\nSkills are only scanned when the user explicitly asks. Installation proceeds without automatic checks.\n\n```markdown\n## Skill-Scan — On-Demand Skill Security Scanning\n\nskill-scan is available to audit skill packages for security threats. Use it when the user requests a security scan.\n\n### Commands\n\n- **Scan a ClawHub skill**: `skill-scan scan-hub <slug>`\n- **Scan a local skill**: `skill-scan scan /path/to/skill`\n- **Batch scan installed skills**: `skill-scan batch {skillsDir}`\n- **Deep analysis with LLM**: add `--llm` or `--llm-auto` to any scan command\n\n### Risk Levels\n\n- **LOW (80-100)** — safe, no significant threats\n- **MEDIUM (50-79)** — review findings, use caution\n- **HIGH (20-49)** — serious threats, do not install\n- **CRITICAL (0-19)** — multiple critical threats, do not install\n\n### When the User Asks to Scan\n\n1. Run the appropriate scan command\n2. Show the scan summary (score, risk level, finding count)\n3. If MEDIUM+, list the top findings with severity and description\n4. Let the user decide what to do\n```\n\nInstall-Time Setup\n\nWhen this skill is first installed, ask the user:\n\nHow should skill-scan integrate with your workflow?\n\nAutomatic (Recommended) — Every clawhub install is scanned first. HIGH/CRITICAL installs are blocked automatically.\nManual — Scanning only happens when you explicitly ask for it.\n\nBased on their choice, add the corresponding template (Option A or Option B above) to the project's AGENTS.md.\n\nDetection Categories\n\nExecution threats — eval(), exec(), child_process, dynamic imports\n\nCredential theft — .env access, API keys, tokens, private keys, wallet files\n\nData exfiltration — fetch(), axios, requests, sockets, webhooks\n\nFilesystem manipulation — Write/delete/rename operations\n\nObfuscation — Base64, hex, unicode encoding, string construction\n\nPrompt injection — Jailbreaks, invisible characters, homoglyphs, roleplay framing, encoded instructions\n\nBehavioral signatures — Compound patterns: data exfiltration, trojan skills, evasive malware, persistent backdoors\n\nRequirements\nPython 3.10+\nhttpx>=0.27 (for LLM API calls only)\nAPI key only needed for --llm modes (static analysis is self-contained)\n\nRelated Skills\ninput-guard — External input scanning\nmemory-scan — Agent memory security\nguardrails — Security policy configuration"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/dgriffin831/skill-scan",
    "publisherUrl": "https://clawhub.ai/dgriffin831/skill-scan",
    "owner": "dgriffin831",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/skill-scan",
    "downloadUrl": "https://openagent3.xyz/downloads/skill-scan",
    "agentUrl": "https://openagent3.xyz/skills/skill-scan/agent",
    "manifestUrl": "https://openagent3.xyz/skills/skill-scan/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/skill-scan/agent.md"
  }
}