Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Design, test, review, and maintain agent skills for OpenClaw systems using multi-agent iterative refinement. Orchestrates Designer, Reviewer, and Tester subagents.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

Install brief:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade brief:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Own the full lifecycle of agent skills in your OpenClaw agent kit. The entire multi-agent workflow depends on skill quality: a weak skill produces weak results across every run. **Core principle:** builders don't evaluate their own work. This skill enforces separation of concerns through a multi-agent architecture where design, review, and testing are performed by independent subagents.
Source: Anthropic, "Improving skill-creator" (2026-03-03).

Skills fall into two categories. This distinction drives design decisions, testing strategy, and lifecycle management.
**Capability uplift:** the model can't do it well alone; the skill injects techniques, patterns, or constraints that produce better output than prompting alone. Examples: document creation skills (PDF generation), complex formatting, specialized analysis pipelines.

Testing focus: monitor whether the base model has caught up. If the base model passes your evals without the skill loaded, the skill's techniques have been incorporated into model default behavior. The skill isn't broken; it's no longer necessary.

Lifecycle: these skills may "retire" as models improve. Build evals that can detect when retirement is appropriate.
**Encoded preference:** the model can already do each step; the skill sequences operations according to your team's specific process. Examples: NDA review against set criteria, weekly report generation from specific data sources, brand compliance checks.

Testing focus: verify the skill faithfully reproduces your actual workflow, not the model's free improvisation. Fidelity to process is the metric.

Lifecycle: these skills are durable; they encode organizational knowledge that doesn't change with model capability. They need maintenance when processes change, not when models change.
When the Designer begins work, classify the skill:

| Classification | Design priority | Test priority | Retirement risk |
|---|---|---|---|
| Capability uplift | Technique precision | Base model comparison | High: monitor model progress |
| Encoded preference | Process fidelity | Workflow reproduction | Low: tied to org process |
This skill requires the following to be installed and available:

| Dependency | Type | Purpose | Install from |
|---|---|---|---|
| deepwiki | Skill | Query OpenClaw source for current API behavior | liaosvcaf/openclaw-skill-deepwiki |
| Vector memory DB | OpenClaw feature | Semantic search across session history, notes, and memory files | Enable in openclaw.json (`memory.enabled: true`) |

Before starting any skill design or update session, verify both are available:

```bash
# Check deepwiki
ls ~/.openclaw/skills/deepwiki/deepwiki.sh || ls ~/.openclaw/workspace-*/skills/deepwiki/deepwiki.sh

# Check vector memory (should return results, not empty)
# Use the memory_search tool with a known topic from recent sessions
```

If deepwiki is missing, install from liaosvcaf/openclaw-skill-deepwiki. If vector memory returns no results on known topics, check that `memory.enabled` is true in openclaw.json and that indexing has run.
**DeepWiki:** OpenClaw APIs are version-specific. Without DeepWiki, skills are written against memory of past behavior, which drifts as OpenClaw updates. DeepWiki grounds skill content in actual source code. A skill engineer without DeepWiki is working blind.

**Vector memory DB:** session history, Obsidian notes, and past decisions are indexed here. Without it, the agent falls back to manual file search, which is slower, less accurate, and misses cross-document connections. Critical context from past sessions (installation guides, design decisions, pitfalls) lives in this index.
Before searching files manually, always query the vector memory database first. It indexes session history, Obsidian notes, and memory files, and finds cross-document connections that manual search misses.

When to query vector memory:
- User asks "do you remember...", "find the notes about...", "we did X before..."
- Looking for past installation guides, design decisions, or troubleshooting records
- Any question about prior work, configurations, or lessons learned

How to query correctly: `memory_search("your query here", maxResults=5)`

Critical rule: try multiple queries before giving up. If the first query returns empty, do NOT fall back to manual file search immediately. Try at least 3 different phrasings (a retry sketch appears after the lesson below):

| First query fails | Try instead |
|---|---|
| "Docker OpenClaw installation" | "dockerized openclaw Titan" |
| "dockerized openclaw Titan" | "openclaw isolation install guide" |
| Still empty | Then fall back to manual file search |

Lesson learned (2026-03-03): when asked to find Docker/OpenClaw installation notes, memory_search returned empty on the first query and the agent immediately switched to manual SQLite/file search. The correct approach was to try different query phrasings: the second attempt ("dockerized OpenClaw installation Titan setup") returned 5 relevant results directly from indexed Obsidian notes. Manual file search is a last resort, not a first response.
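A minimal sketch of that retry discipline. The `memory_search` stub stands in for the real tool call; the wrapper and phrasing list are illustrative, not part of the OpenClaw API:

```python
def memory_search(query, maxResults=5):
    # Stand-in for the real memory_search tool; returns [] to simulate a miss.
    return []

def search_with_retries(phrasings, max_results=5):
    """Try several query phrasings before falling back to manual file search."""
    for query in phrasings:
        hits = memory_search(query, maxResults=max_results)
        if hits:  # first non-empty result wins
            return hits
    return []  # only after all phrasings fail does manual file search begin

results = search_with_retries([
    "Docker OpenClaw installation",
    "dockerized openclaw Titan",
    "openclaw isolation install guide",
])
```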
OpenClaw APIs, skill loading behavior, subagent mechanics, and frontmatter fields are version-specific. Information in this skill, or any skill referencing OpenClaw internals, may be outdated.

ALWAYS query DeepWiki when:
- Designing a skill that uses sessions_spawn, tool calls, or OpenClaw-specific APIs
- Referencing skill frontmatter fields or loading precedence
- Updating an existing skill that has version-tagged sections
- The installed OpenClaw version differs from any version tag in the skill
- You are unsure whether an API, field, or behavior still exists

How to check:

```bash
# Check current OpenClaw version
openclaw --version

# Query DeepWiki for current behavior
~/.openclaw/skills/deepwiki/deepwiki.sh ask openclaw/openclaw "YOUR QUESTION"
```

Do NOT rely on memory or this skill's documented behavior without verifying when the topic is OpenClaw internals. DeepWiki is grounded in the actual source code; this skill's documentation is not.

Verification checklist before shipping any skill that references OpenClaw internals:
- Checked `openclaw --version` against version tags in the skill
- Queried DeepWiki to confirm API/field behavior is current
- Updated version tags if behavior has changed
- Skill design: SKILL.md, skill.yml, README.md, tests, scripts, references
- Skill review: quality evaluation, rubric scoring, gap analysis
- Skill testing: self-play validation, trigger testing, functional testing
- Skill maintenance: iteration based on feedback, refactoring
- Agent kit audit: inventory, consistency, quality scoring across all skills
- Release pipeline: publishing, versioning, and changelogs belong to release processes
- Repository management: git submodules, repo creation, and branch strategy belong to your VCS workflow
- Deployment: installing skills to agents, configuration management
- Tracking: progress tracking, task management, project boards
- Infrastructure: MCP servers, API keys, environment setup
This skill produces validated skill artifacts (SKILL.md, skill.yml, README.md, tests, scripts). Once artifacts pass quality gates, responsibility transfers to whatever system handles publishing and deployment.
A skill development cycle is considered successful when:
- Quality gates passed: Reviewer scores ≥28/33 (Deploy threshold)
- No blocking issues: Tester reports no issues marked as "blocking"
- All artifacts generated: SKILL.md, skill.yml, README.md, tests/, scripts/ (if needed), references/ (if needed)
- OPSEC clean: no hardcoded secrets, paths, org names, or private URLs
- Scripts validated: all deterministic validation scripts execute successfully on target platform(s)
- Trigger accuracy: Tester reports ≥90% trigger accuracy (true positives + true negatives)

If any criterion fails, the skill returns to the Designer for revision. A gate-check sketch follows.
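A minimal sketch of the ship gate as a pure function, assuming the Reviewer and Tester reports have been parsed into the fields shown (the record layout is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class CycleResult:
    reviewer_score: int                 # out of 33
    blocking_issues: list = field(default_factory=list)
    artifacts_present: bool = True      # SKILL.md, skill.yml, README.md, tests/
    opsec_clean: bool = True            # no secrets, private paths, org names
    scripts_pass: bool = True           # deterministic validation scripts
    trigger_accuracy: float = 0.0       # true positives + true negatives

def ready_to_ship(r: CycleResult) -> bool:
    """All six criteria must hold; otherwise the skill returns to the Designer."""
    return (
        r.reviewer_score >= 28
        and not r.blocking_issues
        and r.artifacts_present
        and r.opsec_clean
        and r.scripts_pass
        and r.trigger_accuracy >= 0.90
    )
```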
The skill-engineer uses a three-role iterative architecture. The orchestrator spawns subagents for each role and never does creative or evaluation work directly.
Two architecture modes are available. Choose based on complexity:

**Mode A: Director-Controlled (simple/short skill work)**

Use when: ≤2 phases, <10 minutes total, user interaction needed between phases (e.g., quick fixes, single-skill reviews).

```
Director/Orchestrator (main agent, depth 0)
├── Spawn ──→ Designer (depth 1)
├── Spawn ──→ Reviewer (depth 1)
└── Spawn ──→ Tester (depth 1)
```

Risk: announce-to-action gap. If the user sends a message while waiting for a subagent, the main agent may handle that instead of chaining the next phase. Mitigate with a cron safety net (see below).

**Mode B: Orchestrator Subagent Pattern (complex/long skill work)**

Use when: 3+ phases, >10 minutes, the pipeline must not stall, or parallel workers are needed.

```
Director (user-facing, depth 0)
└── Orchestrator (pipeline owner, depth 1)
    ├── Spawn ──→ Designer (depth 2)
    ├── Spawn ──→ Reviewer (depth 2)
    └── Spawn ──→ Tester (depth 2)
```

The Director spawns a single Orchestrator subagent with the full task description. The Orchestrator owns the entire Design→Review→Test loop without yielding control between phases. User messages go to the Director; the pipeline runs uninterrupted.

Required config for Mode B:

```json
{
  "agents": {
    "defaults": {
      "subagents": { "maxSpawnDepth": 2 }
    }
  }
}
```

Why Mode B is superior for complex work:
- No announce-to-action gap (the Orchestrator chains phases immediately within the same session)
- Immune to user interruption between phases
- Persistent pipeline state without re-deriving from files each turn

Reference: orchestrator-subagent-pattern-2026-02-28.md (Obsidian notes), documented after a real 70-minute pipeline stall incident.
When using Mode A, set a cron safety net after each spawn to catch announce-to-action failures: "Check if [designer/reviewer/tester] subagent has completed. If so and next phase not started, resume pipeline." (fires 15 min after spawn)
```
Designer → Reviewer ──pass──→ Tester ──pass──→ Ship
              │                  │
             fail               fail
              │                  │
              ▼                  ▼
      Designer revises    Designer revises
              │                  │
              ▼                  ▼
          Reviewer        Reviewer + Tester

      (max 3 iterations, then fail)
```

Exit conditions:
- Ship: Reviewer scores ≥28/33 (85%+) AND Tester reports no blocking issues
- Revise: Reviewer or Tester found fixable issues (iterate)
- Fail: 3 iterations exhausted and still below the quality bar

A control-flow sketch of this loop appears below.
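A minimal control-flow sketch of the loop, assuming `design`, `review`, and `test` wrap the corresponding subagent spawns and return parsed results (all names here are illustrative):

```python
MAX_ITERATIONS = 3

def run_pipeline(requirements, design, review, test):
    """Design → Review → Test with at most three iterations."""
    feedback = None
    for iteration in range(1, MAX_ITERATIONS + 1):
        artifacts = design(requirements, feedback)   # Designer subagent
        report = review(artifacts)                   # Reviewer subagent
        if report["score"] < 28 or report["blocking"]:
            feedback = report                        # route back to Designer
            continue
        results = test(artifacts)                    # Tester subagent
        if results["blocking"]:
            feedback = results
            continue
        return {"verdict": "ship", "iteration": iteration, "artifacts": artifacts}
    # 3 iterations exhausted: report failure; never ship below the quality bar
    return {"verdict": "fail", "iteration": MAX_ITERATIONS, "last_feedback": feedback}
```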
After 3 failed iterations, the orchestrator must:
1. Stop iteration: do not continue trying
2. Report failure to the user with:
   - Summary: "Skill development failed after 3 iterations"
   - All 3 iteration reports (Reviewer + Tester feedback)
   - Final quality score
   - List of unresolved blocking issues
3. Present options to the user:
   - Provide more context or clarify requirements (restart with better inputs)
   - Simplify scope (reduce skill complexity and retry)
   - Abandon this skill (requirements may be infeasible)
4. Do NOT silently fail: always report to the user and await a decision

Never: continue past 3 iterations or ship a skill that hasn't passed quality gates.
Version note: verified against OpenClaw v2026.2.26. API may change.

In OpenClaw, subagents are spawned using the sessions_spawn tool (not a CLI command). Subagents run in isolated sessions, announce results back to the requester's channel when complete, and are auto-archived after 60 minutes by default.

Key constraints on subagents:
- Default max spawn depth is 1 (subagents cannot spawn further subagents unless `maxSpawnDepth: 2` is configured)
- Default max 5 active children per agent at once
- Subagents do NOT receive SOUL.md, IDENTITY.md, or USER.md: only AGENTS.md and TOOLS.md
- Use runTimeoutSeconds to prevent hanging (900s for Designer, 600s for Reviewer/Tester)
- Results are announced back automatically; reply ANNOUNCE_SKIP to suppress

A minimal spawn helper is sketched below.
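A small sketch applying the timeout constraints above, writing the sessions_spawn tool as a bare call in the same style as the orchestration examples later in this document (the helper and label scheme are illustrative):

```python
# Timeouts per role, from the constraints above (900s Designer, 600s others).
ROLE_TIMEOUTS = {"designer": 900, "reviewer": 600, "tester": 600}

def spawn_role(role: str, task: str):
    # sessions_spawn is the OpenClaw tool, invoked here as if it were a
    # function, matching this document's other examples.
    return sessions_spawn(
        task=task,
        label=f"skill-v1-{role}",
        runTimeoutSeconds=ROLE_TIMEOUTS[role],
    )
```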
This is the most important architectural decision. Understand it before proceeding.
The natural instinct is to have the main agent (you) directly manage the Design→Review→Test loop:

```
Main agent
└── spawns Designer → waits for announce → spawns Reviewer → waits → spawns Tester
```

This breaks in three ways:
1. Announce-to-action gap: when a subagent finishes, OpenClaw sends a completion announce that triggers a new LLM turn. The LLM may report results to the user and stop, treating the announce as informational rather than a pipeline trigger. There is no mechanism that forces the next action.
2. Context loss: each new turn is a fresh LLM call. Between subagent completion and the next turn, there is no persistent state machine tracking "we're in iteration 2, reviewer passed, now run Tester." The agent must re-derive this from files every time, which is fragile over 3+ iterations.
3. User message interruption: if the user sends a message while the pipeline is between phases, the main agent handles that message instead of continuing. The pipeline stalls silently until the user notices.

Real incident: a book-writer pipeline stalled for 70 minutes because a research subagent completed and announced back, but the Director reported results to the user and stopped, never spawning the writing phase. (2026-02-28)
Add an intermediate Orchestrator subagent that owns the pipeline. The main agent becomes the Director: it talks to the user. The Orchestrator does the pipeline work. They don't share context.

```
Director (main agent, depth 0) ←→ User
│
└── Orchestrator (subagent, depth 1) ← owns Design→Review→Test loop
    ├── Designer (depth 2)
    ├── Reviewer (depth 2)
    └── Tester (depth 2)
```

Why this works:
- The Orchestrator runs as a single continuous session. It processes each subagent's completion announce immediately: no turn boundary between phases, no gap.
- User messages go to the Director (depth 0), not the Orchestrator. The pipeline cannot be interrupted by user activity.
- The Orchestrator maintains full pipeline state throughout its run without re-deriving it from files.

Required config (add to openclaw.json before using this pattern):

```json
{
  "agents": {
    "defaults": {
      "subagents": { "maxSpawnDepth": 2 }
    }
  }
}
```
| Situation | Use | Why |
|---|---|---|
| Quick fix, single skill review, <10 min | Director-only (depth 1 subagents) | Simpler, fewer spawns |
| Full design cycle (Design+Review+Test) | Director + Orchestrator (depth 2) | Pipeline cannot afford to stall |
| Any pipeline with 3+ sequential phases | Director + Orchestrator (depth 2) | Announce-to-action gap becomes critical |
| maxSpawnDepth not set to 2 | Director-only with cron safety net | No choice; see fallback below |
If `maxSpawnDepth: 2` is not configured, use Director-only mode but add a cron safety net after each subagent spawn. After spawning Designer, register a cron job: "Check if Designer has completed (look for output at /path/to/skill/SKILL.md). If completed and Reviewer not yet started, spawn Reviewer now." (fires 15 minutes after spawn). This mitigates but does not eliminate the announce-to-action gap.
The Director (main agent) talks to the user and kicks off the pipeline. It does NOT do design, review, or testing work.

1. Gather requirements from the user (problem, audience, inputs/outputs, interactions)
2. Query DeepWiki: if the skill touches any OpenClaw internals, query DeepWiki FIRST:
   `~/.openclaw/skills/deepwiki/deepwiki.sh ask openclaw/openclaw "RELEVANT QUESTION"`
3. Choose mode: Director-only (simple) or Director+Orchestrator (full cycle)
4. For Director+Orchestrator mode: spawn a single Orchestrator subagent with a complete task description including requirements, DeepWiki findings, artifact output path, quality rubric location, and max iterations
5. For Director-only mode: execute the Orchestrator Responsibilities directly (see below)
6. Relay the final result to the user when the pipeline completes
The Orchestrator (depth-1 subagent in Mode B, or main agent in fallback mode) owns the Design→Review→Test loop. It does NOT write skill content or evaluate quality; it only coordinates.

1. Query DeepWiki for any OpenClaw-specific topics in the requirements (if the Director hasn't already)
2. Spawn Designer with requirements, DeepWiki findings, and any prior feedback:

   ```python
   sessions_spawn(
       task="Act as Designer. Requirements: [...]. Write artifacts to /path/to/skill/",
       label="skill-v1-designer",
       runTimeoutSeconds=900
   )
   ```

3. Collect Designer output: verify all required files exist at the output path
4. Spawn Reviewer with artifacts and the quality rubric:

   ```python
   sessions_spawn(
       task="Act as Reviewer. Evaluate skill at /path/to/skill/ using rubric: [...]. Score all 33 checks.",
       label="skill-v1-reviewer",
       runTimeoutSeconds=600
   )
   ```

5. Collect Reviewer feedback (scores + structured issues)
6. If the score is <28/33 or there are blocking issues: feed the feedback back to the Designer, go to step 2, and increment the iteration count
7. If the review passes, spawn Tester:

   ```python
   sessions_spawn(
       task="Act as Tester. Run self-play on skill at /path/to/skill/. Test triggers, functional steps, edge cases.",
       label="skill-v1-tester",
       runTimeoutSeconds=600
   )
   ```

8. Collect Tester results (pass/fail + report)
9. If there are blocking issues: feed the test results back to the Designer and go to step 2
10. If all pass: add the quality scorecard to README.md and announce completion to the Director
11. Track the iteration count: after 3 failed iterations, report failure with all iteration logs
Every shipped skill must include a quality scorecard in its README.md. This is the Reviewer's final scores, added by the Orchestrator before delivery (example; category maxima follow the 33-check rubric):

```markdown
## Quality Scorecard

| Category | Score | Details |
|----------|-------|---------|
| Completeness (SQ-A) | 8/8 | All checks pass |
| Clarity (SQ-B) | 4/5 | Minor ambiguity in edge case handling |
| Balance (SQ-C) | 5/5 | AI/script split appropriate |
| Integration (SQ-D) | 5/5 | Compatible with standard agent kit |
| Scope (SCOPE) | 3/3 | Clean boundaries, no leaks |
| OPSEC | 2/2 | No violations |
| References (REF) | 3/3 | All sources cited |
| Architecture (ARCH) | 2/2 | Separation of concerns maintained |
| **Total** | **32/33** | |

*Scored by skill-engineer Reviewer (iteration 2)*
```

This scorecard serves as a quality certificate. Users can assess skill quality before installing.
The orchestrator manages git commits throughout the workflow.

When to commit:
- After Designer produces initial artifacts (iteration 1): `git add . && git commit -m "feat: initial design for <skill-name>"`
- After Designer revisions (iteration 2+): `git add . && git commit -m "fix: address review issues (iteration N)"`
- After Tester passes and before ship: `git add README.md && git commit -m "docs: add quality scorecard for <skill-name>"`

When to push:
- After final ship (all gates passed): `git push origin main`
- Do NOT push intermediate iterations; push only ship-ready artifacts

Branch strategy:
- Work in the main branch for routine skill development
- Use feature branches for experimental or breaking changes
The orchestrator must handle technical failures gracefully:

| Failure Type | Detection | Response |
|---|---|---|
| Git push fails | Exit code ≠ 0 | Retry once. If it fails again, report to user: "Cannot push to remote. Check network/permissions." |
| OPSEC scan script missing | File not found | Skip the automated OPSEC check, but flag in review: "Manual OPSEC review required; script not found." |
| File write errors | Permission denied | Report: "Cannot write to [path]. Check file permissions." Fail the workflow. |
| Subagent crashes | Timeout or error | Log the error, retry once. If it fails again, report: "Subagent failed. Manual intervention required." |
| Review score = 0 | All checks fail | Report: "Skill failed all quality checks. Requirements may be unclear or skill design is fundamentally flawed. Recommend starting over." |

Retry logic (a wrapper sketch follows this section):
- Git operations: 1 retry after a 5s delay
- File operations: 1 retry after a 2s delay
- Subagent spawns: 1 retry with fresh context

Fail-fast rules:
- If the iteration count exceeds 3, fail immediately (no further retries)
- If OPSEC violations are found, fail immediately (no iteration)
- If required files cannot be written, fail immediately
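A minimal retry wrapper matching the single-retry policy above. Delays and exception handling are illustrative; a real orchestrator would inspect tool results rather than catch Python exceptions:

```python
import time

def with_one_retry(operation, delay_s):
    """Run `operation` once; on failure, wait and retry exactly once."""
    try:
        return operation()
    except Exception as first_error:
        time.sleep(delay_s)  # 5s for git operations, 2s for file operations
        try:
            return operation()
        except Exception:
            # Second failure: surface the error to the user, per the table above.
            raise RuntimeError(f"Operation failed after retry: {first_error}")

# Usage: a git push with the 5-second retry policy
# with_one_retry(lambda: run_git_push(), delay_s=5)   # run_git_push is hypothetical
```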
Orchestrator workload: coordinating Designer/Reviewer/Tester across 1-3 iterations can be complex, especially for large skills (1000+ lines). The orchestrator manages:
- Requirements gathering
- Subagent coordination (3-9 spawns in a typical workflow)
- Feedback routing between roles
- Iteration tracking
- Final scorecard assembly
- Git operations

Token considerations: a full 3-iteration cycle can consume 50k-150k tokens depending on skill complexity. For extremely complex skills, consider:
- Breaking the skill into sub-skills (each with simpler scope)
- Using separate agent sessions (Option 2 spawning) to isolate token contexts
- Simplifying requirements before starting iteration

If the orchestrator feels overwhelmed, that is a signal the skill being designed may be too complex. Revisit the scope definition and consider decomposition.
Each subagent receives only what it needs:

| Role | Receives | Does NOT Receive |
|---|---|---|
| Designer | Requirements, prior feedback (if any), design principles | Reviewer rubric internals |
| Reviewer | Skill artifacts, quality rubric, scope boundaries | Requirements discussion |
| Tester | Skill artifacts, test protocol | Review scores |
Purpose: Generate and revise skill content. For complete Designer instructions, see: references/designer-guide.md
Inputs: requirements, design principles, feedback (on iterations 2+)
Outputs: SKILL.md, skill.yml, README.md, tests/, scripts/, references/

Key constraints:
- Apply progressive disclosure (frontmatter → body → linked files)
- Apply scoping rules (explicit boundaries, no scope creep)
- Apply tool selection guardrails (validate before execution)
- Write the README for strangers only (no internal org details)
- Follow the AI vs. Script decision framework

Design principles:
- Progressive disclosure (3-level system)
- Composability (works alongside other skills)
- Portability (the same skill works across Claude.ai, Claude Code, API)
Purpose: independent quality evaluation. The Reviewer has never seen the requirements discussion; it evaluates artifacts on their own merits. For the complete Reviewer rubric and scoring guide, see: references/reviewer-rubric.md
Inputs: skill artifacts, quality rubric, scope boundaries
Outputs: review report with scores, verdict (PASS/REVISE/FAIL), issues, strengths

Quality rubric (33 checks total):
- SQ-A: Completeness (8 checks)
- SQ-B: Clarity (5 checks)
- SQ-C: Balance (5 checks)
- SQ-D: Integration (5 checks)
- SCOPE: Boundaries (3 checks)
- OPSEC: Security (2 checks)
- REF: References (3 checks)
- ARCH: Architecture (2 checks)

Scoring thresholds:
- 28-33 pass: Deploy (PASS verdict)
- 20-27 pass: Revise (fixable issues)
- 10-19 pass: Redesign (major rework)
- 0-9 pass: Reject (fundamentally flawed)

Pre-review: run the deterministic validation scripts before manual evaluation.
Purpose: Empirical validation via self-play. The Tester loads the skill and attempts realistic tasks. For complete Tester protocol, see: references/tester-protocol.md
Inputs: skill artifacts, test protocol
Outputs: test report with trigger accuracy, functional test results, edge cases, blocking/non-blocking issues, verdict (PASS/FAIL)

Test protocol:
1. Trigger tests: verify the skill loads correctly (≥90% accuracy threshold)
2. Functional tests: execute 2-3 realistic tasks, note confusion points
3. Edge case tests: missing inputs, ambiguous requirements, boundary cases

Issue classification:
- Blocking: prevents the skill from functioning (must fix before ship)
- Non-blocking: impacts quality but doesn't break core functionality

Pass criteria: no blocking issues + ≥90% trigger accuracy.
The agent that DESIGNS a skill must NOT be the same agent that AUDITS it in the same session. This is a hard architectural rule, not a guideline.

When the same agent designs and audits in one session, it creates structural circularity: the designer unconsciously frames evaluation in terms of their own intentions, missing gaps that a fresh reader would catch.

Enforcement:
- All audit work (Reviewer role, Tester role) MUST be performed by a fresh subagent spawned after design is complete.
- Use `openclaw agent --session-id <unique-id>` (Option 2 spawning) when auditing a skill the current session has designed.
- The orchestrator may never evaluate its own spawned Designer's output directly; it must route all evaluation through an independent Reviewer subagent.
- In role-based execution (Option 1), the agent must explicitly transition: complete all Designer work, then start the Reviewer role with no reference to design-time reasoning.

Why this matters: a designer who audits their own work will score it against their intentions, not against what a new agent will actually experience. The rubric (SQ-C3) explicitly prohibits a sub-agent from being both producer AND evaluator of the same output. This rule is the implementation of that check at the session level.

Example (correct):

```python
# Session A: Designer work
sessions_spawn(
    task="Design a skill for X. Write artifacts to /path/to/skill/",
    label="skill-v1-designer",
    runTimeoutSeconds=900
)

# Session B: Audit (fresh session, no context from Session A)
sessions_spawn(
    task="Audit the skill at /path/to/skill/ using the reviewer rubric.",
    label="skill-v1-auditor",
    runTimeoutSeconds=600
)
```

Example (incorrect):

```
[Session A]
1. Design the skill...
2. Now let me review the skill I just designed...  ← VIOLATION
```
Source: Anthropic "Improving skill-creator" (2026-03-03). Adapted for OpenClaw skill-engineer. Evals turn "seems to work" into "verified to work." Every shipped skill should have persistent evals that can be re-run after model updates, skill edits, or environment changes.
An eval consists of:
- Test prompt: a realistic user input that should trigger the skill
- Expected behavior description: what "good" looks like (natural language, not exact match)
- Pass/fail criteria: specific, observable conditions

Store evals in the skill's tests/ directory:

```
tests/
├── evals.json      # Eval definitions
├── benchmarks/     # Benchmark run results (timestamped)
└── comparisons/    # A/B comparison results
```
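A sketch of what one entry in evals.json might look like; the field names are illustrative, since the source defines only the three components above:

```python
# One eval record: prompt + expected behavior + observable pass/fail criteria.
eval_record = {
    "id": "pdf-basic-generation",                      # hypothetical eval id
    "prompt": "Create a one-page PDF summarizing this meeting transcript.",
    "expected_behavior": "Loads the skill and produces a valid single-page "
                         "PDF with a title and the key decisions listed.",
    "pass_criteria": [
        "output file exists and opens as a PDF",
        "page count == 1",
        "all decisions from the transcript appear in the body",
    ],
}
```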
| Type | Purpose | When to run |
|---|---|---|
| Regression eval | Catch quality drops after changes | After every skill edit or model update |
| Capability eval | Detect if the base model has outgrown the skill | Monthly, or after major model releases |
| Trigger eval | Verify the skill fires correctly | After description changes |
Run standardized assessments tracking:
- Eval pass rate: what percentage of evals pass
- Elapsed time: how long each eval takes
- Token usage: cost per eval run

Store benchmark results with timestamps for trend tracking:

```json
{
  "timestamp": "2026-03-04T12:00:00Z",
  "skill": "my-skill",
  "model": "claude-sonnet-4-5",
  "pass_rate": 0.85,
  "avg_time_s": 12.3,
  "avg_tokens": 4200,
  "evals_run": 10
}
```
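A minimal sketch of writing one timestamped record to tests/benchmarks/ in the format above; the path and the shape of the per-eval results are assumptions:

```python
import json
import time
from pathlib import Path

def record_benchmark(skill, model, results, bench_dir="tests/benchmarks"):
    """Append one timestamped benchmark record.

    `results` is a list of (passed, seconds, tokens) tuples from one eval run.
    """
    if not results:
        raise ValueError("no eval results to record")
    n = len(results)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "skill": skill,
        "model": model,
        "pass_rate": sum(1 for ok, _, _ in results if ok) / n,
        "avg_time_s": sum(s for _, s, _ in results) / n,
        "avg_tokens": sum(t for _, _, t in results) / n,
        "evals_run": n,
    }
    out = Path(bench_dir) / (record["timestamp"].replace(":", "-") + ".json")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(record, indent=2))
    return record
```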
Compare two skill versions, or skill vs. no skill, using a blind judge:
1. Run the same test prompt through Version A and Version B
2. A Comparator subagent (fresh context, no knowledge of which is which) evaluates both outputs
3. The Comparator scores on relevant dimensions without knowing the source

When to use:
- Before shipping a major revision (old vs. new)
- To justify a skill's existence (with-skill vs. without-skill)
- To compare two alternative approaches during design

Spawning a Comparator:

```python
sessions_spawn(
    task="You are a blind comparator. You will receive Output A and Output B for the same task. Score each on [dimensions]. You do NOT know which version produced which output. Be objective.",
    label="skill-comparator",
    runTimeoutSeconds=300
)
```
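To keep the judge blind, the orchestrator should also randomize which output gets labeled A before spawning the Comparator. A sketch (the helper name and prompt assembly are illustrative):

```python
import random

def blind_pair(output_old, output_new):
    """Randomize label assignment so the Comparator cannot infer version order."""
    pair = [("old", output_old), ("new", output_new)]
    random.shuffle(pair)
    key = {"A": pair[0][0], "B": pair[1][0]}  # kept by the orchestrator, never sent
    prompt_body = f"Output A:\n{pair[0][1]}\n\nOutput B:\n{pair[1][1]}"
    return key, prompt_body

# After the Comparator returns scores for A and B, map them back via `key`.
```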
Skill descriptions determine trigger accuracy. As skill count grows, description precision becomes critical:
- Too broad: false triggers (the skill loads when irrelevant)
- Too narrow: misses (the skill doesn't load when needed)

Tuning protocol (a scoring sketch follows):
1. Collect 10-20 sample prompts (a mix of should-trigger and should-not-trigger)
2. Run each prompt and check whether the skill triggers correctly
3. Analyze false positives and false negatives
4. Revise the description field to be more precise
5. Re-run trigger tests to verify improvement

Target: ≥90% trigger accuracy on sample prompts. Anthropic's internal testing improved 5 out of 6 public skills using this method.
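A minimal sketch of scoring trigger accuracy over labeled prompts, assuming a `did_trigger(prompt)` check is available from the test harness (that callable is an assumption):

```python
def trigger_accuracy(labeled_prompts, did_trigger):
    """labeled_prompts: list of (prompt, should_trigger) pairs.

    Accuracy counts both true positives and true negatives, per the target above.
    """
    correct = 0
    false_pos, false_neg = [], []
    for prompt, should in labeled_prompts:
        fired = did_trigger(prompt)
        if fired == should:
            correct += 1
        elif fired:
            false_pos.append(prompt)  # loaded when irrelevant: description too broad
        else:
            false_neg.append(prompt)  # missed when needed: description too narrow
    accuracy = correct / len(labeled_prompts)
    return accuracy, false_pos, false_neg  # revise the description if accuracy < 0.90
```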
Skills are not forever. Capability uplift skills may become unnecessary as models improve.

Retirement signal: the base model passes ≥80% of the skill's evals without the skill loaded.

Retirement process (a decision sketch follows):
1. Run capability evals with the skill disabled
2. If the pass rate is ≥80%, flag the skill as a "retirement candidate"
3. Run a comparator test (with-skill vs. without-skill) to confirm
4. If the comparator shows no significant quality difference, retire the skill
5. Archive (don't delete): the skill may become relevant again with different models

Track in audit reports:

```markdown
## Retirement Candidates

| Skill | Capability Eval (no skill) | Comparator Result | Recommendation |
|-------|---------------------------|-------------------|----------------|
| pdf-creator | 85% pass | No significant difference | Retire |
```
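A minimal sketch of the retirement decision, assuming the no-skill eval pass rate and the comparator verdict are already computed (the function and its inputs are illustrative):

```python
def retirement_recommendation(no_skill_pass_rate, comparator_significant):
    """Apply the two-step retirement signal to one capability-uplift skill."""
    if no_skill_pass_rate < 0.80:
        return "keep"                         # base model still needs the skill
    if comparator_significant:
        return "keep (candidate, re-check)"   # evals say retire; comparator disagrees
    return "retire (archive, don't delete)"
```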
Periodic full audit of the agent kit:
1. Inventory all skills: list every SKILL.md with its owner agent
2. Check for orphans: skills that no agent uses
3. Check for duplicates: overlapping functionality
4. Check for gaps: workflow steps that have no skill
5. Check balance: are some agents overloaded while others idle?
6. Check consistency: naming conventions, output formats
7. Run a quality score on each skill (SQ-A through SQ-D)
8. Produce an audit report with scores and recommendations
```markdown
# Agent Kit Audit Report

**Date:** [date]
**Skills audited:** [count]

## Skill Inventory

| # | Skill | Agent | Quality Score | Status |
|---|-------|-------|--------------|--------|
| 1 | [name] | [agent] | X/33 | Deploy/Revise/Redesign |

## Issues Found
1. ...

## Recommendations
1. ...

## Action Items

| # | Action | Priority | Owner |
|---|--------|----------|-------|
```
Maintain a map of how skills interact:

```
orchestrator-agent (coordinates workflow)
├── content-creator (writes content)
│   └── consumes: research outputs, review feedback
├── content-reviewer (reviews content)
│   └── produces: review reports
├── research-analyst (researches topics)
│   └── produces: research consumed by content-creator
├── validator (validates outputs)
└── skill-engineer (this skill: meta)
    └── consumes: all skills for audit
```

Adapt this to your specific agent architecture.
Version note: This section is based on OpenClaw v2026.2.26. Skill system behavior (frontmatter fields, loading precedence, subagent APIs) may change across versions. Verify against source or DeepWiki when upgrading.
A skill is a directory containing at minimum a SKILL.md file:

```
my-skill/
├── SKILL.md       # Required: frontmatter + instructions
├── skill.yml      # Optional: ClawHub publish metadata
├── README.md      # Optional: human-facing documentation
├── scripts/       # Optional: deterministic helper scripts
├── tests/         # Optional: test cases and fixtures
└── references/    # Optional: detailed linked documentation
```
Required fields:

```yaml
---
name: skill-name    # kebab-case, no spaces/capitals/underscores
description: |      # What it does + when to use it + trigger phrases
  [What it does]. Use when user [trigger phrases]. [Key capabilities].
---
```

Full supported fields:

```yaml
---
name: skill-name
description: ...
homepage: https://...              # URL for Skills UI
user-invocable: true               # Expose as slash command (default: true)
disable-model-invocation: false    # Exclude from model prompt (default: false)
command-dispatch: tool             # Bypass model, dispatch to tool directly
command-tool: tool-name            # Tool to invoke when command-dispatch is set
command-arg-mode: raw              # Argument forwarding mode (default: raw)
metadata: {"openclaw": {"always": true, "emoji": "🔧", "os": ["darwin","linux"], "requires": {"bins": ["curl","python3"]}, "primaryEnv": "MY_API_KEY"}}
---
```

metadata.openclaw load-time gates:

| Field | Purpose |
|---|---|
| always: true | Always include, skip all other gates |
| emoji | Emoji shown in macOS Skills UI |
| os | Limit to platforms: darwin, linux, win32 |
| requires.bins | All binaries must exist on PATH |
| requires.anyBins | At least one binary must exist |
| requires.env | Environment variables must exist |
| requires.config | openclaw.json paths must be truthy |
| primaryEnv | Links to skills.entries.<name>.apiKey in config |
Skills are loaded from these locations (highest → lowest priority):
1. `<workspace>/skills/`: agent-specific, highest precedence
2. `~/.openclaw/skills/`: shared across all agents on the machine
3. `skills.load.extraDirs` in openclaw.json: additional directories
4. Bundled skills: shipped with OpenClaw, lowest precedence
5. Plugin skills: listed in openclaw.plugin.json

A precedence-resolution sketch follows.
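A minimal sketch of first-match resolution over this ordering. The directory walk is illustrative, not OpenClaw's actual loader, and bundled/plugin locations are elided:

```python
from pathlib import Path

def resolve_skill(name, workspace, extra_dirs=()):
    """Return the first SKILL.md found, searching highest precedence first."""
    search_order = [
        Path(workspace) / "skills",             # 1. agent-specific
        Path.home() / ".openclaw" / "skills",   # 2. machine-shared
        *[Path(d) for d in extra_dirs],         # 3. skills.load.extraDirs
        # 4/5: bundled and plugin skill roots would be appended here
    ]
    for root in search_order:
        candidate = root / name / "SKILL.md"
        if candidate.exists():
            return candidate
    return None
```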
| Location | Use when |
|---|---|
| <workspace>/skills/ | Skill is specific to one agent's role; under active development |
| ~/.openclaw/skills/ | Skill should be available to all agents on this machine |
OpenClaw builds a system prompt with a compact XML list of available skills (name, description, location). The model reads this list and decides which skills to load. Skills are NOT auto-injected; the model must explicitly read the SKILL.md when needed. Trigger accuracy goal: ≥90% (the skill loads when relevant and does NOT load when irrelevant).
To inventory all skills on a machine:

```bash
find ~/.openclaw/ -name "SKILL.md" -not -path "*/node_modules/*" | sort
```
No persistent configuration required. The skill uses tools available in the agent's environment.

| Requirement | Description |
|---|---|
| deepwiki skill | Query OpenClaw source for current API behavior (liaosvcaf/openclaw-skill-deepwiki) |
| Vector memory | Semantic search across session history (`memory.enabled: true` in openclaw.json) |
| gh CLI | GitHub repo creation and visibility changes for the release pipeline |
| clawhub CLI | Publish skills to the ClawHub registry (`npm i -g clawhub`) |

See references/designer-guide.md for full environment setup.