Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Design and build custom Claude Code agents with effective descriptions, tool access patterns, and self-documenting prompts. Covers Task tool delegation, model selection, memory limits, and declarative instruction design. Use when: creating custom agents, designing agent descriptions for auto-delegation, troubleshooting agent memory issues, or building agent pipelines.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Build effective custom agents for Claude Code with proper delegation, tool access, and prompt design.
The description field determines whether Claude will automatically delegate tasks.
```yaml
---
name: agent-name
description: |
  [Role] specialist. MUST BE USED when [specific triggers].
  Use PROACTIVELY for [task category]. Keywords: [trigger words]
tools: Read, Write, Edit, Glob, Grep, Bash
model: sonnet
---
```
| Weak (won't auto-delegate) | Strong (auto-delegates) |
|---|---|
| "Analyzes screenshots for issues" | "Visual QA specialist. MUST BE USED when analyzing screenshots. Use PROACTIVELY for visual QA." |
| "Runs Playwright scripts" | "Playwright specialist. MUST BE USED when running Playwright scripts. Use PROACTIVELY for browser automation." |

Key phrases:
- "MUST BE USED when..."
- "Use PROACTIVELY for..."
- Include trigger keywords
- Explicit: invoke the Task tool with `subagent_type: "agent-name"` - always works
- Automatic: Claude matches the task to the agent description - requires strong phrasing

Session restart required after creating or modifying agents.
If an agent doesn't need Bash, don't give it Bash.

| Agent needs to... | Give these tools | Don't give |
|---|---|---|
| Create files only | Read, Write, Edit, Glob, Grep | Bash |
| Run scripts/CLIs | Read, Write, Edit, Glob, Grep, Bash | |
| Read/audit only | Read, Glob, Grep | Write, Edit, Bash |

Why? Models default to `cat > file << 'EOF'` heredocs instead of the Write tool. Each bash command requires approval, causing dozens of prompts per agent run.
Instead of restricting Bash, allowlist safe commands in `.claude/settings.json`:

```json
{
  "permissions": {
    "allow": [
      "Write",
      "Edit",
      "WebFetch(domain:*)",
      "Bash(cd *)",
      "Bash(cp *)",
      "Bash(mkdir *)",
      "Bash(ls *)",
      "Bash(cat *)",
      "Bash(head *)",
      "Bash(tail *)",
      "Bash(grep *)",
      "Bash(diff *)",
      "Bash(mv *)",
      "Bash(touch *)",
      "Bash(file *)"
    ]
  }
}
```
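Before relying on the allowlist, it helps to lint the file with a JSON parser. A minimal sketch, assuming `python3` is available; the sample file below is created inline purely for illustration (in practice `.claude/settings.json` already exists):

```shell
# Create a sample settings file for illustration only.
mkdir -p .claude
cat > .claude/settings.json << 'EOF'
{
  "permissions": {
    "allow": ["Write", "Edit", "Bash(ls *)", "Bash(cat *)"]
  }
}
EOF

# json.tool exits non-zero on invalid JSON, so it doubles as a lint
# step before starting a session.
python3 -m json.tool .claude/settings.json > /dev/null \
  && echo "settings.json: valid JSON"
```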
Don't downgrade quality to work around issues - fix root causes instead.

| Model | Use For |
|---|---|
| Opus | Creative work (page building, design, content) - quality matters |
| Sonnet | Most agents - content, code, research (default) |
| Haiku | Only script runners where quality doesn't matter |
Add to `~/.bashrc` or `~/.zshrc`:

```bash
export NODE_OPTIONS="--max-old-space-size=16384"
```

Increases the Node.js heap from 4 GB to 16 GB.
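To confirm the setting took effect, you can ask V8 for its effective heap limit from inside Node. A quick check (sketch; expect a value near 16 GB, since V8 adds some overhead on top of the old-space figure):

```shell
export NODE_OPTIONS="--max-old-space-size=16384"
# v8.getHeapStatistics().heap_size_limit reports the effective cap in bytes.
node -e 'const gb = require("v8").getHeapStatistics().heap_size_limit / 1024 ** 3;
console.log(gb.toFixed(1) + " GB heap limit");'
```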
| Agent Type | Max Parallel | Notes |
|---|---|---|
| Any agents | 2-3 | Context accumulates; batch, then pause |
| Heavy creative (Opus) | 1-2 | Uses more memory |
1. `source ~/.bashrc` (or restart the terminal)
2. `NODE_OPTIONS="--max-old-space-size=16384" claude`
3. Check what files exist and continue from there
Always prefer Task sub-agents over remote API calls.

| Aspect | Remote API Call | Task Sub-Agent |
|---|---|---|
| Tool access | None | Full (Read, Grep, Write, Bash) |
| File reading | Must pass all content in the prompt | Can read files iteratively |
| Cross-referencing | Single context window | Can reason across documents |
| Decision quality | Generic suggestions | Specific decisions with rationale |
| Output quality | ~100 lines typical | 600+ lines with specifics |

```javascript
// WRONG - remote API call
const response = await fetch('https://api.anthropic.com/v1/messages', {...})

// CORRECT - use the Task tool:
// invoke Task with subagent_type: "general-purpose"
```
Describe what to accomplish, not how to use tools.
| Include | Skip |
|---|---|
| Task goal and context | Explicit bash/tool commands |
| Input file paths | "Use X tool to..." |
| Output file paths and format | Step-by-step tool invocations |
| Success/failure criteria | Shell pipeline syntax |
| Blocking checks (prerequisites) | Micromanaged workflows |
| Quality checklists | |
"Agents that won't have your context must be able to reproduce the behaviour independently." Every improvement must be encoded into the agent's prompt, not left as implicit knowledge.
| Discovery | Where to Capture |
|---|---|
| Bug fix pattern | Agent's "Corrections" or "Common Issues" section |
| Quality requirement | Agent's "Quality Checklist" section |
| File path convention | Agent's "Output" section |
| Tool usage pattern | Agent's "Process" section |
| Blocking prerequisite | Agent's "Blocking Check" section |
Before completing any agent improvement:
1. Read the agent prompt as if you have no context
2. Ask: could a new session follow this and produce the same quality?
3. If no: add the missing instructions, patterns, or references
| Anti-Pattern | Why It Fails |
|---|---|
| "As we discussed earlier..." | No prior context exists |
| Relying on files read during dev | Agent may not read the same files |
| Assuming knowledge from errors | Agent won't see your debugging |
| "Just like the home page" | Agent hasn't built the home page |
Effective agent prompts include:

```markdown
## Your Role
[What the agent does]

## Blocking Check
[Prerequisites that must exist]

## Input
[What files to read]

## Process
[Step-by-step with encoded learnings]

## Output
[Exact file paths and formats]

## Quality Checklist
[Verification steps including learned gotchas]

## Common Issues
[Patterns discovered during development]
```
When inserting a new agent into a numbered pipeline (e.g., HTML-01 → HTML-05 → HTML-11):

| Must Update | What |
|---|---|
| New agent | "Workflow Position" diagram + "Next" field |
| Predecessor agent | Its "Next" field, to point to the new agent |

Common bug: the new agent is "orphaned" because the predecessor still points to the old next agent.

Verification:

```bash
grep -n "Next:.*→\|Then.*runs next" .claude/agents/*.md
```
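Beyond grepping, the link check can be scripted. A sketch, assuming each agent file records its successor on a `Next:` line; the three fixture files below are hypothetical stand-ins for real agent prompts:

```shell
mkdir -p .claude/agents
# Hypothetical pipeline fixtures; real agent files carry full prompts.
printf 'Next: html-05\n' > .claude/agents/html-01.md
printf 'Next: html-11\n' > .claude/agents/html-05.md
printf 'Next: none\n'    > .claude/agents/html-11.md

# Flag any "Next:" target that has no matching agent file.
for f in .claude/agents/*.md; do
  next=$(sed -n 's/^Next: //p' "$f")
  [ "$next" = "none" ] && continue
  [ -f ".claude/agents/$next.md" ] || echo "orphan link: $f -> $next"
done
echo "pipeline check complete"
```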
Best use case: tasks that are repetitive but require judgment.

Example: auditing 70 skills manually is tedious, but each audit needs intelligence (check docs, compare versions, decide what to fix). Perfect for parallel agents with clear instructions.

Not good for:
- Simple tasks (just do them)
- Highly creative tasks (need human direction)
- Tasks requiring cross-file coordination (agents work independently)
For each [item]:
1. Read [source file]
2. Verify with [external check - npm view, API call, etc.]
3. Check [authoritative source]
4. Score/evaluate
5. FIX issues found (critical instruction)

Key elements:
- "FIX issues found" - without this, agents only report; with it, they take action
- Exact file paths - prevents ambiguity
- Output format template - ensures consistent, parseable reports
- Batch size ~5 items - enough work to be efficient, not so much that failures cascade
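Cutting the worklist into ~5-item batches is the mechanical part. One hedged sketch using `split` (item and file names are hypothetical):

```shell
# Build a 12-item worklist (illustrative names), then cut it into
# 5-item batches - one file per parallel agent.
printf 'skill-%02d\n' 1 2 3 4 5 6 7 8 9 10 11 12 > items.txt
split -l 5 items.txt batch_

# batch_aa and batch_ab hold 5 items each; batch_ac holds the remainder.
wc -l batch_a*
```

Each `batch_*` file then becomes the item list for one agent prompt.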
1. ME: Launch 2-3 parallel agents with identical prompts but different item lists
2. AGENTS: Work in parallel (read → verify → check → edit → report)
3. AGENTS: Return structured reports (score, status, fixes applied, files modified)
4. ME: Review changes (git status, spot-check diffs)
5. ME: Commit in batches with a meaningful changelog
6. ME: Push and update progress tracking

Why agents don't commit: this allows human review, batching, and a clean commit history.
Good fit:
- Same steps repeated for many items
- Each item requires judgment (not just transformation)
- Items are independent (no cross-item dependencies)
- Clear success criteria (score, pass/fail, etc.)
- An authoritative source exists to verify against

Bad fit:
- Items depend on each other's results
- Requires creative/subjective decisions
- Single complex task (use a regular agent instead)
- Needs human input mid-process
```yaml
---
name: my-agent
description: |
  [Role] specialist. MUST BE USED when [triggers].
  Use PROACTIVELY for [task category]. Keywords: [trigger words]
tools: Read, Write, Edit, Glob, Grep, Bash
model: sonnet
---
```
- Remove Bash from `tools` if not needed
- Put critical instructions FIRST (right after the frontmatter)
- Use allowlists in `.claude/settings.json`
```bash
export NODE_OPTIONS="--max-old-space-size=16384"
source ~/.bashrc && claude
```