**Requirements**

- Target platform: OpenClaw
- Install method: Manual import (extract archive)
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

Fresh install:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
| Situation | Action |
|---|---|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (clawdbot workspace) |
| Tool gotchas | Promote to TOOLS.md (clawdbot workspace) |
| Behavioral patterns | Promote to SOUL.md (clawdbot workspace) |
Create the `.learnings/` directory in the project root if it doesn't exist:

```shell
mkdir -p .learnings
```

Copy templates from `assets/`, or create the files with headers.
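The setup above can be sketched as a short script (the header text is an assumption; prefer the templates shipped in `assets/` when available):

```shell
# Create the learnings directory and seed the three log files with
# minimal headers. Existing files are left untouched.
mkdir -p .learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  [ -f ".learnings/${f}.md" ] || printf '# %s\n\n' "$f" > ".learnings/${f}.md"
done
ls .learnings
```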
Format: `TYPE-YYYYMMDD-XXX`

- `TYPE`: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- `YYYYMMDD`: current date
- `XXX`: sequential number or random 3 chars (e.g., `001`, `A7B`)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
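A generator for the next sequential ID might look like this (a sketch: the `## [ID]` heading format matches the grep patterns used elsewhere in this doc, and sequential numbering is one of the two allowed schemes):

```shell
# Build the next TYPE-YYYYMMDD-XXX ID by counting today's entries.
type="LRN"
today=$(date +%Y%m%d)
count=$(grep -c "^## \[${type}-${today}-" .learnings/LEARNINGS.md 2>/dev/null)
count=${count:-0}   # missing file -> empty -> treat as zero
id=$(printf '%s-%s-%03d' "$type" "$today" $((count + 1)))
echo "$id"
```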
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions
| Target | What Belongs There |
|---|---|
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (clawdbot) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (clawdbot) |
1. Distill the learning into a concise rule or fact
2. Add it to the appropriate section in the target file (create the file if needed)
3. Update the original entry:
   - Change **Status**: pending → **Status**: promoted
   - Add **Promoted**: CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
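The status flip can be automated with a targeted substitution. A sketch using GNU sed (on BSD/macOS use `sed -i ''`), with an illustrative two-entry log whose IDs and titles are made up:

```shell
# Demo log with two pending entries (illustrative content).
mkdir -p .learnings
cat > .learnings/LEARNINGS.md <<'EOF'
## [LRN-20250115-001] Prefer pathlib over os.path
**Status**: pending

## [LRN-20250115-002] Cache npm installs in CI
**Status**: pending
EOF

# Flip only the promoted entry's status: the address range runs from the
# entry's heading through the blank line that ends its block.
entry="LRN-20250115-001"
sed -i "/^## \[${entry}\]/,/^$/ s/\*\*Status\*\*: pending/\*\*Status\*\*: promoted/" \
  .learnings/LEARNINGS.md
```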
If logging something similar to an existing entry:

1. Search first: `grep -r "keyword" .learnings/`
2. Link entries: add **See Also**: ERR-20250110-001 in Metadata
3. Bump priority if the issue keeps recurring
4. Consider a systemic fix. Recurring issues often indicate:
   - Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
   - Missing automation (→ add to AGENTS.md)
   - Architectural problem (→ create a tech debt ticket)
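The search-first step can be wrapped in a tiny guard (a sketch; the keyword is illustrative):

```shell
# Return success if any log file already mentions the keyword.
has_related_entry() {
  grep -rq "$1" .learnings/ 2>/dev/null
}

if has_related_entry "ECONNRESET"; then
  echo "Related entries exist: link with See Also instead of duplicating."
else
  echo "No related entries: log a new one."
fi
```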
Review .learnings/ at natural breakpoints:
- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development
```shell
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l

# List pending high-priority items
grep -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
Automatically log when you notice:

**Corrections** (→ learning with `correction` category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."

**Feature Requests** (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."

**Knowledge Gaps** (→ learning with `knowledge_gap` category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding

**Errors** (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
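The non-zero-exit case can be wired up with a small wrapper (a sketch; the fixed `-001` sequence number and the failing `flaky_deploy` stand-in are simplifications):

```shell
# Append a minimal error entry when a command exits non-zero.
log_error() {
  mkdir -p .learnings
  printf '## [ERR-%s-001] %s\n**Status**: pending\n\n' "$(date +%Y%m%d)" "$1" \
    >> .learnings/ERRORS.md
}

flaky_deploy() { return 1; }   # stand-in for a real command

if ! flaky_deploy; then
  log_error "flaky_deploy exited non-zero"
fi
```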
| Priority | When to Use |
|---|---|
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |
Use to filter learnings by codebase region:

| Area | Scope |
|---|---|
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |
- **Log immediately**: context is freshest right after the issue
- **Be specific**: future agents need to understand quickly
- **Include reproduction steps**: especially for errors
- **Link related files**: makes fixes easier
- **Suggest concrete fixes**: not just "investigate"
- **Use consistent categories**: enables filtering
- **Promote aggressively**: if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
- **Review regularly**: stale learnings lose value
Keep learnings local (per-developer) by ignoring the directory in `.gitignore`:

```
.learnings/
```

Track learnings in repo (team-wide): don't add anything to `.gitignore`, so learnings become shared knowledge.

Hybrid (track templates, ignore entries):

```
.learnings/*.md
!.learnings/.gitkeep
```
Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.
Create `.claude/settings.json` in your project:

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }]
  }
}
```

This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
To also catch command failures, extend the configuration with a `PostToolUse` hook:

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/error-detector.sh"
      }]
    }]
  }
}
```
| Script | Hook Type | Purpose |
|---|---|---|
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |

See references/hooks-setup.md for detailed configuration and troubleshooting.
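As a rough illustration of the mechanism (the real scripts ship with the skill; this stand-in only assumes that a `UserPromptSubmit` hook's stdout is injected into the agent's context, per the description above):

```shell
# Hypothetical stand-in for scripts/activator.sh: whatever a
# UserPromptSubmit hook prints to stdout is added to the model's context.
reminder="Reminder: evaluate whether this task produced a learning, error, or correction worth logging to .learnings/."
printf '%s\n' "$reminder"
```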
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
A learning qualifies for skill extraction when ANY of these apply:

| Criterion | Description |
|---|---|
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with a working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
1. **Identify candidate**: the learning meets the extraction criteria
2. **Run helper** (or create manually):

   ```shell
   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name
   ```

3. **Customize SKILL.md**: fill in the template with the learning content
4. **Update learning**: set status to `promoted_to_skill`, add `Skill-Path`
5. **Verify**: read the skill in a fresh session to ensure it's self-contained
If you prefer manual creation:

1. Create `skills/<skill-name>/SKILL.md`
2. Use the template from `assets/SKILL-TEMPLATE.md`
3. Follow the Agent Skills spec:
   - YAML frontmatter with `name` and `description`
   - Name must match the folder name
   - No README.md inside the skill folder
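Putting the spec together, a minimal SKILL.md skeleton might look like this (the name, description, and body are placeholders, not taken from assets/SKILL-TEMPLATE.md):

```markdown
---
name: retry-flaky-downloads
description: Retry pattern for downloads that fail intermittently. Use when a fetch command fails with transient network errors.
---

# Retry Flaky Downloads

When to use it, the fix, and a self-contained example go here.
```

Note the frontmatter `name` matches the folder name, i.e. the file would live at `skills/retry-flaky-downloads/SKILL.md`.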
Watch for these signals that a learning should become a skill:

**In conversation:**
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"

**In learning entries:**
- Multiple See Also links (recurring issue)
- High priority + resolved status
- Category `best_practice` with broad applicability
- User feedback praising the solution
Before extraction, verify:

- Solution is tested and working
- Description is clear without the original context
- Code examples are self-contained
- No project-specific hardcoded values
- Follows skill naming conventions (lowercase, hyphens)
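The last checklist item can be automated with a quick pattern test (a sketch: the lowercase-and-hyphens rule comes from the checklist above, and the helper name is made up):

```shell
# Accept names made of lowercase alphanumeric segments joined by hyphens.
valid_skill_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'
}

valid_skill_name "fix-flaky-tests" && echo "ok"
valid_skill_name "Fix_Flaky_Tests" || echo "rejected: uppercase/underscores"
```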
This skill works across different AI coding agents with agent-specific activation.
**Claude Code**
- Activation: Hooks (UserPromptSubmit, PostToolUse)
- Setup: `.claude/settings.json` with hook configuration
- Detection: Automatic via hook scripts
**Codex**
- Activation: Hooks (same pattern as Claude Code)
- Setup: `.codex/settings.json` with hook configuration
- Detection: Automatic via hook scripts
**GitHub Copilot**
- Activation: Manual (no hook support)
- Setup: add to `.github/copilot-instructions.md`:

  ```markdown
  ## Self-Improvement

  After solving non-obvious issues, consider logging to `.learnings/`:
  1. Use format from self-improvement skill
  2. Link related entries with See Also
  3. Promote high-value learnings to skills

  Ask in chat: "Should I log this as a learning?"
  ```

- Detection: Manual review at session end
**Clawdbot**
- Activation: Workspace injection + inter-agent messaging
- Setup: configure the workspace path in `~/.clawdbot/clawdbot.json`
- Detection: via session tools and workspace files (AGENTS.md, SOUL.md, TOOLS.md)

Clawdbot uses a workspace-based model with injected prompt files. See references/clawdbot-integration.md for detailed setup.
Regardless of agent, apply self-improvement when you:

- **Discover something non-obvious**: the solution wasn't immediate
- **Correct yourself**: the initial approach was wrong
- **Learn project conventions**: discovered undocumented patterns
- **Hit unexpected errors**: especially if diagnosis was difficult
- **Find better approaches**: improved on your original solution
For Copilot users, add this to your prompts when relevant:

> After completing this task, evaluate if any learnings should be logged to `.learnings/` using the self-improvement skill format.

Or use quick prompts:

- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"
Clawdbot uses workspace-based prompt injection with specialized files for different concerns.
```
~/clawd/              # Default workspace (configurable)
├── AGENTS.md         # Multi-agent workflows, delegation patterns
├── SOUL.md           # Behavioral guidelines, communication style
├── TOOLS.md          # Tool capabilities, MCP integrations
└── sessions/         # Session transcripts (auto-managed)
```
| Learning Type | Promote To | Example |
|---|---|---|
| Agent coordination | AGENTS.md | "Delegate file searches to explore agent" |
| Communication style | SOUL.md | "Be concise, avoid disclaimers" |
| Tool gotchas | TOOLS.md | "MCP server X requires auth refresh" |
| Project facts | CLAUDE.md | Standard project conventions |
Clawdbot supports session-based communication:

- `sessions_list`: see active/recent sessions
- `sessions_history`: read the transcript from another session
- `sessions_send`: send a message to another session
When using both:

- Keep `.learnings/` for project-specific learnings
- Use clawdbot workspace files for cross-project patterns
- Sync high-value learnings to both systems

See references/clawdbot-integration.md for complete setup, promotion formats, and troubleshooting.