## Requirements

- Target platform: OpenClaw
- Install method: manual import
- Extraction: extract the archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Self-improvement through conversation analysis. Extracts learnings from corrections and success patterns, then proposes updates to agent files or creates new skills.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Example brief for a fresh install:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Example brief for an upgrade:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
| Command | Action |
|---|---|
| `/reflect` | Analyze conversation for learnings |
| `/reflect on` | Enable auto-reflection |
| `/reflect off` | Disable auto-reflection |
| `/reflect status` | Show state and metrics |
| `/reflect review` | Review low-confidence learnings |
| `/reflect [agent]` | Focus on specific agent |
"Correct once, never again." When users correct behavior, those corrections become permanent improvements encoded into the agent system - across all future sessions.
Check and initialize state files using the state manager:

```bash
# Check for existing state
python scripts/state_manager.py init

# State directory is configurable via REFLECT_STATE_DIR env var
# Default: ~/.reflect/ (portable) or ~/.claude/session/ (Claude Code)
```

State includes:

- `reflect-state.yaml` - Toggle state, pending reviews
- `reflect-metrics.yaml` - Aggregate metrics
- `learnings.yaml` - Log of all applied learnings
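The bootstrap logic can be sketched roughly as follows. This is a hypothetical illustration, not the shipped `state_manager.py`; the function names and empty-file initialization are assumptions.

```python
import os
from pathlib import Path

# Hypothetical sketch of state initialization; the shipped
# scripts/state_manager.py may be structured differently.
STATE_FILES = ["reflect-state.yaml", "reflect-metrics.yaml", "learnings.yaml"]

def resolve_state_dir() -> Path:
    """Honor REFLECT_STATE_DIR if set, else fall back to the portable default."""
    custom = os.environ.get("REFLECT_STATE_DIR")
    return Path(custom) if custom else Path.home() / ".reflect"

def init_state() -> Path:
    """Create the state directory and any missing state files."""
    state_dir = resolve_state_dir()
    state_dir.mkdir(parents=True, exist_ok=True)
    for name in STATE_FILES:
        path = state_dir / name
        if not path.exists():
            path.write_text("")  # empty YAML document; populated on first reflection
    return state_dir
```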
Use the signal detector to identify learnings:

```bash
python scripts/signal_detector.py --input conversation.txt
```

Signal Confidence Levels:

| Confidence | Triggers | Examples |
|---|---|---|
| HIGH | Explicit corrections | "never", "always", "wrong", "stop", "the rule is" |
| MEDIUM | Approved approaches | "perfect", "exactly", accepted output |
| LOW | Observations | Patterns that worked, not validated |

See signal_patterns.md for full detection rules.
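The trigger keywords in the table suggest a simple classifier. The sketch below is a hypothetical reduction of the idea; the authoritative detection rules live in `signal_patterns.md`.

```python
# Hypothetical sketch of the keyword heuristics behind signal_detector.py;
# the real detection rules are defined in signal_patterns.md.
HIGH_TRIGGERS = ["never", "always", "wrong", "stop", "the rule is"]
MEDIUM_TRIGGERS = ["perfect", "exactly"]

def classify_signal(message: str) -> str:
    """Map a user message to a confidence level for a learning signal."""
    text = message.lower()
    if any(t in text for t in HIGH_TRIGGERS):
        return "HIGH"    # explicit correction
    if any(t in text for t in MEDIUM_TRIGGERS):
        return "MEDIUM"  # approved approach
    return "LOW"         # unvalidated observation
```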
Map each signal to the appropriate target.

Learning Categories:

| Category | Target Files |
|---|---|
| Code Style | code-reviewer, backend-developer, frontend-developer |
| Architecture | solution-architect, api-architect, architecture-reviewer |
| Process | CLAUDE.md, orchestrator agents |
| Domain | Domain-specific agents, CLAUDE.md |
| Tools | CLAUDE.md, relevant specialists |
| New Skill | .claude/skills/{name}/SKILL.md |

See agent_mappings.md for mapping rules.
Some learnings should become new skills rather than agent updates.

Skill-Worthy Criteria:

- Non-obvious debugging (>10 min investigation)
- Misleading error (root cause different from message)
- Workaround discovered through experimentation
- Configuration insight (differs from documented behavior)
- Reusable pattern (helps in similar situations)

Quality Gates (must pass all):

- Reusable: will help with future tasks
- Non-trivial: requires discovery, not just docs
- Specific: can describe exact trigger conditions
- Verified: solution actually worked
- No duplication: doesn't exist already

See skill_template.md for skill creation guidelines.
Quality Gate Check:

- Reusable: [why]
- Non-trivial: [why]
- Specific: [trigger conditions]
- Verified: [how verified]
- No duplication: [checked against]

Will create: .claude/skills/[skill-name]/SKILL.md
No conflicts with existing rules detected

OR:

Warning - potential conflict with [file:line]
Apply these changes?

- `Y` - Apply all changes and commit
- `N` - Discard all changes
- `modify` - Adjust specific changes
- `1,3` - Apply only changes 1 and 3
- `s1` - Apply only skill 1
- `all-skills` - Apply all skills, skip agent updates

### Step 6: Handle User Response

**On `Y` (approve):**
1. Apply each change using Edit tool
2. Run `git add` on modified files
3. Commit with generated message
4. Update learnings log
5. Update metrics

**On `N` (reject):**
1. Discard proposed changes
2. Log rejection for analysis
3. Ask if user wants to modify any signals

**On `modify`:**
1. Present each change individually
2. Allow editing the proposed addition
3. Reconfirm before applying

**On selective (e.g., `1,3`):**
1. Apply only specified changes
2. Log partial acceptance
3. Commit only applied changes

### Step 7: Update Metrics

```bash
python scripts/metrics_updater.py --accepted 3 --rejected 1 --confidence high:2,medium:1
```
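Interpreting the user's reply to the approval prompt can be sketched as below. The function and its return shape are hypothetical; the skill describes the options in prose rather than prescribing an implementation.

```python
# Hypothetical parser for the approval-prompt reply; the skill itself
# only specifies the options (Y, N, modify, "1,3", s1, all-skills).
def parse_reply(reply: str, num_changes: int) -> dict:
    """Interpret the user's answer to 'Apply these changes?'."""
    reply = reply.strip().lower()
    if reply == "y":
        return {"action": "apply", "changes": list(range(1, num_changes + 1))}
    if reply == "n":
        return {"action": "discard", "changes": []}
    if reply == "modify":
        return {"action": "modify", "changes": []}
    if reply == "all-skills":
        return {"action": "apply_skills_only", "changes": []}
    if reply.startswith("s") and reply[1:].isdigit():
        return {"action": "apply_skill", "skill": int(reply[1:]), "changes": []}
    # Selective: comma-separated change numbers, e.g. "1,3"
    picked = [int(p) for p in reply.split(",") if p.strip().isdigit()]
    return {"action": "apply", "changes": [n for n in picked if 1 <= n <= num_changes]}
```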
```bash
/reflect on      # Sets auto_reflect: true in state file; will trigger on PreCompact hook
/reflect off     # Sets auto_reflect: false in state file
/reflect status  # Shows current state and metrics
/reflect review  # Shows low-confidence learnings awaiting validation
```
Project-level (versioned with repo):

- `.claude/reflections/YYYY-MM-DD_HH-MM-SS.md` - Full reflection
- `.claude/reflections/index.md` - Project summary
- `.claude/skills/{name}/SKILL.md` - New skills

Global (user-level):

- `~/.claude/reflections/by-project/{project}/` - Cross-project
- `~/.claude/reflections/by-agent/{agent}/learnings.md` - Per-agent
- `~/.claude/reflections/index.md` - Global summary
Some learnings belong in auto-memory (`~/.claude/projects/*/memory/MEMORY.md`) rather than agent files:

| Learning Type | Best Target |
|---|---|
| Behavioral correction ("always do X") | Agent file |
| Project-specific pattern | MEMORY.md |
| Recurring bug/workaround | New skill OR MEMORY.md |
| Tool preference | CLAUDE.md |
| Domain knowledge | MEMORY.md or compound-docs |

When a signal is LOW confidence and project-specific, prefer writing to MEMORY.md over modifying agents.
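The routing rule above can be encoded as a small decision function. This is a sketch under the assumption that confidence and project scope are already known; the type keys and fallback are invented for illustration.

```python
# Hypothetical routing sketch combining confidence and scope; the table
# above is the source of truth, this just encodes its rules.
def route_learning(learning_type: str, confidence: str, project_specific: bool) -> str:
    """Pick a destination for a learning, per the auto-memory routing table."""
    if confidence == "LOW" and project_specific:
        return "MEMORY.md"  # low-confidence, project-local: prefer auto-memory
    routes = {
        "behavioral_correction": "agent file",
        "project_pattern": "MEMORY.md",
        "recurring_workaround": "new skill or MEMORY.md",
        "tool_preference": "CLAUDE.md",
        "domain_knowledge": "MEMORY.md or compound-docs",
    }
    return routes.get(learning_type, "CLAUDE.md")  # assumed fallback
```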
- NEVER apply changes without explicit user approval
- Always show full diff before applying
- Allow selective application
- All changes committed with descriptive messages
- Easy rollback via `git revert`
- Learning history preserved
- ONLY add to existing sections
- NEVER delete or rewrite existing rules
- Preserve original structure
- Check whether a proposed rule contradicts an existing one
- Warn the user if a conflict is detected
- Suggest a resolution strategy
If auto-reflection is enabled, PreCompact hook triggers reflection before handover.
At 70%+ context (Yellow status), reminders to run /reflect are injected.
The skill includes hook scripts for automatic integration:

```bash
# Install hook to your Claude hooks directory
cp hooks/precompact_reflect.py ~/.claude/hooks/
```

Configure in `~/.claude/settings.json`:

```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "uv run ~/.claude/hooks/precompact_reflect.py --auto"
          }
        ]
      }
    ]
  }
}
```

See hooks/README.md for full configuration options.
This skill works with any LLM tool that supports:

- File read/write operations
- Text pattern matching
- Git operations (optional, for commits)
```bash
# Set custom state directory
export REFLECT_STATE_DIR=/path/to/state

# Or use the defaults:
# ~/.reflect/          (portable default)
# ~/.claude/session/   (Claude Code default)
```
Unlike the previous agent-based approach, this skill executes directly without spawning subagents. The LLM reads SKILL.md and follows the workflow.
Commits are wrapped in availability checks: if the workspace is not a git repository, changes are still saved to disk but not committed.
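That availability check might look like the sketch below. It is a hypothetical illustration; the skill's actual commit wrapper is not shown in this document, and the `cwd` parameter is an assumption for testability.

```python
import subprocess

# Hypothetical sketch of the git availability check described above.
def commit_if_possible(message: str, paths: list[str], cwd: str = ".") -> bool:
    """Commit the given paths if inside a git work tree; otherwise skip quietly."""
    try:
        probe = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            capture_output=True, text=True, cwd=cwd,
        )
    except FileNotFoundError:
        return False  # git not installed: files stay saved, nothing committed
    if probe.returncode != 0 or probe.stdout.strip() != "true":
        return False  # not a git repo: files stay saved on disk, no commit
    subprocess.run(["git", "add", *paths], check=True, cwd=cwd)
    subprocess.run(["git", "commit", "-m", message], check=True, cwd=cwd)
    return True
```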
No signals detected:

- Session may not have had corrections
- Try `/reflect review` to check pending items

Conflict warning:

- Review the existing rule cited
- Decide if the new rule should override
- Can modify before applying

Agent file not found:

- Check agent name spelling
- Use `/reflect status` to see available targets
- May need to create the agent file first
```
reflect/
├── SKILL.md                     # This file
├── scripts/
│   ├── state_manager.py         # State file CRUD
│   ├── signal_detector.py       # Pattern matching
│   ├── metrics_updater.py       # Metrics aggregation
│   └── output_generator.py      # Reflection file & index generation
├── hooks/
│   ├── precompact_reflect.py    # PreCompact hook integration
│   ├── settings-snippet.json    # Settings.json examples
│   └── README.md                # Hook configuration guide
├── references/
│   ├── signal_patterns.md       # Detection rules
│   ├── agent_mappings.md        # Target mappings
│   └── skill_template.md        # Skill generation
└── assets/
    ├── reflection_template.md   # Output template
    └── learnings_schema.yaml    # Schema definition
```