← All skills
Tencent SkillHub · AI

Reflect

Self-improvement through conversation analysis. Extracts learnings from corrections and success patterns, then proposes updates to agent files or creates new skills.

skill openclawclawhub Free
0 Downloads
0 Stars
0 Installs
0 Score
High Signal

⬇ 0 downloads ★ 0 stars Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
CLAWDHUB-PUBLISHING-GUIDE.md, SKILL.md, assets/learnings_schema.yaml, assets/reflection_template.md, hooks/README.md, hooks/precompact_reflect.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
2.1.0

Documentation

Primary doc: SKILL.md (30 sections)

Quick Reference

| Command | Action |
|---------|--------|
| /reflect | Analyze conversation for learnings |
| /reflect on | Enable auto-reflection |
| /reflect off | Disable auto-reflection |
| /reflect status | Show state and metrics |
| /reflect review | Review low-confidence learnings |
| /reflect [agent] | Focus on specific agent |

Core Philosophy

"Correct once, never again." When users correct behavior, those corrections become permanent improvements encoded into the agent system and carried across all future sessions.

Step 1: Initialize State

Check and initialize state files using the state manager:

```bash
# Check for existing state
python scripts/state_manager.py init

# State directory is configurable via the REFLECT_STATE_DIR env var
# Default: ~/.reflect/ (portable) or ~/.claude/session/ (Claude Code)
```

State includes:
  • reflect-state.yaml - Toggle state, pending reviews
  • reflect-metrics.yaml - Aggregate metrics
  • learnings.yaml - Log of all applied learnings
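The state-directory precedence described above can be sketched as a simple chain: the REFLECT_STATE_DIR env var wins, then the Claude Code session directory if it exists, then the portable default. The helper names and empty-file seeding below are illustrative, not the actual `state_manager.py` API:

```python
import os
from pathlib import Path

def resolve_state_dir() -> Path:
    """Resolve the reflect state directory.

    Assumed precedence: REFLECT_STATE_DIR env var, then the Claude Code
    session directory if it exists, then the portable ~/.reflect/.
    """
    override = os.environ.get("REFLECT_STATE_DIR")
    if override:
        return Path(override).expanduser()
    claude_session = Path.home() / ".claude" / "session"
    if claude_session.is_dir():
        return claude_session
    return Path.home() / ".reflect"

def init_state(state_dir: Path) -> None:
    """Create the state directory and empty state files if missing."""
    state_dir.mkdir(parents=True, exist_ok=True)
    for name in ("reflect-state.yaml", "reflect-metrics.yaml", "learnings.yaml"):
        path = state_dir / name
        if not path.exists():
            path.write_text("")  # the real state_manager.py would seed defaults
```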

Step 2: Scan Conversation for Signals

Use the signal detector to identify learnings:

```bash
python scripts/signal_detector.py --input conversation.txt
```

Signal Confidence Levels:

| Confidence | Triggers | Examples |
|------------|----------|----------|
| HIGH | Explicit corrections | "never", "always", "wrong", "stop", "the rule is" |
| MEDIUM | Approved approaches | "perfect", "exactly", accepted output |
| LOW | Observations | Patterns that worked, not validated |

See signal_patterns.md for full detection rules.
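As a rough illustration of the confidence tiers, a detector might match trigger phrases with regular expressions. The phrase lists here are distilled from the table above and are only a sketch; the authoritative rules live in signal_patterns.md:

```python
import re

# Hypothetical trigger phrases taken from the confidence table above;
# the real detection rules are defined in references/signal_patterns.md.
HIGH_TRIGGERS = re.compile(r"\b(never|always|wrong|stop|the rule is)\b", re.IGNORECASE)
MEDIUM_TRIGGERS = re.compile(r"\b(perfect|exactly)\b", re.IGNORECASE)

def classify_signal(message: str) -> str:
    """Return a confidence level for a single user message."""
    if HIGH_TRIGGERS.search(message):
        return "HIGH"
    if MEDIUM_TRIGGERS.search(message):
        return "MEDIUM"
    return "LOW"  # unvalidated observation: a pattern that merely worked
```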

Step 3: Classify & Match to Target Files

Map each signal to the appropriate target.

Learning Categories:

| Category | Target Files |
|----------|--------------|
| Code Style | code-reviewer, backend-developer, frontend-developer |
| Architecture | solution-architect, api-architect, architecture-reviewer |
| Process | CLAUDE.md, orchestrator agents |
| Domain | Domain-specific agents, CLAUDE.md |
| Tools | CLAUDE.md, relevant specialists |
| New Skill | .claude/skills/{name}/SKILL.md |

See agent_mappings.md for mapping rules.

Step 4: Check for Skill-Worthy Signals

Some learnings should become new skills rather than agent updates.

Skill-Worthy Criteria:
  • Non-obvious debugging (>10 min investigation)
  • Misleading error (root cause different from message)
  • Workaround discovered through experimentation
  • Configuration insight (differs from documented behavior)
  • Reusable pattern (helps in similar situations)

Quality Gates (must pass all):
  • Reusable: Will help with future tasks
  • Non-trivial: Requires discovery, not just docs
  • Specific: Can describe exact trigger conditions
  • Verified: Solution actually worked
  • No duplication: Doesn't exist already

See skill_template.md for skill creation guidelines.
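Because every gate must pass, the check is all-or-nothing. A minimal sketch of that logic, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class SkillCandidate:
    reusable: bool      # will help with future tasks
    non_trivial: bool   # required discovery, not just reading docs
    specific: bool      # exact trigger conditions can be described
    verified: bool      # the solution actually worked
    unique: bool        # no duplicate skill already exists

def passes_quality_gates(c: SkillCandidate) -> bool:
    """A learning becomes a new skill only if every gate passes."""
    return all((c.reusable, c.non_trivial, c.specific, c.verified, c.unique))
```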

Step 5: Generate Proposals

Produce output in this format:

# Reflection Analysis

## Session Context
**Date**: [timestamp]
**Messages Analyzed**: [count]
**Focus**: [all agents OR specific agent name]

## Signals Detected

| # | Signal | Confidence | Source Quote | Category |
|---|--------|------------|--------------|----------|
| 1 | [learning] | HIGH | "[exact words]" | Code Style |
| 2 | [learning] | MEDIUM | "[context]" | Architecture |

## Proposed Agent Updates

### Change 1: Update [agent-name]
**Target**: `[file path]`
**Section**: [section name]
**Confidence**: [HIGH/MEDIUM/LOW]
**Rationale**: [why this change]

```diff
--- a/path/to/agent.md
+++ b/path/to/agent.md
@@ -82,6 +82,7 @@
 ## Section
 * Existing rule
+* New rule from learning
```

Skill 1: [skill-name]

Quality Gate Check:
  • Reusable: [why]
  • Non-trivial: [why]
  • Specific: [trigger conditions]
  • Verified: [how verified]
  • No duplication: [checked against]

Will create: .claude/skills/[skill-name]/SKILL.md

Conflict Check

Report one of:
  • No conflicts with existing rules detected
  • Warning: potential conflict with [file:line]

Commit Message

reflect: add learnings from session [date]

Agent updates:
  • [learning 1 summary]

New skills:
  • [skill-name]: [brief description]

Extracted: [N] signals ([H] high, [M] medium, [L] low confidence)

Review Prompt

Apply these changes?
  • Y - Apply all changes and commit
  • N - Discard all changes
  • modify - Adjust specific changes
  • 1,3 - Apply only changes 1 and 3
  • s1 - Apply only skill 1
  • all-skills - Apply all skills, skip agent updates

Step 6: Handle User Response

On `Y` (approve):
  1. Apply each change using the Edit tool
  2. Run `git add` on modified files
  3. Commit with the generated message
  4. Update the learnings log
  5. Update metrics

On `N` (reject):
  1. Discard proposed changes
  2. Log the rejection for analysis
  3. Ask if the user wants to modify any signals

On `modify`:
  1. Present each change individually
  2. Allow editing the proposed addition
  3. Reconfirm before applying

On selective (e.g., `1,3`):
  1. Apply only the specified changes
  2. Log partial acceptance
  3. Commit only the applied changes

Step 7: Update Metrics

```bash
python scripts/metrics_updater.py --accepted 3 --rejected 1 --confidence high:2,medium:1
```
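The metrics update in Step 7 boils down to folding one session's counters into running totals. A minimal sketch of what `scripts/metrics_updater.py` might do; the field names are illustrative, and the real schema is defined by reflect-metrics.yaml:

```python
def update_metrics(metrics: dict, accepted: int, rejected: int,
                   confidence: dict) -> dict:
    """Fold one session's results into running totals.

    `confidence` maps a level (e.g. "high") to a count of signals
    at that level in this session.
    """
    out = dict(metrics)
    out["accepted"] = out.get("accepted", 0) + accepted
    out["rejected"] = out.get("rejected", 0) + rejected
    by_conf = dict(out.get("by_confidence", {}))
    for level, count in confidence.items():
        by_conf[level] = by_conf.get(level, 0) + count
    out["by_confidence"] = by_conf
    return out
```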

Enable Auto-Reflection

```bash
/reflect on
# Sets auto_reflect: true in state file
# Will trigger on PreCompact hook
```

Disable Auto-Reflection

```bash
/reflect off
# Sets auto_reflect: false in state file
```

Check Status

```bash
/reflect status
# Shows current state and metrics
```

Review Pending

```bash
/reflect review
# Shows low-confidence learnings awaiting validation
```

Output Locations

Project-level (versioned with repo):
  • .claude/reflections/YYYY-MM-DD_HH-MM-SS.md - Full reflection
  • .claude/reflections/index.md - Project summary
  • .claude/skills/{name}/SKILL.md - New skills

Global (user-level):
  • ~/.claude/reflections/by-project/{project}/ - Cross-project learnings
  • ~/.claude/reflections/by-agent/{agent}/learnings.md - Per-agent learnings
  • ~/.claude/reflections/index.md - Global summary

Memory Integration

Some learnings belong in auto-memory (~/.claude/projects/*/memory/MEMORY.md) rather than agent files:

| Learning Type | Best Target |
|---------------|-------------|
| Behavioral correction ("always do X") | Agent file |
| Project-specific pattern | MEMORY.md |
| Recurring bug/workaround | New skill OR MEMORY.md |
| Tool preference | CLAUDE.md |
| Domain knowledge | MEMORY.md or compound-docs |

When a signal is LOW confidence and project-specific, prefer writing to MEMORY.md over modifying agents.
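The routing table above can be read as a small decision function. This sketch uses hypothetical category keys; the table in this section is the authoritative mapping:

```python
def route_learning(category: str, confidence: str, project_specific: bool) -> str:
    """Pick a destination for a learning, following the table above."""
    # LOW-confidence, project-specific signals go to memory, not agents
    if confidence == "LOW" and project_specific:
        return "MEMORY.md"
    targets = {
        "behavioral-correction": "agent file",
        "project-pattern": "MEMORY.md",
        "recurring-workaround": "new skill or MEMORY.md",
        "tool-preference": "CLAUDE.md",
        "domain-knowledge": "MEMORY.md or compound-docs",
    }
    return targets.get(category, "CLAUDE.md")  # fallback is an assumption
```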

Human-in-the-Loop

  • NEVER apply changes without explicit user approval
  • Always show the full diff before applying
  • Allow selective application

Git Versioning

  • All changes committed with descriptive messages
  • Easy rollback via git revert
  • Learning history preserved

Incremental Updates

  • ONLY add to existing sections
  • NEVER delete or rewrite existing rules
  • Preserve the original structure

Conflict Detection

  • Check whether a proposed rule contradicts an existing one
  • Warn the user if a conflict is detected
  • Suggest a resolution strategy

With /handover

If auto-reflection is enabled, PreCompact hook triggers reflection before handover.

With Session Health

At 70%+ context (Yellow status), reminders to run /reflect are injected.

Hook Integration (Claude Code)

The skill includes hook scripts for automatic integration:

```bash
# Install the hook into your Claude hooks directory
cp hooks/precompact_reflect.py ~/.claude/hooks/
```

Configure in ~/.claude/settings.json:

```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "uv run ~/.claude/hooks/precompact_reflect.py --auto"
          }
        ]
      }
    ]
  }
}
```

See hooks/README.md for full configuration options.

Portability

This skill works with any LLM tool that supports:
  • File read/write operations
  • Text pattern matching
  • Git operations (optional, for commits)

Configurable State Location

```bash
# Set custom state directory
export REFLECT_STATE_DIR=/path/to/state

# Or use a default:
# ~/.reflect/ (portable default)
# ~/.claude/session/ (Claude Code default)
```

No Task Tool Dependency

Unlike the previous agent-based approach, this skill executes directly without spawning subagents. The LLM reads SKILL.md and follows the workflow.

Git Operations Optional

Commits are wrapped in availability checks: if the current directory is not inside a git repository, changes are still saved to disk but not committed.
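One way to implement such a check is to probe `git rev-parse --is-inside-work-tree` before staging and committing. This is a sketch of the pattern, not the skill's actual code:

```python
import subprocess

def commit_if_git_repo(message: str, paths: list) -> bool:
    """Commit the given paths only when inside a git work tree.

    Returns True if a commit was made. Either way the files are
    already written to disk, matching the behavior described above.
    """
    try:
        probe = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        return False  # git not installed: save without committing
    if probe.returncode != 0 or probe.stdout.strip() != "true":
        return False  # not a git repo: save without committing
    subprocess.run(["git", "add", *paths], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
    return True
```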

Troubleshooting

No signals detected:
  • The session may not have had corrections
  • Try /reflect review to check pending items

Conflict warning:
  • Review the existing rule cited
  • Decide whether the new rule should override it
  • You can modify the change before applying

Agent file not found:
  • Check the agent name spelling
  • Use /reflect status to see available targets
  • You may need to create the agent file first

File Structure

```
reflect/
├── SKILL.md                   # This file
├── scripts/
│   ├── state_manager.py       # State file CRUD
│   ├── signal_detector.py     # Pattern matching
│   ├── metrics_updater.py     # Metrics aggregation
│   └── output_generator.py    # Reflection file & index generation
├── hooks/
│   ├── precompact_reflect.py  # PreCompact hook integration
│   ├── settings-snippet.json  # settings.json examples
│   └── README.md              # Hook configuration guide
├── references/
│   ├── signal_patterns.md     # Detection rules
│   ├── agent_mappings.md      # Target mappings
│   └── skill_template.md      # Skill generation
└── assets/
    ├── reflection_template.md # Output template
    └── learnings_schema.yaml  # Schema definition
```

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
4 Docs · 1 Script · 1 Config
  • SKILL.md Primary doc
  • assets/reflection_template.md Docs
  • CLAWDHUB-PUBLISHING-GUIDE.md Docs
  • hooks/README.md Docs
  • hooks/precompact_reflect.py Scripts
  • assets/learnings_schema.yaml Config