โ† All skills
Tencent SkillHub ยท Productivity

Agente Conhecimento

Logs and organizes learnings, errors, and feature requests for continuous improvement, integrating with OpenClaw workspace for knowledge management.

Free skill · 0 downloads · 0 stars · 0 installs · Score 0 · High Signal



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (47 sections), hosted on ClawHub.

Self-Improvement Skill

Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.

Quick Reference

| Situation | Action |
| --- | --- |
| Command/operation fails | Log to .learnings/ERRORS.md |
| User corrects you | Log to .learnings/LEARNINGS.md with category correction |
| User wants missing feature | Log to .learnings/FEATURE_REQUESTS.md |
| API/external tool fails | Log to .learnings/ERRORS.md with integration details |
| Knowledge was outdated | Log to .learnings/LEARNINGS.md with category knowledge_gap |
| Found better approach | Log to .learnings/LEARNINGS.md with category best_practice |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (OpenClaw workspace) |
| Tool gotchas | Promote to TOOLS.md (OpenClaw workspace) |
| Behavioral patterns | Promote to SOUL.md (OpenClaw workspace) |

OpenClaw Setup (Recommended)

OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.

Installation

Via ClawdHub (recommended):

  clawdhub install self-improving-agent

Manual:

  git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent

Workspace Structure

OpenClaw injects these files into every session:

  ~/.openclaw/workspace/
  ├── AGENTS.md          # Multi-agent workflows, delegation patterns
  ├── SOUL.md            # Behavioral guidelines, personality, principles
  ├── TOOLS.md           # Tool capabilities, integration gotchas
  ├── MEMORY.md          # Long-term memory (main session only)
  ├── memory/            # Daily memory files
  │   └── YYYY-MM-DD.md
  └── .learnings/        # This skill's log files
      ├── LEARNINGS.md
      ├── ERRORS.md
      └── FEATURE_REQUESTS.md

Create Learning Files

  mkdir -p ~/.openclaw/workspace/.learnings

Then create the log files (or copy from assets/):
  • LEARNINGS.md — corrections, knowledge gaps, best practices
  • ERRORS.md — command failures, exceptions
  • FEATURE_REQUESTS.md — user-requested capabilities
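The file-creation step can be scripted. This is a minimal sketch, assuming plain `# NAME` headers and an optional WORKSPACE override (both assumptions, not part of the skill spec; the real templates live in assets/):

```shell
# Seed the .learnings/ directory with headed log files (idempotent).
# WORKSPACE override and the header wording are assumptions, not spec.
WORKSPACE="${WORKSPACE:-$HOME/.openclaw/workspace}"
mkdir -p "$WORKSPACE/.learnings"
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  target="$WORKSPACE/.learnings/$f.md"
  # Only write a header if the file does not exist yet
  [ -f "$target" ] || printf '# %s\n\n' "$f" > "$target"
done
```

Because existing files are skipped, rerunning the snippet never clobbers logged entries.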

Promotion Targets

When learnings prove broadly applicable, promote them to workspace files:

| Learning Type | Promote To | Example |
| --- | --- | --- |
| Behavioral patterns | SOUL.md | "Be concise, avoid disclaimers" |
| Workflow improvements | AGENTS.md | "Spawn sub-agents for long tasks" |
| Tool gotchas | TOOLS.md | "Git push needs auth configured first" |

Inter-Session Communication

OpenClaw provides tools to share learnings across sessions:
  • sessions_list — view active/recent sessions
  • sessions_history — read another session's transcript
  • sessions_send — send a learning to another session
  • sessions_spawn — spawn a sub-agent for background work

Optional: Enable Hook

For automatic reminders at session start:

  # Copy hook to OpenClaw hooks directory
  cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
  # Enable it
  openclaw hooks enable self-improvement

See references/openclaw-integration.md for complete details.

Generic Setup (Other Agents)

For Claude Code, Codex, Copilot, or other agents, create .learnings/ in your project:

  mkdir -p .learnings

Copy templates from assets/ or create files with headers.

Learning Entry

  • Append to .learnings/LEARNINGS.md:
  • ## [LRN-YYYYMMDD-XXX] category
  • **Logged**: ISO-8601 timestamp
  • **Priority**: low | medium | high | critical
  • **Status**: pending
  • **Area**: frontend | backend | infra | tests | docs | config
  • ### Summary
  • One-line description of what was learned
  • ### Details
  • Full context: what happened, what was wrong, what's correct
  • ### Suggested Action
  • Specific fix or improvement to make
  • ### Metadata
  • Source: conversation | error | user_feedback
  • Related Files: path/to/file.ext
  • Tags: tag1, tag2
  • See Also: LRN-20250110-001 (if related to existing entry)
  • ---
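Appended from the shell, a filled-in entry following the template above might look like this; every value (ID, timestamp, file paths, tags) is illustrative:

```shell
# Append an example learning entry; all values are made up for illustration.
mkdir -p .learnings
cat >> .learnings/LEARNINGS.md <<'EOF'
## [LRN-20250115-001] correction
**Logged**: 2025-01-15T14:30:00Z
**Priority**: medium
**Status**: pending
**Area**: backend

### Summary
Project uses pnpm, not npm, for dependency management

### Details
Ran `npm install` and it failed; the lock file is pnpm-lock.yaml,
so dependencies must be installed with `pnpm install`.

### Suggested Action
Document the pnpm requirement and always use `pnpm install` here.

### Metadata
- Source: error
- Related Files: pnpm-lock.yaml
- Tags: tooling, dependencies

---
EOF
```

The quoted heredoc (`<<'EOF'`) keeps any `$` or backticks in the entry text from being expanded by the shell.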

Error Entry

  • Append to .learnings/ERRORS.md:
  • ## [ERR-YYYYMMDD-XXX] skill_or_command_name
  • **Logged**: ISO-8601 timestamp
  • **Priority**: high
  • **Status**: pending
  • **Area**: frontend | backend | infra | tests | docs | config
  • ### Summary
  • Brief description of what failed
  • ### Error
  • Actual error message or output
  • ### Context
  • Command/operation attempted
  • Input or parameters used
  • Environment details if relevant
  • ### Suggested Fix
  • If identifiable, what might resolve this
  • ### Metadata
  • Reproducible: yes | no | unknown
  • Related Files: path/to/file.ext
  • See Also: ERR-20250110-001 (if recurring)
  • ---

Feature Request Entry

  • Append to .learnings/FEATURE_REQUESTS.md:
  • ## [FEAT-YYYYMMDD-XXX] capability_name
  • **Logged**: ISO-8601 timestamp
  • **Priority**: medium
  • **Status**: pending
  • **Area**: frontend | backend | infra | tests | docs | config
  • ### Requested Capability
  • What the user wanted to do
  • ### User Context
  • Why they needed it, what problem they're solving
  • ### Complexity Estimate
  • simple | medium | complex
  • ### Suggested Implementation
  • How this could be built, what it might extend
  • ### Metadata
  • Frequency: first_time | recurring
  • Related Features: existing_feature_name
  • ---

ID Generation

Format: TYPE-YYYYMMDD-XXX
  • TYPE: LRN (learning), ERR (error), FEAT (feature)
  • YYYYMMDD: current date
  • XXX: sequential number or random 3 chars (e.g., 001, A7B)

Examples: LRN-20250115-001, ERR-20250115-A3F, FEAT-20250115-002
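The sequential variant can be generated with a small helper; counting today's existing IDs in the log file is one possible scheme (an assumption, since the format also allows random suffixes):

```shell
# new_id TYPE FILE → prints the next TYPE-YYYYMMDD-NNN id for that log file.
# The count-then-increment scheme is an assumption, not part of the spec.
new_id() {
  type="$1"; file="$2"          # e.g. new_id LRN .learnings/LEARNINGS.md
  today="$(date +%Y%m%d)"
  # Count today's entries for this type; empty count (missing file) acts as 0.
  count=$(grep -c "\[$type-$today-" "$file" 2>/dev/null || true)
  printf '%s-%s-%03d\n' "$type" "$today" "$((count + 1))"
}
```

For example, `new_id ERR .learnings/ERRORS.md` prints an ID ending in -001 when the file has no entries for today.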

Resolving Entries

  • When an issue is fixed, update the entry:
  • Change **Status**: pending → **Status**: resolved
  • Add resolution block after Metadata:
  • ### Resolution
  • **Resolved**: 2025-01-16T09:00:00Z
  • **Commit/PR**: abc123 or #42
  • **Notes**: Brief description of what was done
  • Other status values:
  • in_progress - Actively being worked on
  • wont_fix - Decided not to address (add reason in Resolution notes)
  • promoted - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
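The status flip can be scripted. This self-contained demo seeds a sample entry and then marks it resolved with sed; the entry values are invented, and the `sed -i` form shown is GNU (macOS/BSD sed needs `sed -i ''`):

```shell
# Seed a sample error entry, then mark that specific entry resolved.
mkdir -p .learnings
cat >> .learnings/ERRORS.md <<'EOF'
## [ERR-20250110-001] git_push
**Logged**: 2025-01-10T10:00:00Z
**Priority**: high
**Status**: pending

---
EOF
# Limit the substitution to this entry's section (header through ---),
# so other entries' pending statuses are left untouched.
sed -i '/\[ERR-20250110-001\]/,/^---$/ s/\*\*Status\*\*: pending/**Status**: resolved/' .learnings/ERRORS.md
```

The address range `/header/,/^---$/` is what keeps the edit scoped to one entry; a bare substitution would resolve every pending entry in the file.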

Promoting to Project Memory

When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.

When to Promote

  • Learning applies across multiple files/features
  • Knowledge any contributor (human or AI) should know
  • Prevents recurring mistakes
  • Documents project-specific conventions

Promotion Targets

| Target | What Belongs There |
| --- | --- |
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (OpenClaw workspace) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |

How to Promote

  1. Distill the learning into a concise rule or fact.
  2. Add it to the appropriate section in the target file (create the file if needed).
  3. Update the original entry: change **Status**: pending → **Status**: promoted and add **Promoted**: CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md.

Promotion Examples

  • Learning (verbose):
  • Project uses pnpm workspaces. Attempted npm install but failed.
  • Lock file is pnpm-lock.yaml. Must use pnpm install.
  • In CLAUDE.md (concise):
  • ## Build & Dependencies
  • Package manager: pnpm (not npm) - use `pnpm install`
  • Learning (verbose):
  • When modifying API endpoints, must regenerate TypeScript client.
  • Forgetting this causes type mismatches at runtime.
  • In AGENTS.md (actionable):
  • ## After API Changes
  • 1. Regenerate client: `pnpm run generate:api`
  • 2. Check for type errors: `pnpm tsc --noEmit`

Recurring Pattern Detection

If logging something similar to an existing entry:
  • Search first: grep -r "keyword" .learnings/
  • Link entries: add **See Also**: ERR-20250110-001 in Metadata
  • Bump priority if the issue keeps recurring
  • Consider a systemic fix; recurring issues often indicate:
      - Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
      - Missing automation (→ add to AGENTS.md)
      - An architectural problem (→ create a tech debt ticket)
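The search-and-link step can be sketched as below. The match is file-level rather than entry-level, so treat the printed IDs as candidates to review; the seeded entry and the keyword are illustrative:

```shell
# Seed one sample entry so the demo is self-contained (skip if present).
mkdir -p .learnings
grep -qs 'ERR-20250112-002' .learnings/ERRORS.md || cat >> .learnings/ERRORS.md <<'EOF'
## [ERR-20250112-002] pnpm_install
**Status**: pending

---
EOF
# Find files mentioning the keyword, then list every entry ID they contain.
# Coarse by design (file-level match), so review the IDs before linking.
related=$(grep -rl -i 'pnpm' .learnings/ \
  | xargs -r grep -hoE '\[(LRN|ERR|FEAT)-[0-9]{8}-[A-Z0-9]{3}\]' \
  | sort -u)
echo "Candidate **See Also** links: $related"
```

`xargs -r` (GNU) skips the second grep when nothing matched, avoiding a hang on empty input.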

Periodic Review

Review .learnings/ at natural breakpoints:

When to Review

  • Before starting a new major task
  • After completing a feature
  • When working in an area with past learnings
  • Weekly during active development

Quick Status Check

  # Count pending items
  grep -h "Status\*\*: pending" .learnings/*.md | wc -l

  # List pending high-priority items
  grep -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

  # Find learnings for a specific area
  grep -l "Area\*\*: backend" .learnings/*.md

Review Actions

  • Resolve fixed items
  • Promote applicable learnings
  • Link related entries
  • Escalate recurring issues

Detection Triggers

Automatically log when you notice:

Corrections (→ learning with correction category):
  • "No, that's not right..."
  • "Actually, it should be..."
  • "You're wrong about..."
  • "That's outdated..."

Feature Requests (→ feature request):
  • "Can you also..."
  • "I wish you could..."
  • "Is there a way to..."
  • "Why can't you..."

Knowledge Gaps (→ learning with knowledge_gap category):
  • User provides information you didn't know
  • Documentation you referenced is outdated
  • API behavior differs from your understanding

Errors (→ error entry):
  • Command returns non-zero exit code
  • Exception or stack trace
  • Unexpected output or behavior
  • Timeout or connection failure

Priority Guidelines

| Priority | When to Use |
| --- | --- |
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |

Area Tags

Use to filter learnings by codebase region:

| Area | Scope |
| --- | --- |
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |

Best Practices

  • Log immediately - context is freshest right after the issue
  • Be specific - future agents need to understand quickly
  • Include reproduction steps - especially for errors
  • Link related files - makes fixes easier
  • Suggest concrete fixes - not just "investigate"
  • Use consistent categories - enables filtering
  • Promote aggressively - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
  • Review regularly - stale learnings lose value

Gitignore Options

Keep learnings local (per-developer):

  .learnings/

Track learnings in repo (team-wide): don't add anything to .gitignore; the learnings become shared knowledge.

Hybrid (track templates, ignore entries):

  .learnings/*.md
  !.learnings/.gitkeep

Hook Integration

Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.

Quick Setup (Claude Code / Codex)

Create .claude/settings.json in your project:

  {
    "hooks": {
      "UserPromptSubmit": [{
        "matcher": "",
        "hooks": [{
          "type": "command",
          "command": "./skills/self-improvement/scripts/activator.sh"
        }]
      }]
    }
  }

This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).

Full Setup (With Error Detection)

  {
    "hooks": {
      "UserPromptSubmit": [{
        "matcher": "",
        "hooks": [{
          "type": "command",
          "command": "./skills/self-improvement/scripts/activator.sh"
        }]
      }],
      "PostToolUse": [{
        "matcher": "Bash",
        "hooks": [{
          "type": "command",
          "command": "./skills/self-improvement/scripts/error-detector.sh"
        }]
      }]
    }
  }

Available Hook Scripts

| Script | Hook Type | Purpose |
| --- | --- | --- |
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |

See references/hooks-setup.md for detailed configuration and troubleshooting.

Automatic Skill Extraction

When a learning is valuable enough to become a reusable skill, extract it using the provided helper.

Skill Extraction Criteria

A learning qualifies for skill extraction when ANY of these apply:

| Criterion | Description |
| --- | --- |
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with a working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |

Extraction Workflow

  1. Identify candidate: the learning meets the extraction criteria.
  2. Run the helper (or create manually):
       ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
       ./skills/self-improvement/scripts/extract-skill.sh skill-name
  3. Customize SKILL.md: fill in the template with the learning content.
  4. Update the learning: set status to promoted_to_skill and add Skill-Path.
  5. Verify: read the skill in a fresh session to ensure it is self-contained.

Manual Extraction

If you prefer manual creation:
  • Create skills/<skill-name>/SKILL.md
  • Use the template from assets/SKILL-TEMPLATE.md
  • Follow the Agent Skills spec: YAML frontmatter with name and description
  • Name must match the folder name
  • No README.md inside the skill folder
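A minimal manual extraction might look like this; the skill name and description are placeholders, and the frontmatter demonstrates the name-matches-folder requirement:

```shell
# Create a skill folder whose frontmatter name matches the folder name.
# SKILL and the description text are placeholders for this sketch.
SKILL="pnpm-workspace-installs"
mkdir -p "skills/$SKILL"
cat > "skills/$SKILL/SKILL.md" <<EOF
---
name: $SKILL
description: Use pnpm (not npm) in pnpm-workspace repos; the lock file is pnpm-lock.yaml.
---

# $SKILL

(Distilled learning content goes here.)
EOF
```

The unquoted heredoc delimiter lets $SKILL expand, so the folder name and frontmatter name can never drift apart.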

Extraction Detection Triggers

Watch for these signals that a learning should become a skill:

In conversation:
  • "Save this as a skill"
  • "I keep running into this"
  • "This would be useful for other projects"
  • "Remember this pattern"

In learning entries:
  • Multiple See Also links (recurring issue)
  • High priority + resolved status
  • Category best_practice with broad applicability
  • User feedback praising the solution

Skill Quality Gates

Before extraction, verify:
  • Solution is tested and working
  • Description is clear without the original context
  • Code examples are self-contained
  • No project-specific hardcoded values
  • Follows skill naming conventions (lowercase, hyphens)

Multi-Agent Support

This skill works across different AI coding agents with agent-specific activation.

Claude Code

  • Activation: Hooks (UserPromptSubmit, PostToolUse)
  • Setup: .claude/settings.json with hook configuration
  • Detection: Automatic via hook scripts

Codex CLI

  • Activation: Hooks (same pattern as Claude Code)
  • Setup: .codex/settings.json with hook configuration
  • Detection: Automatic via hook scripts

GitHub Copilot

  • Activation: Manual (no hook support)
  • Setup: Add to .github/copilot-instructions.md:

      ## Self-Improvement
      After solving non-obvious issues, consider logging to `.learnings/`:
      1. Use format from self-improvement skill
      2. Link related entries with See Also
      3. Promote high-value learnings to skills
      Ask in chat: "Should I log this as a learning?"

  • Detection: Manual review at session end

OpenClaw

  • Activation: Workspace injection + inter-agent messaging
  • Setup: See "OpenClaw Setup" section above
  • Detection: Via session tools and workspace files

Agent-Agnostic Guidance

Regardless of agent, apply self-improvement when you:
  • Discover something non-obvious - the solution wasn't immediate
  • Correct yourself - the initial approach was wrong
  • Learn project conventions - discovered undocumented patterns
  • Hit unexpected errors - especially if diagnosis was difficult
  • Find better approaches - improved on your original solution

Copilot Chat Integration

For Copilot users, add this to your prompts when relevant:

  After completing this task, evaluate if any learnings should be logged to .learnings/ using the self-improvement skill format.

Or use quick prompts:
  • "Log this to learnings"
  • "Create a skill from this solution"
  • "Check .learnings/ for related issues"

Category context

Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 doc
  • SKILL.md (primary doc)