Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Periodic self-reflection on recent sessions. Analyzes what went well, what went wrong, and writes concise, actionable insights to the appropriate workspace files.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Reflect on recent sessions and extract actionable insights. Runs hourly via cron.
```shell
# List sessions active in the last 2 hours
openclaw sessions --active 120 --json
```

Parse the output to get session keys and IDs. Skip subagent sessions (they're task workers, not interesting for reflection). Focus on:
- Telegram group/topic sessions (real user interactions)
- Direct sessions (1:1 with Brenner)
- Cron-triggered sessions (how did automated tasks go?)
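The filtering step above can be sketched in Python. Note the sample data and its field names (`key`, `sessionId`, `kind`) are assumptions for illustration, not a documented openclaw schema; adjust to the real `--json` output.

```python
import json

# Hypothetical output from `openclaw sessions --active 120 --json`;
# the field names (key, sessionId, kind) are assumed, not documented.
raw = """[
  {"key": "telegram:group:42", "sessionId": "abc123", "kind": "group"},
  {"key": "subagent:worker:1", "sessionId": "def456", "kind": "subagent"},
  {"key": "cron:morning-report", "sessionId": "ghi789", "kind": "cron"}
]"""

sessions = json.loads(raw)

# Skip subagent sessions: they're task workers, not interesting for reflection.
interesting = [s for s in sessions if s["kind"] != "subagent"]

for s in interesting:
    print(f'{s["key"]}\t{s["sessionId"]}')
```

The same filter could equally be a `jq 'select(...)'` pipeline; the point is only that subagent sessions get dropped before any transcripts are read.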
For each interesting session from Step 1, read the JSONL transcript:

```shell
# Read the last ~50 lines of each session file (keep it bounded!)
tail -50 ~/.openclaw/agents/main/sessions/<sessionId>.jsonl
```

⚠️ CRITICAL: Never load full session files. Use `tail -50` or Read with offset/limit. Sessions can be 100k+ tokens.

Parse the JSONL to understand what happened. Look for:
- `type: "user"` or `type: "human"` → what was asked
- `type: "assistant"` → what you responded
- `type: "tool_use"` / `type: "tool_result"` → what tools were called and results
- Error patterns, retries, confusion
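A minimal parsing sketch for the tailed lines, keyed off the `type` values listed above. The sample records, and every field other than `type`, are invented for illustration; real transcript lines will carry different payloads.

```python
import json

# Invented sample transcript lines, shaped like the JSONL events described above.
tail_lines = [
    '{"type": "user", "text": "Summarize my inbox"}',
    '{"type": "assistant", "text": "Here are the highlights."}',
    '{"type": "tool_use", "name": "gog", "input": {"query": "inbox"}}',
    '{"type": "tool_result", "name": "gog", "ok": false, "error": "missing --json flag"}',
]

events = [json.loads(line) for line in tail_lines]

# Bucket events by type: what was asked, what was answered, what failed.
asked = [e["text"] for e in events if e["type"] in ("user", "human")]
responses = [e["text"] for e in events if e["type"] == "assistant"]
tool_errors = [e for e in events if e["type"] == "tool_result" and not e.get("ok", True)]

print(f"{len(asked)} asks, {len(responses)} responses, {len(tool_errors)} tool errors")
```

Anything landing in `tool_errors` (or any retry/confusion pattern you spot) is a candidate for an insight in the next step.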
For each session, ask yourself:

What went well?
- Tasks completed smoothly on first try
- Good tool usage patterns worth reinforcing
- Efficient approaches to remember

What went wrong?
- Errors, retries, wrong approaches
- Misunderstandings of user intent
- Tools that didn't work as expected
- Context that was missing
Phrase each insight so future-you can apply it directly:
- "Next time, do X instead of Y"
- "Remember that Z works this way"
- "Tool A needs parameter B or it fails"
- "When user says X, they usually mean Y"

Quality bar: each insight must be:
- Specific: not "be more careful" but "check if file exists before editing"
- Actionable: something future-you can directly apply
- Non-obvious: skip things any competent agent would know
- New: don't repeat insights already captured
Each insight belongs somewhere specific. Route them:
Process and workflow notes:
- Process improvements (how to handle sessions, memory, etc.)
- New conventions or workflow rules
- Safety lessons

Tool notes:
- Tool-specific gotchas ("gog needs --json flag for parsing")
- Environment details (paths, configs, quirks)
- New tool patterns discovered

Daily memory:
- Session-specific context ("Brenner asked about X project")
- Temporary facts that matter today but not forever
- What happened today (events, decisions, requests)

User profile:
- New preferences discovered
- Communication style observations
- Project/interest updates

Skill instructions:
- Improvements to specific skill instructions
- Bug fixes in skill workflows
- New parameters or approaches for a skill
Updates to the memory index if new memory files are created
After writing all insights, produce a brief summary of what you reflected on and what you wrote. This is your output; keep it to 2-4 sentences max. If there's nothing interesting to reflect on (quiet period, only heartbeats), just say so. Don't manufacture insights.
Before writing any insight, check:
- Is this actually new? (Check existing files first)
- Is this specific and actionable?
- Am I routing it to the right file?
- Am I keeping daily memory files concise (not dumping full transcripts)?
- Did I respect the token budget (no huge file reads)?
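The first two checks above can be sketched as a small gate. The vague-phrase set and the in-memory `existing` list are illustrative stand-ins for reading the real workspace files, which aren't named in this excerpt.

```python
# Illustrative stand-ins; a real gate would read the actual workspace files.
VAGUE = {"be more careful", "improve response quality", "do better"}

def should_write(insight: str, existing: list[str]) -> bool:
    """Return True only if the insight is specific enough and not already captured."""
    text = insight.strip().lower()
    if text in VAGUE:
        return False  # fails the "specific and actionable" check
    if text in (e.strip().lower() for e in existing):
        return False  # fails the "actually new" check
    return True

existing = ["gog needs --json flag for parsing"]
print(should_write("check if file exists before editing", existing))  # True
print(should_write("improve response quality", existing))             # False
```

A real implementation would also handle near-duplicates (same lesson, different wording), which a plain string comparison misses.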
- ❌ Don't summarize every session; only extract lessons
- ❌ Don't read full JSONL files; tail/limit only
- ❌ Don't write vague insights ("improve response quality")
- ❌ Don't duplicate existing knowledge
- ❌ Don't create new files when appending to existing ones works
- ❌ Don't reflect on your own reflection sessions (skip cron:self-reflection sessions)
Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.
Largest current source with strong distribution and engagement signals.