Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Prevent context loss during LLM compaction via Write-Ahead Logging (WAL), Working Buffer, and automatic recovery. Three mechanisms that ensure critical state...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Compaction destroys specifics: file paths, exact values, config details, reasoning chains. This skill ensures critical state survives.

The problem: When your context window fills up, OpenClaw compacts older messages into a summary. Summaries lose precision. Exact numbers become "approximately," file paths vanish, decisions lose their rationale. Your agent wakes up dumber after every compaction.

The fix: Three mechanisms that capture critical state before compaction hits, and recover it after.
On EVERY incoming message, scan for:
- Corrections: "It's X, not Y" / "Actually..."
- Proper nouns: names, places, companies, products
- Preferences: styles, approaches, "I like/don't like"
- Decisions: "Let's do X" / "Go with Y"
- Draft changes: edits to active work
- Specific values: numbers, dates, IDs, URLs, paths

If ANY appear:
1. STOP: do not compose the response yet
2. WRITE: update SESSION-STATE.md with the detail
3. THEN: respond to the human

The trigger fires on the human's INPUT, not your memory. Write what they said, not what you think.
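The write-ahead step above can be sketched in code. This is a minimal illustration, not the skill's actual mechanism: the real trigger relies on the agent's own judgment, whereas this sketch approximates the categories with hypothetical regexes, and the `SESSION-STATE.md` entry format is an assumption.

```python
import re
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical patterns approximating the WAL trigger categories;
# the real skill uses agent judgment, not fixed regexes.
TRIGGERS = {
    "correction": re.compile(r"\bactually\b|\bit'?s \w+, not \w+", re.I),
    "decision":   re.compile(r"\blet'?s do\b|\bgo with\b|\bwe'?ll use\b", re.I),
    "preference": re.compile(r"\bi (?:don'?t )?(?:like|prefer)\b", re.I),
    "specific":   re.compile(r"\d{4}-\d{2}-\d{2}|https?://\S+|\b\d+\b"),
}

def wal_check(message: str, state_file: Path = Path("SESSION-STATE.md")) -> list[str]:
    """Scan an incoming message; write any hits to SESSION-STATE.md BEFORE responding."""
    hits = [name for name, pat in TRIGGERS.items() if pat.search(message)]
    if hits:
        # Write-ahead: persist the human's exact words first, then respond.
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        with state_file.open("a", encoding="utf-8") as f:
            f.write(f"- [{stamp}] ({', '.join(hits)}) {message.strip()}\n")
    return hits
```

The key design point is ordering: the file write happens before any response is composed, so a compaction between input and output cannot lose the detail.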
At 60% context utilization (check via session_status):
- Create/clear memory/working-buffer.md and write the header:

      # Working Buffer (Danger Zone)
      **Status:** ACTIVE
      **Started:** [timestamp]

- Every exchange after 60%: append the human's message plus a summary of your response
- The buffer is a file, so it survives compaction
- Leave the buffer as-is until the next 60% threshold in a new session

Location: memory/working-buffer.md
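The buffering behavior above can be sketched as two small file operations. This is an illustrative sketch only: function names are invented, and the utilization value is assumed to come from session_status rather than being computed here.

```python
from datetime import datetime, timezone
from pathlib import Path

THRESHOLD = 0.60  # the 60% context-utilization trigger described above

def start_buffer(utilization: float, buffer: Path = Path("memory/working-buffer.md")) -> bool:
    """Create/clear the buffer with the ACTIVE header once the threshold is crossed."""
    if utilization < THRESHOLD:
        return False
    buffer.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    buffer.write_text(
        f"# Working Buffer (Danger Zone)\n**Status:** ACTIVE\n**Started:** {stamp}\n\n",
        encoding="utf-8",
    )
    return True

def buffer_exchange(human_msg: str, response_summary: str,
                    buffer: Path = Path("memory/working-buffer.md")) -> None:
    """Append one exchange; the file survives compaction even though context does not."""
    with buffer.open("a", encoding="utf-8") as f:
        f.write(f"- **Human:** {human_msg.strip()}\n  **Agent:** {response_summary.strip()}\n")
```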
Auto-trigger when:
- The session starts with a <summary> tag in context
- You should know something but don't
- The human says "where were we?" / "continue" / "what were we doing?"

Recovery steps (in order):
1. Read memory/working-buffer.md (raw danger-zone exchanges)
2. Read SESSION-STATE.md (active task state)
3. Read today's and yesterday's memory/YYYY-MM-DD.md
4. Run memory_search if context is still missing
5. Extract important context from the buffer and update SESSION-STATE.md
6. Report: "Recovered context. Last task was X. Continuing."

NEVER ask "what were we discussing?" The buffer has the answer.
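The ordered read in the recovery steps can be sketched as follows. The `recover` function name and the concatenated-report format are assumptions for illustration; the memory_search fallback is not sketched.

```python
from datetime import date, timedelta
from pathlib import Path

def recover(root: Path = Path(".")) -> str:
    """Read recovery sources in priority order; return whatever survives."""
    today = date.today()
    sources = [
        root / "memory" / "working-buffer.md",                          # raw danger-zone exchanges
        root / "SESSION-STATE.md",                                      # active task state
        root / "memory" / f"{today:%Y-%m-%d}.md",                       # today's daily log
        root / "memory" / f"{today - timedelta(days=1):%Y-%m-%d}.md",   # yesterday's daily log
    ]
    parts = []
    for src in sources:
        if src.exists():
            parts.append(f"## {src.name}\n{src.read_text(encoding='utf-8')}")
    # If nothing was found, the skill falls back to memory_search (not sketched here).
    return "\n\n".join(parts)
```

Reading the buffer first matters: it holds the rawest, most recent exchanges, so later sources only fill gaps rather than override it.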
┌───────────────────────────┐
│    Human sends message    │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│ WAL: Scan for specifics   │
│ Found? Write first.       │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│ Context > 60%?            │
│ Buffer everything         │
└─────────────┬─────────────┘
              │
┌─────────────▼─────────────┐
│     Respond to human      │
└─────────────┬─────────────┘
              │
      ┌───────▼────────┐
      │ COMPACTION HIT │
      └───────┬────────┘
              │
┌─────────────▼─────────────┐
│ Recovery: Read buffer,    │
│ SESSION-STATE, daily log  │
│ → Full context restored   │
└───────────────────────────┘
- Works alongside MEMORY.md (long-term) and memory/YYYY-MM-DD.md (daily logs)
- SESSION-STATE.md = working memory for the current task
- Working buffer = emergency capture for the danger zone
- All three layers stack: WAL → Buffer → Recovery
- No dependencies. No API keys. Pure behavioral patterns.
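Putting the files together, the assumed workspace layout looks roughly like this (exact paths per the skill docs above; the daily-log filename is just an example date):

```
workspace/
├── MEMORY.md               # long-term memory
├── SESSION-STATE.md        # working memory for the current task (WAL target)
└── memory/
    ├── 2025-01-15.md       # daily log (YYYY-MM-DD.md)
    └── working-buffer.md   # danger-zone capture after 60% utilization
```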
Most "memory" solutions try to store everything forever. That's the wrong problem. The real problem is precision loss during compaction. You don't need to remember everything; you need to remember the RIGHT things at the RIGHT time. WAL catches specifics the moment they appear. The buffer captures the danger zone. Recovery restores context after the reset. Three layers, zero dependencies, zero data leakage.

Built by @rustyorb + S1nthetta. Battle-tested across 30+ compaction events.
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.