
Compaction Survival System

Prevent context loss during LLM compaction via Write-Ahead Logging (WAL), Working Buffer, and automatic recovery. Three mechanisms that ensure critical state...

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than working through the install manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (8 sections)

Compaction Survival System

Compaction destroys specifics: file paths, exact values, config details, reasoning chains. This skill ensures critical state survives.

The problem: when your context window fills up, OpenClaw compacts older messages into a summary. Summaries lose precision: exact numbers become "approximately," file paths vanish, decisions lose their rationale. Your agent wakes up dumber after every compaction.

The fix: three mechanisms that capture critical state before compaction hits, and recover it after.

1. WAL Protocol (Write-Ahead Logging)

On EVERY incoming message, scan for:

  • ✏️ Corrections: "It's X, not Y" / "Actually..."
  • 📍 Proper nouns: names, places, companies, products
  • 🎨 Preferences: styles, approaches, "I like/don't like"
  • 📋 Decisions: "Let's do X" / "Go with Y"
  • 📝 Draft changes: edits to active work
  • 🔒 Specific values: numbers, dates, IDs, URLs, paths

If ANY appear:

  1. STOP: do not compose the response yet.
  2. WRITE: update SESSION-STATE.md with the detail.
  3. THEN: respond to the human.

The trigger fires on the human's INPUT, not your memory. Write what they said, not what you think.
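The scan-then-write discipline above can be sketched as a small pre-response hook. This is a hypothetical approximation, not the skill's actual implementation: the trigger patterns below are illustrative and cover only a few of the categories; a real agent would scan for all of them.

```python
import re
from datetime import datetime, timezone

# Illustrative trigger patterns (assumption: a real scan covers every
# WAL category, not just these four).
WAL_PATTERNS = {
    "correction": re.compile(r"\b(actually|it's .+?, not )", re.IGNORECASE),
    "decision":   re.compile(r"\b(let's (do|go with)|go with)\b", re.IGNORECASE),
    "preference": re.compile(r"\bI (like|don't like|prefer)\b", re.IGNORECASE),
    "value":      re.compile(r"\b\d{2,}\b|https?://\S+|(/[\w.-]+){2,}"),
}

def wal_capture(message: str, state_path: str = "SESSION-STATE.md") -> list[str]:
    """Scan an incoming message for WAL triggers and append matches to the
    state file BEFORE any response is composed. Returns the hit categories."""
    hits = [name for name, pat in WAL_PATTERNS.items() if pat.search(message)]
    if hits:
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        with open(state_path, "a", encoding="utf-8") as f:
            f.write(f"\n- [{stamp}] ({', '.join(hits)}) {message.strip()}\n")
    return hits
```

The write happens inside the scan, before control returns to the responder, which is the whole point of write-ahead logging: the durable record exists before any work that could be lost.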

2. Working Buffer (Danger Zone)

At 60% context utilization (check via session_status):

  1. Create or clear memory/working-buffer.md and write the header: "# Working Buffer (Danger Zone)", "**Status:** ACTIVE", "**Started:** [timestamp]".
  2. After every exchange past 60%, append the human's message plus a summary of your response.

The buffer is a file, so it survives compaction. Leave it as-is until the next 60% threshold in a new session.

Location: memory/working-buffer.md
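A minimal sketch of the buffer mechanics, assuming your runtime's session_status equivalent can report utilization as a fraction (that interface, and the exact file layout, are assumptions):

```python
from datetime import datetime, timezone
from pathlib import Path

BUFFER = Path("memory/working-buffer.md")
DANGER_THRESHOLD = 0.60  # 60% context utilization, per the protocol above

def enter_danger_zone() -> None:
    """Create/clear the buffer with its header once utilization crosses 60%."""
    BUFFER.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    BUFFER.write_text(
        f"# Working Buffer (Danger Zone)\n"
        f"**Status:** ACTIVE\n**Started:** {stamp}\n\n",
        encoding="utf-8",
    )

def buffer_exchange(human_msg: str, response_summary: str,
                    utilization: float) -> bool:
    """Append one exchange if past the threshold; return whether it was buffered."""
    if utilization < DANGER_THRESHOLD:
        return False
    if not BUFFER.exists():
        enter_danger_zone()
    with BUFFER.open("a", encoding="utf-8") as f:
        f.write(f"**Human:** {human_msg}\n**Agent:** {response_summary}\n\n")
    return True
```

Appending to a file rather than holding the transcript in context is what makes the danger zone survivable: compaction can rewrite the conversation, but not the filesystem.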

3. Compaction Recovery

Auto-trigger when:

  • The session starts with a <summary> tag in context.
  • You should know something but don't.
  • The human says "where were we?" / "continue" / "what were we doing?"

Recovery steps (in order):

  1. Read memory/working-buffer.md: raw danger-zone exchanges.
  2. Read SESSION-STATE.md: active task state.
  3. Read today's and yesterday's memory/YYYY-MM-DD.md.
  4. Run memory_search if context is still missing.
  5. Extract important context from the buffer and update SESSION-STATE.md.
  6. Report: "Recovered context. Last task was X. Continuing."

NEVER ask "what were we discussing?" The buffer has the answer.
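The ordered reads can be sketched as follows, under the same assumed file layout. Steps 4 and 5 (memory_search and folding the buffer back into SESSION-STATE.md) are runtime-specific and deliberately omitted:

```python
from datetime import date, timedelta
from pathlib import Path

def recover_context(memory_dir: str = "memory") -> dict[str, str]:
    """Read the recovery sources in the order listed above; missing files
    are skipped rather than treated as errors."""
    today = date.today()
    sources = [
        Path(memory_dir) / "working-buffer.md",  # 1. raw danger-zone exchanges
        Path("SESSION-STATE.md"),                # 2. active task state
        Path(memory_dir) / f"{today:%Y-%m-%d}.md",                      # 3. today
        Path(memory_dir) / f"{today - timedelta(days=1):%Y-%m-%d}.md",  # 3. yesterday
    ]
    # Collect whatever exists, keyed by path, preserving the read order.
    return {str(p): p.read_text(encoding="utf-8")
            for p in sources if p.exists()}
```

Tolerating missing files matters: a fresh session may have no buffer yet, and recovery should degrade to "read what's there" instead of failing.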

SESSION-STATE.md Format

  • # Session State - Active Working Memory
  • ## Current Task
  • [What we're actively working on]
  • ## Key Details
  • [Specific values, paths, configs captured via WAL]
  • ## Decisions Made
  • [Decisions with rationale]
  • ## Pending
  • [What's waiting/blocked]
  • ## Last Updated
  • [timestamp]

Update this file frequently. It's your RAM: the only place specifics survive between compaction events.
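As an illustration, a small helper (hypothetical, not part of the skill package) that writes a fresh SESSION-STATE.md in this format:

```python
from datetime import datetime, timezone

SESSION_STATE_TEMPLATE = """# Session State - Active Working Memory

## Current Task
{task}

## Key Details
{details}

## Decisions Made
{decisions}

## Pending
{pending}

## Last Updated
{stamp}
"""

def write_session_state(task: str = "", details: str = "", decisions: str = "",
                        pending: str = "", path: str = "SESSION-STATE.md") -> None:
    """Overwrite the state file with a freshly timestamped template."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with open(path, "w", encoding="utf-8") as f:
        f.write(SESSION_STATE_TEMPLATE.format(
            task=task, details=details, decisions=decisions,
            pending=pending, stamp=stamp))
```

In practice the WAL protocol appends to the relevant section rather than rewriting the whole file, but the fixed section headings are what make the file machine-recoverable after compaction.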

How It Works Together

┌──────────────────────────┐
│   Human sends message    │
└────────────┬─────────────┘
             │
┌────────────▼─────────────┐
│ WAL: Scan for specifics  │
│ Found? Write first.      │
└────────────┬─────────────┘
             │
┌────────────▼─────────────────────┐
│ Context > 60%? Buffer everything │
└────────────┬─────────────────────┘
             │
┌────────────▼─────────────┐
│    Respond to human      │
└────────────┬─────────────┘
             │
    ┌────────▼────────┐
    │ COMPACTION HIT  │
    └────────┬────────┘
             │
┌────────────▼─────────────┐
│ Recovery: Read buffer,   │
│ SESSION-STATE, daily log │
│ → Full context restored  │
└──────────────────────────┘

Integration

  • Works alongside MEMORY.md (long-term) and memory/YYYY-MM-DD.md (daily logs).
  • SESSION-STATE.md = working memory for the current task.
  • Working buffer = emergency capture for the danger zone.
  • All three layers stack: WAL → Buffer → Recovery.
  • No dependencies. No API keys. Pure behavioral patterns.

Why This Works

Most "memory" solutions try to store everything forever. That's the wrong problem. The real problem is precision loss during compaction. You don't need to remember everything β€” you need to remember the RIGHT things at the RIGHT time. WAL catches specifics the moment they appear. The buffer captures the danger zone. Recovery restores context after the reset. Three layers, zero dependencies, zero data leakage. Built by @rustyorb + S1nthetta ⚑ β€” Battle-tested across 30+ compaction events.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 Docs
  • SKILL.md Primary doc