Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
When agent sessions generate millions of tokens of conversation history, compression becomes mandatory. The naive approach is aggressive compression to minimize tokens per request. The correct optimization target is tokens per task: total tokens consumed to complete a task, including re-fetching costs when compression loses critical information.
Activate this skill when:

- Agent sessions exceed context window limits
- Codebases exceed context windows (5M+ token systems)
- Designing conversation summarization strategies
- Debugging cases where agents "forget" what files they modified
- Building evaluation frameworks for compression quality
Context compression trades token savings against information loss. Three production-ready approaches exist:

- Anchored Iterative Summarization: Maintain structured, persistent summaries with explicit sections for session intent, file modifications, decisions, and next steps. When compression triggers, summarize only the newly truncated span and merge it with the existing summary. Structure forces preservation by dedicating sections to specific information types.
- Opaque Compression: Produce compressed representations optimized for reconstruction fidelity. Achieves the highest compression ratios (99%+) but sacrifices interpretability: you cannot verify what was preserved.
- Regenerative Full Summary: Generate a detailed structured summary on each compression. Produces readable output but may lose details across repeated compression cycles due to full regeneration rather than incremental merging.

The critical insight: structure forces preservation. Dedicated sections act as checklists that the summarizer must populate, preventing silent information drift. A minimal sketch of the anchored structure follows.
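A minimal sketch of an anchored summary, assuming a dict-backed store; the section names are illustrative and should match your agent's needs:

```python
from dataclasses import dataclass, field

# Illustrative section names; adapt them to your agent's needs.
SECTIONS = ("session_intent", "files_modified", "decisions", "next_steps")

@dataclass
class AnchoredSummary:
    """Persistent summary whose fixed sections act as a preservation checklist."""
    sections: dict[str, list[str]] = field(
        default_factory=lambda: {name: [] for name in SECTIONS}
    )

    def merge(self, new_facts: dict[str, list[str]]) -> None:
        # Merge facts extracted from the newly truncated span into the
        # existing sections instead of regenerating the whole summary.
        for name, facts in new_facts.items():
            self.sections.setdefault(name, []).extend(facts)

    def render(self) -> str:
        # Emit every section even when empty, so it is visible
        # what was (and was not) preserved.
        return "\n".join(
            f"## {name}\n" + ("\n".join(f"- {fact}" for fact in facts) or "- (none)")
            for name, facts in self.sections.items()
        )
```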
Traditional compression metrics target tokens-per-request. This is the wrong optimization. When compression loses critical details like file paths or error messages, the agent must re-fetch information, re-explore approaches, and waste tokens recovering context. The right metric is tokens-per-task: total tokens consumed from task start to completion. A compression strategy saving 0.5% more tokens but causing 20% more re-fetching costs more overall.
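A toy calculation makes the difference concrete. The numbers below are hypothetical: strategy B compresses each request slightly harder but loses a file path, forcing two extra re-fetching round trips:

```python
def tokens_per_task(request_tokens: list[int]) -> int:
    """Total tokens from task start to completion, re-fetches included."""
    return sum(request_tokens)

# Hypothetical session of ten requests. Strategy B's requests are 0.5%
# smaller, but the lost detail costs two full re-fetching round trips.
strategy_a = [8_000] * 10                    # 80,000 tokens, no re-fetching
strategy_b = [7_960] * 10 + [8_000, 8_000]   # 95,600 tokens despite smaller requests

assert tokens_per_task(strategy_b) > tokens_per_task(strategy_a)
```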
Artifact trail integrity is the weakest dimension across all compression methods, scoring 2.2-2.5 out of 5.0 in evaluations. Even structured summarization with explicit file sections struggles to maintain complete file tracking across long sessions. Coding agents need to know:

- Which files were created
- Which files were modified, and what changed
- Which files were read but not changed
- Function names, variable names, error messages

This problem likely requires specialized handling beyond general summarization: a separate artifact index or explicit file-state tracking in agent scaffolding, as sketched below.
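One possible shape for that artifact index, kept outside the summary so file history survives compression cycles untouched; the state names and precedence rule are assumptions:

```python
from enum import Enum

class FileState(Enum):
    CREATED = "created"
    MODIFIED = "modified"
    READ = "read"

class ArtifactIndex:
    """Explicit file-state tracking maintained in agent scaffolding,
    independent of whatever the summarizer preserves."""

    def __init__(self) -> None:
        self._files: dict[str, FileState] = {}

    def record(self, path: str, state: FileState) -> None:
        # Assumed precedence: never downgrade CREATED/MODIFIED to READ.
        current = self._files.get(path)
        if current in (FileState.CREATED, FileState.MODIFIED) and state is FileState.READ:
            return
        self._files[path] = state

    def render(self) -> str:
        # Inject this into the context after each compression cycle.
        return "\n".join(
            f"{state.value}: {path}" for path, state in sorted(self._files.items())
        )
```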
When to trigger compression matters as much as how to compress:

| Strategy | Trigger Point | Trade-off |
| --- | --- | --- |
| Fixed threshold | 70-80% context utilization | Simple but may compress too early |
| Sliding window | Keep last N turns + summary | Predictable context size |
| Importance-based | Compress low-relevance sections first | Complex but preserves signal |
| Task-boundary | Compress at logical task completions | Clean summaries but unpredictable timing |

The sliding window approach with structured summaries provides the best balance of predictability and quality for most coding agent use cases.
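A minimal sketch combining the fixed-threshold trigger with a sliding window, per the recommendation above; the 0.75 threshold and keep_last=20 are illustrative defaults, not prescribed values:

```python
def should_compress(used_tokens: int, context_limit: int,
                    threshold: float = 0.75) -> bool:
    """Fixed-threshold trigger: fire at 70-80% context utilization."""
    return used_tokens >= threshold * context_limit

def sliding_window(messages: list[dict],
                   keep_last: int = 20) -> tuple[list[dict], list[dict]]:
    """Split history into a span to summarize and a tail kept verbatim."""
    return messages[:-keep_last], messages[-keep_last:]
```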
Traditional metrics like ROUGE or embedding similarity fail to capture functional compression quality. A summary may score high on lexical overlap while missing the one file path the agent needs. Probe-based evaluation directly measures functional quality by asking questions after compression:

| Probe Type | What It Tests | Example Question |
| --- | --- | --- |
| Recall | Factual retention | "What was the original error message?" |
| Artifact | File tracking | "Which files have we modified?" |
| Continuation | Task planning | "What should we do next?" |
| Decision | Reasoning chain | "What did we decide about the Redis issue?" |

If compression preserved the right information, the agent answers correctly. If not, it guesses or hallucinates.
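A sketch of a probe harness under stated assumptions: `agent` and `judge` are placeholders for your model calls, and expected answers are taken from the pre-compression transcript so ground truth is known:

```python
# Hypothetical probe set mirroring the table above; in practice,
# generate probes from the pre-compression transcript.
PROBES = [
    {"type": "recall",       "question": "What was the original error message?"},
    {"type": "artifact",     "question": "Which files have we modified?"},
    {"type": "continuation", "question": "What should we do next?"},
    {"type": "decision",     "question": "What did we decide about the Redis issue?"},
]

def run_probes(agent, judge, compressed_context: str,
               expected: dict[str, str]) -> float:
    """Score a compressed context by probing it.

    `agent(context, question)` answers over the compressed context;
    `judge(answer, expected)` returns 1 for a match, else 0
    (e.g. an LLM-as-judge call). Both are caller-supplied.
    """
    correct = 0
    for probe in PROBES:
        answer = agent(compressed_context, probe["question"])
        correct += judge(answer, expected[probe["type"]])
    return correct / len(PROBES)
```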
Six dimensions capture compression quality for coding agents:

- Accuracy: Are technical details correct? File paths, function names, error codes.
- Context Awareness: Does the response reflect current conversation state?
- Artifact Trail: Does the agent know which files were read or modified?
- Completeness: Does the response address all parts of the question?
- Continuity: Can work continue without re-fetching information?
- Instruction Following: Does the response respect stated constraints?

Accuracy shows the largest variation between compression methods (a 0.6-point gap). Artifact trail is universally weak (2.2-2.5 range).
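One way to roll probe results up into this rubric; the scores below are illustrative placeholders echoing the ranges reported above, not measured results:

```python
# Illustrative per-dimension scores on the 1-5 scale; in practice,
# average judge ratings across all probe responses per dimension.
scores = {
    "accuracy": 3.9,
    "context_awareness": 3.6,
    "artifact_trail": 2.3,   # the universally weak dimension
    "completeness": 3.7,
    "continuity": 3.5,
    "instruction_following": 4.0,
}

overall = sum(scores.values()) / len(scores)
weakest = min(scores, key=scores.get)  # flag what to remediate first
print(f"overall={overall:.2f}, weakest={weakest}")
```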
For large codebases or agent systems exceeding context windows, apply compression through three phases:

1. Research Phase: Produce a research document from architecture diagrams, documentation, and key interfaces. Compress exploration into a structured analysis of components and dependencies. Output: a single research document.
2. Planning Phase: Convert research into an implementation specification with function signatures, type definitions, and data flow. A 5M token codebase compresses to approximately 2,000 words of specification.
3. Implementation Phase: Execute against the specification. Context remains focused on the spec rather than raw codebase exploration.
When provided with a manual migration example or reference PR, use it as a template to understand the target pattern. The example reveals constraints that static analysis cannot surface: which invariants must hold, which services break on changes, and what a clean migration looks like. This is particularly important when the agent cannot distinguish essential complexity (business requirements) from accidental complexity (legacy workarounds). The example artifact encodes that distinction.
1. Define explicit summary sections matching your agent's needs
2. On the first compression trigger, summarize truncated history into sections
3. On subsequent compressions, summarize only newly truncated content
4. Merge the new summary into existing sections rather than regenerating
5. Track which information came from which compression cycle for debugging

The sketch below ties these steps together.
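A sketch of one compression cycle, reusing the `AnchoredSummary` class from the earlier sketch; `summarize_span` stands in for the model call that extracts section-tagged facts from the truncated span:

```python
def compress(history: list[dict], summary: "AnchoredSummary",
             summarize_span, keep_last: int = 20, cycle: int = 0) -> list[dict]:
    """One compression cycle: summarize only the newly truncated span,
    merge it into the persistent summary, and rebuild the context."""
    truncated, tail = history[:-keep_last], history[-keep_last:]
    new_facts = summarize_span(truncated)  # model call -> {section: [facts]}
    # Tag each fact with its compression cycle for later debugging (step 5).
    tagged = {section: [f"[cycle {cycle}] {fact}" for fact in facts]
              for section, facts in new_facts.items()}
    summary.merge(tagged)
    # The rebuilt context: rendered summary plus the verbatim tail.
    return [{"role": "system", "content": summary.render()}] + tail
```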
Use anchored iterative summarization when:

- Sessions are long-running (100+ messages)
- File tracking matters (coding, debugging)
- You need to verify what was preserved

Use opaque compression when:

- Maximum token savings are required
- Sessions are relatively short
- Re-fetching costs are low

Use regenerative summaries when:

- Summary interpretability is critical
- Sessions have clear phase boundaries
- Full context review is acceptable on each compression
| Method | Compression Ratio | Quality Score | Trade-off |
| --- | --- | --- | --- |
| Anchored Iterative | 98.6% | 3.70 | Best quality, slightly less compression |
| Regenerative | 98.7% | 3.44 | Good quality, moderate compression |
| Opaque | 99.3% | 3.35 | Best compression, quality loss |

The 0.7% additional tokens retained by structured summarization buys 0.35 quality points. For any task where re-fetching costs matter, this trade-off favors structured approaches.
- Optimize for tokens-per-task, not tokens-per-request
- Use structured summaries with explicit sections for file tracking
- Trigger compression at 70-80% context utilization
- Implement incremental merging rather than full regeneration
- Test compression quality with probe-based evaluation
- Track the artifact trail separately if file tracking is critical
- Accept slightly lower compression ratios for better quality retention
- Monitor re-fetching frequency as a compression quality signal
This skill connects to several others in the collection:

- context-degradation - Compression is a mitigation strategy for degradation
- context-optimization - Compression is one optimization technique among many
- evaluation - Probe-based evaluation applies to compression testing
- memory-systems - Compression relates to scratchpad and summary memory patterns
Internal reference: Evaluation Framework Reference - Detailed probe types and scoring rubrics

Related skills in this collection:

- context-degradation - Understanding what compression prevents
- context-optimization - Broader optimization strategies
- evaluation - Building evaluation frameworks

External resources:

- Factory Research: Evaluating Context Compression for AI Agents (December 2025)
- Research on LLM-as-judge evaluation methodology (Zheng et al., 2023)
- Netflix Engineering: "The Infinite Software Crisis" - Three-phase workflow and context compression at scale (AI Summit 2025)
Created: 2025-12-22
Last Updated: 2025-12-26
Author: Agent Skills for Context Engineering Contributors
Version: 1.1.0