Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Compress text semantically with iterative validation, anchor checksums, and verified information preservation.
Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
This is SEMANTIC compression, not bit-perfect lossless.
- L1-L2: Verified reconstruction, production-ready
- L3-L4: Experimental, may lose subtle information
- Never use for: medical dosages, legal text, financial figures, safety-critical data
1. Compress original O → compressed C
2. Extract anchors from O (entities, numbers, dates)
3. Reconstruct C → R (without seeing O)
4. Verify: anchors match + semantic diff
5. If mismatch → refine C with missing info
6. Repeat until validated (max 3 iterations)

Convergence = verified. No convergence after 3 rounds = level too aggressive.
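The loop above can be sketched in a few lines. This is an illustrative sketch only: the `llm()` callable, its prompt strings, and the regex anchor patterns are assumptions for illustration, not the skill's actual implementation.

```python
import re

def extract_anchors(text):
    """Pull dollar figures, ISO dates, and quoted names out of the text."""
    patterns = [
        r"\$\d[\d,]*(?:\.\d+)?",   # dollar figures like $42,000
        r"\d{4}-\d{2}-\d{2}",      # ISO dates like 2024-03-15
        r'"[^"]+"',                # quoted entity names
    ]
    anchors = set()
    for pattern in patterns:
        anchors.update(re.findall(pattern, text))
    return anchors

def compress_with_validation(original, llm, level="L2", max_rounds=3):
    """Compress, then reconstruct blind and refine until all anchors survive."""
    anchors = extract_anchors(original)
    compressed = llm(f"Compress to {level}: {original}")
    for rounds in range(1, max_rounds + 1):
        # Reconstruct WITHOUT showing the original.
        reconstruction = llm(f"Expand: {compressed}")
        missing = anchors - extract_anchors(reconstruction)
        if not missing:  # convergence: every anchor reproduced verbatim
            return {"compressed": compressed, "rounds": rounds, "verified": True}
        # Refine: fold the missing facts back into the compressed form.
        compressed = llm(f"Revise to also state {sorted(missing)}: {compressed}")
    # No convergence after max_rounds: the chosen level is too aggressive.
    return {"compressed": compressed, "rounds": max_rounds, "verified": False}
```

A real semantic diff would need another model call; the sketch approximates step 4 with the anchor check alone.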
| Task | Load |
|------|------|
| Compression levels (L1-L4) | levels.md |
| Validation algorithm details | validation.md |
| Format-specific strategies | formats.md |
| Token budgeting and metrics | metrics.md |
| Level | Ratio | Reliability | Use Case |
|-------|-------|-------------|----------|
| L1 | ~0.8x | ✅ High | Production, human-readable |
| L2 | ~0.5x | ✅ Good | System prompts, repeated use |
| L3 | ~0.3x | ⚠️ Moderate | Experimental, review output |
| L4 | ~0.15x | ⚠️ Low | Research only, expect losses |
Before compression, extract critical facts:

[ANCHORS: 3 people, $42,000, 2024-03-15, "Project Alpha"]

Reconstruction MUST reproduce these exactly. If anchors mismatch → compression failed.
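One way to quantify the anchor check is a match rate over verbatim substring hits. The helper below is a sketch under that assumption, not the skill's shipped code:

```python
def anchor_match_rate(anchors, reconstruction):
    """Fraction of required anchors reproduced verbatim in the reconstruction."""
    if not anchors:
        return 1.0
    hits = sum(1 for anchor in anchors if anchor in reconstruction)
    return hits / len(anchors)

anchors = ["3 people", "$42,000", "2024-03-15", '"Project Alpha"']
good = 'On 2024-03-15, 3 people approved $42,000 for "Project Alpha".'
bad = "A team approved the project budget in March."
anchor_match_rate(anchors, good)  # 1.0 -> compression validated
anchor_match_rate(anchors, bad)   # 0.0 -> compression failed
```

Verbatim matching is deliberately strict: a paraphrased figure or reworded date counts as a miss, which is the point of anchors.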
- Always validate → never trust compression without a reconstruction test
- Use anchors → extract numbers, names, dates before compressing
- Cap at L2 for production → L3-L4 are experimental
- Report confidence → include iteration count and anchor match rate
- Independent verification → consider a different model for reconstruction
Each compression costs 3-4 LLM calls. Break-even calculation:

break_even_retrievals = compression_tokens / saved_tokens_per_use

Only cost-effective if you'll retrieve the compressed content 6-8+ times. For one-time use, just use the original text.
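The break-even formula as a one-liner, with illustrative numbers that are assumptions, not measurements:

```python
def break_even_retrievals(compression_tokens, saved_tokens_per_use):
    """Number of retrievals needed before compression pays for itself."""
    return compression_tokens / saved_tokens_per_use

# Assume compressing costs ~3,500 tokens across 3-4 LLM calls and each
# retrieval of the compressed form saves ~500 tokens versus the original.
n = break_even_retrievals(3500, 500)  # 7.0, inside the 6-8+ range above
```

If your expected retrieval count is below `n`, skip compression and ship the original text.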
- Content type is NOT safety-critical
- Target level chosen (L1-L2 recommended)
- Anchors identified (numbers, names, dates)
- ROI makes sense (multiple retrievals expected)