# Send claw-compactor to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of working through the setup manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction",
    "name": "claw-compactor",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/aeromomo/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction",
    "canonicalUrl": "https://clawhub.ai/aeromomo/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "pyproject.toml",
      "README.md",
      "SKILL.md",
      "scripts/mem_compress.py",
      "scripts/estimate_tokens.py",
      "scripts/dedup_memory.py"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction",
    "downloadUrl": "https://openagent3.xyz/downloads/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction",
    "agentUrl": "https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction/agent",
    "manifestUrl": "https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction/agent.md"
  }
}
```
## Documentation

### 🦞 Claw Compactor

"Cut your tokens. Keep your facts."

Cut your AI agent's token spend in half. One command compresses your entire workspace — memory files, session transcripts, sub-agent context — using 5 layered compression techniques. Deterministic. Mostly lossless. No LLM required.

### Features

- 5 compression layers working in sequence for maximum savings
- Zero LLM cost — all compression is rule-based and deterministic
- Lossless roundtrip for dictionary, RLE, and rule-based compression
- ~97% savings on session transcripts via observation extraction
- Tiered summaries (L0/L1/L2) for progressive context loading
- CJK-aware — full Chinese/Japanese/Korean support
- One command (`full`) runs everything in optimal order

### 5 Compression Layers

| # | Layer | Method | Savings | Lossless? |
|---|-------|--------|---------|-----------|
| 1 | Rule engine | Dedup lines, strip markdown filler, merge sections | 4-8% | ✅ |
| 2 | Dictionary encoding | Auto-learned codebook, `$XX` substitution | 4-5% | ✅ |
| 3 | Observation compression | Session JSONL → structured summaries | ~97% | ❌* |
| 4 | RLE patterns | Path shorthand (`$WS`), IP prefix, enum compaction | 1-2% | ✅ |
| 5 | Compressed Context Protocol | ultra/medium/light abbreviation | 20-60% | ❌* |

*Lossy techniques preserve all facts and decisions; only verbose formatting is removed.
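The lossless layers can be pictured as reversible text substitution. Here is a minimal sketch of the layer-2 dictionary idea: frequent phrases are swapped for short `$XX` codes from a codebook, and expansion reverses the swap exactly. The function names and codebook format are illustrative assumptions, not the actual claw-compactor implementation.

```python
# Hypothetical codebook: short $XX codes for frequent phrases.
codebook = {"$01": "session transcript", "$02": "observation extraction"}

def compress_text(text: str, codebook: dict) -> str:
    # Substitute each phrase with its code, longest phrases first so a
    # shorter phrase never clobbers part of a longer one.
    for code, phrase in sorted(codebook.items(), key=lambda kv: -len(kv[1])):
        text = text.replace(phrase, code)
    return text

def decompress_text(text: str, codebook: dict) -> str:
    # Expand every code back to its original phrase.
    for code, phrase in codebook.items():
        text = text.replace(code, phrase)
    return text

original = "Run observation extraction on each session transcript."
encoded = compress_text(original, codebook)
assert decompress_text(encoded, codebook) == original  # lossless roundtrip
```

The roundtrip assertion is what "lossless" means in the table above: decompression recovers the input byte-for-byte, so savings come purely from shorter on-disk representation.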

### Quick Start

```text
git clone https://github.com/aeromomo/claw-compactor.git
cd claw-compactor

# See how much you'd save (non-destructive)
python3 scripts/mem_compress.py /path/to/workspace benchmark

# Compress everything
python3 scripts/mem_compress.py /path/to/workspace full
```

Requirements: Python 3.9+. Optional: `pip install tiktoken` for exact token counts (falls back to a heuristic).
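The fallback heuristic can be sketched as roughly four characters per token for ASCII-like text, with CJK characters counted closer to one token each. The exact weighting claw-compactor uses is an assumption here; this is just the shape of a chars÷4, CJK-aware estimate.

```python
def heuristic_tokens(text: str, chars_per_token: int = 4) -> int:
    """Rough token estimate used when tiktoken is unavailable (assumed)."""
    # Count CJK characters separately (basic CJK Unified Ideographs block).
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    other = len(text) - cjk
    # Assume ~1 token per CJK char, ~1 token per 4 other chars (rounded up).
    return cjk + (other + chars_per_token - 1) // chars_per_token

print(heuristic_tokens("compress this workspace"))  # 23 chars → 6
```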

### Architecture

```text
┌─────────────────────────────────────────────────────────────┐
│                      mem_compress.py                        │
│                   (unified entry point)                     │
└──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬────┘
       │      │      │      │      │      │      │      │
       ▼      ▼      ▼      ▼      ▼      ▼      ▼      ▼
  estimate compress  dict  dedup observe tiers  audit optimize
       └──────┴──────┴──┬───┴──────┴──────┴──────┴──────┘
                        ▼
                  ┌────────────────┐
                  │     lib/       │
                  │ tokens.py      │ ← tiktoken or heuristic
                  │ markdown.py    │ ← section parsing
                  │ dedup.py       │ ← shingle hashing
                  │ dictionary.py  │ ← codebook compression
                  │ rle.py         │ ← path/IP/enum encoding
                  │ tokenizer_     │
                  │   optimizer.py │ ← format optimization
                  │ config.py      │ ← JSON config
                  │ exceptions.py  │ ← error types
                  └────────────────┘
```

### Commands

All commands: `python3 scripts/mem_compress.py <workspace> <command> [options]`

| Command | Description | Typical Savings |
|---------|-------------|-----------------|
| `full` | Complete pipeline (all steps in order) | 50%+ combined |
| `benchmark` | Dry-run performance report | — |
| `compress` | Rule-based compression | 4-8% |
| `dict` | Dictionary encoding with auto-codebook | 4-5% |
| `observe` | Session transcript → observations | ~97% |
| `tiers` | Generate L0/L1/L2 summaries | 88-95% on sub-agent loads |
| `dedup` | Cross-file duplicate detection | varies |
| `estimate` | Token count report | — |
| `audit` | Workspace health check | — |
| `optimize` | Tokenizer-level format fixes | 1-3% |

### Global Options

- `--json` — machine-readable JSON output
- `--dry-run` — preview changes without writing
- `--since YYYY-MM-DD` — filter sessions by date
- `--auto-merge` — auto-merge duplicates (`dedup` only)
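As a hedged sketch, date filtering of the kind `--since` implies could work like this: keep only session files whose date-stamped names fall on or after the cutoff. The filename convention (a `YYYY-MM-DD` prefix) is an assumption for illustration, not the documented format.

```python
from datetime import date

def filter_since(filenames: list[str], since: str) -> list[str]:
    """Keep files dated on or after the cutoff (filename format assumed)."""
    cutoff = date.fromisoformat(since)
    # Assume names like "2026-04-12-session.jsonl"; parse the leading date.
    return [n for n in filenames if date.fromisoformat(n[:10]) >= cutoff]

print(filter_since(["2026-04-12-a.jsonl", "2026-03-01-b.jsonl"], "2026-04-01"))
# → ['2026-04-12-a.jsonl']
```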

### Real-World Savings

| Workspace State | Typical Savings | Notes |
|-----------------|-----------------|-------|
| Session transcripts (`observe`) | ~97% | Megabytes of JSONL → concise observation MD |
| Verbose/new workspace | 50-70% | First run on unoptimized workspace |
| Regular maintenance | 10-20% | Weekly runs on active workspace |
| Already-optimized | 3-12% | Diminishing returns — workspace is clean |

### cacheRetention — Complementary Optimization

Before compression runs, enable prompt caching for a 90% discount on cached tokens:

```json
{
  "models": {
    "model-name": {
      "cacheRetention": "long"
    }
  }
}
```

Compression reduces the token count; caching reduces the cost per token. Together: 50% compression + 90% cache discount = 95% effective cost reduction.
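The arithmetic behind that combined figure, assuming every remaining token is served from cache: halving the token count and then paying a tenth per token leaves 5% of the original cost.

```python
compression = 0.50      # fraction of tokens removed by the pipeline
cache_discount = 0.90   # discount applied to cached tokens
# Remaining cost = (tokens kept) × (price per cached token)
remaining = (1 - compression) * (1 - cache_discount)  # 0.05 of original
print(f"effective cost reduction: {1 - remaining:.0%}")  # → 95%
```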

### Heartbeat Automation

Run weekly or on heartbeat:

```text
## Memory Maintenance (weekly)
- python3 skills/claw-compactor/scripts/mem_compress.py <workspace> benchmark
- If savings > 5%: run full pipeline
- If pending transcripts: run observe
```

Cron example:

```text
0 3 * * 0 cd /path/to/skills/claw-compactor && python3 scripts/mem_compress.py /path/to/workspace full
```

### Configuration

Optional `claw-compactor-config.json` in the workspace root:

```json
{
  "chars_per_token": 4,
  "level0_max_tokens": 200,
  "level1_max_tokens": 500,
  "dedup_similarity_threshold": 0.6,
  "dedup_shingle_size": 3
}
```

All fields optional — sensible defaults are used when absent.
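A minimal sketch of how such an optional config could be merged over defaults; the default values mirror the fields shown above, but the loader itself is an assumption about how `lib/config.py` behaves.

```python
import json
from pathlib import Path

# Defaults mirror the documented config fields.
DEFAULTS = {
    "chars_per_token": 4,
    "level0_max_tokens": 200,
    "level1_max_tokens": 500,
    "dedup_similarity_threshold": 0.6,
    "dedup_shingle_size": 3,
}

def load_config(workspace: str) -> dict:
    """Merge an optional workspace config file over sensible defaults."""
    path = Path(workspace) / "claw-compactor-config.json"
    config = dict(DEFAULTS)  # every field is optional
    if path.is_file():
        config.update(json.loads(path.read_text()))
    return config
```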

### Artifacts

| File | Purpose |
|------|---------|
| `memory/.codebook.json` | Dictionary codebook (must travel with memory files) |
| `memory/.observed-sessions.json` | Tracks processed transcripts |
| `memory/observations/` | Compressed session summaries |
| `memory/MEMORY-L0.md` | Level 0 summary (~200 tokens) |

### FAQ

Q: Will compression lose my data?
A: Rule engine, dictionary, RLE, and tokenizer optimization are fully lossless. Observation compression and CCP are lossy but preserve all facts and decisions.

Q: How does dictionary decompression work?
A: `decompress_text(text, codebook)` expands all `$XX` codes back. The codebook JSON must be present.
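As a hedged sketch of that step: load the codebook JSON and expand every `$XX` code. The codebook format (`{"$01": "phrase", ...}`) is an assumption about the on-disk layout of `memory/.codebook.json`, not the verified schema.

```python
import json
import re

def decompress_text(text: str, codebook: dict) -> str:
    # Replace each $NN code with its phrase; unknown codes pass through.
    return re.sub(r"\$\d{2}", lambda m: codebook.get(m.group(0), m.group(0)), text)

codebook = json.loads('{"$01": "session transcript"}')
print(decompress_text("Compress the $01.", codebook))
# → Compress the session transcript.
```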

Q: Can I run individual steps?
A: Yes. Every command is independent: `compress`, `dict`, `observe`, `tiers`, `dedup`, `optimize`.

Q: What if `tiktoken` isn't installed?
A: It falls back to a CJK-aware heuristic (chars ÷ 4). Results are ~90% accurate.

Q: Does it handle Chinese/Japanese/Unicode?
A: Yes. Full CJK support including character-aware token estimation and Chinese punctuation normalization.

### Troubleshooting

- `FileNotFoundError` on workspace: ensure the path points to the workspace root (contains `memory/` or `MEMORY.md`)
- Dictionary decompression fails: check that `memory/.codebook.json` exists and is valid JSON
- Zero savings on benchmark: workspace is already optimized — nothing to do
- `observe` finds no transcripts: check the sessions directory for `.jsonl` files
- Token count seems wrong: install tiktoken with `pip3 install tiktoken`

### Credits

- Inspired by claude-mem by thedotmack
- Built by Bot777 🤖 for OpenClaw

### License

MIT
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: aeromomo
- Version: 6.0.0
## Source health
- Status: healthy
- Source download looks usable.
- Yavira can redirect you to the upstream package for this source.
- Health scope: source
- Reason: direct_download_ok
- Checked at: 2026-04-30T16:55:25.780Z
- Expires at: 2026-05-07T16:55:25.780Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction)
- [Send to Agent page](https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction/agent)
- [JSON manifest](https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction/agent.md)
- [Download page](https://openagent3.xyz/downloads/cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction)