Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Reduce OpenClaw AI costs with model-aware optimization. Features dynamic compaction presets based on your model's context window, intelligent file compression, and robust model detection with fallback. Supports Claude, GPT-4, Gemini, DeepSeek, and more.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
💡 Did you know? Every API call sends your workspace files (SOUL.md, USER.md, MEMORY.md, AGENTS.md, etc.) along with your message. These files count toward your context window, slowing responses and costing real money on every message. Token Saver v3 is model-aware: it knows your model's context window and adapts recommendations accordingly. Using Gemini's 1M context? Presets scale up. On GPT-4o's 128K? Presets adjust down.
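To see why this overhead matters, here is a minimal sketch of the per-message cost of workspace files, using the common ~4-characters-per-token heuristic. The function names and the marker values are illustrative, not the skill's actual code:

```python
# Rough heuristic: ~4 characters per token for English prose.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Approximate token count of a file's contents."""
    return max(1, len(text) // CHARS_PER_TOKEN) if text else 0

def workspace_overhead(files: dict[str, str]) -> int:
    """Estimated tokens these files add to EVERY message sent."""
    return sum(estimate_tokens(body) for body in files.values())

# Example: an 8,000-char SOUL.md and a 20,000-char MEMORY.md
files = {"SOUL.md": "x" * 8000, "MEMORY.md": "x" * 20000}
print(workspace_overhead(files))  # prints 7000
```

At typical per-token pricing, those ~7K tokens are billed again on every single message in a session, which is the cost the compression features below target.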
| Feature | v2 | v3 |
| --- | --- | --- |
| Compaction presets | Fixed (80K/120K/160K) | Dynamic (% of model's context) |
| Model detection | Fragile, env-only | Robust fallback chain |
| Context windows | Not tracked | Full registry (9 models) |
| Model info | Hardcoded pricing | JSON registry, easy updates |
| Already-optimized files | Re-compressed | Smart bypass |
| Command | What it does |
| --- | --- |
| `/optimize` | Full dashboard: files, models, context usage % |
| `/optimize tokens` | Compress workspace files (auto-backup) |
| `/optimize compaction` | Chat compaction control (model-aware) |
| `/optimize compaction balanced` | Apply balanced preset (60% of context) |
| `/optimize compaction 120` | Custom threshold (compact at 120K) |
| `/optimize models` | Detailed model audit with registry |
| `/optimize revert` | Restore backups, disable persistent mode |
Shows current model, context window, and usage percentage:

```
🤖 Model: Claude Opus 4.5 (200K context)
   Detected: openclaw.json
📊 Context Usage: [████████░░░░░░░░░░░░] 42% (84K/200K)
```
Scans all .md files, shows token count and potential savings. Smart bypass skips already-optimized files. File-aware compression:

- SOUL.md → Light compression, keeps personality language
- AGENTS.md → Medium compression, dense instructions
- USER.md / MEMORY.md → Heavy compression, key:value format
- PROJECTS.md → No compression (user structure preserved)
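The per-file policy plus the smart bypass can be sketched as a simple lookup; the policy table mirrors the list above, while `OPTIMIZED_MARKER` and the function names are assumptions for illustration, not the skill's real internals:

```python
# Per-file compression policy, mirroring the documented behavior.
COMPRESSION_POLICY = {
    "SOUL.md": "light",      # keeps personality language
    "AGENTS.md": "medium",   # dense instructions
    "USER.md": "heavy",      # key:value format
    "MEMORY.md": "heavy",
    "PROJECTS.md": "none",   # user structure preserved
}

# Hypothetical marker a previous run might leave behind.
OPTIMIZED_MARKER = "<!-- token-saver:optimized -->"

def compression_level(filename: str, contents: str) -> str:
    """Pick a compression level, skipping already-optimized files."""
    if OPTIMIZED_MARKER in contents:  # smart bypass
        return "skip"
    return COMPRESSION_POLICY.get(filename, "medium")
```

A second `/optimize tokens` run would then leave previously compressed files untouched instead of re-compressing them, which is the v3 "smart bypass" behavior from the comparison table.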
Presets adapt to your model's context window:

| Preset | % of Context | Claude 200K | GPT-4o 128K | Gemini 1M |
| --- | --- | --- | --- | --- |
| Aggressive | 40% | 80K | 51K | 400K |
| Balanced | 60% | 120K | 77K | 600K |
| Conservative | 80% | 160K | 102K | 800K |
| Off | 95% | 190K | 122K | 950K |
24+ models with context windows, pricing, and aliases:

- Claude: Opus 4.6 (1M), Opus 4.5, Sonnet 4.5, Sonnet 4, Haiku 4.5, Haiku 3.5 (200K)
- OpenAI: GPT-5.2, GPT-5.1, GPT-5-mini, GPT-5-nano (256K), GPT-4.1, GPT-4o (128K), o1, o3, o4-mini
- Gemini: 3 Pro (2M), 2.5 Pro, 2.0 Flash (1M)
- Others: DeepSeek V3 (64K), Kimi K2.5 (128K), Llama 3.3 70B, Mistral Large
Detection priority:

1. Runtime injection (`--model=...`)
2. Environment variables (`SKILL_MODEL`, `OPENCLAW_MODEL`)
3. Config file (`~/.openclaw/openclaw.json`)
4. File inference (TOOLS.md, MEMORY.md mentions)
5. Fallback: Claude Sonnet 4 (safe default)

Unknown model handling:

- Strict version matching: opus-6.5 won't fuzzy-match to opus-4.5
- Unknown models get safe defaults (200K context) plus a warning
- Easy to add new models to scripts/models.json
Adds writing guidance to AGENTS.md for continued token efficiency:

| File | Writing Style |
| --- | --- |
| SOUL.md | Evocative, personality-shaping |
| AGENTS.md | Dense instructions, symbols OK |
| USER.md | Key:value facts |
| MEMORY.md | Ultra-dense data |
- Auto-backup: all modified files get a .backup extension
- Integrity > size: never sacrifices meaning for smaller tokens
- Smart bypass: skips already-optimized files
- Revert anytime: `/optimize revert` restores everything
- No external calls: all analysis runs locally
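The auto-backup and revert guarantees amount to a simple write-aside pattern; a minimal sketch, assuming the `.backup` file sits next to the original (function names are illustrative):

```python
from pathlib import Path

def optimize_file(path: Path, new_text: str) -> None:
    """Write optimized contents, keeping the original as <name>.backup."""
    backup = Path(str(path) + ".backup")
    backup.write_text(path.read_text())  # auto-backup before any change
    path.write_text(new_text)

def revert_file(path: Path) -> None:
    """Restore the original from its .backup, as /optimize revert does."""
    backup = Path(str(path) + ".backup")
    path.write_text(backup.read_text())
    backup.unlink()  # backup is consumed once restored
```

Because the backup is written before the file is touched, a revert is always possible until the user explicitly runs it, matching the "revert anytime" guarantee above.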
```
clawhub install token-saver --registry "https://www.clawhub.ai"
```
- 3.0.0: Model registry, dynamic presets, robust detection, smart bypass
- 2.0.1: Chat compaction, file-aware compression, persistent mode
- 1.0.0: Initial release