Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Official Whisper Context skill for OpenClaw. Cuts context tokens via delta compression + caching, and adds long-term memory across sessions.
Instead of figuring out the install manually, hand the extracted package to your coding agent with a concrete brief:
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Reduce OpenClaw API spend by shrinking the context you send to the model (delta compression + caching), while keeping long-term memory across sessions. This skill provides a minimal Node-based helper (whisper-context.mjs) that OpenClaw agents can run to:
- Retrieve packed context for a user/session (query_context) with compress: true and compression_strategy: "delta"
- Persist the latest turn into long-term memory (ingest_session)
- Write/search memories (memory_write, memory_search)
- Run Oracle search/research (oracle_search)
- Fetch cost analytics (get_cost_summary)
- Inspect/warm the cache (cache_stats, cache_warm)
npx clawhub@latest install whisper-context ClawHub installs the skill folder into your OpenClaw skills workspace (typically ~/.openclaw/workspace/skills/).
Set environment variables (where OpenClaw reads env for your agent):

```
WHISPER_CONTEXT_API_URL=https://context.usewhisper.dev
WHISPER_CONTEXT_API_KEY=YOUR_KEY
WHISPER_CONTEXT_PROJECT=openclaw-cost-optimization
```

Notes:
- WHISPER_CONTEXT_API_URL is optional (defaults to https://context.usewhisper.dev).
- WHISPER_CONTEXT_PROJECT can be a project slug/name. If the project does not exist yet, the helper will auto-create it in your org on first use.
- For best memory behavior, use stable user_id and session_id values (don't hardcode them globally; derive them per user/session in your agent).
All commands print JSON to stdout.
- --project <slugOrName>: override WHISPER_CONTEXT_PROJECT
- --api_url <url>: override WHISPER_CONTEXT_API_URL
- --timeout_ms <n>: request timeout (default: 30000)
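Since each flag overrides the matching env var, resolution presumably follows flag > environment > built-in default. A minimal sketch of that precedence, using the defaults documented above (resolveConfig is an illustrative name):

```javascript
// Illustrative flag > env > default resolution, matching the documented
// flags (--project, --api_url, --timeout_ms) and their defaults.
function resolveConfig(flags = {}, env = process.env) {
  return {
    apiUrl: flags.api_url ?? env.WHISPER_CONTEXT_API_URL ?? "https://context.usewhisper.dev",
    project: flags.project ?? env.WHISPER_CONTEXT_PROJECT,
    timeoutMs: Number(flags.timeout_ms ?? 30000),
  };
}
```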
Always call query_context first and inject the returned context instead of re-sending your entire chat history. Keep compress: true, compression_strategy: "delta", and use_cache: true (the defaults in this helper) to maximize token savings. Use stable user_id and session_id so memory works across sessions and cache keys stay effective.
```
node whisper-context.mjs query_context \
  --query "What did we decide about the retriever cache?" \
  --user_id "user-123" \
  --session_id "session-123"
```
```
node whisper-context.mjs ingest_session \
  --user_id "user-123" \
  --session_id "session-123" \
  --user "..." \
  --assistant "..."
```

If your message text is large or hard to shell-escape, pass JSON via stdin:

```
echo '{ "user": "....", "assistant": "...." }' | node whisper-context.mjs ingest_session --session_id "session-123" --turn_json -
```
ingest_session sends both user and assistant text to the Context API (so it can build memory and improve retrieval). The helper only reads local files if you explicitly pass @path (or stdin via -). Treat your WHISPER_CONTEXT_API_KEY like a secret; don't commit it to git.
```
node whisper-context.mjs memory_write \
  --memory_type "preference" \
  --content "User prefers concise answers." \
  --user_id "user-123"
```
```
node whisper-context.mjs memory_search \
  --query "preferences" \
  --user_id "user-123"
```
```
node whisper-context.mjs oracle_search --query "How does delta compression work?" --mode search
node whisper-context.mjs oracle_search --query "Design a plan..." --mode research --max_steps 3
```
```
node whisper-context.mjs get_cost_summary \
  --start_date "2026-01-01T00:00:00.000Z" \
  --end_date "2026-02-01T00:00:00.000Z"
```
```
node whisper-context.mjs cache_stats
```
```
node whisper-context.mjs cache_warm --queries "retriever cache,l1 query cache,delta compression" --ttl_seconds 3600
```
- Before calling the model: run query_context and prepend the returned context (if present) to your prompt.
- After replying: run ingest_session with the user + assistant messages to persist memory.
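That per-turn loop can be sketched end to end. In this sketch, queryContext, callModel, and ingestSession are placeholders for the helper invocations and your model call, not real APIs from this skill:

```javascript
// Illustrative turn loop: fetch packed context first, call the model,
// then persist the finished turn. The three injected functions are
// placeholders for `query_context`, your model call, and `ingest_session`.
async function handleTurn(userMessage, ids, { queryContext, callModel, ingestSession }) {
  // 1. Ask Whisper Context for packed context instead of resending history.
  const ctx = await queryContext({ query: userMessage, ...ids });
  // 2. Prepend the returned context (if any) to the prompt.
  const prompt = ctx?.context ? `${ctx.context}\n\n${userMessage}` : userMessage;
  const reply = await callModel(prompt);
  // 3. Persist the turn so it enters long-term memory.
  await ingestSession({ ...ids, user: userMessage, assistant: reply });
  return reply;
}
```

Keeping ids (user_id, session_id) stable across calls to this loop is what makes the memory and cache behavior described above work.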
- Missing WHISPER_CONTEXT_API_KEY: export the env var where OpenClaw runs commands.
- HTTP 401/403: verify your API key and that it has access to the project/org.
- HTTP 404 Project not found: verify that WHISPER_CONTEXT_PROJECT (slug/name) exists.