
Context Compactor

Token-based context compaction for local models (MLX, llama.cpp, Ollama) that don't report context limits.


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: README.md, SKILL.md, cli.js, index.ts, openclaw.plugin.json, package.json

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 0.3.8

Documentation

Primary doc: SKILL.md (15 sections)

Context Compactor

Automatic context compaction for OpenClaw when using local models that don't properly report token limits or context overflow errors.

The Problem

Cloud APIs (Anthropic, OpenAI) report context overflow errors, allowing OpenClaw's built-in compaction to trigger. Local models (MLX, llama.cpp, Ollama) often:

  • Silently truncate context
  • Return garbage when context is exceeded
  • Don't report accurate token counts

This leaves you with broken conversations when context gets too long.

The Solution

Context Compactor estimates tokens client-side and proactively summarizes older messages before hitting the model's limit.
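For intuition, here is a minimal sketch of that estimation step, assuming the default 4-characters-per-token heuristic; the names (ChatMessage, estimateTokens) are illustrative for this example, not the plugin's actual exports.

```typescript
// Illustrative sketch of client-side token estimation (names are
// assumptions for this example, not the plugin's exports).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// tokens ≈ total characters / charsPerToken (the plugin defaults to 4)
function estimateTokens(messages: ChatMessage[], charsPerToken = 4): number {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / charsPerToken);
}
```

Under this heuristic, a 24,000-character conversation estimates to roughly 6,000 tokens, comfortably under the default 8,000-token limit.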

How It Works

```
┌────────────────────────────────────────────────┐
│ 1. Message arrives                             │
│ 2. before_agent_start hook fires               │
│ 3. Plugin estimates total context tokens       │
│ 4. If over maxTokens:                          │
│    a. Split into "old" and "recent" messages   │
│    b. Summarize old messages (LLM or fallback) │
│    c. Inject summary as compacted context      │
│ 5. Agent sees: summary + recent + new message  │
└────────────────────────────────────────────────┘
```
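The steps in the diagram can be sketched as follows, reusing ChatMessage and estimateTokens from the sketch above; the function signature and the summarize callback are assumptions for illustration, not the plugin's actual hook API.

```typescript
// Sketch of the compaction flow from the diagram (steps 3-5).
async function compactIfNeeded(
  messages: ChatMessage[],
  cfg: { maxTokens: number; keepRecentTokens: number },
  summarize: (old: ChatMessage[]) => Promise<string>
): Promise<ChatMessage[]> {
  // Step 3: estimate total context tokens; pass through if under the limit.
  if (estimateTokens(messages) <= cfg.maxTokens) return messages;

  // Step 4a: walk backwards, keeping recent messages within keepRecentTokens.
  const recent: ChatMessage[] = [];
  let budget = cfg.keepRecentTokens;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens([messages[i]]);
    if (cost > budget) break;
    budget -= cost;
    recent.unshift(messages[i]);
  }
  const old = messages.slice(0, messages.length - recent.length);

  // Steps 4b-4c: summarize the old messages and inject the summary.
  const summary = await summarize(old);
  return [
    { role: "system", content: `Summary of earlier conversation:\n${summary}` },
    ...recent,
  ];
}
```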

Installation

```bash
# One command setup (recommended)
npx jasper-context-compactor setup

# Restart gateway
openclaw gateway restart
```

The setup command automatically:

  • Copies plugin files to ~/.openclaw/extensions/context-compactor/
  • Adds plugin config to openclaw.json with sensible defaults

Configuration

Add to openclaw.json:

```json
{
  "plugins": {
    "entries": {
      "context-compactor": {
        "enabled": true,
        "config": {
          "maxTokens": 8000,
          "keepRecentTokens": 2000,
          "summaryMaxTokens": 1000,
          "charsPerToken": 4
        }
      }
    }
  }
}
```

Options

| Option           | Default         | Description                             |
|------------------|-----------------|-----------------------------------------|
| enabled          | true            | Enable/disable the plugin               |
| maxTokens        | 8000            | Max context tokens before compaction    |
| keepRecentTokens | 2000            | Tokens to preserve from recent messages |
| summaryMaxTokens | 1000            | Max tokens for the summary              |
| charsPerToken    | 4               | Token estimation ratio                  |
| summaryModel     | (session model) | Model to use for summarization          |

Tuning for Your Model

MLX (8K context models):

```json
{ "maxTokens": 6000, "keepRecentTokens": 1500, "charsPerToken": 4 }
```

Larger context (32K models):

```json
{ "maxTokens": 28000, "keepRecentTokens": 4000, "charsPerToken": 4 }
```

Small context (4K models):

```json
{ "maxTokens": 3000, "keepRecentTokens": 800, "charsPerToken": 4 }
```
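These presets follow a consistent pattern. As a rule of thumb (my assumption, not from the package docs), set maxTokens below the model's context window, leaving headroom for the reply plus a margin for estimation error:

```typescript
// Rule-of-thumb sketch for choosing maxTokens (an assumption, not part
// of the package): reserve tokens for the model's reply, then keep a
// slack margin for chars-per-token estimation error.
function suggestMaxTokens(
  contextWindow: number,  // e.g. 8192 for an 8K model
  responseBudget = 1024,  // tokens reserved for the reply
  safetyMargin = 0.1      // 10% slack for estimation error
): number {
  return Math.floor((contextWindow - responseBudget) * (1 - safetyMargin));
}

suggestMaxTokens(8192);  // 6451, near the 6000 suggested for 8K models
suggestMaxTokens(32768); // 28569, near the 28000 suggested for 32K models
```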

/compact-now

Force-clear the summary cache and trigger fresh compaction on the next message.

```
/compact-now
```

/context-stats

Show current context token usage and whether compaction would trigger.

```
/context-stats
```

Output:

```
📊 Context Stats
Messages: 47 total
  User: 23
  Assistant: 24
  System: 0
Estimated Tokens: ~6,234
Limit: 8,000
Usage: 77.9%
✅ Within limits
```

How Summarization Works

When compaction triggers:

  1. Split messages into "old" (to summarize) and "recent" (to keep)
  2. Generate a summary using the session model (or the configured summaryModel)
  3. Cache the summary to avoid regenerating it for the same content (see the sketch below)
  4. Inject context with the summary prepended

If the LLM runtime isn't available (e.g., during startup), a fallback truncation-based summary is used.
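The caching step can be pictured as a content-addressed map; the sketch below shows the idea (the cache shape and key scheme are assumptions, and the plugin's internals may differ).

```typescript
import { createHash } from "node:crypto";

// Sketch of the summary cache: key on a hash of the old messages so the
// same content is never re-summarized. (Assumed shape, not the plugin's
// actual implementation.)
const summaryCache = new Map<string, string>();

async function summarizeWithCache(
  old: ChatMessage[],
  summarize: (msgs: ChatMessage[]) => Promise<string>
): Promise<string> {
  const key = createHash("sha256")
    .update(old.map((m) => `${m.role}:${m.content}`).join("\n"))
    .digest("hex");
  let summary = summaryCache.get(key);
  if (summary === undefined) {
    summary = await summarize(old);
    summaryCache.set(key, summary);
  }
  return summary;
}
```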

Differences from Built-in Compaction

| Feature                 | Built-in                  | Context Compactor        |
|-------------------------|---------------------------|--------------------------|
| Trigger                 | Model reports overflow    | Token estimate threshold |
| Works with local models | ❌ (needs overflow error) | ✅                       |
| Persists to transcript  | ✅                        | ❌ (session-only)        |
| Summarization           | Pi runtime                | Plugin LLM call          |

Context Compactor is complementary: it catches cases before they hit the model's hard limit.

Troubleshooting

Summary quality is poor:

  • Try a better summaryModel
  • Increase summaryMaxTokens
  • Note that fallback truncation is used if the LLM runtime isn't available

Compaction triggers too often:

  • Increase maxTokens
  • Decrease keepRecentTokens (keeps less, summarizes earlier)

Not compacting when expected:

  • Check /context-stats to see current usage
  • Verify enabled: true in the config
  • Check logs for [context-compactor] messages

Characters per token wrong:

  • The default of 4 works for English
  • Try 3 for CJK languages
  • Try 5 for highly technical content
  • To measure the actual ratio for your workload, see the calibration sketch below
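If you would rather measure the ratio than guess, runtimes that expose a tokenizer make this straightforward. The sketch below assumes a llama.cpp server with its /tokenize endpoint reachable on localhost:8080; the host, port, and availability of that endpoint depend on your setup.

```typescript
// Calibration sketch for charsPerToken: tokenize a representative sample
// of your own conversations and compute characters per token.
// (Assumes llama.cpp's server /tokenize endpoint; adjust the URL as needed.)
async function measureCharsPerToken(sample: string): Promise<number> {
  const res = await fetch("http://localhost:8080/tokenize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content: sample }),
  });
  const { tokens } = (await res.json()) as { tokens: number[] };
  return sample.length / tokens.length; // chars per token
}

// Round the measured ratio and set it as charsPerToken in openclaw.json.
```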

Logs

Enable debug logging:

```json
{
  "plugins": {
    "entries": {
      "context-compactor": {
        "config": { "logLevel": "debug" }
      }
    }
  }
}
```

Look for:

```
[context-compactor] Current context: ~XXXX tokens
[context-compactor] Compacted X messages → summary
```

Links

  • GitHub: https://github.com/E-x-O-Entertainment-Studios-Inc/openclaw-context-compactor
  • OpenClaw Docs: https://docs.openclaw.ai/concepts/compaction

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 docs, 2 scripts, 2 config files
  • SKILL.md (primary doc)
  • README.md (docs)
  • cli.js (script)
  • index.ts (script)
  • openclaw.plugin.json (config)
  • package.json (config)