Tencent SkillHub · AI

ClawSaver

Behavior-change skill that trains your agent to batch related asks into fewer responses. No credentials required. Pure instruction-based — no scripts, no net...

skill · openclaw · clawhub · Free
0 downloads · 0 stars · 0 installs · 0 score · High Signal



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: AUTO-INTEGRATION.md, CHANGELOG.md, CHECKLIST.md, DECISION_RECORD.md, INDEX.md, INTEGRATION.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than walking through the install steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.4.7

Documentation

Primary doc: SKILL.md (16 sections), hosted on ClawHub. Open the source page for the full listing.

ClawSaver

Reduce model API costs by 20–40% through intelligent message batching and buffering. Most agent systems waste money on redundant API calls. When users send follow-up messages, you call the model separately for each one. ClawSaver fixes this by waiting ~800ms to collect related messages, then sending them together in a single optimized request. Same response quality. Lower cost. No user friction.

How It Works: Batching & Buffering

WITHOUT CLAWSAVER (context overhead hidden):

  User:  "What is ML?"
  Model: → API call #1 [context: system prompt, chat history] (cost: $X)
         Returns: definition
  User:  "Give an example"
  Model: → API call #2 [context: system prompt, chat history, Q1, A1] (cost: $X)
         Returns: example
  User:  "Apply to finance?"
  Model: → API call #3 [context: system prompt, chat history, Q1–A2] (cost: $X)
         Returns: finance application

  Total: 3 calls × full context = 3X cost; each call repeats the context overhead.

WITH CLAWSAVER (single context load):

  User: "What is ML?"       ← buffer (800ms wait)
  User: "Give an example"   ← buffer (800ms wait)
  User: "Apply to finance?" ← flush: send all 3 together
  Model: → API call #1 [context loaded ONCE: system prompt, chat history]
         Processes all 3 questions together
         Returns: comprehensive answer addressing all three

  Total: 1 call × full context = 1X cost; context overhead paid once.
  Actual savings (with context): 67% reduction
  Cost per token: 1/3 (fewer context re-loads + consolidation)

Why it matters: context (system prompts, history, instructions) gets re-sent on every API call. With ClawSaver, you pay that context overhead once per batch instead of three times. This compounds the savings beyond just "fewer calls."

Example (4K-token context, 200 output tokens per question):

  Without ClawSaver: 3 calls × 4,200 tokens = 12,600 tokens
  With ClawSaver:    1 call × 4,600 tokens = 4,600 tokens
  Actual savings:    63% token reduction (even better than the call reduction)
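The token arithmetic above can be checked in a few lines of JavaScript. The figures (4K-token context, 200 output tokens, 3 questions) are the example's illustrative assumptions, not measurements:

```javascript
// Token math from the example above: a 4,000-token context re-sent on
// every call vs. sent once per batch. Figures are illustrative.
const contextTokens = 4000; // system prompt + chat history
const outputTokens = 200;   // per question
const questions = 3;

// Without batching: every call re-sends the full context.
const unbatched = questions * (contextTokens + outputTokens); // 12,600 tokens

// With batching: one call carries the context once plus all outputs.
const batched = contextTokens + questions * outputTokens;     // 4,600 tokens

const savingsPct = Math.round((1 - batched / unbatched) * 100);
console.log(unbatched, batched, `${savingsPct}%`); // prints: 12600 4600 63%
```

Note that the saving grows with context size: the larger the system prompt and history, the more each avoided re-send is worth.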

The Problem

User: "What is machine learning?"
(pause)
User: "Give an example"
(pause)
User: "How does that apply to healthcare?"

Without optimization: 3 API calls = 3× the cost.
With ClawSaver: 1 batched call = 1/3 the price.

Across thousands of conversations, this compounds fast.

How It Works

  1. User sends a message → ClawSaver buffers it.
  2. ClawSaver waits ~800ms for follow-ups from the same user.
  3. If more messages arrive → keep buffering.
  4. Timer expires → send all messages together.
  5. Model responds once → you get a complete answer.

Why users don't notice: they're already waiting for your model response. Buffering input doesn't feel slower because the response comes right after the batch sends.
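The buffering loop above can be sketched in a few lines. This is an illustrative stand-in, not the package's actual implementation: MiniBuffer and its fields are invented names, while the real class shipped by ClawSaver is SessionDebouncer, documented on this page.

```javascript
// Minimal debounce buffer: each new message resets a timer; when the
// timer fires, the whole batch goes out in one callback (one model call).
class MiniBuffer {
  constructor(onFlush, debounceMs = 800) {
    this.onFlush = onFlush;       // called once per batch
    this.debounceMs = debounceMs; // quiet period before flushing
    this.messages = [];
    this.timer = null;
  }

  enqueue(message) {
    this.messages.push(message);
    // A follow-up arrived: reset the timer and keep buffering.
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.debounceMs);
  }

  flush() {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (this.messages.length === 0) return;
    const batch = this.messages;
    this.messages = [];
    this.onFlush(batch); // single request instead of one per message
  }
}
```

A production implementation also needs the maxWaitMs cap listed in the API section, so a steady stream of messages cannot postpone the flush indefinitely.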

Install

clawhub install clawsaver

Quick Start (10 lines)

  import SessionDebouncer from 'clawsaver';

  const debouncers = new Map();

  function handleMessage(userId, text) {
    // One debouncer per user, created lazily on first message.
    if (!debouncers.has(userId)) {
      debouncers.set(userId, new SessionDebouncer(
        userId,
        (msgs) => callModel(userId, msgs) // callModel is your existing model-call function
      ));
    }
    debouncers.get(userId).enqueue({ text });
  }
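One detail the snippet above leaves open is how a flushed batch becomes a single request. A minimal approach is to fold the buffered messages into one prompt before the one model call; buildBatchPrompt below is a hypothetical helper for illustration, not part of clawsaver:

```javascript
// Fold a flushed batch of messages into one prompt so the model can
// answer all of them in a single call. Hypothetical helper, for illustration.
function buildBatchPrompt(messages) {
  return messages
    .map((m, i) => `Question ${i + 1}: ${m.text}`)
    .join('\n');
}

// Example: the three follow-ups from "The Problem" become one prompt.
const prompt = buildBatchPrompt([
  { text: 'What is machine learning?' },
  { text: 'Give an example' },
  { text: 'How does that apply to healthcare?' },
]);
// prompt is three "Question N: ..." lines sent as one request
```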

Impact

  • Cost reduction: 20–40% typical
  • Setup time: 10 minutes
  • Code added: ~10 lines
  • Dependencies: 0
  • File size: 4.2 KB
  • Latency added: +800ms (user-imperceptible)
  • Maintenance: None

Three Profiles

Choose based on your use case:

Balanced (Default)

25–35% savings · 800ms buffer · Chat, Q&A, general conversation

Aggressive

35–45% savings · 1.5s buffer · Batch workflows, high-volume ingestion

Real-Time

5–10% savings · 200ms buffer · Interactive, voice-first systems
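Assuming the profiles map directly onto the debounceMs option documented in the API section, they could be selected like this. The PROFILES object is an illustrative sketch, not something shipped with the package; only the buffer times come from the profile list above:

```javascript
// The three profiles expressed as debouncer options. Buffer times come
// from the profile list above; the object itself is illustrative.
const PROFILES = {
  balanced:   { debounceMs: 800 },  // chat, Q&A, general conversation
  aggressive: { debounceMs: 1500 }, // batch workflows, high-volume ingestion
  realTime:   { debounceMs: 200 },  // interactive, voice-first systems
};

function optionsFor(profile) {
  return PROFILES[profile] ?? PROFILES.balanced; // fall back to the default
}
```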

When to Use

✅ Chat applications
✅ Customer support bots
✅ Multi-turn Q&A
✅ Any conversation with follow-ups
❌ Single-request workflows
❌ Sub-100ms response requirements

API

  new SessionDebouncer(userId, handler, {
    debounceMs: 800,  // wait time
    maxWaitMs: 3000,  // absolute max
    maxMessages: 5,   // batch size cap
    maxTokens: 2048   // reserved
  })

  // Methods
  debouncer.enqueue(message)    // add to batch
  debouncer.forceFlush(reason)  // send now
  debouncer.getState()          // buffer + metrics
  debouncer.getStatusString()   // human-readable

Docs

  • START_HERE.md — Navigation (pick your role/timeline)
  • AUTO-INTEGRATION.md — ⭐ Drop-in middleware wrapper (2-min setup)
  • QUICKSTART.md — 5-minute integration
  • INTEGRATION.md — Patterns, edge cases, full config
  • SUMMARY.md — Metrics and ROI (decision makers)
  • SKILL.md — Full API reference
  • example-integration.js — Copy-paste templates

Security

  • No telemetry: doesn't phone home
  • No network calls: runs locally
  • No dependencies: pure JavaScript
  • You control output: you decide what goes to your model

Data never leaves your machine.

License

MIT

Start here: pick your path in START_HERE.md, or jump to QUICKSTART.md for a 5-minute setup.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package: 6 docs

  • AUTO-INTEGRATION.md
  • CHANGELOG.md
  • CHECKLIST.md
  • DECISION_RECORD.md
  • INDEX.md
  • INTEGRATION.md