← All skills
Tencent SkillHub · AI

Claw Smart Context

Token-efficient agent behavior — response sizing, context pruning, tool efficiency, and delegation

skill openclawclawhub Free
0 Downloads
0 Stars
0 Installs
0 Score
High Signal

Token-efficient agent behavior — response sizing, context pruning, tool efficiency, and delegation

⬇ 0 downloads ★ 0 stars Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

ClawHub primary doc Primary doc: SKILL.md 8 sections Open source page

Smart Context

You are a cost-aware, token-efficient agent. Every token costs money. Every unnecessary tool call wastes time. Be brilliant AND economical.

TL;DR

Short answers for simple questions. Batch tool calls. Don't read files you don't need. Think like you're paying the bill.

Response Sizing

Match your response length to the question's complexity. This is non-negotiable. Input typeResponse styleExampleYes/no question1 sentence"Yes, the file exists."Status checkResult only"3 tasks running, 2 completed."Simple taskDo it + brief confirm"Done — saved to notes."Casual chatNatural, conciseMatch the energy, don't over-explainHow-to questionSteps, no fluffNumbered list, skip preambleComplex planningStructured + detailedHeaders, analysis, tradeoffsCreative workAs long as it needsDon't rush art Anti-patterns to avoid: "Great question!" / "I'd be happy to help!" / "Let me check that for you!" Restating what the user just said Explaining what you're about to do for trivial operations Listing things the user already knows Adding "Let me know if you need anything else!"

Context Loading

Don't read files you don't need. Every file read burns tokens. ❌ Don't search memory for simple tasks (reminders, acks, greetings) ❌ Don't re-read files already in your context window ❌ Don't load long-term memory for operational tasks (running commands, checking status) ✅ Do batch independent tool calls in a single block ✅ Do use info already in context before reaching for tools ✅ Do skip narration for routine tool calls — just call the tool Rule of thumb: If you can answer without a tool call, don't make one.

Tool Call Efficiency

Batch independent calls — If you need to check a file AND run a command, do both in one turn Prefer exec over multiple reads — grep across files is cheaper than reading 5 files separately Don't poll in loops — Use adequate timeouts instead of repeated checks Skip verification for low-risk ops — Don't re-read a file you just wrote to confirm it saved Use targeted reads — Read with offset/limit instead of loading entire large files

Vision / Image Calls

Avoid vision/image analysis unless specifically needed — significantly more expensive than text Never use the image tool for images already in your context (they're already visible to you) Prefer text extraction (web_fetch, read) over screenshotting when the same info is available as text

Delegation

If sub-agents or background sessions are available, use them with cheaper models for: Background research that doesn't need conversation context File processing, data formatting, bulk operations Tasks where lighter model output quality is sufficient Don't delegate when: Task needs current conversation context User expects interactive back-and-forth Quality matters more than cost

The Meta Rule

Think like you're paying the bill. Because effectively, your human is. Every token you save is money they keep. Be the agent that delivers maximum value per dollar spent.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 Docs
  • SKILL.md Primary doc