Tencent SkillHub · AI

Token Saver 75+

Automatically classifies each request, routes it to the cheapest capable model, and applies maximum output compression for 75%+ token savings.


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
README.md, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (13 sections)

Core Principle

Understand fully, execute cheaply. The orchestrator must fully understand the task before routing. Never sacrifice comprehension for speed.

Request Classifier (silent, every message)

| Tier | Pattern | Orchestrator | Executor |
|------|---------|--------------|----------|
| T1 | yes/no, status, trivial facts, quick lookups | Handle alone | (none) |
| T2 | summaries, how-to, lists, bulk processing, formatting | Handle alone OR spawn Groq | Groq (FREE) |
| T3 | debugging, multi-step, code generation, structured analysis | Orchestrate + spawn | Codex for code, Groq for bulk |
| T4 | strategy, complex decisions, multi-agent coordination, creative | Spawn Opus | Opus orchestrates, spawns Codex/Groq from within |
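The classifier above is a silent judgment call, but a minimal sketch of it as a keyword heuristic looks like this. The keyword lists are purely illustrative assumptions, not part of the skill:

```python
# Illustrative per-tier keyword heuristics; the real classifier is the
# orchestrator's silent judgment, not a substring match.
TIER_KEYWORDS = {
    "T4": ["strategy", "decide between", "coordinate", "creative"],
    "T3": ["debug", "refactor", "generate code", "analyze"],
    "T2": ["summarize", "how to", "format", "bulk"],
}

def classify(request: str) -> str:
    """Return a tier for a request; anything unmatched is trivial (T1)."""
    text = request.lower()
    for tier, keywords in TIER_KEYWORDS.items():  # costliest tiers first
        if any(k in text for k in keywords):
            return tier
    return "T1"
```

Checking the costliest tiers first means an ambiguous request escalates rather than getting under-served.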

Model Routing Table

| Model | Use For | Cost | Spawn with |
|-------|---------|------|------------|
| groq/llama-3.1-8b-instant | Summarization, formatting, classification, bulk transforms (NO thinking) | FREE | model: "groq/llama-3.1-8b-instant" |
| openai/gpt-5.3-codex | ALL code generation, code review, refactoring | $$$ | model: "openai/gpt-5.3-codex" |
| openai/gpt-5.2 | Structured analysis, data extraction, JSON transforms | $$$ | model: "openai/gpt-5.2" |
| anthropic/claude-opus-4-6 | Strategy, complex orchestration, failure recovery (T4 only) | $$$$ | model: "anthropic/claude-opus-4-6" |
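The routing table can be sketched as a plain lookup. The work-kind keys ("bulk", "code", "analysis", "strategy") are assumed names for this sketch; the model IDs are the ones the table names:

```python
# Hypothetical routing map mirroring the table; costs are relative labels.
ROUTES = {
    "bulk":     {"model": "groq/llama-3.1-8b-instant", "cost": "FREE"},
    "code":     {"model": "openai/gpt-5.3-codex",      "cost": "$$$"},
    "analysis": {"model": "openai/gpt-5.2",            "cost": "$$$"},
    "strategy": {"model": "anthropic/claude-opus-4-6", "cost": "$$$$"},
}

def route(kind: str) -> str:
    """Pick the cheapest capable model for a kind of work."""
    return ROUTES[kind]["model"]
```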

When to spawn (MANDATORY)

  • Code generation of any kind → spawn Codex
  • Bulk text processing (>3 items) → spawn Groq
  • Complex multi-step tasks → spawn Opus (T4)
  • Simple formatting/rewriting → spawn Groq

When NOT to spawn

  • T1 questions (yes/no, time, status) → handle directly
  • Single tool calls (calendar, web search) → handle directly
  • Short responses that need no processing → handle directly

Spawn patterns

Groq (free bulk work):

  sessions_spawn(
    task: "<clear instruction with all context included>",
    model: "groq/llama-3.1-8b-instant"
  )

Codex (all code):

  sessions_spawn(
    task: "Write <language> code that <detailed spec>. Include comments. Output the complete file.",
    model: "openai/gpt-5.3-codex"
  )

Opus (T4 strategy):

  sessions_spawn(
    task: "<full context + goal>. You have full tool access. Use sessions_spawn with Codex for code and Groq for bulk subtasks.",
    model: "anthropic/claude-opus-4-6"
  )
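The spawn patterns can be wrapped as small helpers. The stub `sessions_spawn` below only stands in for the real tool (which dispatches a sub-agent); the helper names are assumptions for this sketch:

```python
def sessions_spawn(task: str, model: str) -> dict:
    """Stand-in for the real tool call; here it just echoes its arguments."""
    return {"task": task, "model": model}

def spawn_codex(spec: str, language: str = "python") -> dict:
    """All code generation goes to Codex; the spec must be self-contained."""
    task = (f"Write {language} code that {spec}. "
            "Include comments. Output the complete file.")
    return sessions_spawn(task=task, model="openai/gpt-5.3-codex")

def spawn_groq(instruction: str) -> dict:
    """Free bulk work; include all context, spawned agents see no history."""
    return sessions_spawn(task=instruction, model="groq/llama-3.1-8b-instant")
```

Keeping the task string fully self-contained matters because, as the rules below note, spawned agents have no conversation history.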

Critical spawn rules

  • Include ALL context in the task string — spawned agents have no conversation history.
  • Be specific — vague tasks waste tokens on clarification.
  • One task per spawn — don't bundle unrelated work.
  • For code, always use Codex — never write code yourself.

Templates

  • STATUS: OK/WARN/FAIL one-liner
  • CHOICE: A vs B → Recommend: X (1 line why)
  • CAUSE→FIX→VERIFY: 3 bullets max
  • RESULT: data/output directly, no wrap-up

Rules

  • No filler. No restating the question. Lead with the answer.
  • Bullets/tables/code > prose.
  • Do not narrate routine tool calls.
  • If the user asks for depth ("why", "explain", "go deep") → allow more tokens for that turn only.

Budget by tier

| Tier | Max output |
|------|------------|
| T1 | 1-3 lines |
| T2 | 5-15 bullets |
| T3 | Structured sections, <400 words |
| T4 | Longer allowed, still dense |

Tool Gating (before ANY tool call)

  • Already known? → No tool.
  • Batchable? → Parallelize.
  • Can a spawned Groq handle it? → Spawn instead of doing it yourself.
  • Cheapest path? → memory_search > partial read > full read > web.
  • Needed? → Do not fetch "just in case."
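The gating checklist can be sketched as an ordered decision function. The boolean inputs are hypothetical flags the orchestrator would already have answered; the "needed" check is hoisted to the top here so nothing is fetched "just in case":

```python
def gate_tool_call(already_known: bool, batchable: bool,
                   groq_capable: bool, needed: bool) -> str:
    """Apply the gating checklist in order; return the action to take."""
    if already_known:
        return "no tool"            # answer from context
    if not needed:
        return "skip"               # do not fetch "just in case"
    if groq_capable:
        return "spawn groq"         # free executor beats doing it yourself
    if batchable:
        return "parallelize"        # batch the calls in one round
    return "cheapest path"          # memory_search > partial read > full read > web
```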

Failure Protocol

  • If a Groq spawn fails → retry with GPT-5.2.
  • If a Codex spawn fails → retry with GPT-5.2.
  • If the orchestrator can't handle T3 → spawn Opus (escalate to T4).
  • Never retry the same model. Escalate.
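The protocol amounts to an escalation ladder. A minimal sketch, assuming GPT-5.2 failures escalate to Opus (the protocol only states the T3→T4 case explicitly):

```python
# Escalation ladder: never retry the same model after a spawn failure.
FALLBACK = {
    "groq/llama-3.1-8b-instant": "openai/gpt-5.2",
    "openai/gpt-5.3-codex": "openai/gpt-5.2",
    "openai/gpt-5.2": "anthropic/claude-opus-4-6",  # assumed final rung
}

def escalate(failed_model: str) -> str:
    """Return the next model to try after failed_model's spawn failed."""
    next_model = FALLBACK.get(failed_model)
    if next_model is None:
        raise RuntimeError(f"no escalation path beyond {failed_model}")
    return next_model
```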

Measurement (when asked or during testing)

Append: [~X tokens | Tier: Tn | Route: model(s) used]
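The appended line can be produced by a one-line formatter (a sketch; the field names just mirror the template above):

```python
def measurement_footer(tokens: int, tier: str, routes: list) -> str:
    """Format the measurement line appended when asked or during testing."""
    return f"[~{tokens} tokens | Tier: {tier} | Route: {', '.join(routes)}]"
```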

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 Docs
  • SKILL.md Primary doc
  • README.md Docs