Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Cost-optimize AI agent operations by routing tasks to appropriate models based on complexity. Use this skill when: (1) deciding which model to use for a task...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install brief:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade brief:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Route tasks to the cheapest model that can handle them. Most agent work is routine.
This skill requires an OpenRouter API key for model routing. Add it to your OpenClaw user config:

```jsonc
// ~/.openclaw/openclaw.json
{ "openrouter_api_key": "sk-or-v1-..." }
```

Without this key, /model switching and sessions_spawn with non-default models will fail. Get a key at openrouter.ai/keys.

Privacy note: some models listed in this skill (e.g., Aurora Alpha, Free Router) may log prompts and completions for provider training. Do not route sensitive data (API keys, passwords, private PII) through free or unmoderated models. Review model privacy policies at openrouter.ai/docs before use.
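As a minimal sketch of a fail-fast check, an agent script could validate the key at startup instead of failing mid-session. The config path and key name come from the note above; the loader function itself is an illustrative assumption, not part of the skill:

```python
import json
import os

# Path and key name as documented above; the loader is an illustrative sketch.
CONFIG_PATH = os.path.expanduser("~/.openclaw/openclaw.json")

def load_openrouter_key(path: str = CONFIG_PATH) -> str:
    """Return the OpenRouter API key, or raise with a pointer to the docs."""
    try:
        with open(path) as f:
            config = json.load(f)
    except FileNotFoundError:
        raise RuntimeError(
            f"No OpenClaw config at {path}; add an openrouter_api_key entry"
        )
    key = config.get("openrouter_api_key", "")
    if not key.startswith("sk-or-"):
        raise RuntimeError(
            "openrouter_api_key missing or malformed; get one at openrouter.ai/keys"
        )
    return key
```

Failing here, before any /model switch or sessions_spawn call, gives a clearer error than a mid-task routing failure.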
80% of agent tasks are janitorial: file reads, status checks, formatting, simple Q&A. These don't need expensive models. Reserve premium models for problems that actually require deep reasoning.
For OpenRouter-specific pricing and models, see references/openrouter-models.md.
Free tier

| Model | Context | Tools | Best For |
|---|---|---|---|
| Aurora Alpha | 128K | ✅ | Zero-cost reasoning, cloaked community model |
| Free Router | 200K | ✅ | Auto-routes to best available free model |
| Step 3.5 Flash (free) | 256K | ✅ | Long-context reasoning at zero cost |

Free models have rate limits and variable availability. Good for non-critical background tasks.
Tier 1

| Model | Input $/M | Output $/M | Context | Tools | Best For |
|---|---|---|---|---|---|
| Qwen3 Coder Next | $0.07 | $0.30 | 262K | ✅ | Agentic coding, MoE 80B/3B active |
| Gemini 2.0 Flash Lite | $0.07 | $0.30 | 1M | ✅ | High volume, massive context |
| Gemini 2.0 Flash | $0.10 | $0.40 | 1M | ✅ | General routine with long context |
| GPT-4o-mini | $0.15 | $0.60 | 128K | ✅ | Quick responses, reliable tool use |
| DeepSeek Chat | $0.30 | $1.20 | 164K | ✅ | General routine work |
| Claude 3 Haiku | $0.25 | $1.25 | 200K | ✅ | Fast tool use, structured output |
| Kimi K2.5 | $0.45 | $2.20 | 262K | ✅ | Multimodal, visual coding, agentic |
Tier 2

| Model | Input $/M | Output $/M | Context | Tools | Best For |
|---|---|---|---|---|---|
| o3-mini | $1.10 | $4.40 | 200K | ✅ | Reasoning on a budget |
| Gemini 2.5 Pro | $1.25 | $10.00 | 1M | ✅ | Long context, large codebase work |
| GPT-4o | $2.50 | $10.00 | 128K | ✅ | Multimodal tasks |
| Claude Sonnet | $3.00 | $15.00 | 1M | ✅ | Balanced performance, agentic |
Tier 3

| Model | Input $/M | Output $/M | Context | Tools | Best For |
|---|---|---|---|---|---|
| Claude Opus 4.6 | $5.00 | $25.00 | 1M | ✅ | Complex reasoning, deep context |
| o1 | $15.00 | $60.00 | 200K | ✅ | Multi-step reasoning |
| GPT-4.5 | $75.00 | $150.00 | 128K | ✅ | Frontier tasks |

Prices as of Feb 2026. Check provider docs for current rates. Context = max context window. Tools = function calling support.
Before executing any task, classify it:
Routine (Tier 1)

Characteristics:
- Single-step operations
- Clear, unambiguous instructions
- No judgment required
- Deterministic output expected

Examples:
- File read/write operations
- Status checks and health monitoring
- Simple lookups (time, weather, definitions)
- Formatting and restructuring text
- List operations (filter, sort, transform)
- API calls with known parameters
- Heartbeat and cron tasks
- URL fetching and basic parsing
Moderate (Tier 2)

Characteristics:
- Multi-step but well-defined
- Some synthesis required
- Standard patterns apply
- Quality matters but isn't critical

Examples:
- Code generation (standard patterns)
- Summarization and synthesis
- Draft writing (emails, docs, messages)
- Data analysis and transformation
- Multi-file operations
- Tool orchestration
- Code review (non-security)
- Search and research tasks
Complex (Tier 3)

Characteristics:
- Novel problem solving required
- Multiple valid approaches
- Nuanced judgment calls
- High stakes or irreversible
- Previous attempts failed

Examples:
- Multi-step debugging
- Architecture and design decisions
- Security-sensitive code review
- Tasks where a cheaper model already failed
- Ambiguous requirements needing interpretation
- Long-context reasoning (>50K tokens)
- Creative work requiring originality
- Adversarial or edge-case handling
```
function selectModel(task):
    # Rule 1: Escalation override
    if task.previousAttemptFailed:
        return nextTierUp(task.previousModel)

    # Rule 2: Hard constraints (filter before cost)
    candidates = ALL_MODELS
    if task.requiresToolUse:
        candidates = candidates.filter(m => m.supportsTools)
    if task.estimatedTokens > 128_000:
        candidates = candidates.filter(m => m.contextWindow >= task.estimatedTokens)
    if task.requiresMultimodal:
        candidates = candidates.filter(m => m.supportsImages)

    # Rule 3: Latency constraint
    if task.isRealTime or task.inAgentLoop:
        candidates = candidates.filter(m => m.latencyTier <= "fast")

    # Rule 4: Complexity classification
    if task.hasSignal("debug", "architect", "design", "security"):
        return cheapestIn(candidates, TIER_3)
    if task.hasSignal("summarize", "analyze", "refactor"):
        return cheapestIn(candidates, TIER_2)

    complexity = classifyTask(task)
    if complexity == ROUTINE:
        return cheapestIn(candidates, TIER_1)
    elif complexity == MODERATE:
        return cheapestIn(candidates, TIER_2)
    else:
        return cheapestIn(candidates, TIER_3)
```

Note: "write", "read", "code" alone are poor routing signals; "write a file" is Tier 1 work, not Tier 2. Classify based on the task structure, not individual keywords.
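The routing rules above can be sketched as runnable Python. The model registry, prices, and latency tiers below are illustrative placeholders (not quotes from the pricing tables), and the escalation override is approximated by bumping the tier rather than tracking the previous model:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    tier: int              # 1-3, matching the tiers above
    blended_price: float   # illustrative $/M tokens, used only for "cheapest" ordering
    context_window: int
    supports_tools: bool = True
    supports_images: bool = False
    latency_tier: int = 1  # 1 = fast, 2 = medium, 3 = slow

# Illustrative registry -- values are placeholders, not the tables above.
MODELS = [
    Model("deepseek/deepseek-chat", 1, 0.75, 164_000),
    Model("qwen/qwen3-coder-next", 1, 0.19, 262_000),
    Model("anthropic/claude-sonnet-4", 2, 9.0, 1_000_000, supports_images=True, latency_tier=2),
    Model("anthropic/claude-opus-4", 3, 15.0, 1_000_000, supports_images=True, latency_tier=3),
]

TIER3_SIGNALS = {"debug", "architect", "design", "security"}
TIER2_SIGNALS = {"summarize", "analyze", "refactor"}

def select_model(task: dict) -> Model:
    # Rule 2: hard constraints (filter before cost)
    candidates = list(MODELS)
    if task.get("requires_tool_use"):
        candidates = [m for m in candidates if m.supports_tools]
    if task.get("estimated_tokens", 0) > 128_000:
        candidates = [m for m in candidates if m.context_window >= task["estimated_tokens"]]
    if task.get("requires_multimodal"):
        candidates = [m for m in candidates if m.supports_images]
    # Rule 3: latency constraint for agent loops
    if task.get("in_agent_loop"):
        candidates = [m for m in candidates if m.latency_tier <= 1]
    # Rule 4: complexity via keyword signals, defaulting to routine
    words = set(task.get("text", "").lower().split())
    if words & TIER3_SIGNALS:
        tier = 3
    elif words & TIER2_SIGNALS:
        tier = 2
    else:
        tier = task.get("complexity", 1)
    # Rule 1 (approximated): escalate one tier after a failed attempt
    if task.get("previous_attempt_failed"):
        tier = min(tier + 1, 3)
    # Cheapest model in the chosen tier; fall back to any candidate
    pool = [m for m in candidates if m.tier == tier] or candidates
    return min(pool, key=lambda m: m.blended_price)
```

For example, `select_model({"text": "debug the flaky test"})` lands on the Tier 3 entry, while a plain file-read task resolves to the cheapest Tier 1 model.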
Cost isn't the only axis. For real-time agent loops, latency matters:

| Tier | Typical TTFT | Throughput | Use When |
|---|---|---|---|
| Free | 1-5s | Variable | Background tasks, not time-sensitive |
| Tier 1 | 200-800ms | 50-100 tok/s | Agent loops, real-time pipelines |
| Tier 2 | 500ms-2s | 30-80 tok/s | Interactive sessions, async work |
| Tier 3 | 1-10s | 10-40 tok/s | One-shot complex tasks, async only |

TTFT = Time To First Token. Reasoning models (o1, o3-mini) have high TTFT due to thinking time but are worth it for hard problems.

Rule of thumb: if the agent is waiting in a loop for a response before the next action, use Tier 1. If the task is fire-and-forget, cost matters more than speed.
- Default to Tier 2 for interactive work
- Suggest a downgrade when doing routine work: "This is routine - I can handle this on a cheaper model or spawn a sub-agent."
- Request an upgrade when stuck: "This needs more reasoning power. Switching to [premium model]."
- Default to Tier 1 unless the task is clearly moderate+
- Batch similar tasks to amortize overhead
- Report failures back to the parent for escalation
- Check context window limits before dispatching: don't send 200K tokens to a 32K model
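The context-window check in the last rule can be sketched as a guard before dispatch. The window values and the ~4-characters-per-token estimate are rough illustrative assumptions:

```python
CONTEXT_WINDOWS = {  # illustrative values, in tokens
    "deepseek/deepseek-chat": 164_000,
    "qwen/qwen3-coder-next": 262_000,
}

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def check_fits(model: str, payload: str, reply_budget: int = 4_096) -> None:
    """Raise before dispatch instead of letting the task silently truncate."""
    window = CONTEXT_WINDOWS[model]
    needed = estimate_tokens(payload) + reply_budget
    if needed > window:
        raise ValueError(
            f"{model} window is {window} tokens but task needs ~{needed}; "
            "pick a longer-context model or split the task"
        )
```

A parent agent would call this once per sub-task before sessions_spawn, turning a silent truncation into an explicit escalation decision.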
- Heartbeats/monitoring → always Tier 1 (or Free if available)
- Scheduled reports → Tier 1 or 2 based on complexity
- Alert responses → start at Tier 2, escalate if needed
- Background data fetching → Free tier when non-critical
When suggesting model changes, use clear language:

Downgrade suggestion: "This looks like routine file work. Want me to spawn a sub-agent on DeepSeek for this? Same result, fraction of the cost."

Upgrade request: "I'm hitting the limits of what I can figure out here. This needs Opus-level reasoning. Switching up."

Explaining the hierarchy: "I'm running the heavy analysis on Sonnet while sub-agents fetch the data on DeepSeek. Keeps costs down without sacrificing quality where it matters."
Assuming 100K tokens/day average usage:

| Strategy | Monthly Cost | Notes |
|---|---|---|
| Pure Opus 4.6 | ~$75 | Maximum capability, lower than old Opus |
| Pure Sonnet | ~$45 | Good default for most work |
| Pure DeepSeek | ~$9 | Cheap but limited on hard problems |
| Pure Qwen3 Coder | ~$2 | Cheapest viable for coding agents |
| Hierarchy (80/15/5) | ~$12 | Best of all worlds |
| With Free tier (85/10/4/1) | ~$8 | Aggressive optimization |

The 80/15/5 split:
- 80% routine tasks on Tier 1 (~$4)
- 15% moderate tasks on Tier 2 (~$5)
- 5% complex tasks on Tier 3 (~$3)

Result: 6-10x cost reduction vs pure premium, with equivalent quality on complex tasks.
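The 80/15/5 arithmetic can be checked directly. The per-bucket dollar figures and the pure-Opus baseline are the approximations quoted above, not fresh estimates:

```python
# Monthly spend under the 80/15/5 hierarchy, using the approximate
# per-bucket costs quoted above (~$4 / ~$5 / ~$3).
split_costs = {
    "tier1_routine": 4.0,   # 80% of tasks
    "tier2_moderate": 5.0,  # 15% of tasks
    "tier3_complex": 3.0,   # 5% of tasks
}
hierarchy_total = sum(split_costs.values())  # ~$12/month

pure_opus = 75.0  # pure-premium baseline from the table above
reduction = pure_opus / hierarchy_total

print(f"hierarchy: ${hierarchy_total:.0f}/mo, {reduction:.1f}x cheaper than pure premium")
```

The ~6x figure here is the low end of the quoted 6-10x range; heavier premium usage in the baseline pushes the ratio higher.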
```yaml
# config.yml - set your default session model
model: anthropic/claude-sonnet-4
```

```
# Mid-session, switch down for routine work
/model deepseek/deepseek-chat

# Switch up when you hit a wall
/model anthropic/claude-opus-4
```
```yaml
# Batch routine tasks on cheap models
sessions_spawn:
  task: "Fetch and parse these 50 URLs"
  model: deepseek/deepseek-chat

# Use Qwen3 Coder for file-heavy agent work
sessions_spawn:
  task: "Refactor these test files to use the new helper"
  model: qwen/qwen3-coder-next

# Free tier for non-critical background jobs
sessions_spawn:
  task: "Check health of all endpoints and log status"
  model: openrouter/free
```
| Task Type | Model | Why |
|---|---|---|
| Main interactive session | claude-sonnet-4 | Best balance of quality and cost |
| File ops, fetches, formatting | deepseek/deepseek-chat | Cheap, reliable |
| Agentic coding sub-tasks | qwen/qwen3-coder-next | $0.07/M, 262K context, tool use |
| Background monitoring | openrouter/free | Zero cost |
| Stuck / complex debugging | anthropic/claude-opus-4 | Escalate only when needed |
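The defaults above map naturally onto a small routing dictionary. Only the model IDs come from the table; the task-type keys and fallback behavior are an illustrative sketch:

```python
DEFAULT_ROUTES = {
    "interactive": "anthropic/claude-sonnet-4",  # main session baseline
    "file_ops": "deepseek/deepseek-chat",        # cheap, reliable janitorial work
    "agentic_coding": "qwen/qwen3-coder-next",   # long context, tool use
    "monitoring": "openrouter/free",             # zero-cost background checks
    "escalation": "anthropic/claude-opus-4",     # only when stuck
}

def route(task_type: str) -> str:
    # Unknown task types fall back to the interactive baseline.
    return DEFAULT_ROUTES.get(task_type, DEFAULT_ROUTES["interactive"])
```

Keeping the table in one place like this makes it easy to pin versions or swap a model without hunting through sessions_spawn calls.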
DON'T:
- Leave your session on Opus when the task is clearly routine: /model deepseek exists for a reason
- Spawn sub-agents without specifying a model: they inherit the session model, which is usually Tier 2
- Use Tier 3 for sessions_spawn tasks like file parsing, URL fetching, or status checks
- Forget context window limits: spawning a 200K-token task on a 32K model will silently truncate
- Run recurring or scheduled tasks on anything above Tier 1

DO:
- Set model: anthropic/claude-sonnet-4 as your config.yml default: a good baseline
- Always set an explicit model field in sessions_spawn: default to deepseek/deepseek-chat or qwen/qwen3-coder-next
- /model switch down the moment you realize the current task is janitorial
- /model switch up the moment you're stuck: don't waste tokens retrying on a weak model
- Use openrouter/free for fire-and-forget background checks
Optimize your switchboard over time:

- Track your actual spend: review your OpenRouter dashboard weekly to see which models are burning tokens
- Add your own routing signals: if your workflow has domain terms (e.g., "settlement", "pricing", "vault"), map them to tiers
- Tune the 80/15/5 split: if you find yourself escalating more than 5% of tasks, your classification may be too aggressive
- Pin model versions: when a cheap model works well for you, pin the version (e.g., deepseek/deepseek-chat-v3.1) so provider updates don't break your flow
- Set OpenRouter budget alerts: catch runaway premium usage before it compounds