Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Automatically classifies each request, routes it to the cheapest capable model, and applies maximum output compression for 75%+ token savings.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Understand fully, execute cheaply. The orchestrator must fully understand the task before routing. Never sacrifice comprehension for speed.
| Tier | Pattern | Orchestrator | Executor |
|---|---|---|---|
| T1 | yes/no, status, trivial facts, quick lookups | Handle alone | (none) |
| T2 | summaries, how-to, lists, bulk processing, formatting | Handle alone OR spawn Groq | Groq (FREE) |
| T3 | debugging, multi-step, code generation, structured analysis | Orchestrate + spawn | Codex for code, Groq for bulk |
| T4 | strategy, complex decisions, multi-agent coordination, creative | Spawn Opus | Opus orchestrates, spawns Codex/Groq from within |
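As a sketch of how the tier classification above might be implemented, the following uses illustrative keyword heuristics; the keyword lists and function names are assumptions for this example, not part of the skill, which relies on the orchestrator model's own judgment:

```python
# Illustrative keyword heuristics per tier (hypothetical lists, not the
# skill's actual logic). Expensive tiers are checked first so a request
# is never under-routed to a cheaper tier than it needs.
TIER_PATTERNS = {
    "T4": ["strategy", "coordinate", "multi-agent", "creative"],
    "T3": ["debug", "implement", "refactor", "analyze"],
    "T2": ["summarize", "list", "format", "how to"],
}

def classify(request: str) -> str:
    """Return the first matching tier, defaulting to T1 for trivia."""
    text = request.lower()
    for tier in ("T4", "T3", "T2"):
        if any(kw in text for kw in TIER_PATTERNS[tier]):
            return tier
    return "T1"  # yes/no, status, quick lookups

print(classify("debug this stack trace"))  # → T3
print(classify("what time is it?"))        # → T1
```

A real orchestrator would classify with the model itself rather than keywords, but the cheap-first escalation order is the same.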
| Model | Use For | Cost | Spawn with |
|---|---|---|---|
| groq/llama-3.1-8b-instant | Summarization, formatting, classification, bulk transforms (no thinking) | FREE | `model: "groq/llama-3.1-8b-instant"` |
| openai/gpt-5.3-codex | ALL code generation, code review, refactoring | $$$ | `model: "openai/gpt-5.3-codex"` |
| openai/gpt-5.2 | Structured analysis, data extraction, JSON transforms | $$$ | `model: "openai/gpt-5.2"` |
| anthropic/claude-opus-4-6 | Strategy, complex orchestration, failure recovery (T4 only) | $$$$ | `model: "anthropic/claude-opus-4-6"` |
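The model table above can be captured as a small routing map. The model IDs are taken verbatim from the table; the dictionary structure and `pick_model` helper are a sketch, not the skill's actual code:

```python
# Tier → preferred executor, per the routing tables above.
# T3 splits by work kind: code vs bulk text vs structured analysis.
ROUTES = {
    "T1": None,  # orchestrator handles it alone
    "T2": "groq/llama-3.1-8b-instant",
    "T3": {
        "code": "openai/gpt-5.3-codex",
        "bulk": "groq/llama-3.1-8b-instant",
        "analysis": "openai/gpt-5.2",
    },
    "T4": "anthropic/claude-opus-4-6",
}

def pick_model(tier: str, kind: str = "code"):
    """Resolve a tier (and T3 work kind) to a spawn model, or None."""
    route = ROUTES[tier]
    return route.get(kind) if isinstance(route, dict) else route

print(pick_model("T3", "bulk"))  # → groq/llama-3.1-8b-instant
```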
- Code generation of any kind → spawn Codex
- Bulk text processing (>3 items) → spawn Groq
- Complex multi-step tasks → spawn Opus (T4)
- Simple formatting/rewriting → spawn Groq
- T1 questions (yes/no, time, status) → handle directly
- Single tool calls (calendar, web search) → handle directly
- Short responses that need no processing → handle directly
Groq (free bulk work):

```
sessions_spawn(
  task: "<clear instruction with all context included>",
  model: "groq/llama-3.1-8b-instant"
)
```

Codex (all code):

```
sessions_spawn(
  task: "Write <language> code that <detailed spec>. Include comments. Output the complete file.",
  model: "openai/gpt-5.3-codex"
)
```

Opus (T4 strategy):

```
sessions_spawn(
  task: "<full context + goal>. You have full tool access. Use sessions_spawn with Codex for code and Groq for bulk subtasks.",
  model: "anthropic/claude-opus-4-6"
)
```
- Include ALL context in the task string: spawned agents have no conversation history
- Be specific: vague tasks waste tokens on clarification
- One task per spawn: don't bundle unrelated work
- For code, always use Codex: never write code yourself
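The first rule above, packaging all context into one self-contained task string, can be sketched like this. `sessions_spawn` is the skill's own call; the `build_task` helper and its output layout are hypothetical:

```python
def build_task(goal: str, context: list[str]) -> str:
    """Bundle every needed fact into the task string itself,
    since the spawned agent sees no conversation history."""
    lines = ["Context:"]
    lines += [f"- {fact}" for fact in context]
    lines += ["", f"Task: {goal}", "Output only the result, no preamble."]
    return "\n".join(lines)

task = build_task(
    "Summarize the release notes into 5 bullets.",
    ["Product: internal CLI tool", "Audience: engineers"],
)
print(task)
```

The resulting string would then be passed as the `task:` argument of a single `sessions_spawn` call, one goal per spawn.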
- STATUS: OK/WARN/FAIL one-liner
- CHOICE: A vs B → Recommend: X (1 line why)
- CAUSE→FIX→VERIFY: 3 bullets max
- RESULT: data/output directly, no wrap-up
- No filler. No restating the question. Lead with the answer.
- Bullets/tables/code > prose.
- Do not narrate routine tool calls.
- If the user asks for depth ("why", "explain", "go deep") → allow more tokens for that turn only.
| Tier | Max output |
|---|---|
| T1 | 1-3 lines |
| T2 | 5-15 bullets |
| T3 | Structured sections, <400 words |
| T4 | Longer allowed, still dense |
- Already known? → No tool.
- Batchable? → Parallelize.
- Can a spawned Groq handle it? → Spawn instead of doing it yourself.
- Cheapest path? → memory_search > partial read > full read > web.
- Needed? → Do not fetch "just in case."
- If a Groq spawn fails → retry with GPT-5.2
- If a Codex spawn fails → retry with GPT-5.2
- If the orchestrator can't handle T3 → spawn Opus (escalate to T4)
- Never retry the same model. Escalate.
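The fallback rules above amount to a one-hop escalation chain. A minimal sketch, assuming the spawn call raises on failure; `spawn_fn` stands in for the real `sessions_spawn`, and the `FALLBACKS` map simply encodes the bullets:

```python
# Per the fallback rules: Groq and Codex both escalate to GPT-5.2.
FALLBACKS = {
    "groq/llama-3.1-8b-instant": "openai/gpt-5.2",
    "openai/gpt-5.3-codex": "openai/gpt-5.2",
}

def spawn_with_fallback(spawn_fn, task: str, model: str):
    """Try once per model; never retry the same model, escalate."""
    try:
        return spawn_fn(task=task, model=model)
    except Exception:
        fallback = FALLBACKS.get(model)
        if fallback is None:
            raise  # no escalation path defined, surface the failure
        return spawn_fn(task=task, model=fallback)

# Demo with a stub that simulates a Groq failure:
def _stub(task, model):
    if model.startswith("groq/"):
        raise RuntimeError("spawn failed")
    return f"ok from {model}"

print(spawn_with_fallback(_stub, "summarize notes",
                          "groq/llama-3.1-8b-instant"))
# → ok from openai/gpt-5.2
```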
Append: `[~X tokens | Tier: Tn | Route: model(s) used]`
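The footer template above can be rendered with a one-line formatter; the function name and signature are illustrative, only the bracketed layout comes from the template:

```python
def cost_footer(tokens: int, tier: str, routes: list[str]) -> str:
    """Render the per-turn cost footer in the template's layout."""
    return f"[~{tokens} tokens | Tier: {tier} | Route: {', '.join(routes)}]"

print(cost_footer(420, "T3", ["openai/gpt-5.3-codex"]))
# → [~420 tokens | Tier: T3 | Route: openai/gpt-5.3-codex]
```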
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.