{
  "schemaVersion": "1.0",
  "item": {
    "slug": "router",
    "name": "SwitchBoard",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/gigabit-eth/router",
    "canonicalUrl": "https://clawhub.ai/gigabit-eth/router",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/router",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=router",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "manifest.json",
      "skill.json",
      "references/openrouter-models.json",
      "references/openrouter-models.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/router"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/router",
    "agentPageUrl": "https://openagent3.xyz/skills/router/agent",
    "manifestUrl": "https://openagent3.xyz/skills/router/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/router/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "SwitchBoard",
        "body": "Route tasks to the cheapest model that can handle them. Most agent work is routine."
      },
      {
        "title": "Prerequisites",
        "body": "This skill requires an OpenRouter API key for model routing. Add it to your OpenClaw user config:\n\n// ~/.openclaw/openclaw.json\n{\n  \"openrouter_api_key\": \"sk-or-v1-...\"\n}\n\nWithout this key, /model switching and sessions_spawn with non-default models will fail. Get a key at openrouter.ai/keys.\n\nPrivacy Note: Some models listed in this skill (e.g., Aurora Alpha, Free Router) may log prompts and completions for provider training. Do not route sensitive data (API keys, passwords, private PII) through free or unmoderated models. Review model privacy policies at openrouter.ai/docs before use."
      },
      {
        "title": "Core Principle",
        "body": "80% of agent tasks are janitorial. File reads, status checks, formatting, simple Q&A. These don't need expensive models. Reserve premium models for problems that actually require deep reasoning."
      },
      {
        "title": "Model Tiers",
        "body": "For OpenRouter-specific pricing and models, see references/openrouter-models.md."
      },
      {
        "title": "Tier 0: Free",
        "body": "ModelContextToolsBest ForAurora Alpha128K✅Zero-cost reasoning, cloaked community modelFree Router200K✅Auto-routes to best available free modelStep 3.5 Flash (free)256K✅Long-context reasoning at zero cost\n\nFree models have rate limits and variable availability. Good for non-critical background tasks."
      },
      {
        "title": "Tier 1: Cheap ($0.02-0.50/M tokens)",
        "body": "ModelInputOutputContextToolsBest ForQwen3 Coder Next$0.07$0.30262K✅Agentic coding, MoE 80B/3B activeGemini 2.0 Flash Lite$0.07$0.301M✅High volume, massive contextGemini 2.0 Flash$0.10$0.401M✅General routine with long contextGPT-4o-mini$0.15$0.60128K✅Quick responses, reliable tool useDeepSeek Chat$0.30$1.20164K✅General routine workClaude 3 Haiku$0.25$1.25200K✅Fast tool use, structured outputKimi K2.5$0.45$2.20262K✅Multimodal, visual coding, agentic"
      },
      {
        "title": "Tier 2: Mid ($1-5/M tokens)",
        "body": "ModelInputOutputContextToolsBest Foro3-mini$1.10$4.40200K✅Reasoning on a budgetGemini 2.5 Pro$1.25$10.001M✅Long context, large codebase workGPT-4o$2.50$10.00128K✅Multimodal tasksClaude Sonnet$3.00$15.001M✅Balanced performance, agentic"
      },
      {
        "title": "Tier 3: Premium ($5+/M tokens)",
        "body": "ModelInputOutputContextToolsBest ForClaude Opus 4.6$5.00$25.001M✅Complex reasoning, deep contexto1$15.00$60.00200K✅Multi-step reasoningGPT-4.5$75.00$150.00128K✅Frontier tasks\n\nPrices as of Feb 2026. Check provider docs for current rates. Context = max context window. Tools = function calling support."
      },
      {
        "title": "Task Classification",
        "body": "Before executing any task, classify it:"
      },
      {
        "title": "ROUTINE → Use Tier 1",
        "body": "Characteristics:\n\nSingle-step operations\nClear, unambiguous instructions\nNo judgment required\nDeterministic output expected\n\nExamples:\n\nFile read/write operations\nStatus checks and health monitoring\nSimple lookups (time, weather, definitions)\nFormatting and restructuring text\nList operations (filter, sort, transform)\nAPI calls with known parameters\nHeartbeat and cron tasks\nURL fetching and basic parsing"
      },
      {
        "title": "MODERATE → Use Tier 2",
        "body": "Characteristics:\n\nMulti-step but well-defined\nSome synthesis required\nStandard patterns apply\nQuality matters but isn't critical\n\nExamples:\n\nCode generation (standard patterns)\nSummarization and synthesis\nDraft writing (emails, docs, messages)\nData analysis and transformation\nMulti-file operations\nTool orchestration\nCode review (non-security)\nSearch and research tasks"
      },
      {
        "title": "COMPLEX → Use Tier 3",
        "body": "Characteristics:\n\nNovel problem solving required\nMultiple valid approaches\nNuanced judgment calls\nHigh stakes or irreversible\nPrevious attempts failed\n\nExamples:\n\nMulti-step debugging\nArchitecture and design decisions\nSecurity-sensitive code review\nTasks where cheaper model already failed\nAmbiguous requirements needing interpretation\nLong-context reasoning (>50K tokens)\nCreative work requiring originality\nAdversarial or edge-case handling"
      },
      {
        "title": "Decision Algorithm",
        "body": "function selectModel(task):\n  # Rule 1: Escalation override\n  if task.previousAttemptFailed:\n    return nextTierUp(task.previousModel)\n\n  # Rule 2: Hard constraints (filter before cost)\n  candidates = ALL_MODELS\n  if task.requiresToolUse:\n    candidates = candidates.filter(m => m.supportsTools)\n  if task.estimatedTokens > 128_000:\n    candidates = candidates.filter(m => m.contextWindow >= task.estimatedTokens)\n  if task.requiresMultimodal:\n    candidates = candidates.filter(m => m.supportsImages)\n\n  # Rule 3: Latency constraint\n  if task.isRealTime or task.inAgentLoop:\n    candidates = candidates.filter(m => m.latencyTier <= \"fast\")\n\n  # Rule 4: Complexity classification\n  if task.hasSignal(\"debug\", \"architect\", \"design\", \"security\"):\n    return cheapestIn(candidates, TIER_3)\n  if task.hasSignal(\"summarize\", \"analyze\", \"refactor\"):\n    return cheapestIn(candidates, TIER_2)\n\n  complexity = classifyTask(task)\n  if complexity == ROUTINE:\n    return cheapestIn(candidates, TIER_1)\n  elif complexity == MODERATE:\n    return cheapestIn(candidates, TIER_2)\n  else:\n    return cheapestIn(candidates, TIER_3)\n\nNote: \"write\", \"read\", \"code\" alone are poor routing signals — \"write a file\" is Tier 1: work, not Tier 2. Classify based on the task structure, not individual keywords."
      },
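      {
        "title": "Decision Algorithm: Python Sketch",
        "body": "A minimal Python sketch of the selectModel pseudocode above. This is an illustrative translation, not part of the upstream skill: the tier map, model IDs, and simplified signal matching are placeholder assumptions.\n\nTIERS = {\n    1: \"deepseek/deepseek-chat\",\n    2: \"anthropic/claude-sonnet-4\",\n    3: \"anthropic/claude-opus-4\",\n}\n\ndef select_model(task):\n    # Rule 1: escalation override, always step up one tier after a failure\n    if task.get(\"previous_attempt_failed\"):\n        return TIERS[min(task[\"previous_tier\"] + 1, 3)]\n    # Rule 4 (simplified): keyword signals first, then default to routine\n    text = task[\"description\"].lower()\n    if any(s in text for s in (\"debug\", \"architect\", \"design\", \"security\")):\n        return TIERS[3]\n    if any(s in text for s in (\"summarize\", \"analyze\", \"refactor\")):\n        return TIERS[2]\n    return TIERS[1]\n\nThe hard-constraint and latency filters (Rules 2-3) are omitted here for brevity; in practice, filter the candidate list by tool support, context window, and latency tier before picking the cheapest match."
      },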
      {
        "title": "Latency Considerations",
        "body": "Cost isn't the only axis. For real-time agent loops, latency matters:\n\nTierTypical TTFTThroughputUse WhenFree1-5sVariableBackground tasks, not time-sensitiveTier 1200-800ms50-100 tok/sAgent loops, real-time pipelinesTier 2500ms-2s30-80 tok/sInteractive sessions, async workTier 31-10s10-40 tok/sOne-shot complex tasks, async only\n\nTTFT = Time To First Token. Reasoning models (o1, o3-mini) have high TTFT due to thinking time but are worth it for hard problems.\n\nRule of thumb: If the agent is waiting in a loop for a response before the next action, use Tier 1. If the task is fire-and-forget, cost matters more than speed."
      },
      {
        "title": "For Main Session",
        "body": "Default to Tier 2 for interactive work\nSuggest downgrade when doing routine work: \"This is routine - I can handle this on a cheaper model or spawn a sub-agent.\"\nRequest upgrade when stuck: \"This needs more reasoning power. Switching to [premium model].\""
      },
      {
        "title": "For Sub-Agents",
        "body": "Default to Tier 1 unless task is clearly moderate+\nBatch similar tasks to amortize overhead\nReport failures back to parent for escalation\nCheck context window limits before dispatching — don't send 200K tokens to a 32K model"
      },
      {
        "title": "For Automated Tasks",
        "body": "Heartbeats/monitoring → Always Tier 1 (or Free if available)\nScheduled reports → Tier 1 or 2 based on complexity\nAlert responses → Start Tier 2, escalate if needed\nBackground data fetching → Free tier when non-critical"
      },
      {
        "title": "Communication Patterns",
        "body": "When suggesting model changes, use clear language:\n\nDowngrade suggestion:\n\n\"This looks like routine file work. Want me to spawn a sub-agent on DeepSeek for this? Same result, fraction of the cost.\"\n\nUpgrade request:\n\n\"I'm hitting the limits of what I can figure out here. This needs Opus-level reasoning. Switching up.\"\n\nExplaining hierarchy:\n\n\"I'm running the heavy analysis on Sonnet while sub-agents fetch the data on DeepSeek. Keeps costs down without sacrificing quality where it matters.\""
      },
      {
        "title": "Cost Impact",
        "body": "Assuming 100K tokens/day average usage:\n\nStrategyMonthly CostNotesPure Opus 4.6~$75Maximum capability, lower than old OpusPure Sonnet~$45Good default for most workPure DeepSeek~$9Cheap but limited on hard problemsPure Qwen3 Coder~$2Cheapest viable for coding agentsHierarchy (80/15/5)~$12Best of all worldsWith Free tier (85/10/4/1)~$8Aggressive optimization\n\nThe 80/15/5 split:\n\n80% routine tasks on Tier 1 (~$4)\n15% moderate tasks on Tier 2 (~$5)\n5% complex tasks on Tier 3 (~$3)\n\nResult: 6-10x cost reduction vs pure premium, with equivalent quality on complex tasks."
      },
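      {
        "title": "Where the 80/15/5 Estimate Comes From",
        "body": "A rough derivation of the hierarchy figure above; the blended per-million-token rates are illustrative assumptions, not quoted prices:\n\n100K tokens/day ≈ 3M tokens/month\n80% on Tier 1: 2.4M × ~$1.70/M blended ≈ $4\n15% on Tier 2: 0.45M × ~$11/M blended ≈ $5\n5% on Tier 3: 0.15M × ~$20/M blended ≈ $3\n\nTotal ≈ $12/month, matching the Hierarchy row. Against pure Opus (~$75) that is roughly 6x; against pricier premium models the multiple is larger, hence the 6-10x range."
      },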
      {
        "title": "Session Model Switching",
        "body": "# config.yml - set your default session model\nmodel: anthropic/claude-sonnet-4\n\n# Mid-session, switch down for routine work\n/model deepseek/deepseek-chat\n\n# Switch up when you hit a wall\n/model anthropic/claude-opus-4"
      },
      {
        "title": "Spawning Sub-Agents",
        "body": "# Batch routine tasks on cheap models\nsessions_spawn:\n  task: \"Fetch and parse these 50 URLs\"\n  model: deepseek/deepseek-chat\n\n# Use Qwen3 Coder for file-heavy agent work\nsessions_spawn:\n  task: \"Refactor these test files to use the new helper\"\n  model: qwen/qwen3-coder-next\n\n# Free tier for non-critical background jobs\nsessions_spawn:\n  task: \"Check health of all endpoints and log status\"\n  model: openrouter/free"
      },
      {
        "title": "Recommended OpenClaw Defaults",
        "body": "Task TypeModelWhyMain interactive sessionclaude-sonnet-4Best balance of quality and costFile ops, fetches, formattingdeepseek/deepseek-chatCheap, reliableAgentic coding sub-tasksqwen/qwen3-coder-next$0.07/M, 262K context, tool useBackground monitoringopenrouter/freeZero costStuck / complex debugginganthropic/claude-opus-4Escalate only when needed"
      },
      {
        "title": "Anti-Patterns",
        "body": "DON'T:\n\nLeave your session on Opus when the task is clearly routine — /model deepseek exists for a reason\nSpawn sub-agents without specifying a model — they inherit the session model, which is usually Tier 2\nUse Tier 3 for sessions_spawn tasks like file parsing, URL fetching, or status checks\nForget context window limits — spawning a 200K-token task on a 32K model will silently truncate\nRun recurring or scheduled tasks on anything above Tier 1\n\nDO:\n\nSet model: anthropic/claude-sonnet-4 as your config.yml default — good baseline\nAlways set an explicit model field in sessions_spawn — default to deepseek/deepseek-chat or qwen/qwen3-coder-next\n/model switch down the moment you realize the current task is janitorial\n/model switch up the moment you're stuck — don't waste tokens retrying on a weak model\nUse openrouter/free for fire-and-forget background checks"
      },
      {
        "title": "Extending This Skill",
        "body": "Optimize your switchboard over time:\n\nTrack your actual spend — review your OpenRouter dashboard weekly to see which models are burning tokens\nAdd your own routing signals — if your workflow has domain terms (e.g., \"settlement\", \"pricing\", \"vault\"), map them to tiers\nTune the 80/15/5 split — if you find yourself escalating more than 5% of tasks, your classification may be too aggressive\nPin model versions — when a cheap model works well for you, pin the version (e.g., deepseek/deepseek-chat-v3.1) so provider updates don't break your flow\nSet OpenRouter budget alerts — catch runaway premium usage before it compounds"
      }
    ],
    "body": "SwitchBoard\n\nRoute tasks to the cheapest model that can handle them. Most agent work is routine.\n\nPrerequisites\n\nThis skill requires an OpenRouter API key for model routing. Add it to your OpenClaw user config:\n\n// ~/.openclaw/openclaw.json\n{\n  \"openrouter_api_key\": \"sk-or-v1-...\"\n}\n\n\nWithout this key, /model switching and sessions_spawn with non-default models will fail. Get a key at openrouter.ai/keys.\n\nPrivacy Note: Some models listed in this skill (e.g., Aurora Alpha, Free Router) may log prompts and completions for provider training. Do not route sensitive data (API keys, passwords, private PII) through free or unmoderated models. Review model privacy policies at openrouter.ai/docs before use.\n\nCore Principle\n\n80% of agent tasks are janitorial. File reads, status checks, formatting, simple Q&A. These don't need expensive models. Reserve premium models for problems that actually require deep reasoning.\n\nModel Tiers\n\nFor OpenRouter-specific pricing and models, see references/openrouter-models.md.\n\nTier 0: Free\nModel\tContext\tTools\tBest For\nAurora Alpha\t128K\t✅\tZero-cost reasoning, cloaked community model\nFree Router\t200K\t✅\tAuto-routes to best available free model\nStep 3.5 Flash (free)\t256K\t✅\tLong-context reasoning at zero cost\n\nFree models have rate limits and variable availability. 
Good for non-critical background tasks.\n\nTier 1: Cheap ($0.02-0.50/M tokens)\nModel\tInput\tOutput\tContext\tTools\tBest For\nQwen3 Coder Next\t$0.07\t$0.30\t262K\t✅\tAgentic coding, MoE 80B/3B active\nGemini 2.0 Flash Lite\t$0.07\t$0.30\t1M\t✅\tHigh volume, massive context\nGemini 2.0 Flash\t$0.10\t$0.40\t1M\t✅\tGeneral routine with long context\nGPT-4o-mini\t$0.15\t$0.60\t128K\t✅\tQuick responses, reliable tool use\nDeepSeek Chat\t$0.30\t$1.20\t164K\t✅\tGeneral routine work\nClaude 3 Haiku\t$0.25\t$1.25\t200K\t✅\tFast tool use, structured output\nKimi K2.5\t$0.45\t$2.20\t262K\t✅\tMultimodal, visual coding, agentic\nTier 2: Mid ($1-5/M tokens)\nModel\tInput\tOutput\tContext\tTools\tBest For\no3-mini\t$1.10\t$4.40\t200K\t✅\tReasoning on a budget\nGemini 2.5 Pro\t$1.25\t$10.00\t1M\t✅\tLong context, large codebase work\nGPT-4o\t$2.50\t$10.00\t128K\t✅\tMultimodal tasks\nClaude Sonnet\t$3.00\t$15.00\t1M\t✅\tBalanced performance, agentic\nTier 3: Premium ($5+/M tokens)\nModel\tInput\tOutput\tContext\tTools\tBest For\nClaude Opus 4.6\t$5.00\t$25.00\t1M\t✅\tComplex reasoning, deep context\no1\t$15.00\t$60.00\t200K\t✅\tMulti-step reasoning\nGPT-4.5\t$75.00\t$150.00\t128K\t✅\tFrontier tasks\n\nPrices as of Feb 2026. Check provider docs for current rates. Context = max context window. 
Tools = function calling support.\n\nTask Classification\n\nBefore executing any task, classify it:\n\nROUTINE → Use Tier 1\n\nCharacteristics:\n\nSingle-step operations\nClear, unambiguous instructions\nNo judgment required\nDeterministic output expected\n\nExamples:\n\nFile read/write operations\nStatus checks and health monitoring\nSimple lookups (time, weather, definitions)\nFormatting and restructuring text\nList operations (filter, sort, transform)\nAPI calls with known parameters\nHeartbeat and cron tasks\nURL fetching and basic parsing\nMODERATE → Use Tier 2\n\nCharacteristics:\n\nMulti-step but well-defined\nSome synthesis required\nStandard patterns apply\nQuality matters but isn't critical\n\nExamples:\n\nCode generation (standard patterns)\nSummarization and synthesis\nDraft writing (emails, docs, messages)\nData analysis and transformation\nMulti-file operations\nTool orchestration\nCode review (non-security)\nSearch and research tasks\nCOMPLEX → Use Tier 3\n\nCharacteristics:\n\nNovel problem solving required\nMultiple valid approaches\nNuanced judgment calls\nHigh stakes or irreversible\nPrevious attempts failed\n\nExamples:\n\nMulti-step debugging\nArchitecture and design decisions\nSecurity-sensitive code review\nTasks where cheaper model already failed\nAmbiguous requirements needing interpretation\nLong-context reasoning (>50K tokens)\nCreative work requiring originality\nAdversarial or edge-case handling\nDecision Algorithm\nfunction selectModel(task):\n  # Rule 1: Escalation override\n  if task.previousAttemptFailed:\n    return nextTierUp(task.previousModel)\n\n  # Rule 2: Hard constraints (filter before cost)\n  candidates = ALL_MODELS\n  if task.requiresToolUse:\n    candidates = candidates.filter(m => m.supportsTools)\n  if task.estimatedTokens > 128_000:\n    candidates = candidates.filter(m => m.contextWindow >= task.estimatedTokens)\n  if task.requiresMultimodal:\n    candidates = candidates.filter(m => m.supportsImages)\n\n  # Rule 3: 
Latency constraint\n  if task.isRealTime or task.inAgentLoop:\n    candidates = candidates.filter(m => m.latencyTier <= \"fast\")\n\n  # Rule 4: Complexity classification\n  if task.hasSignal(\"debug\", \"architect\", \"design\", \"security\"):\n    return cheapestIn(candidates, TIER_3)\n  if task.hasSignal(\"summarize\", \"analyze\", \"refactor\"):\n    return cheapestIn(candidates, TIER_2)\n\n  complexity = classifyTask(task)\n  if complexity == ROUTINE:\n    return cheapestIn(candidates, TIER_1)\n  elif complexity == MODERATE:\n    return cheapestIn(candidates, TIER_2)\n  else:\n    return cheapestIn(candidates, TIER_3)\n\n\nNote: \"write\", \"read\", \"code\" alone are poor routing signals — \"write a file\" is Tier 1: work, not Tier 2. Classify based on the task structure, not individual keywords.\n\nLatency Considerations\n\nCost isn't the only axis. For real-time agent loops, latency matters:\n\nTier\tTypical TTFT\tThroughput\tUse When\nFree\t1-5s\tVariable\tBackground tasks, not time-sensitive\nTier 1\t200-800ms\t50-100 tok/s\tAgent loops, real-time pipelines\nTier 2\t500ms-2s\t30-80 tok/s\tInteractive sessions, async work\nTier 3\t1-10s\t10-40 tok/s\tOne-shot complex tasks, async only\n\nTTFT = Time To First Token. Reasoning models (o1, o3-mini) have high TTFT due to thinking time but are worth it for hard problems.\n\nRule of thumb: If the agent is waiting in a loop for a response before the next action, use Tier 1. If the task is fire-and-forget, cost matters more than speed.\n\nBehavioral Rules\nFor Main Session\nDefault to Tier 2 for interactive work\nSuggest downgrade when doing routine work: \"This is routine - I can handle this on a cheaper model or spawn a sub-agent.\"\nRequest upgrade when stuck: \"This needs more reasoning power. 
Switching to [premium model].\"\nFor Sub-Agents\nDefault to Tier 1 unless task is clearly moderate+\nBatch similar tasks to amortize overhead\nReport failures back to parent for escalation\nCheck context window limits before dispatching — don't send 200K tokens to a 32K model\nFor Automated Tasks\nHeartbeats/monitoring → Always Tier 1 (or Free if available)\nScheduled reports → Tier 1 or 2 based on complexity\nAlert responses → Start Tier 2, escalate if needed\nBackground data fetching → Free tier when non-critical\nCommunication Patterns\n\nWhen suggesting model changes, use clear language:\n\nDowngrade suggestion:\n\n\"This looks like routine file work. Want me to spawn a sub-agent on DeepSeek for this? Same result, fraction of the cost.\"\n\nUpgrade request:\n\n\"I'm hitting the limits of what I can figure out here. This needs Opus-level reasoning. Switching up.\"\n\nExplaining hierarchy:\n\n\"I'm running the heavy analysis on Sonnet while sub-agents fetch the data on DeepSeek. Keeps costs down without sacrificing quality where it matters.\"\n\nCost Impact\n\nAssuming 100K tokens/day average usage:\n\nStrategy\tMonthly Cost\tNotes\nPure Opus 4.6\t~$75\tMaximum capability, lower than old Opus\nPure Sonnet\t~$45\tGood default for most work\nPure DeepSeek\t~$9\tCheap but limited on hard problems\nPure Qwen3 Coder\t~$2\tCheapest viable for coding agents\nHierarchy (80/15/5)\t~$12\tBest of all worlds\nWith Free tier (85/10/4/1)\t~$8\tAggressive optimization\n\nThe 80/15/5 split:\n\n80% routine tasks on Tier 1 (~$4)\n15% moderate tasks on Tier 2 (~$5)\n5% complex tasks on Tier 3 (~$3)\n\nResult: 6-10x cost reduction vs pure premium, with equivalent quality on complex tasks.\n\nOpenClaw Integration\nSession Model Switching\n# config.yml - set your default session model\nmodel: anthropic/claude-sonnet-4\n\n# Mid-session, switch down for routine work\n/model deepseek/deepseek-chat\n\n# Switch up when you hit a wall\n/model anthropic/claude-opus-4\n\nSpawning 
Sub-Agents\n# Batch routine tasks on cheap models\nsessions_spawn:\n  task: \"Fetch and parse these 50 URLs\"\n  model: deepseek/deepseek-chat\n\n# Use Qwen3 Coder for file-heavy agent work\nsessions_spawn:\n  task: \"Refactor these test files to use the new helper\"\n  model: qwen/qwen3-coder-next\n\n# Free tier for non-critical background jobs\nsessions_spawn:\n  task: \"Check health of all endpoints and log status\"\n  model: openrouter/free\n\nRecommended OpenClaw Defaults\nTask Type\tModel\tWhy\nMain interactive session\tclaude-sonnet-4\tBest balance of quality and cost\nFile ops, fetches, formatting\tdeepseek/deepseek-chat\tCheap, reliable\nAgentic coding sub-tasks\tqwen/qwen3-coder-next\t$0.07/M, 262K context, tool use\nBackground monitoring\topenrouter/free\tZero cost\nStuck / complex debugging\tanthropic/claude-opus-4\tEscalate only when needed\nAnti-Patterns\n\nDON'T:\n\nLeave your session on Opus when the task is clearly routine — /model deepseek exists for a reason\nSpawn sub-agents without specifying a model — they inherit the session model, which is usually Tier 2\nUse Tier 3 for sessions_spawn tasks like file parsing, URL fetching, or status checks\nForget context window limits — spawning a 200K-token task on a 32K model will silently truncate\nRun recurring or scheduled tasks on anything above Tier 1\n\nDO:\n\nSet model: anthropic/claude-sonnet-4 as your config.yml default — good baseline\nAlways set an explicit model field in sessions_spawn — default to deepseek/deepseek-chat or qwen/qwen3-coder-next\n/model switch down the moment you realize the current task is janitorial\n/model switch up the moment you're stuck — don't waste tokens retrying on a weak model\nUse openrouter/free for fire-and-forget background checks\nExtending This Skill\n\nOptimize your switchboard over time:\n\nTrack your actual spend — review your OpenRouter dashboard weekly to see which models are burning tokens\nAdd your own routing signals — if your workflow has domain terms 
(e.g., \"settlement\", \"pricing\", \"vault\"), map them to tiers\nTune the 80/15/5 split — if you find yourself escalating more than 5% of tasks, your classification may be too aggressive\nPin model versions — when a cheap model works well for you, pin the version (e.g., deepseek/deepseek-chat-v3.1) so provider updates don't break your flow\nSet OpenRouter budget alerts — catch runaway premium usage before it compounds"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/gigabit-eth/router",
    "publisherUrl": "https://clawhub.ai/gigabit-eth/router",
    "owner": "gigabit-eth",
    "version": "1.0.2",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/router",
    "downloadUrl": "https://openagent3.xyz/downloads/router",
    "agentUrl": "https://openagent3.xyz/skills/router/agent",
    "manifestUrl": "https://openagent3.xyz/skills/router/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/router/agent.md"
  }
}