{
  "schemaVersion": "1.0",
  "item": {
    "slug": "smart-models",
    "name": "Smart Router",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/samstone908/smart-models",
    "canonicalUrl": "https://clawhub.ai/samstone908/smart-models",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/smart-models",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=smart-models",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SKILL.md",
      "models.json",
      "scripts/call-model.sh",
      "scripts/sync-models.sh"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=smart-models",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=smart-models",
        "contentDisposition": "attachment; filename=\"smart-models-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/smart-models"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/smart-models",
    "agentPageUrl": "https://openagent3.xyz/skills/smart-models/agent",
    "manifestUrl": "https://openagent3.xyz/skills/smart-models/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/smart-models/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Smart Router — Intelligent Model Router",
        "body": "Route tasks to the best model automatically, via any OpenAI-compatible API.\n\nAuthor: whatevername2023@proton.me"
      },
      {
        "title": "Setup",
        "body": "Models and provider are configured in models.json. Set two environment variables:\n\nSMART_ROUTER_BASE_URL — OpenAI-compatible API base URL (e.g. https://api.openai.com/v1)\nSMART_ROUTER_API_KEY — API key for the provider\n\nEdit models.json to customize categories, models, and defaults for your provider."
      },
      {
        "title": "@ Alias Shortcuts",
        "body": "Prefix a message with @alias to skip auto-classification and call a specific model directly.\n\nFormat: @alias your question or prompt here"
      },
      {
        "title": "Alias Table",
        "body": "Alias\tModel ID\tCategory\nVision\t\t\n@gpt4o\tchatgpt-4o-latest\tvision\n@qwen-vl\tqwen3-vl-235b-a22b-instruct\tvision\n@qwen-vl-max\tqwen-vl-max-2025-08-13\tvision\n@llama-vl\tllama-3.2-90b-vision-instruct\tvision\n@qwen-vl-32b\tqwen3-vl-32b-instruct\tvision\nImage Gen\t\t\n@imagen\tgoogle/imagen-4-ultra\timage_gen\n@flux\tblack-forest-labs/flux-1.1-pro-ultra\timage_gen\n@flux-kontext\tblack-forest-labs/flux-kontext-max\timage_gen\n@dalle\tdall-e-3\timage_gen\n@flux2\tflux-2-pro\timage_gen\nVideo Gen\t\t\n@sora\tsora-2-pro-all\tvideo_gen\n@veo\tveo3.1-pro-4k\tvideo_gen\n@vidu\tviduq3-pro\tvideo_gen\n@kling\tkling-video\tvideo_gen\n@runway\trunwayml-gen4_turbo-10\tvideo_gen\nAudio\t\t\n@suno\tsuno_music\taudio\n@tts\tgemini-2.5-pro-preview-tts\taudio\n@tts-hd\ttts-1-hd\taudio\n@kling-audio\tkling-audio\taudio\n@vidu-tts\tvidu-tts\taudio\nReasoning\t\t\n@o3\to3\treasoning\n@o3-pro\to3-pro\treasoning\n@o4-mini\to4-mini\treasoning\n@deepseek\tdeepseek-r1\treasoning\n@gemini-think\tgemini-2.5-pro-thinking\treasoning\n@claude-think\tclaude-sonnet-4-5-20250929-thinking\treasoning\nCode\t\t\n@claude\tclaude-opus-4-6\tcode\n@codex\tgpt-5.1-codex-max\tcode\n@claude-sonnet\tclaude-sonnet-4-6\tcode\n@qwen-coder\tqwen3-coder-480b-a35b-instruct\tcode\n@qwen-coder-plus\tqwen3-coder-plus\tcode\n@gpt4t\tgpt-4-turbo\tcode\nGeneral\t\t\n@gpt52 / @gpt5\tgpt-5.2-chat-latest\tgeneral\n@gemini\tgemini-2.5-pro\tgeneral\n@deepseekv3\tdeepseek-v3.2\tgeneral\n@qwen\tqwen3-max\tgeneral\n@claude-chat\tclaude-opus-4-6\tgeneral\n\nAliases are case-insensitive. If no alias matches, attempt a fuzzy match on model name/ID. If still no match, prompt the user."
      },
      {
        "title": "Auto-Classification Rules",
        "body": "When no @alias is specified, classify the task automatically:\n\nCategory\tTrigger\nvision\tUser sends image/URL, asks to analyze, describe, OCR, understand image content\nimage_gen\tRequests to draw, generate image, design poster, create illustration\nvideo_gen\tRequests to generate video, animation, text-to-video, image-to-video\naudio\tRequests for music generation, TTS, sound effects\nreasoning\tComplex math, logic puzzles, proofs, deep analysis, long-chain reasoning\ncode\tCode generation, debugging, refactoring, review (when external model needed)\ngeneral\tEveryday chat, translation, summarization, writing, Q&A"
      },
      {
        "title": "1. Read Model Config",
        "body": "cat \"$(dirname \"$0\")/../models.json\""
      },
      {
        "title": "2. Select Model",
        "body": "Determine category based on classification rules above\nUse the first model with \"default\": true in each category\nIf user specifies a model via @alias, use that model directly\nFor cost-sensitive tasks, pick a smaller model in the same category"
      },
      {
        "title": "3. Call Model",
        "body": "Chat (vision / reasoning / code / general)\n\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"user request\" --type chat\n\nWith image (vision):\n\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"request\" --type chat --image \"IMAGE_URL\"\n\nImage Generation\n\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"image description\" --type image\n\nAsync Tasks (video / audio)\n\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"task description\" --type async\n\nTTS\n\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"text to speak\" --type tts --voice alloy"
      },
      {
        "title": "4. Return Results",
        "body": "Chat: return the model's text reply directly\nImage: return the generated image URL in markdown format\nVideo/Audio: return task status and result URL"
      },
      {
        "title": "Model Recommendations",
        "body": "Vision: qwen3-vl-235b-a22b-instruct (strongest visual understanding)\nImage gen: google/imagen-4-ultra (highest quality)\nVideo: sora-2-pro-all (best results)\nMusic: suno_music / TTS: tts-1-hd or gemini-2.5-pro-preview-tts\nReasoning: o3 (strongest reasoning)\nCode: gpt-5.1-codex-max\nGeneral: claude-opus-4-6"
      },
      {
        "title": "Fallback",
        "body": "If a model call fails, automatically fall back to the next model in the same category."
      },
      {
        "title": "Customization",
        "body": "Edit models.json to:\n\nAdd/remove models in any category\nChange default models\nAdd new categories\nUpdate aliases in SKILL.md to match\n\nThe scripts/sync-models.sh script lists all available models from your provider to help discover new ones."
      }
    ],
    "body": "Smart Router — Intelligent Model Router\n\nRoute tasks to the best model automatically, via any OpenAI-compatible API.\n\nAuthor: whatevername2023@proton.me\n\nSetup\n\nModels and provider are configured in models.json. Set two environment variables:\n\nSMART_ROUTER_BASE_URL — OpenAI-compatible API base URL (e.g. https://api.openai.com/v1)\nSMART_ROUTER_API_KEY — API key for the provider\n\nEdit models.json to customize categories, models, and defaults for your provider.\n\n@ Alias Shortcuts\n\nPrefix a message with @alias to skip auto-classification and call a specific model directly.\n\nFormat: @alias your question or prompt here\n\nAlias Table\nAlias\tModel ID\tCategory\nVision\t\t\n@gpt4o\tchatgpt-4o-latest\tvision\n@qwen-vl\tqwen3-vl-235b-a22b-instruct\tvision\n@qwen-vl-max\tqwen-vl-max-2025-08-13\tvision\n@llama-vl\tllama-3.2-90b-vision-instruct\tvision\n@qwen-vl-32b\tqwen3-vl-32b-instruct\tvision\nImage Gen\t\t\n@imagen\tgoogle/imagen-4-ultra\timage_gen\n@flux\tblack-forest-labs/flux-1.1-pro-ultra\timage_gen\n@flux-kontext\tblack-forest-labs/flux-kontext-max\timage_gen\n@dalle\tdall-e-3\timage_gen\n@flux2\tflux-2-pro\timage_gen\nVideo Gen\t\t\n@sora\tsora-2-pro-all\tvideo_gen\n@veo\tveo3.1-pro-4k\tvideo_gen\n@vidu\tviduq3-pro\tvideo_gen\n@kling\tkling-video\tvideo_gen\n@runway\trunwayml-gen4_turbo-10\tvideo_gen\nAudio\t\t\n@suno\tsuno_music\taudio\n@tts\tgemini-2.5-pro-preview-tts\taudio\n@tts-hd\ttts-1-hd\taudio\n@kling-audio\tkling-audio\taudio\n@vidu-tts\tvidu-tts\taudio\nReasoning\t\t\n@o3\to3\treasoning\n@o3-pro\to3-pro\treasoning\n@o4-mini\to4-mini\treasoning\n@deepseek\tdeepseek-r1\treasoning\n@gemini-think\tgemini-2.5-pro-thinking\treasoning\n@claude-think\tclaude-sonnet-4-5-20250929-thinking\treasoning\nCode\t\t\n@claude\tclaude-opus-4-6\tcode\n@codex\tgpt-5.1-codex-max\tcode\n@claude-sonnet\tclaude-sonnet-4-6\tcode\n@qwen-coder\tqwen3-coder-480b-a35b-instruct\tcode\n@qwen-coder-plus\tqwen3-coder-plus\tcode\n@gpt4t\tgpt-4-turbo\tcode\nGeneral\t\t\n@gpt52 / @gpt5\tgpt-5.2-chat-latest\tgeneral\n@gemini\tgemini-2.5-pro\tgeneral\n@deepseekv3\tdeepseek-v3.2\tgeneral\n@qwen\tqwen3-max\tgeneral\n@claude-chat\tclaude-opus-4-6\tgeneral\n\nAliases are case-insensitive. If no alias matches, attempt a fuzzy match on model name/ID. If still no match, prompt the user.\n\nAuto-Classification Rules\n\nWhen no @alias is specified, classify the task automatically:\n\nCategory\tTrigger\nvision\tUser sends image/URL, asks to analyze, describe, OCR, understand image content\nimage_gen\tRequests to draw, generate image, design poster, create illustration\nvideo_gen\tRequests to generate video, animation, text-to-video, image-to-video\naudio\tRequests for music generation, TTS, sound effects\nreasoning\tComplex math, logic puzzles, proofs, deep analysis, long-chain reasoning\ncode\tCode generation, debugging, refactoring, review (when external model needed)\ngeneral\tEveryday chat, translation, summarization, writing, Q&A\n\nUsage\n\n1. Read Model Config\ncat \"$(dirname \"$0\")/../models.json\"\n\n2. Select Model\nDetermine category based on the classification rules above\nUse the first model with \"default\": true in each category\nIf the user specifies a model via @alias, use that model directly\nFor cost-sensitive tasks, pick a smaller model in the same category\n\n3. Call Model\nChat (vision / reasoning / code / general)\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"user request\" --type chat\n\nWith image (vision):\n\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"request\" --type chat --image \"IMAGE_URL\"\n\nImage Generation\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"image description\" --type image\n\nAsync Tasks (video / audio)\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"task description\" --type async\n\nTTS\nscripts/call-model.sh --model \"MODEL_ID\" --prompt \"text to speak\" --type tts --voice alloy\n\n4. Return Results\nChat: return the model's text reply directly\nImage: return the generated image URL in markdown format\nVideo/Audio: return task status and result URL\n\nModel Recommendations\nVision: qwen3-vl-235b-a22b-instruct (strongest visual understanding)\nImage gen: google/imagen-4-ultra (highest quality)\nVideo: sora-2-pro-all (best results)\nMusic: suno_music / TTS: tts-1-hd or gemini-2.5-pro-preview-tts\nReasoning: o3 (strongest reasoning)\nCode: gpt-5.1-codex-max\nGeneral: claude-opus-4-6\n\nFallback\n\nIf a model call fails, automatically fall back to the next model in the same category.\n\nCustomization\n\nEdit models.json to:\n\nAdd/remove models in any category\nChange default models\nAdd new categories\nUpdate aliases in SKILL.md to match\n\nThe scripts/sync-models.sh script lists all available models from your provider to help discover new ones."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/samstone908/smart-models",
    "publisherUrl": "https://clawhub.ai/samstone908/smart-models",
    "owner": "samstone908",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/smart-models",
    "downloadUrl": "https://openagent3.xyz/downloads/smart-models",
    "agentUrl": "https://openagent3.xyz/skills/smart-models/agent",
    "manifestUrl": "https://openagent3.xyz/skills/smart-models/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/smart-models/agent.md"
  }
}