Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Intelligent multi-model router — automatically selects the best AI model based on task type (vision, image generation, video generation, audio, reasoning, co...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Route tasks to the best model automatically, via any OpenAI-compatible API. Author: whatevername2023@proton.me
Models and provider are configured in models.json. Set two environment variables:

- SMART_ROUTER_BASE_URL — OpenAI-compatible API base URL (e.g. https://api.openai.com/v1)
- SMART_ROUTER_API_KEY — API key for the provider

Edit models.json to customize categories, models, and defaults for your provider.
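For example, pointing the router at OpenAI's hosted API (both values below are placeholders — substitute your own provider URL and key):

```shell
export SMART_ROUTER_BASE_URL="https://api.openai.com/v1"
export SMART_ROUTER_API_KEY="sk-your-key-here"
```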
Prefix a message with @alias to skip auto-classification and call a specific model directly. Format: @alias your question or prompt here
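Splitting the alias prefix from the rest of the message takes only plain parameter expansion. This is an illustrative sketch, not the skill's actual parsing code:

```shell
msg="@o3 prove that the square root of 2 is irrational"

# If the message starts with "@", treat the first word as the alias.
case "$msg" in
  @*) alias_tag="${msg%% *}"; prompt="${msg#* }" ;;
  *)  alias_tag="";           prompt="$msg" ;;
esac

echo "alias:  $alias_tag"   # alias:  @o3
echo "prompt: $prompt"
```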
| Alias | Model ID | Category |
|---|---|---|
| @gpt4o | chatgpt-4o-latest | vision |
| @qwen-vl | qwen3-vl-235b-a22b-instruct | vision |
| @qwen-vl-max | qwen-vl-max-2025-08-13 | vision |
| @llama-vl | llama-3.2-90b-vision-instruct | vision |
| @qwen-vl-32b | qwen3-vl-32b-instruct | vision |
| @imagen | google/imagen-4-ultra | image_gen |
| @flux | black-forest-labs/flux-1.1-pro-ultra | image_gen |
| @flux-kontext | black-forest-labs/flux-kontext-max | image_gen |
| @dalle | dall-e-3 | image_gen |
| @flux2 | flux-2-pro | image_gen |
| @sora | sora-2-pro-all | video_gen |
| @veo | veo3.1-pro-4k | video_gen |
| @vidu | viduq3-pro | video_gen |
| @kling | kling-video | video_gen |
| @runway | runwayml-gen4_turbo-10 | video_gen |
| @suno | suno_music | audio |
| @tts | gemini-2.5-pro-preview-tts | audio |
| @tts-hd | tts-1-hd | audio |
| @kling-audio | kling-audio | audio |
| @vidu-tts | vidu-tts | audio |
| @o3 | o3 | reasoning |
| @o3-pro | o3-pro | reasoning |
| @o4-mini | o4-mini | reasoning |
| @deepseek | deepseek-r1 | reasoning |
| @gemini-think | gemini-2.5-pro-thinking | reasoning |
| @claude-think | claude-sonnet-4-5-20250929-thinking | reasoning |
| @claude | claude-opus-4-6 | code |
| @codex | gpt-5.1-codex-max | code |
| @claude-sonnet | claude-sonnet-4-6 | code |
| @qwen-coder | qwen3-coder-480b-a35b-instruct | code |
| @qwen-coder-plus | qwen3-coder-plus | code |
| @gpt4t | gpt-4-turbo | code |
| @gpt52 / @gpt5 | gpt-5.2-chat-latest | general |
| @gemini | gemini-2.5-pro | general |
| @deepseekv3 | deepseek-v3.2 | general |
| @qwen | qwen3-max | general |
| @claude-chat | claude-opus-4-6 | general |

Aliases are case-insensitive. If no alias matches, attempt a fuzzy match on the model name/ID. If still no match, prompt the user.
When no @alias is specified, classify the task automatically:

| Category | Trigger |
|---|---|
| vision | User sends an image/URL; asks to analyze, describe, OCR, or understand image content |
| image_gen | Requests to draw, generate an image, design a poster, create an illustration |
| video_gen | Requests to generate video, animation, text-to-video, image-to-video |
| audio | Requests for music generation, TTS, sound effects |
| reasoning | Complex math, logic puzzles, proofs, deep analysis, long-chain reasoning |
| code | Code generation, debugging, refactoring, review (when an external model is needed) |
| general | Everyday chat, translation, summarization, writing, Q&A |
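In the skill itself, the agent applies these triggers directly. As a toy illustration of the mapping only — the keywords below are simplified assumptions, not the real classifier:

```shell
# Toy keyword classifier mirroring the trigger table; real routing is
# done by the agent, not by pattern matching.
classify() {
  case "$1" in
    *video*|*animation*)            echo video_gen ;;
    *draw*|*poster*|*illustration*) echo image_gen ;;
    *music*|*TTS*|*"sound effect"*) echo audio ;;
    *proof*|*puzzle*)               echo reasoning ;;
    *debug*|*refactor*)             echo code ;;
    *)                              echo general ;;
  esac
}

classify "design a poster for the launch"   # prints: image_gen
classify "translate this paragraph"         # prints: general
```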
```shell
cat "$(dirname "$0")/../models.json"
```
- Determine the category based on the classification rules above
- Use the first model with "default": true in each category
- If the user specifies a model via @alias, use that model directly
- For cost-sensitive tasks, pick a smaller model in the same category
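Assuming jq is available and a models.json shaped roughly like the snippet below (the real schema ships with the package and may differ), the default-model rule can be sketched as:

```shell
# Illustrative stand-in for the packaged models.json (schema assumed).
cat > /tmp/models-example.json <<'EOF'
{
  "reasoning": [
    { "id": "o3", "default": true },
    { "id": "deepseek-r1" }
  ],
  "code": [
    { "id": "gpt-5.1-codex-max", "default": true }
  ]
}
EOF

# First model with "default": true in the requested category:
jq -r --arg cat reasoning \
  '.[$cat][] | select(.default == true) | .id' /tmp/models-example.json
# prints: o3
```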
Chat (vision / reasoning / code / general):

```shell
scripts/call-model.sh --model "MODEL_ID" --prompt "user request" --type chat
```

With image (vision):

```shell
scripts/call-model.sh --model "MODEL_ID" --prompt "request" --type chat --image "IMAGE_URL"
```

Image Generation:

```shell
scripts/call-model.sh --model "MODEL_ID" --prompt "image description" --type image
```

Async Tasks (video / audio):

```shell
scripts/call-model.sh --model "MODEL_ID" --prompt "task description" --type async
```

TTS:

```shell
scripts/call-model.sh --model "MODEL_ID" --prompt "text to speak" --type tts --voice alloy
```
- Chat: return the model's text reply directly
- Image: return the generated image URL in markdown format
- Video/Audio: return task status and result URL
- Vision: qwen3-vl-235b-a22b-instruct (strongest visual understanding)
- Image gen: google/imagen-4-ultra (highest quality)
- Video: sora-2-pro-all (best results)
- Music: suno_music / TTS: tts-1-hd or gemini-2.5-pro-preview-tts
- Reasoning: o3 (strongest reasoning)
- Code: gpt-5.1-codex-max
- General: claude-opus-4-6
If a model call fails, automatically fall back to the next model in the same category.
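The fallback rule amounts to trying each model in category order until one succeeds. A minimal sketch, with a stand-in call_model function used in place of the package's scripts/call-model.sh so the loop is runnable:

```shell
# Stand-in for scripts/call-model.sh; here only deepseek-r1 "succeeds",
# so the fallback path is visible.
call_model() { [ "$1" = "deepseek-r1" ]; }

selected=""
for model in o3 o3-pro deepseek-r1; do
  if call_model "$model"; then
    selected="$model"
    break
  fi
  echo "model $model failed, trying next" >&2
done
echo "using: $selected"   # prints: using: deepseek-r1
```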
Edit models.json to:
- Add/remove models in any category
- Change default models
- Add new categories
- Update aliases in SKILL.md to match

The scripts/sync-models.sh script lists all available models from your provider to help discover new ones.