Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
A comprehensive AI model routing system that automatically selects the optimal model for any task. Set up multiple AI providers (Anthropic, OpenAI, Gemini, Moonshot, Z.ai, GLM) with secure API key storage, then route tasks to the best model based on task type, complexity, and cost optimization. Includes interactive setup wizard, task classification, and cost-effective delegation patterns. Use when you need "use X model for this", "switch model", "optimal model", "which model should I use", or to balance quality vs cost across multiple AI providers.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Intelligent AI model routing across multiple providers for optimal cost-performance balance. Automatically select the best model for any task based on complexity, type, and your preferences. Support for 6 major AI providers with secure API key management and interactive configuration.
- Analyzes tasks and classifies them by type (coding, research, creative, simple, etc.)
- Routes to optimal models from your configured providers
- Optimizes costs by using cheaper models for simple tasks
- Secures API keys with file permissions (600) and isolated storage
- Provides recommendations with confidence scoring and reasoning
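The classification step can be pictured as keyword matching with a score per task type. This is a hypothetical sketch, not the actual logic of `classify_task.py` — the keyword sets and the `classify` helper below are illustrative assumptions.

```python
# Hypothetical keyword sets per task type; the real classifier's
# vocabulary and scoring may differ.
KEYWORDS = {
    "coding":   {"build", "debug", "refactor", "implement", "system", "bug"},
    "research": {"analyze", "compare", "investigate", "summarize"},
    "creative": {"write", "story", "poem", "brainstorm"},
    "simple":   {"what", "when", "define", "convert"},
}

def classify(task: str) -> tuple[str, int]:
    """Return (task_type, matched_keyword_count) for the best match."""
    words = set(task.lower().split())
    scores = {t: len(words & kw) for t, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to "simple" when nothing matches at all.
    return (best, scores[best]) if scores[best] else ("simple", 0)

print(classify("Build a React authentication system"))  # → ('coding', 2)
```

Counting matched keywords is also a natural source for the confidence figure and the "Matched 2 keywords" reasoning shown in the classifier's example output.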
```bash
cd skills/model-router
python3 scripts/setup-wizard.py
```

The wizard will guide you through:
- Provider setup - add your API keys (Anthropic, OpenAI, Gemini, etc.)
- Task mappings - choose which model for each task type
- Preferences - set cost optimization level
```bash
# Get model recommendation for a task
python3 scripts/classify_task.py "Build a React authentication system"

# Output:
# Recommended Model: claude-sonnet
# Confidence: 85%
# Cost Level: medium
# Reasoning: Matched 2 keywords: build, system
```
```bash
# Spawn with recommended model
sessions_spawn --task "Debug this memory leak" --model claude-sonnet

# Use aliases for quick access
sessions_spawn --task "What's the weather?" --model haiku
```
| Provider | Models | Best For | Key Format |
|---|---|---|---|
| Anthropic | claude-opus-4-5, claude-sonnet-4-5, claude-haiku-4-5 | Coding, reasoning, creative | `sk-ant-...` |
| OpenAI | gpt-4o, gpt-4o-mini, o1-mini, o1-preview | Tools, deep reasoning | `sk-proj-...` |
| Gemini | gemini-2.0-flash, gemini-1.5-pro, gemini-1.5-flash | Multimodal, huge context (2M) | `AIza...` |
| Moonshot | moonshot-v1-8k/32k/128k | Chinese language | `sk-...` |
| Z.ai | glm-4.5-air, glm-4.7 | Cheapest, fast | Various |
| GLM | glm-4-flash, glm-4-plus, glm-4-0520 | Chinese, coding | `ID.secret` |
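The key formats above make a cheap sanity check possible before any network call. A minimal sketch, assuming only the prefixes listed in the table (a real validator would also check length and verify the key against the provider's API):

```python
# Known key prefixes, taken from the provider table; "Various" and
# "ID.secret" formats are omitted since a simple prefix test
# doesn't apply to them.
KEY_PREFIXES = {
    "anthropic": "sk-ant-",
    "openai": "sk-proj-",
    "gemini": "AIza",
}

def looks_valid(provider: str, key: str) -> bool:
    """Return True if the key matches the provider's known prefix."""
    prefix = KEY_PREFIXES.get(provider)
    return bool(prefix) and key.startswith(prefix)

print(looks_valid("anthropic", "sk-ant-abc123"))  # True
print(looks_valid("openai", "sk-ant-abc123"))     # False
```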
Default routing (customizable via wizard):

| Task Type | Default Model | Why |
|---|---|---|
| simple | glm-4.5-air | Fastest, cheapest for quick queries |
| coding | claude-sonnet-4-5 | Excellent code understanding |
| research | claude-sonnet-4-5 | Balanced depth and speed |
| creative | claude-opus-4-5 | Maximum creativity |
| math | o1-mini | Specialized reasoning |
| vision | gemini-1.5-flash | Fast multimodal |
| chinese | glm-4.7 | Optimized for Chinese |
| long_context | gemini-1.5-pro | Up to 2M tokens |
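Conceptually, routing is a lookup from task type to model in the configured mappings. A minimal sketch, assuming a `task_mappings` object shaped like the one in this skill's `config.json` (the inlined config and `route` helper here are illustrative):

```python
import json

# Illustrative config fragment matching the task_mappings shape
# used by this skill's config.json.
CONFIG = json.loads("""
{
  "task_mappings": {
    "simple": "glm-4.5-air",
    "coding": "claude-sonnet-4-5",
    "research": "claude-sonnet-4-5",
    "creative": "claude-opus-4-5"
  }
}
""")

def route(task_type: str, default: str = "claude-sonnet-4-5") -> str:
    # Unknown task types fall back to a general-purpose default model.
    return CONFIG["task_mappings"].get(task_type, default)

print(route("coding"))        # claude-sonnet-4-5
print(route("simple"))        # glm-4.5-air
print(route("unknown-type"))  # falls back to the default
```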
Aggressive - always uses the cheapest capable model:
- Simple → glm-4.5-air (~10% cost)
- Coding → claude-haiku-4-5 (~25% cost)
- Research → claude-sonnet-4-5 (~50% cost)

Savings: 50-90% compared to always using premium models
Balanced - considers cost vs quality:
- Simple tasks → cheap models
- Critical tasks → premium models
- Automatic escalation if the cheap model fails
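The automatic-escalation step can be sketched as a try-cheap-then-retry wrapper. This is a hypothetical illustration: `run_model` is an assumed callable that raises on failure or timeout, while the real flow delegates through `sessions_spawn`.

```python
def run_with_escalation(task, run_model,
                        cheap="glm-4.5-air", premium="claude-opus-4-5"):
    """Try the cheap model first; escalate to premium if it fails."""
    try:
        return cheap, run_model(cheap, task)
    except Exception:
        # Cheap model failed or timed out: retry once on the premium model.
        return premium, run_model(premium, task)

# Demo with a stub runner that makes the cheap model time out.
def stub(model, task):
    if model == "glm-4.5-air":
        raise TimeoutError("cheap model timed out")
    return f"{model} handled: {task}"

print(run_with_escalation("Fix this bug", stub))
```

Returning the model name alongside the result makes it easy to log which tier actually handled each task when tracking costs.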
Quality - always uses the best model regardless of cost.
```
~/.model-router/
├── config.json   # Model mappings (chmod 600)
└── .api-keys     # API keys (chmod 600)
```

Features:
- File permissions restricted to owner (600)
- Isolated from version control
- Encrypted at rest (via OS filesystem encryption)
- Never logged or printed
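The owner-only (chmod 600) policy can be enforced at file-creation time rather than with a separate `chmod`. A minimal sketch on a throwaway temp path; the `write_secret` helper is illustrative, not the wizard's actual code.

```python
import os
import stat
import tempfile

def write_secret(path: str, content: str) -> None:
    # Setting the mode via os.open at creation time avoids a window
    # where the file briefly exists with default (wider) permissions.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(content)

# Demo on a temp directory; the real files live under ~/.model-router/.
path = os.path.join(tempfile.mkdtemp(), ".api-keys")
write_secret(path, "ANTHROPIC_API_KEY=sk-ant-...\n")
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600 under a typical umask
```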
- Never commit `.api-keys` to version control
- Use environment variables for production deployments
- Rotate keys regularly via the wizard
- Audit access with `ls -la ~/.model-router/`
```bash
# Classify task first
python3 scripts/classify_task.py "Extract prices from this CSV"
# Result: simple task → use glm-4.5-air
sessions_spawn --task "Extract prices" --model glm-4.5-air

# Then analyze with a better model if needed
sessions_spawn --task "Analyze price trends" --model claude-sonnet
```
```bash
# Try cheap model first (60s timeout)
sessions_spawn --task "Fix this bug" --model glm-4.5-air --runTimeoutSeconds 60

# If it fails, escalate to premium
sessions_spawn --task "Fix complex architecture bug" --model claude-opus
```
```bash
# Batch simple tasks in parallel with a cheap model
sessions_spawn --task "Summarize doc A" --model glm-4.5-air &
sessions_spawn --task "Summarize doc B" --model glm-4.5-air &
sessions_spawn --task "Summarize doc C" --model glm-4.5-air &
wait
```
```bash
# Vision task with 2M token context
sessions_spawn --task "Analyze these 100 images" --model gemini-1.5-pro
```
```json
{
  "version": "1.1.0",
  "providers": {
    "anthropic": {
      "configured": true,
      "models": ["claude-opus-4-5", "claude-sonnet-4-5", "claude-haiku-4-5"]
    },
    "openai": {
      "configured": true,
      "models": ["gpt-4o", "gpt-4o-mini", "o1-mini", "o1-preview"]
    }
  },
  "task_mappings": {
    "simple": "glm-4.5-air",
    "coding": "claude-sonnet-4-5",
    "research": "claude-sonnet-4-5",
    "creative": "claude-opus-4-5"
  },
  "preferences": {
    "cost_optimization": "balanced",
    "default_provider": "anthropic"
  }
}
```
```bash
# Generated by setup wizard - DO NOT edit manually
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-proj-...
GEMINI_API_KEY=AIza...
```
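The `.api-keys` format above is plain `KEY=value` lines with `#` comments, so loading it is straightforward. A minimal sketch, assuming only that format; the `load_keys` helper is illustrative, not the wizard's actual loader.

```python
def load_keys(text: str) -> dict[str, str]:
    """Parse KEY=value lines, skipping blanks and '#' comments."""
    keys = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        keys[name.strip()] = value.strip()
    return keys

sample = "# Generated by setup wizard\nANTHROPIC_API_KEY=sk-ant-abc\n"
print(load_keys(sample))  # {'ANTHROPIC_API_KEY': 'sk-ant-abc'}
```

The parsed dict can then be pushed into `os.environ` so provider SDKs pick the keys up without the file ever being committed or logged.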
- ✅ Interactive setup wizard for guided configuration
- ✅ Secure API key storage with file permissions
- ✅ Task-to-model mapping customization
- ✅ Multi-provider support (6 providers)
- ✅ Cost optimization levels (aggressive/balanced/quality)
- ✅ Better task classification with confidence scores
- ✅ Provider-specific model recommendations
- ✅ Enhanced security with isolated storage
- ✅ Comprehensive documentation
Run the setup wizard to reconfigure: `python3 scripts/setup-wizard.py`
```bash
python3 scripts/setup-wizard.py
```

Interactive configuration of providers, mappings, and preferences.
```bash
python3 scripts/classify_task.py "your task description"
python3 scripts/classify_task.py "your task" --format json
```

Get a model recommendation with reasoning.
```bash
python3 scripts/setup-wizard.py --list
```

Show all available models and their status.
| Skill | Integration |
|---|---|
| model-usage | Track cost per provider to optimize routing |
| sessions_spawn | Primary tool for model delegation |
| session_status | Check current model and usage |
1. Start simple - try cheap models first
2. Batch tasks - combine multiple simple tasks
3. Use cleanup - delete sessions after one-off tasks
4. Set timeouts - prevent runaway sub-agents
5. Monitor usage - track costs per provider
- Run the setup wizard to configure providers
- Check that API keys are valid
- Verify permissions on the `.api-keys` file
```bash
pip3 install -r requirements.txt  # if needed
```
- Customize task mappings via the wizard
- Use an explicit model with `sessions_spawn --model`
- Adjust the cost optimization preference
- Provider Docs: Anthropic, OpenAI, Gemini, Moonshot, Z.ai, GLM
- Setup: run `python3 scripts/setup-wizard.py`
- Support: check the `references/` folder for detailed guides