Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Route OpenClaw chats to top Chinese LLMs with smart model selection, auto-fallback, cost tracking, and unified OpenAI-compatible API access.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install: "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."

Upgrade: "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
Route your OpenClaw conversations to the best Chinese AI models - no config headaches, just pick and chat.
Gives your OpenClaw instant access to all major Chinese LLMs through a single unified interface:
- DeepSeek (V3.2 / R1) - Best open-source reasoning, dirt cheap
- Qwen (Qwen3-Max / Qwen3-Max-Thinking / Qwen3-Coder-Plus) - Alibaba's flagship, strong all-rounder
- GLM (GLM-5 / GLM-4.7) - Zhipu AI, top-tier coding & agent tasks
- Kimi (K2.5 / K2.5-Thinking) - Moonshot AI, great for long context & vision
- Doubao Seed 2.0 (Pro / Lite / Mini) - ByteDance, fast & cheap
- MiniMax (M2.5) - Lightweight powerhouse, runs locally too
- Step (3.5 Flash) - StepFun, blazing fast inference
- Baichuan (Baichuan4-Turbo) - Strong Chinese language understanding
- Spark (v4.0 Ultra) - iFlytek, speech & Chinese NLP specialist
- Hunyuan (Turbo-S) - Tencent, WeChat ecosystem integration
Tell your OpenClaw: "Use DeepSeek V3.2 for this conversation". Or ask it to pick the best model: "Which Chinese model is best for coding? Switch to it."
| Command | What it does |
| --- | --- |
| `list models` | Show all available Chinese LLMs with status |
| `use <model>` | Switch to a specific model |
| `compare <models>` | Compare capabilities & pricing |
| `recommend <task>` | Get model recommendation for a task type |
| `test <model>` | Send a test prompt to verify connectivity |
| `status` | Check which models are currently accessible |
| Task | Recommended Model | Why |
| --- | --- | --- |
| General chat | Qwen3-Max | Best all-rounder, strong Chinese |
| Coding | GLM-5 / Kimi K2.5 | Top coding benchmarks |
| Math & reasoning | DeepSeek R1 | Purpose-built for reasoning |
| Long documents | Kimi K2.5 (128K) / DeepSeek V3.2 (1M) | Massive context windows |
| Fast & cheap | Step 3.5 Flash / Doubao Seed 2.0 Mini | Sub-second latency |
| Creative writing | Qwen3-Max / Doubao Seed 2.0 Pro | Rich Chinese expression |
| Agent tasks | GLM-5 / Qwen3-Max | Best tool-use support |
The skill reads API keys from the environment or from `~/.chinese-llm-router/config.json`:

```json
{
  "providers": {
    "deepseek": {
      "apiKey": "sk-xxx",
      "baseUrl": "https://api.deepseek.com/v1",
      "models": ["deepseek-chat", "deepseek-reasoner"]
    },
    "qwen": {
      "apiKey": "sk-xxx",
      "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "models": ["qwen3-max", "qwen3-max-thinking", "qwen3-coder-plus"]
    },
    "glm": {
      "apiKey": "xxx.xxx",
      "baseUrl": "https://open.bigmodel.cn/api/paas/v4",
      "models": ["glm-5", "glm-4-plus"]
    },
    "kimi": {
      "apiKey": "sk-xxx",
      "baseUrl": "https://api.moonshot.cn/v1",
      "models": ["kimi-k2.5", "kimi-k2.5-thinking"]
    },
    "doubao": {
      "apiKey": "xxx",
      "baseUrl": "https://ark.cn-beijing.volces.com/api/v3",
      "models": ["doubao-seed-2.0-pro", "doubao-seed-2.0-lite", "doubao-seed-2.0-mini"]
    },
    "minimax": {
      "apiKey": "xxx",
      "baseUrl": "https://api.minimax.chat/v1",
      "models": ["minimax-m2.5"]
    },
    "step": {
      "apiKey": "xxx",
      "baseUrl": "https://api.stepfun.com/v1",
      "models": ["step-3.5-flash"]
    },
    "baichuan": {
      "apiKey": "xxx",
      "baseUrl": "https://api.baichuan-ai.com/v1",
      "models": ["baichuan4-turbo"]
    },
    "spark": {
      "apiKey": "xxx",
      "baseUrl": "https://spark-api-open.xf-yun.com/v1",
      "models": ["spark-v4.0-ultra"]
    },
    "hunyuan": {
      "apiKey": "xxx",
      "baseUrl": "https://api.hunyuan.cloud.tencent.com/v1",
      "models": ["hunyuan-turbo-s"]
    }
  },
  "default": "qwen3-max",
  "fallback": ["deepseek-chat", "doubao-seed-2.0-pro"]
}
```
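The "environment or config file" precedence can be sketched in a few lines of Node. This is an illustration, not the skill's actual code, and the environment variable naming (`DEEPSEEK_API_KEY` etc.) is an assumption - check SKILL.md for the exact names the skill uses.

```javascript
// Sketch: resolve a provider's API key, preferring an environment variable
// over the value in ~/.chinese-llm-router/config.json.
// The env var naming convention (<PROVIDER>_API_KEY) is an assumption.
function resolveApiKey(provider, config) {
  const envVar = `${provider.toUpperCase()}_API_KEY`; // e.g. DEEPSEEK_API_KEY
  if (process.env[envVar]) return process.env[envVar];
  const entry = config.providers && config.providers[provider];
  return entry ? entry.apiKey : undefined; // fall back to the config file
}
```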
Get API keys from the providers you want (most offer free tiers):
- DeepSeek: https://platform.deepseek.com
- Qwen (Alibaba): https://dashscope.console.aliyun.com
- GLM (Zhipu): https://open.bigmodel.cn
- Kimi (Moonshot): https://platform.moonshot.cn
- Doubao (ByteDance): https://console.volcengine.com/ark
- MiniMax: https://platform.minimaxi.com
- Step (StepFun): https://platform.stepfun.com
- Baichuan: https://platform.baichuan-ai.com
- Spark (iFlytek): https://console.xfyun.cn
- Hunyuan (Tencent): https://cloud.tencent.com/product/hunyuan

Then run the setup script: `node scripts/setup.js`

Done! Your OpenClaw can now use any configured model.
| Model | Input (¥/M tokens) | Output (¥/M tokens) | Notes |
| --- | --- | --- | --- |
| DeepSeek V3.2 | ¥0.5 (cache ¥0.1) | ¥2.0 | Cheapest flagship |
| Qwen3-Max | ¥2.0 | ¥6.0 | Free tier available |
| GLM-5 | ¥5.0 | ¥5.0 | Just launched, may change |
| Kimi K2.5 | ¥2.0 | ¥6.0 | Open source, self-host free |
| Doubao Seed 2.0 Pro | ¥0.8 | ¥2.0 | ByteDance subsidy |
| Doubao Seed 2.0 Mini | ¥0.15 | ¥0.3 | Ultra cheap |
| MiniMax M2.5 | ¥1.0 | ¥3.0 | Can run locally |
| Step 3.5 Flash | ¥0.7 | ¥1.4 | Fastest inference |

Prices as of Feb 2026. All providers offer free tiers or credits for new users.
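To make the per-million-token pricing concrete, here is a back-of-envelope cost estimator using two rows from the table above. The `rates` object is an illustrative subset, not part of the skill; the skill's own cost tracking may use different bookkeeping.

```javascript
// Illustrative subset of the pricing table (¥ per million tokens, Feb 2026).
const rates = {
  "deepseek-v3.2": { input: 0.5, output: 2.0 },
  "doubao-seed-2.0-mini": { input: 0.15, output: 0.3 },
};

// Estimate cost in ¥ for a given token usage on a model.
function estimateCost(model, inputTokens, outputTokens) {
  const r = rates[model];
  return (inputTokens * r.input + outputTokens * r.output) / 1_000_000;
}

// e.g. 100k input + 20k output tokens on DeepSeek V3.2:
// (100000 * 0.5 + 20000 * 2.0) / 1e6 → ¥0.09
```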
Every provider listed uses the OpenAI chat/completions format. No special SDKs needed - just change `baseUrl` and `apiKey`.
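In practice "OpenAI-compatible" means the exact same request shape works against every provider once `baseUrl` and `apiKey` are swapped. The sketch below builds such a request; `buildChatRequest` is a hypothetical helper for illustration, not part of the skill.

```javascript
// Sketch: one request builder serves every OpenAI-compatible provider.
// Only baseUrl and apiKey differ between DeepSeek, Qwen, GLM, Kimi, etc.
function buildChatRequest(baseUrl, apiKey, model, messages) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // standard OpenAI-style auth header
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage (same shape for any provider in the config):
// const { url, options } = buildChatRequest(
//   "https://api.deepseek.com/v1", process.env.DEEPSEEK_API_KEY,
//   "deepseek-chat", [{ role: "user", content: "你好" }]);
// const res = await fetch(url, options);
```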
- Auto-fallback: If one provider is down, automatically try the next
- Cost tracking: See per-model token usage and estimated cost
- Smart routing: Describe your task, get the best model recommendation
- Batch compare: Send the same prompt to multiple models, compare outputs
- Context-aware: Remembers your model preference per conversation topic
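The auto-fallback behavior (try the default model, then walk the `fallback` list from the config) can be sketched like this. `callModel` is a placeholder for an actual chat/completions request; this is an illustration of the idea, not the skill's implementation.

```javascript
// Sketch: try each model in order; return the first successful response.
// Mirrors the config's "default" + "fallback" list, e.g.
// ["qwen3-max", "deepseek-chat", "doubao-seed-2.0-pro"].
async function withFallback(models, callModel) {
  const errors = [];
  for (const model of models) {
    try {
      return await callModel(model); // first provider that answers wins
    } catch (err) {
      errors.push(`${model}: ${err.message}`); // record failure, keep going
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```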
📦 Try our AI Plaza: https://ai.xudd-v.com
📦 ClawHub: https://clawhub.ai/Xdd-xund/chinese-llm-router
💬 Feedback: https://ai.xudd-v.com/connect.html
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.