Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
China LLM Gateway - Unified interface for Chinese LLMs including Qwen, DeepSeek, GLM, Baichuan. OpenAI compatible, one API Key for all models.
Hand the extracted package to your coding agent with a concrete install brief, rather than working through the setup manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
China LLM Unified Gateway. Powered by AIsa. One API Key to access all Chinese LLMs. OpenAI compatible interface. Qwen, DeepSeek, GLM, Baichuan, Moonshot, and more - unified API access.
"Use Qwen to answer Chinese questions, use DeepSeek for coding"
"Use DeepSeek-R1 for complex reasoning tasks"
"Use DeepSeek-Coder to generate Python code with explanations"
"Use Qwen-Long for ultra-long document summarization"
"Compare response quality between Qwen-Max and DeepSeek-V3"
| Model | Input Price | Output Price | Features |
| --- | --- | --- | --- |
| qwen3-max | $1.37/M | $5.48/M | Most powerful general model |
| qwen3-max-2026-01-23 | $1.37/M | $5.48/M | Latest version |
| qwen3-coder-plus | $2.86/M | $28.60/M | Enhanced code generation |
| qwen3-coder-flash | $0.72/M | $3.60/M | Fast code generation |
| qwen3-coder-480b-a35b-instruct | $2.15/M | $8.60/M | 480B large model |
| qwen3-vl-plus | $0.43/M | $4.30/M | Vision-language model |
| qwen3-vl-flash | $0.86/M | $0.86/M | Fast vision model |
| qwen3-omni-flash | $4.00/M | $16.00/M | Multimodal model |
| qwen-vl-max | $0.23/M | $0.57/M | Vision-language |
| qwen-plus-2025-12-01 | $1.26/M | $12.60/M | Plus version |
| qwen-mt-flash | $0.168/M | $0.514/M | Fast machine translation |
| qwen-mt-lite | $0.13/M | $0.39/M | Lite machine translation |
| Model | Input Price | Output Price | Features |
| --- | --- | --- | --- |
| deepseek-r1 | $2.00/M | $8.00/M | Reasoning model, supports Tools |
| deepseek-v3 | $1.00/M | $4.00/M | General chat, 671B parameters |
| deepseek-v3-0324 | $1.20/M | $4.80/M | V3 stable version |
| deepseek-v3.1 | $4.00/M | $12.00/M | Latest Terminus version |

Note: prices are per million tokens (M). Model availability may change; see marketplace.aisa.one/pricing for the latest list.
```bash
export AISA_API_KEY="your-key"
```
POST https://api.aisa.one/v1/chat/completions

Qwen Example

```bash
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-max",
    "messages": [
      {"role": "system", "content": "You are a professional Chinese assistant."},
      {"role": "user", "content": "Please explain what a large language model is."}
    ],
    "temperature": 0.7,
    "max_tokens": 1000
  }'
```

DeepSeek Example

```bash
# DeepSeek-V3 general chat (671B parameters)
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3",
    "messages": [{"role": "user", "content": "Write a quicksort algorithm in Python"}],
    "temperature": 0.3
  }'

# DeepSeek-R1 deep reasoning (supports Tools)
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1",
    "messages": [{"role": "user", "content": "A farmer needs to cross a river with a wolf, a sheep, and a cabbage. The boat can only carry the farmer and one item at a time. If the farmer is not present, the wolf will eat the sheep, and the sheep will eat the cabbage. How can the farmer safely cross?"}]
  }'

# DeepSeek-V3.1 Terminus latest version
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3.1",
    "messages": [{"role": "user", "content": "Implement an LRU cache with get and put operations"}]
  }'
```

Qwen3 Code Generation Example

```bash
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-coder-plus",
    "messages": [{"role": "user", "content": "Implement a thread-safe Map in Go"}]
  }'
```

Parameter Reference

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | Model identifier |
| messages | array | Yes | Message list |
| temperature | number | No | Randomness (0-2, default 1) |
| max_tokens | integer | No | Maximum tokens to generate |
| stream | boolean | No | Stream output (default false) |
| top_p | number | No | Nucleus sampling parameter (0-1) |

Response Format

```json
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "qwen-max",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A large language model (LLM) is a deep learning-based..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 30,
    "completion_tokens": 150,
    "total_tokens": 180,
    "cost": 0.001
  }
}
```
```bash
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-plus",
    "messages": [{"role": "user", "content": "Tell a Chinese folk story"}],
    "stream": true
  }'
```

Returns Server-Sent Events (SSE) format:

```
data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":"Once"}}]}
data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":" upon"}}]}
...
data: [DONE]
```
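The SSE stream above can also be consumed from Python. This is a minimal stdlib-only sketch assuming the `data:` framing shown (the bundled `cn_llm_client.py` already wraps this as `chat_stream`; this is not its implementation):

```python
import json
import urllib.request

API_URL = "https://api.aisa.one/v1/chat/completions"


def extract_delta(line):
    """Return the text fragment from one SSE line, or None for non-content lines."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")


def stream_chat(api_key, model, message):
    """Yield content fragments from a streaming chat completion."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": message}],
            "stream": True,
        }).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # HTTPResponse iterates line by line
            delta = extract_delta(raw.decode("utf-8").strip())
            if delta:
                yield delta
```

Usage: `for piece in stream_chat(key, "qwen-plus", "Tell a Chinese folk story"): print(piece, end="")`.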
```bash
# Qwen chat
python3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-max --message "Hello, please introduce yourself"

# Qwen3 code generation
python3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-coder-plus --message "Write a binary search algorithm"

# DeepSeek-R1 reasoning
python3 {baseDir}/scripts/cn_llm_client.py chat --model deepseek-r1 --message "Which is larger, 9.9 or 9.11? Please reason in detail"

# DeepSeek-V3 chat
python3 {baseDir}/scripts/cn_llm_client.py chat --model deepseek-v3 --message "Tell a story" --stream

# With system prompt
python3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-max --system "You are a classical poetry expert" --message "Write a poem about plum blossoms"

# Model comparison
python3 {baseDir}/scripts/cn_llm_client.py compare --models "qwen3-max,deepseek-v3" --message "What is quantum computing?"

# List supported models
python3 {baseDir}/scripts/cn_llm_client.py models
```
```python
from cn_llm_client import CNLLMClient

client = CNLLMClient()  # Uses AISA_API_KEY environment variable

# Qwen chat
response = client.chat(
    model="qwen3-max",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response["choices"][0]["message"]["content"])

# Qwen3 code generation
response = client.chat(
    model="qwen3-coder-plus",
    messages=[
        {"role": "system", "content": "You are a professional programmer."},
        {"role": "user", "content": "Implement a singleton pattern in Python"}
    ],
    temperature=0.3
)

# Streaming output
for chunk in client.chat_stream(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "Tell a story about an idiom"}]
):
    print(chunk, end="", flush=True)

# Model comparison
results = client.compare_models(
    models=["qwen3-max", "deepseek-v3", "deepseek-r1"],
    message="Explain what machine learning is"
)
for model, result in results.items():
    print(f"{model}: {result['response'][:100]}...")
```
```python
# Copywriting
response = client.chat(
    model="qwen3-max",
    messages=[
        {"role": "system", "content": "You are a professional copywriter."},
        {"role": "user", "content": "Write a product introduction for a smart watch"}
    ]
)
```
```python
# Code generation and explanation
response = client.chat(
    model="qwen3-coder-plus",
    messages=[{"role": "user", "content": "Implement a thread-safe Map in Go"}]
)
```
```python
# Mathematical reasoning
response = client.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Prove: For any positive integer n, n³ − n is divisible by 6"}]
)
```
```python
# Image understanding
response = client.chat(
    model="qwen3-vl-plus",
    messages=[
        {"role": "user", "content": [
            {"type": "text", "text": "Describe the content of this image"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]}
    ]
)
```
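For local files, OpenAI-compatible vision endpoints generally also accept a base64 data URL in `image_url`; assuming the gateway follows that convention, a small helper can build the same message from a file on disk:

```python
import base64
import mimetypes


def image_to_data_url(path):
    """Encode a local image file as a base64 data URL for image_url content parts."""
    mime = mimetypes.guess_type(path)[0] or "image/jpeg"  # fallback is an assumption
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{b64}"


def vision_messages(prompt, image_path):
    """Build the multimodal message list shown above, but from a local image."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_to_data_url(image_path)}},
        ],
    }]
```

Usage: `client.chat(model="qwen3-vl-plus", messages=vision_messages("Describe this image", "photo.jpg"))`.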
```python
MODEL_MAP = {
    "chat": "qwen3-max",           # General chat
    "code": "qwen3-coder-plus",    # Code generation
    "reasoning": "deepseek-r1",    # Complex reasoning
    "vision": "qwen3-vl-plus",     # Visual understanding
    "fast": "qwen3-coder-flash",   # Fast response
    "translate": "qwen-mt-flash"   # Machine translation
}

def route_by_task(task_type: str, message: str) -> str:
    model = MODEL_MAP.get(task_type, "qwen3-max")
    return client.chat(model=model, messages=[{"role": "user", "content": message}])
```
Errors return JSON with an `error` field:

```json
{
  "error": {
    "code": "model_not_found",
    "message": "Model 'xxx' is not available"
  }
}
```

Common error codes:

- 401 - Invalid or missing API Key
- 402 - Insufficient balance
- 404 - Model not found
- 429 - Rate limit exceeded
- 500 - Server error
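Of these codes, only 429 and 500 are worth retrying; 401/402/404 will not improve on a second attempt. A stdlib-only sketch of that policy with exponential backoff (the retry budget and delays are assumptions, not documented gateway behavior):

```python
import json
import time
import urllib.error
import urllib.request

API_URL = "https://api.aisa.one/v1/chat/completions"
RETRYABLE = {429, 500}  # rate limit and transient server errors


def backoff(attempt, base=1.0, cap=30.0):
    """Exponential backoff delay (seconds) for a 0-based retry attempt."""
    return min(cap, base * (2 ** attempt))


def chat_with_retry(api_key, payload, max_attempts=4):
    """POST a chat completion, retrying transient failures; re-raise the rest."""
    for attempt in range(max_attempts):
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code not in RETRYABLE or attempt == max_attempts - 1:
                raise  # auth/balance/model errors, or retries exhausted
            time.sleep(backoff(attempt))
```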
| Model | Input ($/M) | Output ($/M) |
| --- | --- | --- |
| qwen3-max | $1.37 | $5.48 |
| qwen3-coder-plus | $2.86 | $28.60 |
| qwen3-coder-flash | $0.72 | $3.60 |
| qwen3-vl-plus | $0.43 | $4.30 |
| deepseek-v3 | $1.00 | $4.00 |
| deepseek-r1 | $2.00 | $8.00 |
| deepseek-v3.1 | $4.00 | $12.00 |

Price unit: $ per million tokens. Each response includes `usage.cost` and `usage.credits_remaining`.
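For pre-flight budgeting, the table above is enough to estimate a request's cost before sending it; the `usage.cost` field in each response remains authoritative. A small sketch using the listed prices:

```python
# Prices copied from the table above, in USD per million tokens (input, output)
PRICES = {
    "qwen3-max": (1.37, 5.48),
    "qwen3-coder-plus": (2.86, 28.60),
    "qwen3-coder-flash": (0.72, 3.60),
    "qwen3-vl-plus": (0.43, 4.30),
    "deepseek-v3": (1.00, 4.00),
    "deepseek-r1": (2.00, 8.00),
    "deepseek-v3.1": (4.00, 12.00),
}


def estimate_cost(model, prompt_tokens, completion_tokens):
    """Rough USD cost estimate from the published per-million-token prices."""
    inp, out = PRICES[model]
    return (prompt_tokens * inp + completion_tokens * out) / 1_000_000
```

For example, the sample response earlier (30 prompt tokens, 150 completion tokens) on qwen3-max works out to roughly $0.00086.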
1. Register at aisa.one
2. Get an API Key
3. Top up (pay-as-you-go)
4. Set the environment variable: `export AISA_API_KEY="your-key"`
See API Reference for complete endpoint documentation.