Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Configure AIsa as a first-class model provider for OpenClaw, enabling production access to major Chinese AI models (Qwen, DeepSeek, Kimi K2.5, Doubao) through a single OpenAI-compatible endpoint.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
AIsa is a unified API gateway providing production access to China's leading AI models through official partnerships with all major Chinese AI platforms. It is an Alibaba Cloud Qwen Key Account partner, offering the full Qwen model family at discounted pricing, plus models on the Alibaba Bailian aggregation platform (DeepSeek, Kimi, GLM). AIsa also provides access to Kimi K2.5 (Moonshot AI's flagship reasoning model) at approximately 80% of official pricing.

⚠️ All pricing listed below is for reference. Real-time pricing is subject to change — always check https://marketplace.aisa.one/pricing for the latest rates.
```bash
export AISA_API_KEY="your-key-here"
```

OpenClaw auto-detects `AISA_API_KEY` and registers AIsa as a provider. No config file changes are needed.
```bash
openclaw onboard --auth-choice aisa-api-key
openclaw onboard --auth-choice aisa-api-key --aisa-api-key "your-key-here"
```
```json
{
  "models": {
    "providers": {
      "aisa": {
        "baseUrl": "https://api.aisa.one/v1",
        "apiKey": "${AISA_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "aisa/qwen3-max",
            "name": "Qwen3 Max",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 256000,
            "maxTokens": 16384,
            "supportsDeveloperRole": false,
            "cost": { "input": 1.20, "output": 4.80, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "aisa/qwen-plus-2025-12-01",
            "name": "Qwen Plus",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 256000,
            "maxTokens": 16384,
            "supportsDeveloperRole": false,
            "cost": { "input": 0.30, "output": 0.90, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "aisa/qwen-mt-flash",
            "name": "Qwen MT Flash",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 256000,
            "maxTokens": 8192,
            "supportsDeveloperRole": false,
            "cost": { "input": 0.05, "output": 0.30, "cacheRead": 0, "cacheWrite": 0 }
          },
          {
            "id": "aisa/deepseek-v3.1",
            "name": "DeepSeek V3.1",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 131072,
            "maxTokens": 8192,
            "supportsDeveloperRole": false,
            "cost": { "input": 0.27, "output": 1.10, "cacheRead": 0.07, "cacheWrite": 0 }
          },
          {
            "id": "aisa/kimi-k2.5",
            "name": "Kimi K2.5",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 131072,
            "maxTokens": 8192,
            "supportsDeveloperRole": false,
            "cost": { "input": 0.60, "output": 2.40, "cacheRead": 0, "cacheWrite": 0 }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "aisa/qwen3-max" }
    }
  }
}
```
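The `"apiKey": "${AISA_API_KEY}"` field is an environment-variable placeholder that the runtime resolves at load time. As a rough illustration of how such placeholders expand (a sketch, not OpenClaw's actual implementation), the substitution logic looks like this:

```python
import re

def expand_env_placeholders(value: str, env: dict) -> str:
    """Replace ${VAR} placeholders with values from the given environment mapping.

    Unknown variables expand to an empty string in this sketch.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), value)

# Resolve the apiKey field from the provider config above (demo env mapping).
resolved = expand_env_placeholders("${AISA_API_KEY}", {"AISA_API_KEY": "sk-demo-123"})
print(resolved)  # sk-demo-123
```

In a real deployment the mapping would be `os.environ`, so the key never has to be written into the config file itself.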
| Model | Model ID | Best For | Context | Reasoning | Verified |
|---|---|---|---|---|---|
| Qwen3 Max | `aisa/qwen3-max` | Complex reasoning, flagship tasks | 256K | ✅ | ✅ |
| Qwen Plus | `aisa/qwen-plus-2025-12-01` | Main production model | 256K | ✅ | ✅ |
| Qwen MT Flash | `aisa/qwen-mt-flash` | High-frequency, lightweight tasks | 256K | ✅ | ✅ |
| DeepSeek V3.1 | `aisa/deepseek-v3.1` | Cost-effective reasoning | 128K | ✅ | ✅ |
| Kimi K2.5 | `aisa/kimi-k2.5` | Moonshot's flagship reasoning model | 128K | ✅ | ✅ |
Kimi K2.5 is Moonshot AI's latest reasoning model, accessible via AIsa at approximately 80% of official Moonshot pricing. Key features:

- Strong reasoning and multi-step problem solving
- Competitive performance on coding and math benchmarks
- Available through AIsa's Moonshot partnership

🔒 Zero Data Retention (ZDR) — Enterprise-Grade Privacy

Accessing Kimi K2.5 through AIsa comes with enterprise-level Zero Data Retention guarantees. Users do not need to worry about data privacy — AIsa has executed a formal ZDR agreement with Moonshot AI. Under the Supplemental Enterprise Service Agreement (dated 10 February 2026) between AIsa and Kimi (Moonshot AI PTE. LTD.):

- Customer data is NOT retained by Moonshot after processing
- Generated outputs are NOT stored on Moonshot's infrastructure
- No data is used for model training — your prompts and completions remain private
- Processing is governed by contractual enterprise terms, not consumer ToS

This makes AIsa the recommended path for privacy-sensitive or enterprise workloads that require access to Kimi K2.5. When calling Kimi K2.5 directly via Moonshot's consumer API, standard consumer data policies apply — by routing through AIsa, your data benefits from the negotiated ZDR protections.

⚠️ Important: temperature restriction

Kimi K2.5 only accepts `temperature=1.0`. Using any other value will return an error:

```
Error: invalid temperature: only 1 is allowed for this model
```

If your OpenClaw config or agent sets a different temperature, override it for Kimi:

```
/model aisa/kimi-k2.5
```

OpenClaw will use the model's default temperature when not explicitly set.

Kimi K2.5 Pricing Comparison (per 1M tokens):

| Metric | AIsa | Moonshot Official | Savings |
|---|---|---|---|
| Input/1M | ~$0.60 | ~$0.75 | ~20% off |
| Output/1M | ~$2.40 | ~$3.00 | ~20% off |

Actual pricing may vary. Check https://marketplace.aisa.one/pricing for real-time rates.
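Because Kimi K2.5 rejects any temperature other than 1.0, a client-side guard can override the parameter before the request is sent. A minimal sketch (the `prepare_request` helper is hypothetical, not part of OpenClaw or the AIsa SDK):

```python
def prepare_request(model: str, params: dict) -> dict:
    """Return request params, forcing temperature=1.0 for Kimi K2.5,
    which rejects all other temperature values."""
    params = dict(params)  # copy so the caller's dict is not mutated
    if model == "aisa/kimi-k2.5":
        params["temperature"] = 1.0
    return params

req = prepare_request("aisa/kimi-k2.5", {"temperature": 0.2, "max_tokens": 1024})
print(req["temperature"])  # 1.0
```

Other models pass through untouched, so the guard is safe to apply globally.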
Users can add any model supported by AIsa to their config. The full catalog includes 49+ models:

- Qwen family (8 models): qwen3-max, qwen3-max-2026-01-23, qwen-plus-2025-12-01, qwen-mt-flash, qwen-mt-lite, qwen-vl-max, qwen3-vl-flash, qwen3-vl-plus (vision models)
- DeepSeek (4 models): deepseek-v3.1, deepseek-v3, deepseek-v3-0324, deepseek-r1
- Kimi / Moonshot (2 models): kimi-k2.5, kimi-k2-thinking
- Also available: Claude series (10), GPT series (9), Gemini series (5), Grok series (2), and more.

List all available models:

```bash
curl https://api.aisa.one/v1/models -H "Authorization: Bearer $AISA_API_KEY"
```
AIsa uses versioned model IDs for some models. If you encounter a `503 - No available channels` error, the model ID may need updating. Known model ID mappings:

| Common Name | Correct AIsa Model ID | ❌ Does NOT work |
|---|---|---|
| Qwen Plus | `qwen-plus-2025-12-01` | qwen3-plus, qwen-plus, qwen-plus-latest |
| Qwen Flash | `qwen-mt-flash` | qwen3-flash, qwen-turbo, qwen-turbo-latest |
| Qwen Max | `qwen3-max` | (works as-is) |
| DeepSeek V3.1 | `deepseek-v3.1` | (works as-is) |
| Kimi K2.5 | `kimi-k2.5` | (works as-is) |

To check the latest available model IDs:

```bash
curl https://api.aisa.one/v1/models -H "Authorization: Bearer $AISA_API_KEY"
```
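If you route requests through your own tooling, the mapping above can be applied automatically so stale aliases never reach the API. A small sketch (the alias table mirrors the rows above; verify it against the `/v1/models` endpoint before relying on it):

```python
# Known-bad aliases mapped to the versioned IDs AIsa actually serves.
MODEL_ID_ALIASES = {
    "qwen3-plus": "qwen-plus-2025-12-01",
    "qwen-plus": "qwen-plus-2025-12-01",
    "qwen-plus-latest": "qwen-plus-2025-12-01",
    "qwen3-flash": "qwen-mt-flash",
    "qwen-turbo": "qwen-mt-flash",
    "qwen-turbo-latest": "qwen-mt-flash",
}

def normalize_model_id(model_id: str) -> str:
    """Rewrite outdated aliases to the versioned AIsa model ID; pass others through."""
    return MODEL_ID_ALIASES.get(model_id, model_id)

print(normalize_model_id("qwen3-plus"))  # qwen-plus-2025-12-01
print(normalize_model_id("qwen3-max"))   # qwen3-max (already correct)
```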
In chat (TUI):

```
/model aisa/qwen3-max
/model aisa/deepseek-v3.1
/model aisa/kimi-k2.5
```

Via CLI:

```bash
openclaw models set aisa/qwen3-max
```
All pricing below is for reference. Real-time pricing is subject to change — always check https://marketplace.aisa.one/pricing for the latest rates.
Qwen MT Flash:
- AIsa: $0.05 input / $0.30 output (~50% off retail)
- Bailian Official: $0.10 / $0.40
- OpenRouter: $0.11-0.13 / $0.45-0.50

Qwen Plus:
- AIsa: $0.30 input / $0.90 output (~25% off retail)
- Bailian Official: $0.40 / $1.20
- OpenRouter: $0.45-0.50 / $1.35-1.50

Qwen3 Max:
- AIsa: $1.20 input / $4.80 output (~40% off retail)
- Bailian Official: $2.00 / $8.00
- OpenRouter: $2.20-2.50 / $9.00-10.00

Kimi K2.5:
- AIsa: ~$0.60 input / ~$2.40 output (~20% off official Moonshot pricing)
- Moonshot Official: ~$0.75 / ~$3.00
- OpenRouter: Limited availability
Example monthly cost at the same usage volume:
- OpenRouter: ~$4,000-4,250/month
- Bailian Official: ~$3,400/month
- AIsa: ~$2,040/month (saves $16,320-26,520/year)
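The yearly savings range follows directly from the monthly deltas above. A quick arithmetic check:

```python
# Monthly cost estimates from the comparison above (USD).
aisa = 2040
bailian = 3400
openrouter_low, openrouter_high = 4000, 4250

# Annual savings of AIsa vs each alternative.
savings_vs_bailian = (bailian - aisa) * 12
savings_vs_openrouter_low = (openrouter_low - aisa) * 12
savings_vs_openrouter_high = (openrouter_high - aisa) * 12

print(savings_vs_bailian)          # 16320
print(savings_vs_openrouter_low)   # 23520
print(savings_vs_openrouter_high)  # 26520
```

The low end ($16,320) is the saving versus Bailian; the high end ($26,520) is versus the upper OpenRouter estimate.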
AIsa maintains verified partnerships with:

- Alibaba Cloud — Qwen Key Account (full model family, 3 global regions: CN, US-Virginia, Singapore)
- BytePlus — Doubao by ByteDance
- DeepSeek — via Alibaba Cloud integration
- Moonshot — Kimi K2.5 integration, with enterprise Zero Data Retention (ZDR) agreement (effective Feb 10, 2026)
AIsa provides access to Qwen models across 3 global regions via Alibaba Cloud:

- 🇨🇳 China (default)
- 🇺🇸 US (Virginia)
- 🇸🇬 Singapore

This is unique to AIsa's Key Account status. Other providers like OpenRouter or the free Qwen Portal typically route through CN only.
| Model | Avg Latency | Rating |
|---|---|---|
| Qwen3 Max | ~1,577 ms | ★★★★★ Fastest |
| Qwen MT Flash | ~1,918 ms | ★★★★ Fast |
| Kimi K2.5 | ~2,647 ms | ★★★ Medium |
| DeepSeek V3.1 | ~3,002 ms | ★★★ Medium |
| Qwen Plus | ~8,207 ms | ★★ Slower |
The model ID may be incorrect or outdated. Check the Model ID Versioning section above for the correct IDs. Common fixes:

- qwen3-plus → use qwen-plus-2025-12-01
- qwen3-flash → use qwen-mt-flash
Ensure the model ID uses the `aisa/` prefix in OpenClaw config:

- ✅ `aisa/qwen3-max`
- ❌ `qwen3-max`
Kimi K2.5 only accepts temperature=1.0. If your config sets a different temperature, add a model-specific override or let OpenClaw use the default.
In rare cases Kimi K2.5 may return empty content while consuming output tokens. Retry the request — this is typically transient.
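A simple client-side retry covers this transient case. Sketch below — `call_model` is a hypothetical stand-in for whatever function performs the actual API request:

```python
def call_with_retry(call_model, max_attempts: int = 3) -> str:
    """Retry when the model returns empty content (a transient Kimi K2.5 quirk)."""
    last = ""
    for _ in range(max_attempts):
        last = call_model()
        if last.strip():
            return last  # got non-empty content
    return last  # still empty after max_attempts; let the caller decide

# Demo with a fake call that returns empty once, then succeeds.
responses = iter(["", "Hello from Kimi"])
result = call_with_retry(lambda: next(responses))
print(result)  # Hello from Kimi
```

Note that each retried attempt still consumes output tokens, so keep `max_attempts` small.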
Check the env var:

```bash
echo $AISA_API_KEY
```

Or verify in config:

```bash
openclaw config get auth.profiles
```

Re-run onboarding:

```bash
openclaw onboard --auth-choice aisa-api-key
```
AIsa uses the OpenAI-compatible API (openai-completions). Ensure your config has: "api": "openai-completions"
AIsa has no daily request limits (unlike the free Qwen Portal which caps at 2,000 req/day).
1. Visit https://marketplace.aisa.one/
2. Sign up and create an API key
3. Set it as `AISA_API_KEY` or use the onboarding wizard
- AIsa's endpoint is OpenAI-compatible (https://api.aisa.one/v1)
- All models support streaming and function calling
- `supportsDeveloperRole` is set to false for Qwen models
- Default context window: 256,000 tokens (Qwen) or 131,072 tokens (DeepSeek/Kimi)
- Reasoning (thinking) is enabled for all default models
- Kimi K2.5 requires temperature=1.0 — other values cause API errors
- Kimi K2.5 via AIsa is covered by enterprise Zero Data Retention (ZDR) — data is not retained or used for training
- Image/Video generation models (WAN) are available but require separate configuration
- The AIsa API supports 49+ models total — use the models endpoint to discover all available options