{
  "schemaVersion": "1.0",
  "item": {
    "slug": "openclaw-aisa-cn-llm",
    "name": "One API key for Chinese AI models. Route to Qwen, Deepseek",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/chaimengphp/openclaw-aisa-cn-llm",
    "canonicalUrl": "https://clawhub.ai/chaimengphp/openclaw-aisa-cn-llm",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/openclaw-aisa-cn-llm",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclaw-aisa-cn-llm",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SKILL.md",
      "scripts/cn_llm_client.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/openclaw-aisa-cn-llm"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/openclaw-aisa-cn-llm",
    "agentPageUrl": "https://openagent3.xyz/skills/openclaw-aisa-cn-llm/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclaw-aisa-cn-llm/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclaw-aisa-cn-llm/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "OpenClaw CN-LLM 🐉",
        "body": "China LLM Unified Gateway. Powered by AIsa.\n\nOne API Key to access all Chinese LLMs. OpenAI compatible interface.\n\nQwen, DeepSeek, GLM, Baichuan, Moonshot, and more - unified API access."
      },
      {
        "title": "Intelligent Chat",
        "body": "\"Use Qwen to answer Chinese questions, use DeepSeek for coding\""
      },
      {
        "title": "Deep Reasoning",
        "body": "\"Use DeepSeek-R1 for complex reasoning tasks\""
      },
      {
        "title": "Code Generation",
        "body": "\"Use DeepSeek-Coder to generate Python code with explanations\""
      },
      {
        "title": "Long Text Processing",
        "body": "\"Use Qwen-Long for ultra-long document summarization\""
      },
      {
        "title": "Model Comparison",
        "body": "\"Compare response quality between Qwen-Max and DeepSeek-V3\""
      },
      {
        "title": "Qwen (Alibaba)",
        "body": "ModelInput PriceOutput PriceFeaturesqwen3-max$1.37/M$5.48/MMost powerful general modelqwen3-max-2026-01-23$1.37/M$5.48/MLatest versionqwen3-coder-plus$2.86/M$28.60/MEnhanced code generationqwen3-coder-flash$0.72/M$3.60/MFast code generationqwen3-coder-480b-a35b-instruct$2.15/M$8.60/M480B large modelqwen3-vl-plus$0.43/M$4.30/MVision-language modelqwen3-vl-flash$0.86/M$0.86/MFast vision modelqwen3-omni-flash$4.00/M$16.00/MMultimodal modelqwen-vl-max$0.23/M$0.57/MVision-languageqwen-plus-2025-12-01$1.26/M$12.60/MPlus versionqwen-mt-flash$0.168/M$0.514/MFast machine translationqwen-mt-lite$0.13/M$0.39/MLite machine translation"
      },
      {
        "title": "DeepSeek",
        "body": "ModelInput PriceOutput PriceFeaturesdeepseek-r1$2.00/M$8.00/MReasoning model, supports Toolsdeepseek-v3$1.00/M$4.00/MGeneral chat, 671B parametersdeepseek-v3-0324$1.20/M$4.80/MV3 stable versiondeepseek-v3.1$4.00/M$12.00/MLatest Terminus version\n\nNote: Prices are in M (million tokens). Model availability may change, see marketplace.aisa.one/pricing for the latest list."
      },
      {
        "title": "Quick Start",
        "body": "export AISA_API_KEY=\"your-key\""
      },
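      {
        "title": "Quick Start (Python sketch)",
        "body": "Illustrative only, not part of the upstream package: a minimal sketch of one request against the OpenAI-compatible endpoint documented below, assuming AISA_API_KEY is set and the requests library is installed.\n\nimport os\nimport requests\n\nresp = requests.post(\n    \"https://api.aisa.one/v1/chat/completions\",\n    headers={\"Authorization\": f\"Bearer {os.environ['AISA_API_KEY']}\"},\n    json={\n        \"model\": \"qwen3-max\",\n        \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}],\n    },\n    timeout=60,\n)\nresp.raise_for_status()\nprint(resp.json()[\"choices\"][0][\"message\"][\"content\"])"
      },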
      {
        "title": "OpenAI Compatible Interface",
        "body": "POST https://api.aisa.one/v1/chat/completions\n\nQwen Example\n\ncurl -X POST \"https://api.aisa.one/v1/chat/completions\" \\\n  -H \"Authorization: Bearer $AISA_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"model\": \"qwen3-max\",\n    \"messages\": [\n      {\"role\": \"system\", \"content\": \"You are a professional Chinese assistant.\"},\n      {\"role\": \"user\", \"content\": \"Please explain what a large language model is?\"}\n    ],\n    \"temperature\": 0.7,\n    \"max_tokens\": 1000\n  }'\n\nDeepSeek Example\n\n# DeepSeek-V3 general chat (671B parameters)\ncurl -X POST \"https://api.aisa.one/v1/chat/completions\" \\\n  -H \"Authorization: Bearer $AISA_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"model\": \"deepseek-v3\",\n    \"messages\": [{\"role\": \"user\", \"content\": \"Write a quicksort algorithm in Python\"}],\n    \"temperature\": 0.3\n  }'\n\n# DeepSeek-R1 deep reasoning (supports Tools)\ncurl -X POST \"https://api.aisa.one/v1/chat/completions\" \\\n  -H \"Authorization: Bearer $AISA_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"model\": \"deepseek-r1\",\n    \"messages\": [{\"role\": \"user\", \"content\": \"A farmer needs to cross a river with a wolf, a sheep, and a cabbage. The boat can only carry the farmer and one item at a time. If the farmer is not present, the wolf will eat the sheep, and the sheep will eat the cabbage. 
How can the farmer safely cross?\"}]\n  }'\n\n# DeepSeek-V3.1 Terminus latest version\ncurl -X POST \"https://api.aisa.one/v1/chat/completions\" \\\n  -H \"Authorization: Bearer $AISA_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"model\": \"deepseek-v3.1\",\n    \"messages\": [{\"role\": \"user\", \"content\": \"Implement an LRU cache with get and put operations\"}]\n  }'\n\nQwen3 Code Generation Example\n\ncurl -X POST \"https://api.aisa.one/v1/chat/completions\" \\\n  -H \"Authorization: Bearer $AISA_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"model\": \"qwen3-coder-plus\",\n    \"messages\": [{\"role\": \"user\", \"content\": \"Implement a thread-safe Map in Go\"}]\n  }'\n\nParameter Reference\n\nParameterTypeRequiredDescriptionmodelstringYesModel identifiermessagesarrayYesMessage listtemperaturenumberNoRandomness (0-2, default 1)max_tokensintegerNoMaximum tokens to generatestreambooleanNoStream output (default false)top_pnumberNoNucleus sampling parameter (0-1)\n\nResponse Format\n\n{\n  \"id\": \"chatcmpl-xxx\",\n  \"object\": \"chat.completion\",\n  \"created\": 1234567890,\n  \"model\": \"qwen-max\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"content\": \"A large language model (LLM) is a deep learning-based...\"\n      },\n      \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 30,\n    \"completion_tokens\": 150,\n    \"total_tokens\": 180,\n    \"cost\": 0.001\n  }\n}"
      },
      {
        "title": "Streaming Output",
        "body": "curl -X POST \"https://api.aisa.one/v1/chat/completions\" \\\n  -H \"Authorization: Bearer $AISA_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"model\": \"qwen-plus\",\n    \"messages\": [{\"role\": \"user\", \"content\": \"Tell a Chinese folk story\"}],\n    \"stream\": true\n  }'\n\nReturns Server-Sent Events (SSE) format:\n\ndata: {\"id\":\"chatcmpl-xxx\",\"choices\":[{\"delta\":{\"content\":\"Once\"}}]}\ndata: {\"id\":\"chatcmpl-xxx\",\"choices\":[{\"delta\":{\"content\":\" upon\"}}]}\n...\ndata: [DONE]"
      },
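      {
        "title": "Consuming the Stream (Python sketch)",
        "body": "Illustrative only, not part of the upstream package: one way to read the SSE stream above with the requests library, assuming each data: line carries a JSON chunk and the stream ends with data: [DONE].\n\nimport json\nimport os\nimport requests\n\nwith requests.post(\n    \"https://api.aisa.one/v1/chat/completions\",\n    headers={\"Authorization\": f\"Bearer {os.environ['AISA_API_KEY']}\"},\n    json={\n        \"model\": \"qwen-plus\",\n        \"messages\": [{\"role\": \"user\", \"content\": \"Tell a Chinese folk story\"}],\n        \"stream\": True,\n    },\n    stream=True,\n    timeout=60,\n) as resp:\n    resp.raise_for_status()\n    for line in resp.iter_lines():\n        if not line or not line.startswith(b\"data: \"):\n            continue  # skip keep-alives and blank separators\n        payload = line[len(b\"data: \"):]\n        if payload == b\"[DONE]\":\n            break\n        delta = json.loads(payload)[\"choices\"][0].get(\"delta\", {})\n        print(delta.get(\"content\", \"\"), end=\"\", flush=True)"
      },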
      {
        "title": "CLI Usage",
        "body": "# Qwen chat\npython3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-max --message \"Hello, please introduce yourself\"\n\n# Qwen3 code generation\npython3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-coder-plus --message \"Write a binary search algorithm\"\n\n# DeepSeek-R1 reasoning\npython3 {baseDir}/scripts/cn_llm_client.py chat --model deepseek-r1 --message \"Which is larger, 9.9 or 9.11? Please reason in detail\"\n\n# DeepSeek-V3 chat\npython3 {baseDir}/scripts/cn_llm_client.py chat --model deepseek-v3 --message \"Tell a story\" --stream\n\n# With system prompt\npython3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-max --system \"You are a classical poetry expert\" --message \"Write a poem about plum blossoms\"\n\n# Model comparison\npython3 {baseDir}/scripts/cn_llm_client.py compare --models \"qwen3-max,deepseek-v3\" --message \"What is quantum computing?\"\n\n# List supported models\npython3 {baseDir}/scripts/cn_llm_client.py models"
      },
      {
        "title": "Python SDK Usage",
        "body": "from cn_llm_client import CNLLMClient\n\nclient = CNLLMClient()  # Uses AISA_API_KEY environment variable\n\n# Qwen chat\nresponse = client.chat(\n    model=\"qwen3-max\",\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}]\n)\nprint(response[\"choices\"][0][\"message\"][\"content\"])\n\n# Qwen3 code generation\nresponse = client.chat(\n    model=\"qwen3-coder-plus\",\n    messages=[\n        {\"role\": \"system\", \"content\": \"You are a professional programmer.\"},\n        {\"role\": \"user\", \"content\": \"Implement a singleton pattern in Python\"}\n    ],\n    temperature=0.3\n)\n\n# Streaming output\nfor chunk in client.chat_stream(\n    model=\"deepseek-v3\",\n    messages=[{\"role\": \"user\", \"content\": \"Tell a story about an idiom\"}]\n):\n    print(chunk, end=\"\", flush=True)\n\n# Model comparison\nresults = client.compare_models(\n    models=[\"qwen3-max\", \"deepseek-v3\", \"deepseek-r1\"],\n    message=\"Explain what machine learning is\"\n)\nfor model, result in results.items():\n    print(f\"{model}: {result['response'][:100]}...\")"
      },
      {
        "title": "1. Chinese Content Generation",
        "body": "# Copywriting\nresponse = client.chat(\n    model=\"qwen3-max\",\n    messages=[\n        {\"role\": \"system\", \"content\": \"You are a professional copywriter.\"},\n        {\"role\": \"user\", \"content\": \"Write a product introduction for a smart watch\"}\n    ]\n)"
      },
      {
        "title": "2. Code Development",
        "body": "# Code generation and explanation\nresponse = client.chat(\n    model=\"qwen3-coder-plus\",\n    messages=[{\"role\": \"user\", \"content\": \"Implement a thread-safe Map in Go\"}]\n)"
      },
      {
        "title": "3. Complex Reasoning",
        "body": "# Mathematical reasoning\nresponse = client.chat(\n    model=\"deepseek-r1\",\n    messages=[{\"role\": \"user\", \"content\": \"Prove: For any positive integer n, n³-n is divisible by 6\"}]\n)"
      },
      {
        "title": "4. Visual Understanding",
        "body": "# Image understanding\nresponse = client.chat(\n    model=\"qwen3-vl-plus\",\n    messages=[\n        {\"role\": \"user\", \"content\": [\n            {\"type\": \"text\", \"text\": \"Describe the content of this image\"},\n            {\"type\": \"image_url\", \"image_url\": {\"url\": \"https://example.com/image.jpg\"}}\n        ]}\n    ]\n)"
      },
      {
        "title": "5. Model Routing Strategy",
        "body": "MODEL_MAP = {\n    \"chat\": \"qwen3-max\",           # General chat\n    \"code\": \"qwen3-coder-plus\",    # Code generation\n    \"reasoning\": \"deepseek-r1\",    # Complex reasoning\n    \"vision\": \"qwen3-vl-plus\",     # Visual understanding\n    \"fast\": \"qwen3-coder-flash\",   # Fast response\n    \"translate\": \"qwen-mt-flash\"   # Machine translation\n}\n\ndef route_by_task(task_type: str, message: str) -> str:\n    model = MODEL_MAP.get(task_type, \"qwen3-max\")\n    return client.chat(model=model, messages=[{\"role\": \"user\", \"content\": message}])"
      },
      {
        "title": "Error Handling",
        "body": "Errors return JSON with error field:\n\n{\n  \"error\": {\n    \"code\": \"model_not_found\",\n    \"message\": \"Model 'xxx' is not available\"\n  }\n}\n\nCommon error codes:\n\n401 - Invalid or missing API Key\n402 - Insufficient balance\n404 - Model not found\n429 - Rate limit exceeded\n500 - Server error"
      },
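      {
        "title": "Handling Errors (Python sketch)",
        "body": "Illustrative only, not part of the upstream package: a small retry wrapper around the error codes listed above. 429 and 500 are retried with exponential backoff; 401, 402, and 404 are surfaced immediately. Function and variable names here are examples, not part of the API.\n\nimport os\nimport time\nimport requests\n\ndef chat_with_retry(payload, retries=3):\n    for attempt in range(retries):\n        resp = requests.post(\n            \"https://api.aisa.one/v1/chat/completions\",\n            headers={\"Authorization\": f\"Bearer {os.environ['AISA_API_KEY']}\"},\n            json=payload,\n            timeout=60,\n        )\n        if resp.status_code in (429, 500) and attempt < retries - 1:\n            time.sleep(2 ** attempt)  # back off before retrying\n            continue\n        if resp.status_code != 200:\n            raise RuntimeError(f\"HTTP {resp.status_code}: {resp.text}\")\n        return resp.json()"
      },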
      {
        "title": "Pricing",
        "body": "ModelInput ($/M)Output ($/M)qwen3-max$1.37$5.48qwen3-coder-plus$2.86$28.60qwen3-coder-flash$0.72$3.60qwen3-vl-plus$0.43$4.30deepseek-v3$1.00$4.00deepseek-r1$2.00$8.00deepseek-v3.1$4.00$12.00\n\nPrice unit: $ per Million tokens. Each response includes usage.cost and usage.credits_remaining."
      },
      {
        "title": "Get Started",
        "body": "Register at aisa.one\nGet API Key\nTop up (pay-as-you-go)\nSet environment variable: export AISA_API_KEY=\"your-key\""
      },
      {
        "title": "Full API Reference",
        "body": "See API Reference for complete endpoint documentation."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/chaimengphp/openclaw-aisa-cn-llm",
    "publisherUrl": "https://clawhub.ai/chaimengphp/openclaw-aisa-cn-llm",
    "owner": "chaimengphp",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/openclaw-aisa-cn-llm",
    "downloadUrl": "https://openagent3.xyz/downloads/openclaw-aisa-cn-llm",
    "agentUrl": "https://openagent3.xyz/skills/openclaw-aisa-cn-llm/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclaw-aisa-cn-llm/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclaw-aisa-cn-llm/agent.md"
  }
}