{
  "schemaVersion": "1.0",
  "item": {
    "slug": "add-newcli-provider",
    "name": "Add NewCLI Provider (Claude/GPT/Gemini)",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/jooey/add-newcli-provider",
    "canonicalUrl": "https://clawhub.ai/jooey/add-newcli-provider",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/add-newcli-provider",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=add-newcli-provider",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
        "contentDisposition": "attachment; filename=\"4claw-imageboard-1.0.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/add-newcli-provider"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/add-newcli-provider",
    "agentPageUrl": "https://openagent3.xyz/skills/add-newcli-provider/agent",
    "manifestUrl": "https://openagent3.xyz/skills/add-newcli-provider/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/add-newcli-provider/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "配置 NewCLI Provider（code.newcli.com 模型代理源）",
        "body": "NewCLI (FoxCode) 是一个模型代理服务，通过统一的账户和 API Key 提供三类模型的访问：\n\nProvider模型系列API 协议Base URL备注newcliClaude (Anthropic)anthropic-messagescode.newcli.com/claude主线路newcli-awsClaude (Anthropic)anthropic-messagescode.newcli.com/claude/aws⭐ AWS 特价线路，消耗 1/24newcli-codexGPT (OpenAI)openai-completionscode.newcli.com/codex/v1newcli-geminiGemini (Google)google-generative-aicode.newcli.com/gemini/v1beta\n\n为什么是四个 provider？ 虽然是同一个服务商、同一个 API Key，但：\n\nClaude、GPT、Gemini 使用不同的 API 协议和端点，必须分开\nClaude 有两条线路（主线路 vs AWS 特价线路），消耗倍率不同，也需要分开配置"
      },
      {
        "title": "⭐ AWS 特价线路（重要）",
        "body": "服务商提供了独家 AWS 线路：https://code.newcli.com/claude/aws\n\n对比主线路 (newcli)AWS 线路 (newcli-aws)Base URL.../claude.../claude/aws额度消耗1x1/24x (同样额度可用 24 倍)模型相同相同API 协议相同相同API Key相同相同\n\n策略：所有重要任务优先使用 newcli-aws，主线路 newcli 作为备用。\n\n⚠️ 服务商声明：严禁分发 AWS 线路地址，发现封禁不退款。\n\n如果觉得这个 Skill 有用，请用我的邀请码注册：\nhttps://foxcode.rjj.cc/auth/register?aff=7WTAV8R"
      },
      {
        "title": "前置条件",
        "body": "你需要从 NewCLI 服务获取以下信息：\n\n项目说明API Key格式为 sk-ant-oat01-... 的密钥（三个 provider 共用同一个）可用模型列表向服务商确认，或查看账户统计页\n\n注意：NewCLI 的 /v1/models 接口有 Cloudflare 防护，无法通过 curl 直接获取模型列表。请以账户后台的实际统计为准，不要依赖文档中的\"支持列表\"——文档列出的模型不一定全部对你的账户开放。"
      },
      {
        "title": "第一步：确认可用模型",
        "body": "这一步不能跳过。 不要把文档里列的模型全加上去，要以实际能调通的为准。"
      },
      {
        "title": "1A. 测试 Claude 模型（newcli / newcli-aws）",
        "body": "用 Anthropic messages 格式测试：\n\n# 测试主线路\ncurl -s --max-time 15 https://code.newcli.com/claude/v1/messages \\\n  -H \"x-api-key: <你的API_KEY>\" \\\n  -H \"anthropic-version: 2023-06-01\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"<MODEL_ID>\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"max_tokens\":10}'\n\n# 测试 AWS 特价线路\ncurl -s --max-time 15 https://code.newcli.com/claude/aws/v1/messages \\\n  -H \"x-api-key: <你的API_KEY>\" \\\n  -H \"anthropic-version: 2023-06-01\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"<MODEL_ID>\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"max_tokens\":10}'\n\n注意：baseUrl 中主线路写 /claude，AWS 线路写 /claude/aws（OpenClaw 自动拼接 /v1/messages）。\n\n如果返回正常的 JSON 响应（含 content）= 可用。\n如果返回 {\"error\":{\"message\":\"暂不支持\"}} 或 \"未开放\" = 该模型不可用。"
      },
      {
        "title": "1B. 测试 GPT 模型（newcli-codex）",
        "body": "用 OpenAI completions 格式测试：\n\ncurl -s --max-time 15 https://code.newcli.com/codex/v1/chat/completions \\\n  -H \"Authorization: Bearer <你的API_KEY>\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"<MODEL_ID>\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"max_tokens\":10}'\n\n如果返回正常的 JSON 响应（含 choices）= 可用。\n如果返回错误或超时 = 该模型不可用。"
      },
      {
        "title": "1C. 测试 Gemini 模型（newcli-gemini）",
        "body": "用 Google Generative AI 格式测试：\n\ncurl -s --max-time 15 \\\n  \"https://code.newcli.com/gemini/v1beta/models/<MODEL_ID>:generateContent\" \\\n  -H \"x-goog-api-key: <你的API_KEY>\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"contents\":[{\"role\":\"user\",\"parts\":[{\"text\":\"hi\"}]}]}'\n\n注意：Gemini 端点的 URL 格式与 Claude/GPT 不同——模型名嵌入在 URL 路径中，而不是请求体中。\n\n如果返回正常的 JSON 响应（含 candidates）= 可用。\n如果返回 {\"error\":{\"message\":\"模型未开放\"}} = 该模型不可用。"
      },
      {
        "title": "已知可用模型（截至 2026-02-08）",
        "body": "Claude 系列（newcli）\n\n模型 ID名称Context说明claude-opus-4-6Claude Opus 4.6200K最强，适合复杂任务claude-haiku-4-5-20251001Claude Haiku 4.5200K轻量快速，适合简单任务\n\n其他模型如 claude-sonnet-4-20250514 等在文档中列出但实测可能返回\"未开放\"，以你账户的实际情况为准。\n\nGPT 系列（newcli-codex）\n\n模型 ID名称Context说明gpt-5.3-codexGPT-5.3 Codex128K最新版本gpt-5.2GPT-5.2128K基础版gpt-5.2-codexGPT-5.2 Codex128K代码增强版gpt-5.1GPT-5.1128K基础版gpt-5.1-codexGPT-5.1 Codex128K代码增强版gpt-5.1-codex-miniGPT-5.1 Codex Mini128K轻量版gpt-5.1-codex-maxGPT-5.1 Codex Max128K增强版gpt-5GPT-5128K基础版gpt-5-codexGPT-5 Codex128K代码增强版\n\nGemini 系列（newcli-gemini）— 文本对话模型\n\n模型 ID名称Contextreasoning说明gemini-3-proGemini 3 Pro1M✅最新旗舰gemini-3-pro-highGemini 3 Pro High1M✅旗舰增强版gemini-3-pro-previewGemini 3 Pro Preview1M✅预览版gemini-3-flashGemini 3 Flash1M❌快速版gemini-3-flash-previewGemini 3 Flash Preview1M❌快速预览版gemini-2.5-proGemini 2.5 Pro1M✅上一代旗舰gemini-2.5-flashGemini 2.5 Flash1M❌上一代快速版gemini-2.5-flash-liteGemini 2.5 Flash Lite1M❌轻量版\n\nGemini 系列（newcli-gemini）— 图片生成模型\n\n这些模型用于生成图片，不要加入 fallback 链，但可以通过 /model 命令手动切换使用。\n\n基础分辨率（默认）：\n\n模型 ID说明gemini-3-pro-image默认比例gemini-3-pro-image-3x2横向 3:2gemini-3-pro-image-2x3纵向 2:3gemini-3-pro-image-3x4纵向 3:4gemini-3-pro-image-4x3横向 4:3gemini-3-pro-image-4x5纵向 4:5gemini-3-pro-image-5x4横向 5:4gemini-3-pro-image-9x16竖屏 9:16gemini-3-pro-image-16x9宽屏 16:9gemini-3-pro-image-21x9超宽 21:9\n\n2K 分辨率： 模型 ID 加 -2k 前缀，如 gemini-3-pro-image-2k、gemini-3-pro-image-2k-16x9 等。\n\n4K 分辨率： 模型 ID 加 -4k 前缀，如 gemini-3-pro-image-4k、gemini-3-pro-image-4k-16x9 等。\n\n⚠️ 图片生成模型不要加入 fallback 链——它们不适合文本对话，放进 fallback 会导致对话请求被错误路由到图片生成模型。需要生图时通过 /model gemini-3-pro-image 手动切换。"
      },
      {
        "title": "第二步：添加 Provider",
        "body": "在 ~/.openclaw/openclaw.json 的 models.providers 下添加三个 provider。"
      },
      {
        "title": "2A. 添加 newcli（Claude 主线路）",
        "body": "\"newcli\": {\n  \"baseUrl\": \"https://code.newcli.com/claude\",\n  \"apiKey\": \"<你的API_KEY>\",\n  \"api\": \"anthropic-messages\",\n  \"authHeader\": true,\n  \"models\": [\n    {\n      \"id\": \"claude-opus-4-6\",\n      \"name\": \"Claude Opus 4.6\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"claude-haiku-4-5-20251001\",\n      \"name\": \"Claude Haiku 4.5\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    }\n  ]\n}"
      },
      {
        "title": "2B. 添加 newcli-aws（Claude AWS 特价线路）⭐",
        "body": "\"newcli-aws\": {\n  \"baseUrl\": \"https://code.newcli.com/claude/aws\",\n  \"apiKey\": \"<你的API_KEY>\",\n  \"api\": \"anthropic-messages\",\n  \"authHeader\": true,\n  \"models\": [\n    {\n      \"id\": \"claude-opus-4-6\",\n      \"name\": \"Claude Opus 4.6 (AWS)\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"claude-haiku-4-5-20251001\",\n      \"name\": \"Claude Haiku 4.5 (AWS)\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    }\n  ]\n}\n\n与 newcli 的唯一区别：baseUrl 从 .../claude 变为 .../claude/aws。模型列表、API Key、协议完全相同。\n推荐：重要 Agent（如运维、评审）优先使用 newcli-aws，主线路 newcli 作为备用。"
      },
      {
        "title": "2C. 添加 newcli-codex（GPT 系列）",
        "body": "\"newcli-codex\": {\n  \"baseUrl\": \"https://code.newcli.com/codex/v1\",\n  \"apiKey\": \"<你的API_KEY>\",\n  \"api\": \"openai-completions\",\n  \"models\": [\n    {\n      \"id\": \"gpt-5.3-codex\", \"name\": \"GPT-5.3 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.2\", \"name\": \"GPT-5.2\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.2-codex\", \"name\": \"GPT-5.2 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1\", \"name\": \"GPT-5.1\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1-codex\", \"name\": \"GPT-5.1 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1-codex-mini\", \"name\": \"GPT-5.1 Codex Mini\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1-codex-max\", \"name\": \"GPT-5.1 Codex Max\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 
0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5\", \"name\": \"GPT-5\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5-codex\", \"name\": \"GPT-5 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    }\n  ]\n}"
      },
      {
        "title": "2D. 添加 newcli-gemini（Gemini 系列）",
        "body": "\"newcli-gemini\": {\n  \"baseUrl\": \"https://code.newcli.com/gemini/v1beta\",\n  \"apiKey\": \"<你的API_KEY>\",\n  \"api\": \"google-generative-ai\",\n  \"models\": [\n    {\n      \"id\": \"gemini-3-pro\", \"name\": \"Gemini 3 Pro\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-pro-high\", \"name\": \"Gemini 3 Pro High\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-pro-preview\", \"name\": \"Gemini 3 Pro Preview\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-flash\", \"name\": \"Gemini 3 Flash\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-flash-preview\", \"name\": \"Gemini 3 Flash Preview\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-2.5-pro\", \"name\": \"Gemini 2.5 Pro\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-2.5-flash\", \"name\": \"Gemini 2.5 Flash\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      
\"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-2.5-flash-lite\", \"name\": \"Gemini 2.5 Flash Lite\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    }\n  ]\n}"
      },
      {
        "title": "四个 provider 的关键差异",
        "body": "参数newcli (Claude)newcli-aws (Claude AWS)newcli-codex (GPT)newcli-gemini (Gemini)baseUrl.../claude.../claude/aws.../codex/v1.../gemini/v1betaapianthropic-messagesanthropic-messagesopenai-completionsgoogle-generative-aiauthHeadertruetrue默认（Bearer）默认（x-goog-api-key）apiKey相同相同相同相同额度消耗1x1/24x ⭐1x1xcontextWindow200K200K128K1MmaxTokens81928192819265536"
      },
      {
        "title": "只添加你确认可用的模型",
        "body": "错误做法：把文档里所有模型都堆上去\n正确做法：只添加第一步中测试通过的模型\n\n添加不存在的模型不会导致崩溃，但 fallback 到它时会浪费一次请求超时，影响响应速度。"
      },
      {
        "title": "第三步：配置别名",
        "body": "在 agents.defaults.models 下为新模型添加别名，方便在聊天中用短名切换：\n\n{\n  \"agents\": {\n    \"defaults\": {\n      \"models\": {\n        \"newcli/claude-opus-4-6\": { \"alias\": \"claude-opus\" },\n        \"newcli/claude-haiku-4-5-20251001\": { \"alias\": \"claude-haiku\" },\n        \"newcli-aws/claude-opus-4-6\": { \"alias\": \"claude-opus-aws\" },\n        \"newcli-aws/claude-haiku-4-5-20251001\": { \"alias\": \"claude-haiku-aws\" },\n        \"newcli-codex/gpt-5.3-codex\": { \"alias\": \"gpt53\" },\n        \"newcli-codex/gpt-5.2\": { \"alias\": \"gpt52\" },\n        \"newcli-codex/gpt-5.2-codex\": { \"alias\": \"gpt52codex\" },\n        \"newcli-codex/gpt-5.1\": { \"alias\": \"gpt51\" },\n        \"newcli-codex/gpt-5.1-codex\": { \"alias\": \"gpt51codex\" },\n        \"newcli-codex/gpt-5.1-codex-mini\": { \"alias\": \"gpt51mini\" },\n        \"newcli-codex/gpt-5.1-codex-max\": { \"alias\": \"gpt51max\" },\n        \"newcli-codex/gpt-5\": { \"alias\": \"gpt5\" },\n        \"newcli-codex/gpt-5-codex\": { \"alias\": \"gpt5codex\" },\n        \"newcli-gemini/gemini-3-pro\": { \"alias\": \"gemini3pro\" },\n        \"newcli-gemini/gemini-3-pro-high\": { \"alias\": \"gemini3prohigh\" },\n        \"newcli-gemini/gemini-3-pro-preview\": { \"alias\": \"gemini3preview\" },\n        \"newcli-gemini/gemini-3-flash\": { \"alias\": \"gemini3flash\" },\n        \"newcli-gemini/gemini-3-flash-preview\": { \"alias\": \"gemini3flashpreview\" },\n        \"newcli-gemini/gemini-2.5-pro\": { \"alias\": \"gemini25pro\" },\n        \"newcli-gemini/gemini-2.5-flash\": { \"alias\": \"gemini25flash\" },\n        \"newcli-gemini/gemini-2.5-flash-lite\": { \"alias\": \"gemini25lite\" }\n      }\n    }\n  }\n}\n\n配置后用户可以在聊天中用 /model claude-opus、/model gpt53、/model gemini3pro 切换模型。"
      },
      {
        "title": "⚠️ 别名配置的唯一合法字段是 alias",
        "body": "agents.defaults.models.<model-id>.alias     <-- 唯一合法字段\nagents.defaults.models.<model-id>.reasoning <-- 非法！会导致 Gateway 崩溃！\nagents.defaults.models.<model-id>.xxx       <-- 任何其他字段都非法！\n\n这是一个已经发生过的事故：在别名配置里加了 \"reasoning\": true 导致 schema 校验失败，Gateway 崩溃循环 181 次。模型能力属性只能放在 models.providers 的模型定义里，不能放在别名里。"
      },
      {
        "title": "第四步：接入 Fallback 链",
        "body": "在 agents.defaults.model.fallbacks 中把新模型加到合适的位置：\n\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": {\n        \"primary\": \"minimax/MiniMax-M2.1\",\n        \"fallbacks\": [\n          \"newcli-aws/claude-haiku-4-5-20251001\",\n          \"minimax/MiniMax-M2.1\",\n          \"deepseek/deepseek-chat\",\n          \"qwen-portal/coder-model\",\n          \"newcli/claude-haiku-4-5-20251001\",\n          \"newcli-gemini/gemini-2.5-flash-lite\",\n          \"deepseek/deepseek-reasoner\",\n          \"qwen-portal/vision-model\"\n        ]\n      }\n    }\n  }\n}"
      },
      {
        "title": "Fallback 排序原则",
        "body": "AWS 线路优先，DeepSeek 兜底：\n\n位置模型为什么选它1newcli-aws/claude-haiku⭐ AWS 线路，消耗 1/242minimax/MiniMax-M2.1月费 (100 prompts/5h)3deepseek/deepseek-chatDeepSeek 按量付费4qwen-portal/coder-modelQwen 免费 2000/天5newcli/claude-haiku主线路备用6newcli-gemini/gemini-2.5-flash-liteGemini 最轻量7deepseek/deepseek-reasonerreasoning 备用8qwen-portal/vision-modelvision 备用\n\n其他模型（Claude Opus、GPT-5.3、Gemini 3 Pro 等高端模型）不放 fallback 链，需要时通过 /model <别名> 手动切换。图片生成模型绝不放入 fallback 链。"
      },
      {
        "title": "5.1 JSON 语法检查",
        "body": "python3 -c \"import json; json.load(open('$HOME/.openclaw/openclaw.json')); print('JSON OK')\""
      },
      {
        "title": "5.2 Schema 校验",
        "body": "openclaw doctor\n\n如果输出包含 Unrecognized key 就说明有非法字段，必须修复后才能重启。"
      },
      {
        "title": "5.3 重启 Gateway",
        "body": "# macOS\nlaunchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway\n\n# 等 3 秒后确认状态\nsleep 3\nlaunchctl print gui/$(id -u)/ai.openclaw.gateway | grep -E \"job state|last exit\"\n\n期望看到：\n\nlast exit code = 0\njob state = running\n\n如果 last exit code = 1 且 job state 不是 running，检查错误日志：\n\ntail -20 ~/.openclaw/logs/gateway.err.log"
      },
      {
        "title": "5.4 功能验证",
        "body": "在任意已绑定的聊天中测试三个 provider：\n\n/model claude-opus    # 测试 Claude\n/model gpt53          # 测试 GPT\n/model gemini3pro     # 测试 Gemini\n/model Minimax        # 切回主力"
      },
      {
        "title": "问题：所有模型都返回\"暂不支持\"",
        "body": "可能原因 1：API Key 过期或余额不足 → 登录 NewCLI 后台检查\n可能原因 2：并发限制，已有其他客户端占用 → 关闭其他使用同一 Key 的进程\n可能原因 3：服务临时维护 → 稍后再试"
      },
      {
        "title": "问题：Gateway 启动后立刻崩溃",
        "body": "最可能原因：配置中有非法字段\n诊断：tail -20 ~/.openclaw/logs/gateway.err.log，找 Unrecognized key\n修复：删除非法字段，或运行 openclaw doctor --fix"
      },
      {
        "title": "问题：部分 provider 能用，部分不行",
        "body": "先检查 baseUrl 和 api 协议是否匹配：\n\nProvider正确 baseUrl正确 apinewclihttps://code.newcli.com/claudeanthropic-messagesnewcli-awshttps://code.newcli.com/claude/awsanthropic-messagesnewcli-codexhttps://code.newcli.com/codex/v1openai-completionsnewcli-geminihttps://code.newcli.com/gemini/v1betagoogle-generative-ai\n\n检查 apiKey：四个 provider 应使用相同的 Key。"
      },
      {
        "title": "问题：模型能切换但回复为空或报错",
        "body": "Claude (newcli)：\n\n正确 baseUrl：https://code.newcli.com/claude（OpenClaw 自动拼接 /v1/messages）\n错误：https://code.newcli.com/claude/v1（变成 /claude/v1/v1/messages）\n\n\nGPT (newcli-codex)：\n\n正确 baseUrl：https://code.newcli.com/codex/v1（OpenClaw 自动拼接 /chat/completions）\n错误：https://code.newcli.com/codex/v1/chat/completions（重复拼接）\n\n\nGemini (newcli-gemini)：\n\n正确 baseUrl：https://code.newcli.com/gemini/v1beta（OpenClaw 自动拼接 /models/<id>:streamGenerateContent）\n错误：https://code.newcli.com/gemini（缺少 /v1beta）\n错误：https://code.newcli.com/gemini/v1beta/models（重复 /models）"
      },
      {
        "title": "问题：GPT 模型返回 \"403 Your request was blocked\"",
        "body": "原因：NewCLI 的 Codex 端点有 Cloudflare WAF 防护，当 OpenClaw 发送大 context（>100K tokens）请求时容易触发拦截\n现象：curl 小请求能通，但 OpenClaw 实际使用时被 403\n临时方案：暂不将 GPT 模型放入 fallback 链，避免浪费 failover 时间；需要时通过 /model gpt53 手动切换，小 context 场景下可能可用\n根本解决：联系 NewCLI 服务商确认 Codex 端点的 Cloudflare 规则"
      },
      {
        "title": "问题：Gemini 返回 \"Request contains an invalid argument\"",
        "body": "确保 api 字段为 \"google-generative-ai\"，不是 \"openai-completions\"\nOpenClaw 会自动构造正确的 Google Generative AI 格式请求"
      },
      {
        "title": "变更记录",
        "body": "日期版本变更内容变更人2026-02-08v1.0创建 NewCLI provider 配置指南（Claude 系列）jooey (via Claude Code)2026-02-08v2.0合并 newcli-codex (GPT 系列) 配置指南ConfigBot (via OpenClaw with Opus 4.6)2026-02-08v3.0合并 newcli-gemini (Gemini 系列) 配置指南ConfigBot (via OpenClaw with Opus 4.6)2026-02-08v3.1添加 Gemini 生图模型；精简 fallback 链（每 provider 一个最轻量模型）ConfigBot (via OpenClaw with Opus 4.6)2026-02-08v3.2记录 GPT 403 问题；从 fallback 链移除 GPT 模型ConfigBot (via OpenClaw with Opus 4.6)2026-02-08v4.0新增 newcli-aws provider（AWS 特价线路，消耗 1/24）；更新 fallback 策略；更新额度信息ConfigBot (via OpenClaw with Opus 4.6)"
      }
    ],
    "body": "配置 NewCLI Provider（code.newcli.com 模型代理源）\n\nNewCLI (FoxCode) 是一个模型代理服务，通过统一的账户和 API Key 提供三类模型的访问：\n\nProvider\t模型系列\tAPI 协议\tBase URL\t备注\nnewcli\tClaude (Anthropic)\tanthropic-messages\tcode.newcli.com/claude\t主线路\nnewcli-aws\tClaude (Anthropic)\tanthropic-messages\tcode.newcli.com/claude/aws\t⭐ AWS 特价线路，消耗 1/24\nnewcli-codex\tGPT (OpenAI)\topenai-completions\tcode.newcli.com/codex/v1\t\nnewcli-gemini\tGemini (Google)\tgoogle-generative-ai\tcode.newcli.com/gemini/v1beta\t\n\n为什么是四个 provider？ 虽然是同一个服务商、同一个 API Key，但：\n\nClaude、GPT、Gemini 使用不同的 API 协议和端点，必须分开\nClaude 有两条线路（主线路 vs AWS 特价线路），消耗倍率不同，也需要分开配置\n⭐ AWS 特价线路（重要）\n\n服务商提供了独家 AWS 线路：https://code.newcli.com/claude/aws\n\n对比\t主线路 (newcli)\tAWS 线路 (newcli-aws)\nBase URL\t.../claude\t.../claude/aws\n额度消耗\t1x\t1/24x (同样额度可用 24 倍)\n模型\t相同\t相同\nAPI 协议\t相同\t相同\nAPI Key\t相同\t相同\n\n策略：所有重要任务优先使用 newcli-aws，主线路 newcli 作为备用。\n\n⚠️ 服务商声明：严禁分发 AWS 线路地址，发现封禁不退款。\n\n如果觉得这个 Skill 有用，请用我的邀请码注册： https://foxcode.rjj.cc/auth/register?aff=7WTAV8R\n\n前置条件\n\n你需要从 NewCLI 服务获取以下信息：\n\n项目\t说明\nAPI Key\t格式为 sk-ant-oat01-... 的密钥（三个 provider 共用同一个）\n可用模型列表\t向服务商确认，或查看账户统计页\n\n注意：NewCLI 的 /v1/models 接口有 Cloudflare 防护，无法通过 curl 直接获取模型列表。请以账户后台的实际统计为准，不要依赖文档中的\"支持列表\"——文档列出的模型不一定全部对你的账户开放。\n\n第一步：确认可用模型\n\n这一步不能跳过。 不要把文档里列的模型全加上去，要以实际能调通的为准。\n\n1A. 
测试 Claude 模型（newcli / newcli-aws）\n\n用 Anthropic messages 格式测试：\n\n# 测试主线路\ncurl -s --max-time 15 https://code.newcli.com/claude/v1/messages \\\n  -H \"x-api-key: <你的API_KEY>\" \\\n  -H \"anthropic-version: 2023-06-01\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"<MODEL_ID>\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"max_tokens\":10}'\n\n# 测试 AWS 特价线路\ncurl -s --max-time 15 https://code.newcli.com/claude/aws/v1/messages \\\n  -H \"x-api-key: <你的API_KEY>\" \\\n  -H \"anthropic-version: 2023-06-01\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"<MODEL_ID>\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"max_tokens\":10}'\n\n\n注意：baseUrl 中主线路写 /claude，AWS 线路写 /claude/aws（OpenClaw 自动拼接 /v1/messages）。\n\n如果返回正常的 JSON 响应（含 content）= 可用。 如果返回 {\"error\":{\"message\":\"暂不支持\"}} 或 \"未开放\" = 该模型不可用。\n\n1B. 测试 GPT 模型（newcli-codex）\n\n用 OpenAI completions 格式测试：\n\ncurl -s --max-time 15 https://code.newcli.com/codex/v1/chat/completions \\\n  -H \"Authorization: Bearer <你的API_KEY>\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"<MODEL_ID>\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"max_tokens\":10}'\n\n\n如果返回正常的 JSON 响应（含 choices）= 可用。 如果返回错误或超时 = 该模型不可用。\n\n1C. 
测试 Gemini 模型（newcli-gemini）\n\n用 Google Generative AI 格式测试：\n\ncurl -s --max-time 15 \\\n  \"https://code.newcli.com/gemini/v1beta/models/<MODEL_ID>:generateContent\" \\\n  -H \"x-goog-api-key: <你的API_KEY>\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"contents\":[{\"role\":\"user\",\"parts\":[{\"text\":\"hi\"}]}]}'\n\n\n注意：Gemini 端点的 URL 格式与 Claude/GPT 不同——模型名嵌入在 URL 路径中，而不是请求体中。\n\n如果返回正常的 JSON 响应（含 candidates）= 可用。 如果返回 {\"error\":{\"message\":\"模型未开放\"}} = 该模型不可用。\n\n已知可用模型（截至 2026-02-08）\nClaude 系列（newcli）\n模型 ID\t名称\tContext\t说明\nclaude-opus-4-6\tClaude Opus 4.6\t200K\t最强，适合复杂任务\nclaude-haiku-4-5-20251001\tClaude Haiku 4.5\t200K\t轻量快速，适合简单任务\n\n其他模型如 claude-sonnet-4-20250514 等在文档中列出但实测可能返回\"未开放\"，以你账户的实际情况为准。\n\nGPT 系列（newcli-codex）\n模型 ID\t名称\tContext\t说明\ngpt-5.3-codex\tGPT-5.3 Codex\t128K\t最新版本\ngpt-5.2\tGPT-5.2\t128K\t基础版\ngpt-5.2-codex\tGPT-5.2 Codex\t128K\t代码增强版\ngpt-5.1\tGPT-5.1\t128K\t基础版\ngpt-5.1-codex\tGPT-5.1 Codex\t128K\t代码增强版\ngpt-5.1-codex-mini\tGPT-5.1 Codex Mini\t128K\t轻量版\ngpt-5.1-codex-max\tGPT-5.1 Codex Max\t128K\t增强版\ngpt-5\tGPT-5\t128K\t基础版\ngpt-5-codex\tGPT-5 Codex\t128K\t代码增强版\nGemini 系列（newcli-gemini）— 文本对话模型\n模型 ID\t名称\tContext\treasoning\t说明\ngemini-3-pro\tGemini 3 Pro\t1M\t✅\t最新旗舰\ngemini-3-pro-high\tGemini 3 Pro High\t1M\t✅\t旗舰增强版\ngemini-3-pro-preview\tGemini 3 Pro Preview\t1M\t✅\t预览版\ngemini-3-flash\tGemini 3 Flash\t1M\t❌\t快速版\ngemini-3-flash-preview\tGemini 3 Flash Preview\t1M\t❌\t快速预览版\ngemini-2.5-pro\tGemini 2.5 Pro\t1M\t✅\t上一代旗舰\ngemini-2.5-flash\tGemini 2.5 Flash\t1M\t❌\t上一代快速版\ngemini-2.5-flash-lite\tGemini 2.5 Flash Lite\t1M\t❌\t轻量版\nGemini 系列（newcli-gemini）— 图片生成模型\n\n这些模型用于生成图片，不要加入 fallback 链，但可以通过 /model 命令手动切换使用。\n\n基础分辨率（默认）：\n\n模型 ID\t说明\ngemini-3-pro-image\t默认比例\ngemini-3-pro-image-3x2\t横向 3:2\ngemini-3-pro-image-2x3\t纵向 2:3\ngemini-3-pro-image-3x4\t纵向 3:4\ngemini-3-pro-image-4x3\t横向 4:3\ngemini-3-pro-image-4x5\t纵向 4:5\ngemini-3-pro-image-5x4\t横向 5:4\ngemini-3-pro-image-9x16\t竖屏 
9:16\ngemini-3-pro-image-16x9\t宽屏 16:9\ngemini-3-pro-image-21x9\t超宽 21:9\n\n2K 分辨率： 模型 ID 加 -2k 前缀，如 gemini-3-pro-image-2k、gemini-3-pro-image-2k-16x9 等。\n\n4K 分辨率： 模型 ID 加 -4k 前缀，如 gemini-3-pro-image-4k、gemini-3-pro-image-4k-16x9 等。\n\n⚠️ 图片生成模型不要加入 fallback 链——它们不适合文本对话，放进 fallback 会导致对话请求被错误路由到图片生成模型。需要生图时通过 /model gemini-3-pro-image 手动切换。\n\n第二步：添加 Provider\n\n在 ~/.openclaw/openclaw.json 的 models.providers 下添加三个 provider。\n\n2A. 添加 newcli（Claude 主线路）\n\"newcli\": {\n  \"baseUrl\": \"https://code.newcli.com/claude\",\n  \"apiKey\": \"<你的API_KEY>\",\n  \"api\": \"anthropic-messages\",\n  \"authHeader\": true,\n  \"models\": [\n    {\n      \"id\": \"claude-opus-4-6\",\n      \"name\": \"Claude Opus 4.6\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"claude-haiku-4-5-20251001\",\n      \"name\": \"Claude Haiku 4.5\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    }\n  ]\n}\n\n2B. 
添加 newcli-aws（Claude AWS 特价线路）⭐\n\"newcli-aws\": {\n  \"baseUrl\": \"https://code.newcli.com/claude/aws\",\n  \"apiKey\": \"<你的API_KEY>\",\n  \"api\": \"anthropic-messages\",\n  \"authHeader\": true,\n  \"models\": [\n    {\n      \"id\": \"claude-opus-4-6\",\n      \"name\": \"Claude Opus 4.6 (AWS)\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"claude-haiku-4-5-20251001\",\n      \"name\": \"Claude Haiku 4.5 (AWS)\",\n      \"reasoning\": false,\n      \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 200000,\n      \"maxTokens\": 8192\n    }\n  ]\n}\n\n\n与 newcli 的唯一区别：baseUrl 从 .../claude 变为 .../claude/aws。模型列表、API Key、协议完全相同。 推荐：重要 Agent（如运维、评审）优先使用 newcli-aws，主线路 newcli 作为备用。\n\n2C. 添加 newcli-codex（GPT 系列）\n\"newcli-codex\": {\n  \"baseUrl\": \"https://code.newcli.com/codex/v1\",\n  \"apiKey\": \"<你的API_KEY>\",\n  \"api\": \"openai-completions\",\n  \"models\": [\n    {\n      \"id\": \"gpt-5.3-codex\", \"name\": \"GPT-5.3 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.2\", \"name\": \"GPT-5.2\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.2-codex\", \"name\": \"GPT-5.2 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1\", 
\"name\": \"GPT-5.1\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1-codex\", \"name\": \"GPT-5.1 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1-codex-mini\", \"name\": \"GPT-5.1 Codex Mini\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5.1-codex-max\", \"name\": \"GPT-5.1 Codex Max\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5\", \"name\": \"GPT-5\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    },\n    {\n      \"id\": \"gpt-5-codex\", \"name\": \"GPT-5 Codex\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 128000, \"maxTokens\": 8192\n    }\n  ]\n}\n\n2D. 
Add newcli-gemini (Gemini series)\n\"newcli-gemini\": {\n  \"baseUrl\": \"https://code.newcli.com/gemini/v1beta\",\n  \"apiKey\": \"<YOUR_API_KEY>\",\n  \"api\": \"google-generative-ai\",\n  \"models\": [\n    {\n      \"id\": \"gemini-3-pro\", \"name\": \"Gemini 3 Pro\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-pro-high\", \"name\": \"Gemini 3 Pro High\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-pro-preview\", \"name\": \"Gemini 3 Pro Preview\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-flash\", \"name\": \"Gemini 3 Flash\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-3-flash-preview\", \"name\": \"Gemini 3 Flash Preview\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-2.5-pro\", \"name\": \"Gemini 2.5 Pro\",\n      \"reasoning\": true, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-2.5-flash\", \"name\": \"Gemini 2.5 Flash\",\n      \"reasoning\": false, \"input\": 
[\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    },\n    {\n      \"id\": \"gemini-2.5-flash-lite\", \"name\": \"Gemini 2.5 Flash Lite\",\n      \"reasoning\": false, \"input\": [\"text\"],\n      \"cost\": { \"input\": 0, \"output\": 0, \"cacheRead\": 0, \"cacheWrite\": 0 },\n      \"contextWindow\": 1000000, \"maxTokens\": 65536\n    }\n  ]\n}\n\nKey differences between the four providers\nParameter\tnewcli (Claude)\tnewcli-aws (Claude AWS)\tnewcli-codex (GPT)\tnewcli-gemini (Gemini)\nbaseUrl\t.../claude\t.../claude/aws\t.../codex/v1\t.../gemini/v1beta\napi\tanthropic-messages\tanthropic-messages\topenai-completions\tgoogle-generative-ai\nauthHeader\ttrue\ttrue\tdefault (Bearer)\tdefault (x-goog-api-key)\napiKey\tsame\tsame\tsame\tsame\nQuota consumption\t1x\t1/24x ⭐\t1x\t1x\ncontextWindow\t200K\t200K\t128K\t1M\nmaxTokens\t8192\t8192\t8192\t65536\nOnly add models you have confirmed to work\n\nWrong approach: dump in every model listed in the docs. Right approach: add only the models that passed the tests in Step 1.\n\nAdding a nonexistent model will not cause a crash, but each fallback to it wastes a request timeout and slows responses.\n\nStep 3: Configure Aliases\n\nAdd aliases for the new models under agents.defaults.models so they can be switched to with short names in chat:\n\n{\n  \"agents\": {\n    \"defaults\": {\n      \"models\": {\n        \"newcli/claude-opus-4-6\": { \"alias\": \"claude-opus\" },\n        \"newcli/claude-haiku-4-5-20251001\": { \"alias\": \"claude-haiku\" },\n        \"newcli-aws/claude-opus-4-6\": { \"alias\": \"claude-opus-aws\" },\n        \"newcli-aws/claude-haiku-4-5-20251001\": { \"alias\": \"claude-haiku-aws\" },\n        \"newcli-codex/gpt-5.3-codex\": { \"alias\": \"gpt53\" },\n        \"newcli-codex/gpt-5.2\": { \"alias\": \"gpt52\" },\n        \"newcli-codex/gpt-5.2-codex\": { \"alias\": \"gpt52codex\" },\n        \"newcli-codex/gpt-5.1\": { \"alias\": \"gpt51\" },\n        \"newcli-codex/gpt-5.1-codex\": { \"alias\": \"gpt51codex\" },\n        \"newcli-codex/gpt-5.1-codex-mini\": { \"alias\": \"gpt51mini\" },\n        \"newcli-codex/gpt-5.1-codex-max\": { \"alias\": \"gpt51max\" },\n        \"newcli-codex/gpt-5\": { \"alias\": \"gpt5\" },\n      
  \"newcli-codex/gpt-5-codex\": { \"alias\": \"gpt5codex\" },\n        \"newcli-gemini/gemini-3-pro\": { \"alias\": \"gemini3pro\" },\n        \"newcli-gemini/gemini-3-pro-high\": { \"alias\": \"gemini3prohigh\" },\n        \"newcli-gemini/gemini-3-pro-preview\": { \"alias\": \"gemini3preview\" },\n        \"newcli-gemini/gemini-3-flash\": { \"alias\": \"gemini3flash\" },\n        \"newcli-gemini/gemini-3-flash-preview\": { \"alias\": \"gemini3flashpreview\" },\n        \"newcli-gemini/gemini-2.5-pro\": { \"alias\": \"gemini25pro\" },\n        \"newcli-gemini/gemini-2.5-flash\": { \"alias\": \"gemini25flash\" },\n        \"newcli-gemini/gemini-2.5-flash-lite\": { \"alias\": \"gemini25lite\" }\n      }\n    }\n  }\n}\n\n\nOnce configured, users can switch models in chat with /model claude-opus, /model gpt53, or /model gemini3pro.\n\n⚠️ The only legal field in an alias entry is alias\nagents.defaults.models.<model-id>.alias     <-- the only legal field\nagents.defaults.models.<model-id>.reasoning <-- illegal! Crashes the Gateway!\nagents.defaults.models.<model-id>.xxx       <-- any other field is illegal!\n\n\nThis incident has actually happened: adding \"reasoning\": true to an alias entry failed schema validation and sent the Gateway into a crash loop 181 times. Model capability attributes belong only in the model definitions under models.providers, never in an alias entry.\n\nStep 4: Wire into the Fallback Chain\n\nIn agents.defaults.model.fallbacks, insert the new models at the appropriate positions:\n\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": {\n        \"primary\": \"minimax/MiniMax-M2.1\",\n        \"fallbacks\": [\n          \"newcli-aws/claude-haiku-4-5-20251001\",\n          \"minimax/MiniMax-M2.1\",\n          \"deepseek/deepseek-chat\",\n          \"qwen-portal/coder-model\",\n          \"newcli/claude-haiku-4-5-20251001\",\n          \"newcli-gemini/gemini-2.5-flash-lite\",\n          \"deepseek/deepseek-reasoner\",\n          \"qwen-portal/vision-model\"\n        ]\n      }\n    }\n  }\n}\n\nFallback ordering principles\n\nAWS route first, DeepSeek as the safety net:\n\nPosition\tModel\tWhy\n1\tnewcli-aws/claude-haiku\t⭐ AWS route, 1/24 consumption\n2\tminimax/MiniMax-M2.1\tMonthly plan (100 prompts/5h)\n3\tdeepseek/deepseek-chat\tDeepSeek pay-as-you-go\n4\tqwen-portal/coder-model\tQwen free 
2000/day\n5\tnewcli/claude-haiku\tPrimary-route backup\n6\tnewcli-gemini/gemini-2.5-flash-lite\tLightest Gemini\n7\tdeepseek/deepseek-reasoner\tReasoning backup\n8\tqwen-portal/vision-model\tVision backup\n\nThe other models (high-end ones such as Claude Opus, GPT-5.3, and Gemini 3 Pro) stay out of the fallback chain; switch to them with /model <alias> when needed. Image-generation models must never be placed in the fallback chain.\n\nStep 5: Verify\n5.1 JSON syntax check\npython3 -c \"import json; json.load(open('$HOME/.openclaw/openclaw.json')); print('JSON OK')\"\n\n5.2 Schema validation\nopenclaw doctor\n\n\nIf the output contains Unrecognized key, the config has an illegal field that must be fixed before restarting.\n\n5.3 Restart the Gateway\n# macOS\nlaunchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway\n\n# wait 3 seconds, then confirm status\nsleep 3\nlaunchctl print gui/$(id -u)/ai.openclaw.gateway | grep -E \"job state|last exit\"\n\n\nExpected output:\n\nlast exit code = 0\njob state = running\n\n\nIf last exit code = 1 and job state is not running, check the error log:\n\ntail -20 ~/.openclaw/logs/gateway.err.log\n\n5.4 Functional verification\n\nIn any bound chat, test the providers:\n\n/model claude-opus    # test Claude\n/model gpt53          # test GPT\n/model gemini3pro     # test Gemini\n/model Minimax        # switch back to the primary\n\nTroubleshooting\nProblem: every model returns \"not currently supported\"\nPossible cause 1: API key expired or out of balance → log in to the NewCLI console to check\nPossible cause 2: concurrency limit hit because another client holds the key → close other processes using the same key\nPossible cause 3: temporary service maintenance → retry later\nProblem: Gateway crashes right after startup\nMost likely cause: an illegal field in the config\nDiagnose: tail -20 ~/.openclaw/logs/gateway.err.log and look for Unrecognized key\nFix: delete the illegal field, or run openclaw doctor --fix\nProblem: some providers work, others do not\n\nFirst check that baseUrl and the api protocol match:\n\nProvider\tCorrect baseUrl\tCorrect api\nnewcli\thttps://code.newcli.com/claude\tanthropic-messages\nnewcli-aws\thttps://code.newcli.com/claude/aws\tanthropic-messages\nnewcli-codex\thttps://code.newcli.com/codex/v1\topenai-completions\nnewcli-gemini\thttps://code.newcli.com/gemini/v1beta\tgoogle-generative-ai\n\nCheck the apiKey: all four providers should use the same key.\n\nProblem: models switch but replies are empty or error out\nClaude (newcli):\nCorrect baseUrl: https://code.newcli.com/claude (OpenClaw appends /v1/messages automatically)\nWrong: https://code.newcli.com/claude/v1 (becomes /claude/v1/v1/messages)\nGPT (newcli-codex):\nCorrect baseUrl: https://code.newcli.com/codex/v1 (OpenClaw automatically appends 
/chat/completions)\nWrong: https://code.newcli.com/codex/v1/chat/completions (the path gets appended twice)\nGemini (newcli-gemini):\nCorrect baseUrl: https://code.newcli.com/gemini/v1beta (OpenClaw appends /models/<id>:streamGenerateContent automatically)\nWrong: https://code.newcli.com/gemini (missing /v1beta)\nWrong: https://code.newcli.com/gemini/v1beta/models (duplicate /models)\nProblem: GPT models return \"403 Your request was blocked\"\nCause: NewCLI's Codex endpoint sits behind a Cloudflare WAF, and large-context requests from OpenClaw (>100K tokens) tend to trigger the block\nSymptom: small curl requests go through, but real OpenClaw usage gets a 403\nWorkaround: keep GPT models out of the fallback chain for now to avoid wasting failover time; switch manually with /model gpt53 when needed, which may work in small-context scenarios\nRoot fix: ask the NewCLI provider to review the Cloudflare rules on the Codex endpoint\nProblem: Gemini returns \"Request contains an invalid argument\"\nMake sure the api field is \"google-generative-ai\", not \"openai-completions\"\nOpenClaw then constructs correctly formatted Google Generative AI requests\nChangelog\nDate\tVersion\tChanges\tAuthor\n2026-02-08\tv1.0\tCreated the NewCLI provider configuration guide (Claude series)\tjooey (via Claude Code)\n2026-02-08\tv2.0\tMerged the newcli-codex (GPT series) configuration guide\tConfigBot (via OpenClaw with Opus 4.6)\n2026-02-08\tv3.0\tMerged the newcli-gemini (Gemini series) configuration guide\tConfigBot (via OpenClaw with Opus 4.6)\n2026-02-08\tv3.1\tAdded Gemini image-generation models; trimmed the fallback chain to one lightweight model per provider\tConfigBot (via OpenClaw with Opus 4.6)\n2026-02-08\tv3.2\tDocumented the GPT 403 issue; removed GPT models from the fallback chain\tConfigBot (via OpenClaw with Opus 4.6)\n2026-02-08\tv4.0\tAdded the newcli-aws provider (AWS discounted route, 1/24 consumption); updated the fallback strategy; updated quota info\tConfigBot (via OpenClaw with Opus 4.6)"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/jooey/add-newcli-provider",
    "publisherUrl": "https://clawhub.ai/jooey/add-newcli-provider",
    "owner": "jooey",
    "version": "3.2.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/add-newcli-provider",
    "downloadUrl": "https://openagent3.xyz/downloads/add-newcli-provider",
    "agentUrl": "https://openagent3.xyz/skills/add-newcli-provider/agent",
    "manifestUrl": "https://openagent3.xyz/skills/add-newcli-provider/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/add-newcli-provider/agent.md"
  }
}