{
  "schemaVersion": "1.0",
  "item": {
    "slug": "clawhub-skill-2",
    "name": "Clawhub Skill",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/pathemata-mathemata/clawhub-skill-2",
    "canonicalUrl": "https://clawhub.ai/pathemata-mathemata/clawhub-skill-2",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/clawhub-skill-2",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=clawhub-skill-2",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "USAGE.txt"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
        "contentDisposition": "attachment; filename=\"4claw-imageboard-1.0.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/clawhub-skill-2"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/clawhub-skill-2",
    "agentPageUrl": "https://openagent3.xyz/skills/clawhub-skill-2/agent",
    "manifestUrl": "https://openagent3.xyz/skills/clawhub-skill-2/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/clawhub-skill-2/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Xrouter",
        "body": "Xrouter is an open-source inference router that sits between OpenClaw and your LLM providers. It uses a fast, hardware-aware classifier to route each request to the most cost-effective model that can handle the task.\n\nThis project is MIT licensed. See the MIT License.\n\nCore Features\n\nOpenAI-compatible reverse proxy at POST /v1/chat/completions.\n3-tier classifier (0 = cheap, 1 = medium, 2 = frontier) with early stream cutoff.\nHardware detection helper to recommend local engine.\nProvider selection wizard to choose local and cloud endpoints.\nCache layer with Redis or in-memory LRU fallback.\nFull cloud mode when local inference is not viable.\nToken tracking dashboard at /dashboard.\n\nWorkflow\n\nflowchart TD\n  A[\"Client / OpenClaw request\"] --> B[\"Router (OpenAI-compatible)\"]\n  B --> C{\"Classifier enabled?\"}\n  C -->|No| F[\"Route to Frontier provider\"]\n  C -->|Yes| D[\"Classifier (0 / 1 / 2)\"]\n  D --> E{\"Decision\"}\n  E -->|0| G[\"Route to Cheap provider\"]\n  E -->|1| M[\"Route to Medium provider\"]\n  E -->|2| F\n  G --> H[\"Provider adapter (auto or explicit)\"]\n  M --> H\n  F --> H\n  H --> I[\"Upstream API call\"]\n  I --> J[\"Stream/Response back to client\"]\n\nRepository Layout\n\nsrc/server.js: router and streaming proxy.\nsrc/classifier.js: classifier call and retry logic.\nsrc/config.js: configuration and env parsing.\nsrc/cache.js: Redis + LRU cache.\nsrc/token_tracker.js: token tracking.\nscripts/check_hw.js: hardware detection.\nscripts/configure_providers.js: interactive provider setup.\n\nRequirements\n\nNode.js 20+.\nLocal classifier engine (optional).\nA frontier provider endpoint (required).\n\nQuickstart\n\nInstall dependencies.\n(Optional) Start a local model server.\nRun the configuration wizard.\nStart the router.\n\nnpm install\nnpm run configure\nnpm run dev\n\nHow To Use\n\nStart your local model server (optional but recommended).\nRun the wizard to configure providers and models.\nStart the 
router.\nSend OpenAI-compatible requests to the router.\nInspect routing decisions in response headers or the dashboard.\n\nExample local setup (Ollama):\n\nollama pull llama3.1\nollama run llama3.1\n\nRun the wizard:\n\nnpm run configure\n\nStart the router:\n\nnpm run dev\n\nTest a request:\n\ncurl -i http://localhost:3000/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"any\",\"messages\":[{\"role\":\"user\",\"content\":\"Fix this sentence: I has a apple.\"}]}'\n\nLook for these headers:\n\nX-Xrouter-decision: 0, 1, or 2.\nX-Xrouter-upstream: cheap, medium, or frontier.\n\nOpen the dashboard:\n\nhttp://localhost:3000/dashboard\n\nRaw usage JSON:\n\nhttp://localhost:3000/usage\n\nProvider Selection (Terminal Wizard)\nRun:\n\nnpm run configure\n\nThe wizard:\n\nScans hardware and recommends a local engine.\nSuggests a local classifier model.\nLets you choose provider base URLs, API keys, and model overrides for cheap/medium/frontier routes.\nWrites upstreams.json and optionally updates .env.\n\nQuick Start Mode\n\nIf your machine can run a local model, you can choose Quick Start.\nQuick Start auto-configures the local classifier.\nCheap route always uses the same local model as the classifier to avoid Ollama model swapping.\nYou only need to choose medium and frontier providers/models.\nOn Apple Silicon (Ollama), the wizard lists installed Ollama models and can auto-download a recommended model.\n\nRouting Behavior\n\nThe classifier is called for each uncached request.\nThe first 0, 1, or 2 token returned decides the route.\nIf classification fails, the router defaults to the frontier route.\nWhen the classifier is enabled, cheap, medium, and frontier routes must be configured.\n\nCompatibility\n\nThe router accepts OpenAI-style requests and translates when needed.\nProvider type can be explicit (xrouter, openai_compatible, openai, anthropic, gemini, cohere, azure_openai, mistral, groq, together, perplexity) or auto.\nauto 
infers the provider adapter from the base URL or API key.\nProviders that expose OpenAI-compatible endpoints use the openai_compatible adapter.\nAnthropic/Gemini/Cohere streaming is translated into OpenAI-style SSE chunks.\nNon-OpenAI adapters currently support text-only messages and basic sampling params (temperature/top_p/stop).\n\nToken Tracking Dashboard\n\nGET /usage: returns cumulative token usage for cheap, medium, and frontier.\nGET /dashboard: UI that displays token split and totals.\nLocal usage is counted inside cheap when cheap uses the local model.\n\nEnvironment Summary\n\nHOST: bind host, default 0.0.0.0.\nPORT: bind port, default 3000.\nROUTER_API_KEY: require Authorization: Bearer <key>.\nLOG_LEVEL: log level (debug/info/warn/error).\nLOG_TO_FILE: set true to write logs to files.\nLOG_DIR: directory for log files (default ./logs).\nCLASSIFIER_ENABLED: set false to disable local classification.\nCLASSIFIER_BASE_URL: OpenAI-compatible classifier endpoint.\nCLASSIFIER_MODEL: classifier model name.\nCLASSIFIER_SYSTEM_PROMPT: classifier prompt (single line).\nCLASSIFIER_TIMEOUT_MS: classifier timeout.\nCLASSIFIER_FORCE_STREAM: force streaming classifier request.\nCLASSIFIER_WARMUP: warm the classifier on server start.\nCLASSIFIER_WARMUP_DELAY_MS: delay before warmup request (ms).\nCLASSIFIER_KEEP_ALIVE_MS: keep-alive interval for classifier warmup (ms).\nCLASSIFIER_LOADING_RETRY_MS: delay between retries when the model is loading.\nCLASSIFIER_LOADING_MAX_RETRIES: max retries when the model is loading.\nCHEAP_BASE_URL: optional, defaults to classifier base URL.\nCHEAP_API_KEY: cheap provider API key.\nCHEAP_MODEL: optional model override for cheap route.\nCHEAP_PROVIDER: provider type for cheap route (auto if empty).\nCHEAP_HEADERS: optional JSON headers for cheap provider (stringified object).\nCHEAP_DEPLOYMENT: Azure deployment override for cheap route.\nCHEAP_API_VERSION: Azure API version override for cheap route.\nMEDIUM_BASE_URL: required when 
classifier is enabled.\nMEDIUM_API_KEY: medium provider API key.\nMEDIUM_MODEL: optional model override for medium route.\nMEDIUM_PROVIDER: provider type for medium route (auto if empty).\nMEDIUM_HEADERS: optional JSON headers for medium provider (stringified object).\nMEDIUM_DEPLOYMENT: Azure deployment override for medium route.\nMEDIUM_API_VERSION: Azure API version override for medium route.\nFRONTIER_BASE_URL: OpenAI-compatible frontier endpoint.\nFRONTIER_API_KEY: frontier API key.\nFRONTIER_MODEL: optional model override for frontier route.\nFRONTIER_PROVIDER: provider type for frontier route (auto if empty).\nFRONTIER_HEADERS: optional JSON headers for frontier provider (stringified object).\nFRONTIER_DEPLOYMENT: Azure deployment override for frontier route.\nFRONTIER_API_VERSION: Azure API version override for frontier route.\nREDIS_URL: if set, enables Redis cache.\n\nLocal Model Installation & Run Guides\nOllama (best for Mac, easiest cross-platform)\n\nInstall: Ollama Quickstart\nPull a model: ollama pull llama3.1\nRun: ollama run llama3.1\nBase URL: http://localhost:11434\nRouter config:\nCLASSIFIER_BASE_URL=http://localhost:11434\nCLASSIFIER_MODEL=llama3.1\n\nvLLM (NVIDIA GPU)\n\nOpenAI server: vLLM OpenAI Server\nExample: vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123\nBase URL: http://localhost:8000\nRouter config:\nCLASSIFIER_BASE_URL=http://localhost:8000\nCLASSIFIER_MODEL=NousResearch/Meta-Llama-3-8B-Instruct\n\nTensorRT-LLM (NVIDIA, max speed)\n\nRepo: TensorRT-LLM\nServer: trtllm-serve\nBase URL: http://<host>:<port>\nRouter config:\nCLASSIFIER_BASE_URL=http://<host>:<port>\nCLASSIFIER_MODEL=<your model>\n\nllama.cpp (CPU/AMD fallback)\n\nRepo: llama.cpp\nExample: llama-server -m model.gguf --port 8080\nBase URL: http://localhost:8080\nRouter config:\nCLASSIFIER_BASE_URL=http://localhost:8080\nCLASSIFIER_MODEL=<gguf model name>\n\nDocker\nBuild and run the router with Redis:\n\ndocker compose -f 
deploy/docker-compose.yml up --build\n\nHardware Detection\nRun:\n\nnpm run check-hw\n\nThis prints the recommended engine:\n\ntensorrt-llm for large NVIDIA GPUs.\nvllm for standard NVIDIA GPUs.\nmlx for Apple Silicon.\nllama.cpp for CPU/AMD fallback.\n\nModel List Fetching\n\nThe wizard queries provider model list endpoints when possible.\nOpenAI-compatible: /v1/models\nAnthropic: /v1/models\nGemini: /v1beta/models\nCohere: /v1/models\nIf listing fails, the wizard falls back to scripts/cloud_model_catalog.json."
      }
    ],
    "body": "Xrouter\n\nXrouter is an open-source inference router that sits between OpenClaw and your LLM providers. It uses a fast, hardware-aware classifier to route each request to the most cost-effective model that can handle the task.\n\nThis project is MIT licensed. See the MIT License.\n\nCore Features\n\nOpenAI-compatible reverse proxy at POST /v1/chat/completions.\n3-tier classifier (0 = cheap, 1 = medium, 2 = frontier) with early stream cutoff.\nHardware detection helper to recommend local engine.\nProvider selection wizard to choose local and cloud endpoints.\nCache layer with Redis or in-memory LRU fallback.\nFull cloud mode when local inference is not viable.\nToken tracking dashboard at /dashboard.\n\nWorkflow\n\nflowchart TD\n  A[\"Client / OpenClaw request\"] --> B[\"Router (OpenAI-compatible)\"]\n  B --> C{\"Classifier enabled?\"}\n  C -->|No| F[\"Route to Frontier provider\"]\n  C -->|Yes| D[\"Classifier (0 / 1 / 2)\"]\n  D --> E{\"Decision\"}\n  E -->|0| G[\"Route to Cheap provider\"]\n  E -->|1| M[\"Route to Medium provider\"]\n  E -->|2| F\n  G --> H[\"Provider adapter (auto or explicit)\"]\n  M --> H\n  F --> H\n  H --> I[\"Upstream API call\"]\n  I --> J[\"Stream/Response back to client\"]\n\n\nRepository Layout\n\nsrc/server.js: router and streaming proxy.\nsrc/classifier.js: classifier call and retry logic.\nsrc/config.js: configuration and env parsing.\nsrc/cache.js: Redis + LRU cache.\nsrc/token_tracker.js: token tracking.\nscripts/check_hw.js: hardware detection.\nscripts/configure_providers.js: interactive provider setup.\n\nRequirements\n\nNode.js 20+.\nLocal classifier engine (optional).\nA frontier provider endpoint (required).\n\nQuickstart\n\nInstall dependencies.\n(Optional) Start a local model server.\nRun the configuration wizard.\nStart the router.\nnpm install\nnpm run configure\nnpm run dev\n\n\nHow To Use\n\nStart your local model server (optional but recommended).\nRun the wizard to configure providers and models.\nStart 
the router.\nSend OpenAI-compatible requests to the router.\nInspect routing decisions in response headers or the dashboard.\n\nExample local setup (Ollama):\n\nollama pull llama3.1\nollama run llama3.1\n\n\nRun the wizard:\n\nnpm run configure\n\n\nStart the router:\n\nnpm run dev\n\n\nTest a request:\n\ncurl -i http://localhost:3000/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"any\",\"messages\":[{\"role\":\"user\",\"content\":\"Fix this sentence: I has a apple.\"}]}'\n\n\nLook for these headers:\n\nX-Xrouter-decision: 0, 1, or 2.\nX-Xrouter-upstream: cheap, medium, or frontier.\n\nOpen the dashboard:\n\nhttp://localhost:3000/dashboard\n\nRaw usage JSON:\n\nhttp://localhost:3000/usage\n\nProvider Selection (Terminal Wizard) Run:\n\nnpm run configure\n\n\nThe wizard:\n\nScans hardware and recommends a local engine.\nSuggests a local classifier model.\nLets you choose provider base URLs, API keys, and model overrides for cheap/medium/frontier routes.\nWrites upstreams.json and optionally updates .env.\n\nQuick Start Mode\n\nIf your machine can run a local model, you can choose Quick Start.\nQuick Start auto-configures the local classifier.\nCheap route always uses the same local model as the classifier to avoid Ollama model swapping.\nYou only need to choose medium and frontier providers/models.\nOn Apple Silicon (Ollama), the wizard lists installed Ollama models and can auto-download a recommended model.\n\nRouting Behavior\n\nThe classifier is called for each uncached request.\nThe first 0, 1, or 2 token returned decides the route.\nIf classification fails, the router defaults to the frontier route.\nWhen the classifier is enabled, cheap, medium, and frontier routes must be configured.\n\nCompatibility\n\nThe router accepts OpenAI-style requests and translates when needed.\nProvider type can be explicit (xrouter, openai_compatible, openai, anthropic, gemini, cohere, azure_openai, mistral, groq, together, perplexity) or 
auto.\nauto infers the provider adapter from the base URL or API key.\nProviders that expose OpenAI-compatible endpoints use the openai_compatible adapter.\nAnthropic/Gemini/Cohere streaming is translated into OpenAI-style SSE chunks.\nNon-OpenAI adapters currently support text-only messages and basic sampling params (temperature/top_p/stop).\n\nToken Tracking Dashboard\n\nGET /usage: returns cumulative token usage for cheap, medium, and frontier.\nGET /dashboard: UI that displays token split and totals.\nLocal usage is counted inside cheap when cheap uses the local model.\n\nEnvironment Summary\n\nHOST: bind host, default 0.0.0.0.\nPORT: bind port, default 3000.\nROUTER_API_KEY: require Authorization: Bearer <key>.\nLOG_LEVEL: log level (debug/info/warn/error).\nLOG_TO_FILE: set true to write logs to files.\nLOG_DIR: directory for log files (default ./logs).\nCLASSIFIER_ENABLED: set false to disable local classification.\nCLASSIFIER_BASE_URL: OpenAI-compatible classifier endpoint.\nCLASSIFIER_MODEL: classifier model name.\nCLASSIFIER_SYSTEM_PROMPT: classifier prompt (single line).\nCLASSIFIER_TIMEOUT_MS: classifier timeout.\nCLASSIFIER_FORCE_STREAM: force streaming classifier request.\nCLASSIFIER_WARMUP: warm the classifier on server start.\nCLASSIFIER_WARMUP_DELAY_MS: delay before warmup request (ms).\nCLASSIFIER_KEEP_ALIVE_MS: keep-alive interval for classifier warmup (ms).\nCLASSIFIER_LOADING_RETRY_MS: delay between retries when the model is loading.\nCLASSIFIER_LOADING_MAX_RETRIES: max retries when the model is loading.\nCHEAP_BASE_URL: optional, defaults to classifier base URL.\nCHEAP_API_KEY: cheap provider API key.\nCHEAP_MODEL: optional model override for cheap route.\nCHEAP_PROVIDER: provider type for cheap route (auto if empty).\nCHEAP_HEADERS: optional JSON headers for cheap provider (stringified object).\nCHEAP_DEPLOYMENT: Azure deployment override for cheap route.\nCHEAP_API_VERSION: Azure API version override for cheap route.\nMEDIUM_BASE_URL: 
required when classifier is enabled.\nMEDIUM_API_KEY: medium provider API key.\nMEDIUM_MODEL: optional model override for medium route.\nMEDIUM_PROVIDER: provider type for medium route (auto if empty).\nMEDIUM_HEADERS: optional JSON headers for medium provider (stringified object).\nMEDIUM_DEPLOYMENT: Azure deployment override for medium route.\nMEDIUM_API_VERSION: Azure API version override for medium route.\nFRONTIER_BASE_URL: OpenAI-compatible frontier endpoint.\nFRONTIER_API_KEY: frontier API key.\nFRONTIER_MODEL: optional model override for frontier route.\nFRONTIER_PROVIDER: provider type for frontier route (auto if empty).\nFRONTIER_HEADERS: optional JSON headers for frontier provider (stringified object).\nFRONTIER_DEPLOYMENT: Azure deployment override for frontier route.\nFRONTIER_API_VERSION: Azure API version override for frontier route.\nREDIS_URL: if set, enables Redis cache.\n\nLocal Model Installation & Run Guides Ollama (best for Mac, easiest cross-platform)\n\nInstall: Ollama Quickstart\nPull a model: ollama pull llama3.1\nRun: ollama run llama3.1\nBase URL: http://localhost:11434\nRouter config: CLASSIFIER_BASE_URL=http://localhost:11434 CLASSIFIER_MODEL=llama3.1\n\nvLLM (NVIDIA GPU)\n\nOpenAI server: vLLM OpenAI Server\nExample: vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123\nBase URL: http://localhost:8000\nRouter config: CLASSIFIER_BASE_URL=http://localhost:8000 CLASSIFIER_MODEL=NousResearch/Meta-Llama-3-8B-Instruct\n\nTensorRT-LLM (NVIDIA, max speed)\n\nRepo: TensorRT-LLM\nServer: trtllm-serve\nBase URL: http://<host>:<port>\nRouter config: CLASSIFIER_BASE_URL=http://<host>:<port> CLASSIFIER_MODEL=<your model>\n\nllama.cpp (CPU/AMD fallback)\n\nRepo: llama.cpp\nExample: llama-server -m model.gguf --port 8080\nBase URL: http://localhost:8080\nRouter config: CLASSIFIER_BASE_URL=http://localhost:8080 CLASSIFIER_MODEL=<gguf model name>\n\nDocker Build and run the router with Redis:\n\ndocker compose -f 
deploy/docker-compose.yml up --build\n\n\nHardware Detection Run:\n\nnpm run check-hw\n\n\nThis prints the recommended engine:\n\ntensorrt-llm for large NVIDIA GPUs.\nvllm for standard NVIDIA GPUs.\nmlx for Apple Silicon.\nllama.cpp for CPU/AMD fallback.\n\nModel List Fetching\n\nThe wizard queries provider model list endpoints when possible.\nOpenAI-compatible: /v1/models\nAnthropic: /v1/models\nGemini: /v1beta/models\nCohere: /v1/models\nIf listing fails, the wizard falls back to scripts/cloud_model_catalog.json.\n\nStar History"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/pathemata-mathemata/clawhub-skill-2",
    "publisherUrl": "https://clawhub.ai/pathemata-mathemata/clawhub-skill-2",
    "owner": "pathemata-mathemata",
    "version": "0.1.4",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/clawhub-skill-2",
    "downloadUrl": "https://openagent3.xyz/downloads/clawhub-skill-2",
    "agentUrl": "https://openagent3.xyz/skills/clawhub-skill-2/agent",
    "manifestUrl": "https://openagent3.xyz/skills/clawhub-skill-2/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/clawhub-skill-2/agent.md"
  }
}