{
  "schemaVersion": "1.0",
  "item": {
    "slug": "llmrouter",
    "name": "LLM Router",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/alexrudloff/llmrouter",
    "canonicalUrl": "https://clawhub.ai/alexrudloff/llmrouter",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/llmrouter",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=llmrouter",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=llmrouter",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=llmrouter",
        "contentDisposition": "attachment; filename=\"llmrouter-0.1.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/llmrouter"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/llmrouter",
    "agentPageUrl": "https://openagent3.xyz/skills/llmrouter/agent",
    "manifestUrl": "https://openagent3.xyz/skills/llmrouter/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/llmrouter/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "LLM Router",
        "body": "An intelligent proxy that classifies incoming requests by complexity and routes them to the appropriate LLM model. It uses cheaper/faster models for simple tasks and reserves expensive models for complex ones.\n\nWorks with OpenClaw to reduce token usage and API costs by routing simple requests to smaller models.\n\nStatus: Tested with Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama."
      },
      {
        "title": "Prerequisites",
        "body": "Python 3.10+ with pip\nOllama (optional - only if using local classification)\nAnthropic API key or Claude Code OAuth token (or other provider key)"
      },
      {
        "title": "Setup",
        "body": "# Clone if not already present\ngit clone https://github.com/alexrudloff/llmrouter.git\ncd llmrouter\n\n# Create virtual environment (required on modern Python)\npython3 -m venv venv\nsource venv/bin/activate\n\n# Install dependencies\npip install -r requirements.txt\n\n# Pull classifier model (if using local classification)\nollama pull qwen2.5:3b\n\n# Copy and customize config\ncp config.yaml.example config.yaml\n# Edit config.yaml with your API key and model preferences"
      },
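      {
        "title": "Example config.yaml (sketch)",
        "body": "For orientation, here is a minimal local-classifier configuration assembled from the fragments in the Classifier and Model Routing sections. This is a sketch: the exact key names for API credentials are not shown here, so defer to config.yaml.example for the authoritative layout.\n\nclassifier:\n  provider: \"local\"\n  model: \"qwen2.5:3b\"\n\nmodels:\n  super_easy: \"anthropic:claude-haiku-4-5-20251001\"\n  easy: \"anthropic:claude-haiku-4-5-20251001\"\n  medium: \"anthropic:claude-sonnet-4-20250514\"\n  hard: \"anthropic:claude-opus-4-20250514\"\n  super_hard: \"anthropic:claude-opus-4-20250514\""
      },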
      {
        "title": "Verify Installation",
        "body": "# Start the server\nsource venv/bin/activate\npython server.py\n\n# In another terminal, test health endpoint\ncurl http://localhost:4001/health\n# Should return: {\"status\": \"ok\", ...}"
      },
      {
        "title": "Start the Server",
        "body": "python server.py\n\nOptions:\n\n--port PORT - Port to listen on (default: 4001)\n--host HOST - Host to bind (default: 127.0.0.1)\n--config PATH - Config file path (default: config.yaml)\n--log - Enable verbose logging\n--openclaw - Enable OpenClaw compatibility (rewrites model name in system prompt)"
      },
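      {
        "title": "Example Invocations",
        "body": "Combining the flags above (the port and config path shown are illustrative):\n\n# Defaults: 127.0.0.1:4001 with config.yaml\npython server.py\n\n# Custom port and config, verbose logging, OpenClaw compatibility\npython server.py --port 5001 --config /path/to/config.yaml --log --openclaw"
      },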
      {
        "title": "Configuration",
        "body": "Edit config.yaml to customize:"
      },
      {
        "title": "Model Routing",
        "body": "# Anthropic routing\nmodels:\n  super_easy: \"anthropic:claude-haiku-4-5-20251001\"\n  easy: \"anthropic:claude-haiku-4-5-20251001\"\n  medium: \"anthropic:claude-sonnet-4-20250514\"\n  hard: \"anthropic:claude-opus-4-20250514\"\n  super_hard: \"anthropic:claude-opus-4-20250514\"\n\n# OpenAI routing\nmodels:\n  super_easy: \"openai:gpt-4o-mini\"\n  easy: \"openai:gpt-4o-mini\"\n  medium: \"openai:gpt-4o\"\n  hard: \"openai:o3-mini\"\n  super_hard: \"openai:o3\"\n\n# Google Gemini routing\nmodels:\n  super_easy: \"google:gemini-2.0-flash\"\n  easy: \"google:gemini-2.0-flash\"\n  medium: \"google:gemini-2.0-flash\"\n  hard: \"google:gemini-2.0-flash\"\n  super_hard: \"google:gemini-2.0-flash\"\n\nNote: Reasoning models are auto-detected and use correct API params."
      },
      {
        "title": "Classifier",
        "body": "Three options for classifying request complexity:\n\nLocal (default) - Free, requires Ollama:\n\nclassifier:\n  provider: \"local\"\n  model: \"qwen2.5:3b\"\n\nAnthropic - Uses Haiku, fast and cheap:\n\nclassifier:\n  provider: \"anthropic\"\n  model: \"claude-haiku-4-5-20251001\"\n\nOpenAI - Uses GPT-4o-mini:\n\nclassifier:\n  provider: \"openai\"\n  model: \"gpt-4o-mini\"\n\nGoogle - Uses Gemini:\n\nclassifier:\n  provider: \"google\"\n  model: \"gemini-2.0-flash\"\n\nKimi - Uses Moonshot:\n\nclassifier:\n  provider: \"kimi\"\n  model: \"moonshot-v1-8k\"\n\nUse remote (anthropic/openai/google/kimi) if your machine can't run local models."
      },
      {
        "title": "Supported Providers",
        "body": "anthropic:claude-* - Anthropic Claude models (tested)\nopenai:gpt-*, openai:o1-*, openai:o3-* - OpenAI models (tested)\ngoogle:gemini-* - Google Gemini models (tested)\nkimi:kimi-k2.5, kimi:moonshot-* - Kimi/Moonshot models (tested)\nlocal:model-name - Local Ollama models (tested)"
      },
      {
        "title": "Complexity Levels",
        "body": "Level | Use Case | Default Model\nsuper_easy | Greetings, acknowledgments | Haiku\neasy | Simple Q&A, reminders | Haiku\nmedium | Coding, emails, research | Sonnet\nhard | Complex reasoning, debugging | Opus\nsuper_hard | System architecture, proofs | Opus"
      },
      {
        "title": "Customizing Classification",
        "body": "Edit ROUTES.md to tune how messages are classified. The classifier reads the table in this file to determine complexity levels."
      },
      {
        "title": "API Usage",
        "body": "The router exposes an OpenAI-compatible API:\n\ncurl http://localhost:4001/v1/chat/completions \\\n  -H \"Authorization: Bearer $ANTHROPIC_API_KEY\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"model\": \"llm-router\",\n    \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}]\n  }'"
      },
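      {
        "title": "API Usage from Python (sketch)",
        "body": "Because the endpoint is OpenAI-compatible, any OpenAI-style client should also work. A minimal sketch using the openai Python package; the base_url assumes the default host and port, and the api_key is whatever credential your router configuration expects:\n\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n    base_url=\"http://localhost:4001/v1\",\n    api_key=os.environ[\"ANTHROPIC_API_KEY\"],\n)\nresponse = client.chat.completions.create(\n    model=\"llm-router\",\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n)\nprint(response.choices[0].message.content)"
      },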
      {
        "title": "Testing Classification",
        "body": "python classifier.py \"Write a Python sort function\"\n# Output: medium\n\npython classifier.py --test\n# Runs test suite"
      },
      {
        "title": "Running as macOS Service",
        "body": "Create ~/Library/LaunchAgents/com.llmrouter.plist:\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n<plist version=\"1.0\">\n<dict>\n    <key>Label</key>\n    <string>com.llmrouter</string>\n    <key>ProgramArguments</key>\n    <array>\n        <string>/path/to/llmrouter/venv/bin/python</string>\n        <string>/path/to/llmrouter/server.py</string>\n        <string>--openclaw</string>\n    </array>\n    <key>RunAtLoad</key>\n    <true/>\n    <key>KeepAlive</key>\n    <true/>\n    <key>WorkingDirectory</key>\n    <string>/path/to/llmrouter</string>\n    <key>StandardOutPath</key>\n    <string>/path/to/llmrouter/logs/stdout.log</string>\n    <key>StandardErrorPath</key>\n    <string>/path/to/llmrouter/logs/stderr.log</string>\n</dict>\n</plist>\n\nImportant: Replace /path/to/llmrouter with your actual install path. You must use the venv python, not the system python.\n\n# Create logs directory\nmkdir -p /path/to/llmrouter/logs\n\n# Load the service\nlaunchctl load ~/Library/LaunchAgents/com.llmrouter.plist\n\n# Verify it's running\ncurl http://localhost:4001/health\n\n# To stop/restart\nlaunchctl unload ~/Library/LaunchAgents/com.llmrouter.plist\nlaunchctl load ~/Library/LaunchAgents/com.llmrouter.plist"
      },
      {
        "title": "OpenClaw Configuration",
        "body": "Add the router as a provider in ~/.openclaw/openclaw.json:\n\n{\n  \"models\": {\n    \"providers\": {\n      \"localrouter\": {\n        \"baseUrl\": \"http://localhost:4001/v1\",\n        \"apiKey\": \"via-router\",\n        \"api\": \"openai-completions\",\n        \"models\": [\n          {\n            \"id\": \"llm-router\",\n            \"name\": \"LLM Router (Auto-routes by complexity)\",\n            \"reasoning\": false,\n            \"input\": [\"text\", \"image\"],\n            \"cost\": {\n              \"input\": 0,\n              \"output\": 0,\n              \"cacheRead\": 0,\n              \"cacheWrite\": 0\n            },\n            \"contextWindow\": 200000,\n            \"maxTokens\": 8192\n          }\n        ]\n      }\n    }\n  }\n}\n\nNote: Cost is set to 0 because actual costs depend on which model the router selects. The router logs which model handled each request."
      },
      {
        "title": "Set as Default Model (Optional)",
        "body": "To use the router for all agents by default, add:\n\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": {\n        \"primary\": \"localrouter/llm-router\"\n      }\n    }\n  }\n}"
      },
      {
        "title": "Using with OAuth Tokens",
        "body": "If your config.yaml uses an Anthropic OAuth token from OpenClaw's ~/.openclaw/auth-profiles.json, the router automatically handles Claude Code identity headers."
      },
      {
        "title": "OpenClaw Compatibility Mode (Required)",
        "body": "If using with OpenClaw, you MUST start the server with --openclaw:\n\npython server.py --openclaw\n\nThis flag enables compatibility features required for OpenClaw:\n\nRewrites model names in responses so OpenClaw shows the actual model being used\nHandles tool name and ID remapping for proper tool call routing\n\nWithout this flag, you may encounter errors when using the router with OpenClaw."
      },
      {
        "title": "Common Tasks",
        "body": "Check server status: curl http://localhost:4001/health\nView current config: cat config.yaml\nTest a classification: python classifier.py \"your message\"\nRun classification tests: python classifier.py --test\nRestart server: Stop and run python server.py again\nView logs (if running as service): tail -f logs/stdout.log"
      },
      {
        "title": "\"externally-managed-environment\" error",
        "body": "Python 3.11+ requires virtual environments. Create one:\n\npython3 -m venv venv\nsource venv/bin/activate\npip install -r requirements.txt"
      },
      {
        "title": "\"Connection refused\" on port 4001",
        "body": "Server isn't running. Start it:\n\nsource venv/bin/activate && python server.py"
      },
      {
        "title": "Classification returns wrong complexity",
        "body": "Edit ROUTES.md to tune classification rules. The classifier reads this file to determine complexity levels."
      },
      {
        "title": "Ollama errors / \"model not found\"",
        "body": "Ensure Ollama is running and the model is pulled:\n\nollama serve  # Start Ollama if not running\nollama pull qwen2.5:3b"
      },
      {
        "title": "OAuth token not working",
        "body": "Ensure your token in config.yaml starts with sk-ant-oat. The router auto-detects OAuth tokens and adds required identity headers."
      },
      {
        "title": "LaunchAgent not starting",
        "body": "Check logs and ensure paths are absolute:\n\ncat ~/Library/LaunchAgents/com.llmrouter.plist  # Verify paths\ncat /path/to/llmrouter/logs/stderr.log  # Check for errors"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/alexrudloff/llmrouter",
    "publisherUrl": "https://clawhub.ai/alexrudloff/llmrouter",
    "owner": "alexrudloff",
    "version": "0.1.1",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/llmrouter",
    "downloadUrl": "https://openagent3.xyz/downloads/llmrouter",
    "agentUrl": "https://openagent3.xyz/skills/llmrouter/agent",
    "manifestUrl": "https://openagent3.xyz/skills/llmrouter/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/llmrouter/agent.md"
  }
}