Tencent SkillHub · AI

LLM Router

Intelligent LLM proxy that routes requests to appropriate models based on complexity. Save money by using cheaper models for simple tasks. Tested with Anthropic, OpenAI, Gemini, Kimi/Moonshot, and Ollama.



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 0.1.1

Documentation

Primary doc: SKILL.md (25 sections)

LLM Router

An intelligent proxy that classifies incoming requests by complexity and routes them to appropriate LLM models. Use cheaper/faster models for simple tasks and reserve expensive models for complex ones. Works with OpenClaw to reduce token usage and API costs by routing simple requests to smaller models. Status: Tested with Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama.
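The classify-then-route idea can be sketched in a few lines of Python. This is an illustrative sketch only: the real router classifies with an LLM (or Ollama), not keywords, and the function names and keyword heuristic below are invented placeholders, not the project's code.

```python
# Minimal sketch of complexity-based routing. The keyword classifier is a
# crude stand-in for the LLM-based classifier the router actually uses.
MODEL_BY_TIER = {
    "super_easy": "anthropic:claude-haiku-4-5-20251001",
    "easy": "anthropic:claude-haiku-4-5-20251001",
    "medium": "anthropic:claude-sonnet-4-20250514",
    "hard": "anthropic:claude-opus-4-20250514",
    "super_hard": "anthropic:claude-opus-4-20250514",
}

def classify(message: str) -> str:
    """Placeholder classifier: maps a message to a complexity tier."""
    text = message.lower()
    if any(w in text for w in ("architecture", "prove", "design a system")):
        return "super_hard"
    if any(w in text for w in ("debug", "why does", "race condition")):
        return "hard"
    if any(w in text for w in ("write", "code", "email", "research")):
        return "medium"
    if text.rstrip("!. ") in ("hi", "hello", "thanks", "ok"):
        return "super_easy"
    return "easy"

def route(message: str) -> str:
    """Look up the configured model for the message's tier."""
    return MODEL_BY_TIER[classify(message)]
```

Under this sketch, a greeting resolves to the cheap tier while a coding request resolves to a mid-tier model, which is the cost-saving behavior the proxy automates.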

Prerequisites

  • Python 3.10+ with pip
  • Ollama (optional; only if using local classification)
  • Anthropic API key or Claude Code OAuth token (or another provider key)

Setup

```bash
# Clone if not already present
git clone https://github.com/alexrudloff/llmrouter.git
cd llmrouter

# Create virtual environment (required on modern Python)
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Pull classifier model (if using local classification)
ollama pull qwen2.5:3b

# Copy and customize config
cp config.yaml.example config.yaml
# Edit config.yaml with your API key and model preferences
```

Verify Installation

```bash
# Start the server
source venv/bin/activate
python server.py

# In another terminal, test the health endpoint
curl http://localhost:4001/health
# Should return: {"status": "ok", ...}
```

Start the Server

```bash
python server.py
```

Options:
  • `--port PORT` - Port to listen on (default: 4001)
  • `--host HOST` - Host to bind (default: 127.0.0.1)
  • `--config PATH` - Config file path (default: config.yaml)
  • `--log` - Enable verbose logging
  • `--openclaw` - Enable OpenClaw compatibility (rewrites the model name in the system prompt)

Configuration

Edit config.yaml to customize:

Model Routing

```yaml
# Anthropic routing
models:
  super_easy: "anthropic:claude-haiku-4-5-20251001"
  easy: "anthropic:claude-haiku-4-5-20251001"
  medium: "anthropic:claude-sonnet-4-20250514"
  hard: "anthropic:claude-opus-4-20250514"
  super_hard: "anthropic:claude-opus-4-20250514"

# OpenAI routing
models:
  super_easy: "openai:gpt-4o-mini"
  easy: "openai:gpt-4o-mini"
  medium: "openai:gpt-4o"
  hard: "openai:o3-mini"
  super_hard: "openai:o3"

# Google Gemini routing
models:
  super_easy: "google:gemini-2.0-flash"
  easy: "google:gemini-2.0-flash"
  medium: "google:gemini-2.0-flash"
  hard: "google:gemini-2.0-flash"
  super_hard: "google:gemini-2.0-flash"
```

Note: Reasoning models are auto-detected and called with the correct API params.

Classifier

Several options are available for classifying request complexity.

Local (default) - free, requires Ollama:

```yaml
classifier:
  provider: "local"
  model: "qwen2.5:3b"
```

Anthropic - uses Haiku, fast and cheap:

```yaml
classifier:
  provider: "anthropic"
  model: "claude-haiku-4-5-20251001"
```

OpenAI - uses GPT-4o-mini:

```yaml
classifier:
  provider: "openai"
  model: "gpt-4o-mini"
```

Google - uses Gemini:

```yaml
classifier:
  provider: "google"
  model: "gemini-2.0-flash"
```

Kimi - uses Moonshot:

```yaml
classifier:
  provider: "kimi"
  model: "moonshot-v1-8k"
```

Use a remote classifier (anthropic/openai/google/kimi) if your machine can't run local models.

Supported Providers

  • `anthropic:claude-*` - Anthropic Claude models (tested)
  • `openai:gpt-*`, `openai:o1-*`, `openai:o3-*` - OpenAI models (tested)
  • `google:gemini-*` - Google Gemini models (tested)
  • `kimi:kimi-k2.5`, `kimi:moonshot-*` - Kimi/Moonshot models (tested)
  • `local:model-name` - Local Ollama models (tested)
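The `provider:model` spec format above can be parsed by splitting on the first colon only, so Ollama-style model tags that contain their own colon survive intact. A small sketch (the helper name is hypothetical, not part of the project's API):

```python
def parse_model_spec(spec: str) -> tuple[str, str]:
    """Split a 'provider:model' spec into (provider, model).

    Splitting on the FIRST colon keeps model names that themselves
    contain colons (e.g. Ollama tags like 'qwen2.5:3b') intact.
    """
    provider, _, model = spec.partition(":")
    if not model:
        raise ValueError(f"expected 'provider:model', got {spec!r}")
    return provider, model
```

For example, `local:qwen2.5:3b` parses to provider `local` with model `qwen2.5:3b`.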

Complexity Levels

| Level | Use Case | Default Model |
|---|---|---|
| super_easy | Greetings, acknowledgments | Haiku |
| easy | Simple Q&A, reminders | Haiku |
| medium | Coding, emails, research | Sonnet |
| hard | Complex reasoning, debugging | Opus |
| super_hard | System architecture, proofs | Opus |

Customizing Classification

Edit ROUTES.md to tune how messages are classified. The classifier reads the table in this file to determine complexity levels.

API Usage

The router exposes an OpenAI-compatible API:

```bash
curl http://localhost:4001/v1/chat/completions \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llm-router",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
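The same call can be made from Python with only the standard library. This is a sketch of the request shape, mirroring the curl example; the `build_chat_request` helper is illustrative, and actually sending the request (shown in the trailing comment) requires the server to be running.

```python
import json
import os
import urllib.request

def build_chat_request(message: str, base_url: str = "http://localhost:4001"):
    """Build an OpenAI-style chat completion request for the router."""
    payload = {
        "model": "llm-router",
        "messages": [{"role": "user", "content": message}],
    }
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('ANTHROPIC_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return req, payload

req, payload = build_chat_request("Hello!")

# To actually send (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```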

Testing Classification

```bash
python classifier.py "Write a Python sort function"
# Output: medium

python classifier.py --test   # Runs test suite
```

Running as macOS Service

Create `~/Library/LaunchAgents/com.llmrouter.plist`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.llmrouter</string>
  <key>ProgramArguments</key>
  <array>
    <string>/path/to/llmrouter/venv/bin/python</string>
    <string>/path/to/llmrouter/server.py</string>
    <string>--openclaw</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>WorkingDirectory</key>
  <string>/path/to/llmrouter</string>
  <key>StandardOutPath</key>
  <string>/path/to/llmrouter/logs/stdout.log</string>
  <key>StandardErrorPath</key>
  <string>/path/to/llmrouter/logs/stderr.log</string>
</dict>
</plist>
```

Important: Replace `/path/to/llmrouter` with your actual install path, and use the venv python, not the system python.

```bash
# Create logs directory
mkdir -p /path/to/llmrouter/logs

# Load the service
launchctl load ~/Library/LaunchAgents/com.llmrouter.plist

# Verify it's running
curl http://localhost:4001/health

# To stop/restart
launchctl unload ~/Library/LaunchAgents/com.llmrouter.plist
launchctl load ~/Library/LaunchAgents/com.llmrouter.plist
```

OpenClaw Configuration

Add the router as a provider in `~/.openclaw/openclaw.json`:

```json
{
  "models": {
    "providers": {
      "localrouter": {
        "baseUrl": "http://localhost:4001/v1",
        "apiKey": "via-router",
        "api": "openai-completions",
        "models": [
          {
            "id": "llm-router",
            "name": "LLM Router (Auto-routes by complexity)",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```

Note: Cost is set to 0 because actual costs depend on which model the router selects. The router logs which model handled each request.

Set as Default Model (Optional)

To use the router for all agents by default, add:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "localrouter/llm-router"
      }
    }
  }
}
```

Using with OAuth Tokens

If your config.yaml uses an Anthropic OAuth token from OpenClaw's ~/.openclaw/auth-profiles.json, the router automatically handles Claude Code identity headers.
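The auto-detection described above and in the troubleshooting section hinges on the token prefix. A minimal sketch of that check follows; note that the split between `Authorization: Bearer` and `x-api-key`, and the omission of the Claude Code identity headers the router actually adds, are this sketch's assumptions, not the project's documented behavior.

```python
def auth_headers(token: str) -> dict[str, str]:
    """Choose auth headers by token type, based on the sk-ant-oat prefix.

    Sketch only: the router also adds Claude Code identity headers for
    OAuth tokens, which are not reproduced here.
    """
    if token.startswith("sk-ant-oat"):
        return {"Authorization": f"Bearer {token}"}  # OAuth token path
    return {"x-api-key": token}  # plain Anthropic API key path
```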

OpenClaw Compatibility Mode (Required)

If using with OpenClaw, you MUST start the server with `--openclaw`:

```bash
python server.py --openclaw
```

This flag enables compatibility features required for OpenClaw:
  • Rewrites model names in responses so OpenClaw shows the actual model being used
  • Handles tool name and ID remapping for proper tool-call routing

Without this flag, you may encounter errors when using the router with OpenClaw.

Common Tasks

  • Check server status: `curl http://localhost:4001/health`
  • View current config: `cat config.yaml`
  • Test a classification: `python classifier.py "your message"`
  • Run classification tests: `python classifier.py --test`
  • Restart server: stop and run `python server.py` again
  • View logs (if running as service): `tail -f logs/stdout.log`

Troubleshooting

"externally-managed-environment" error

Python 3.11+ requires virtual environments. Create one:

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

"Connection refused" on port 4001

The server isn't running. Start it:

```bash
source venv/bin/activate && python server.py
```

Classification returns wrong complexity

Edit ROUTES.md to tune classification rules. The classifier reads this file to determine complexity levels.

Ollama errors / "model not found"

Ensure Ollama is running and the model is pulled:

```bash
ollama serve            # Start Ollama if not running
ollama pull qwen2.5:3b
```

OAuth token not working

Ensure your token in config.yaml starts with sk-ant-oat. The router auto-detects OAuth tokens and adds required identity headers.

LaunchAgent not starting

Check the logs and ensure all paths are absolute:

```bash
cat ~/Library/LaunchAgents/com.llmrouter.plist   # Verify paths
cat /path/to/llmrouter/logs/stderr.log           # Check for errors
```

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 doc
  • SKILL.md (primary doc)