Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Install, configure, and run NadirClaw LLM router to cut AI API costs by 40-70%. Use when the user wants to reduce LLM spending, route prompts to cheaper mode...
Hand the extracted package to your coding agent with a concrete install brief rather than walking through the steps manually. Example briefs:
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
NadirClaw is an open-source LLM router that classifies prompts in ~10ms and routes simple ones to cheap/local models while keeping complex work on premium models.
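The classify-then-route idea can be sketched with a toy heuristic. This is purely illustrative: the classifier logic, thresholds, and model names below are placeholders, not NadirClaw's actual implementation.

```python
# Toy sketch of classify-then-route. NOT NadirClaw's real classifier;
# model names and thresholds are illustrative placeholders.

CHEAP_MODEL = "ollama/llama3"   # hypothetical local model
PREMIUM_MODEL = "claude-opus"   # hypothetical premium model

def classify(prompt: str) -> str:
    """Rough complexity heuristic: long prompts or coding/reasoning
    keywords are treated as complex work."""
    keywords = ("refactor", "prove", "debug", "analyze", "implement")
    if len(prompt) > 500 or any(k in prompt.lower() for k in keywords):
        return "complex"
    return "simple"

def route(prompt: str) -> str:
    """Send complex prompts to the premium model, the rest to the cheap one."""
    return PREMIUM_MODEL if classify(prompt) == "complex" else CHEAP_MODEL

print(route("What is 2 + 2?"))                                     # cheap model
print(route("Refactor this module to remove the global state."))   # premium model
```

The savings come from the second path: most day-to-day prompts classify as simple, so they never hit the premium model.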
pip install nadirclaw
Run the interactive wizard:

nadirclaw setup

Or auto-configure for OpenClaw:

nadirclaw openclaw onboard

This writes NadirClaw as a provider in the OpenClaw config with the model nadirclaw/auto. No restart needed.
nadirclaw serve --verbose

Runs on http://localhost:8856. Any OpenAI-compatible tool can use it by pointing to this URL.
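Any OpenAI-compatible client can target the local server. A minimal sketch using only the Python standard library; the /v1/chat/completions path and payload shape are assumed from the OpenAI-compatible claim, not taken from NadirClaw docs:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8856/v1"  # NadirClaw's local endpoint

def build_chat_request(prompt: str, model: str = "nadirclaw/auto"):
    """Build an OpenAI-style chat completion request aimed at the router."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize this file in one line.")
print(req.full_url)  # http://localhost:8856/v1/chat/completions

# To actually send it, `nadirclaw serve` must be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```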
# OpenClaw (auto)
nadirclaw openclaw onboard

# Claude Code
ANTHROPIC_BASE_URL=http://localhost:8856/v1 claude

# Any OpenAI-compatible tool
OPENAI_BASE_URL=http://localhost:8856/v1 <tool>
Pass the x-routing-profile header or use these models:
- nadirclaw/auto - smart routing (default)
- nadirclaw/eco - maximize savings
- nadirclaw/premium - always use the best model
- nadirclaw/free - Ollama/local only
- nadirclaw/reasoning - chain-of-thought optimized
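Profile selection rides on either the model field or the x-routing-profile header. A hedged sketch: the header name comes from the text above, while the OpenAI-style payload shape is an assumption:

```python
import json

def chat_payload(prompt: str, profile: str = "auto") -> dict:
    """Select a routing profile via the model name (nadirclaw/<profile>)."""
    return {
        "model": f"nadirclaw/{profile}",
        "messages": [{"role": "user", "content": prompt}],
    }

def profile_headers(profile: str) -> dict:
    """Alternative: pass the profile as a request header instead."""
    return {
        "Content-Type": "application/json",
        "x-routing-profile": profile,
    }

print(chat_payload("quick question", profile="eco")["model"])  # nadirclaw/eco
print(profile_headers("premium")["x-routing-profile"])         # premium
```

Picking nadirclaw/eco for bulk or low-stakes traffic and nadirclaw/premium for critical work is the coarse-grained version of what nadirclaw/auto does per prompt.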
nadirclaw savings    # cost savings report
nadirclaw report     # detailed routing analytics
nadirclaw dashboard  # live terminal dashboard
- ~10ms classification overhead
- Session persistence (no model bouncing mid-conversation)
- Rate limit fallback (auto-retry on 429)
- Agentic task detection (forces premium for tool use)
- Context-window filtering (auto-swaps models for long conversations)
- Supports: OpenAI, Anthropic, Google Gemini, Ollama, any LiteLLM provider
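The rate-limit fallback listed above amounts to retry-with-backoff on HTTP 429. A generic sketch of that pattern, not NadirClaw's internal code:

```python
import time

def with_rate_limit_fallback(call, max_retries=3, base_delay=0.5):
    """Retry `call` when it reports HTTP 429, doubling the delay each time.

    `call` is any zero-argument function returning (status_code, result).
    """
    for attempt in range(max_retries + 1):
        status, result = call()
        if status != 429:
            return result
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError("rate limited after all retries")

# Fake upstream: returns 429 twice, then succeeds.
responses = iter([(429, None), (429, None), (200, "ok")])
print(with_rate_limit_fallback(lambda: next(responses), base_delay=0.01))
# → ok
```

A real router would additionally fall back to a different provider rather than just retrying the same one, but the retry loop is the core of it.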
- If nadirclaw serve fails, check API keys: nadirclaw setup
- For Ollama: ensure ollama serve is running first
- Logs: nadirclaw report --last 20 shows recent routing decisions
- Raw debug: nadirclaw serve --verbose --log-raw