Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Configure OpenRouter model routing with provider auth, model selection, fallback chains, and cost-aware defaults for stable multi-model workflows.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
On first use, read setup.md to align activation boundaries, reliability goals, and routing preferences before making configuration changes.
Use this skill when the user wants to connect an OpenAI-compatible workflow to OpenRouter, choose models by task type, set safe fallbacks, and control cost drift over time.
Memory lives in ~/open-router/. See memory-template.md for structure.

~/open-router/
├── memory.md          # Active routing profile and constraints
├── providers.md       # Confirmed provider and auth choices
├── routing-rules.md   # Task -> model and fallback policy
├── incidents.md       # Outages, rate limits, and recovery notes
└── budgets.md         # Spend guardrails and optimization actions
Use the smallest relevant file for the current task.

| Topic | File |
| --- | --- |
| Setup and activation preferences | setup.md |
| Memory template | memory-template.md |
| Authentication and provider wiring | auth-and-provider.md |
| Routing patterns by workload | routing-playbooks.md |
| Reliability and fallback handling | fallback-reliability.md |
| Cost controls and spend reviews | cost-guardrails.md |
Classify requests first: coding, analysis, extraction, summarization, or long-context synthesis. Map each class to a primary model and a fallback before changing any defaults.
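The classify-then-map step above can be sketched as a small routing table. The model ids below are illustrative placeholders, not recommendations; substitute the ids your catalog check actually returns:

```python
# Sketch of a task-class routing map. Model ids are hypothetical
# placeholders ("provider-a/..." etc.), not real OpenRouter ids.
TASK_CLASSES = ("coding", "analysis", "extraction",
                "summarization", "long-context")

ROUTING = {
    "coding":        {"primary": "provider-a/coder-large",  "fallback": "provider-b/coder"},
    "analysis":      {"primary": "provider-a/reasoner",     "fallback": "provider-b/reasoner"},
    "extraction":    {"primary": "provider-b/small-fast",   "fallback": "provider-a/small"},
    "summarization": {"primary": "provider-b/small-fast",   "fallback": "provider-a/small"},
    "long-context":  {"primary": "provider-a/long-context", "fallback": "provider-b/long-context"},
}

def route(task_class: str) -> dict:
    """Return the primary/fallback pair, failing loudly on unclassified tasks."""
    if task_class not in ROUTING:
        raise ValueError(f"unclassified task: {task_class!r}")
    return ROUTING[task_class]
```

Keeping the map total over the task classes (and erroring on anything else) forces classification to happen before any default is touched.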
Use OPENROUTER_API_KEY from the local environment, never pasted into logs or chat memory. Validate auth with a minimal request before applying routing changes.
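A minimal sketch of that pattern, assuming only the standard library: the key is read from the environment at call time and never interpolated into anything that gets logged, and validation is a single GET against the models endpoint before any routing change:

```python
import os
import urllib.request

def auth_headers() -> dict:
    """Build the auth header from the environment only; never inline or log the key."""
    key = os.environ.get("OPENROUTER_API_KEY")
    if not key:
        raise RuntimeError("OPENROUTER_API_KEY is not set in the environment")
    return {"Authorization": f"Bearer {key}"}

def validate_auth() -> bool:
    """Minimal request: list models before applying any routing changes."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/models", headers=auth_headers()
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status == 200
```

`validate_auth()` performs a live network call, so run it only when you actually intend to verify credentials.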
Separate fallback reasons: rate limit, provider outage, latency spike, or output quality failure. Keep at least one fallback from a different provider family for resilience.
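One way to encode that separation, with hypothetical provider/model names: classify the failure reason first, then walk a chain that crosses provider families, skipping the failing family for provider-scoped incidents:

```python
# Sketch: reason-aware fallback over a cross-provider chain.
# Provider and model names are illustrative placeholders.
RETRIABLE = {"rate_limit", "provider_outage", "latency_spike", "quality_failure"}

CHAIN = ["provider-a/model-x", "provider-a/model-y", "provider-b/model-z"]

def provider_family(model_id: str) -> str:
    return model_id.split("/", 1)[0]

def next_model(current: str, reason: str):
    """Pick the next model in the chain, or None if no fallback applies."""
    if reason not in RETRIABLE:
        return None  # non-retriable: surface the error instead of rerouting
    candidates = CHAIN[CHAIN.index(current) + 1:]
    # Provider-scoped incidents: skip models from the failing family.
    if reason in {"rate_limit", "provider_outage"}:
        failing = provider_family(current)
        candidates = [m for m in candidates if provider_family(m) != failing]
    return candidates[0] if candidates else None
```

Because the chain ends in a different provider family, a provider-wide outage still leaves at least one viable fallback.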
Set cost ceilings by task class and check expected token burn before broad rollout. Route low-stakes tasks to cheaper models and reserve premium models for high-impact tasks.
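A sketch of the pre-rollout check, with made-up prices and ceilings; in practice, pull per-token prices from the model catalog rather than hard-coding them:

```python
# Sketch: estimate expected token burn per task class before rollout.
# All numbers below are illustrative, not real OpenRouter pricing.
CEILING_USD = {"coding": 5.00, "summarization": 0.50}
PRICE_PER_1K = {"premium-model": 0.015, "cheap-model": 0.0005}  # USD per 1K tokens

def expected_burn(model: str, calls: int, avg_tokens: int) -> float:
    """Projected spend for a batch of calls at the model's per-token price."""
    return PRICE_PER_1K[model] * (calls * avg_tokens) / 1000

def within_ceiling(task_class: str, model: str, calls: int, avg_tokens: int) -> bool:
    return expected_burn(model, calls, avg_tokens) <= CEILING_USD[task_class]
```

Running this check per task class makes cost drift visible before rollout instead of after the invoice.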
Modify either model selection, fallback policy, or budget limits in a single iteration. After each change, run a quick verification prompt set and record outcome.
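The verification pass might look like the sketch below. The prompt set and the injected `call_model` function are placeholders for your own harness; injecting the caller keeps the loop testable without network access:

```python
# Sketch: run a fixed verification prompt set against the current routing.
# VERIFICATION_SET entries and call_model are hypothetical placeholders.
VERIFICATION_SET = [
    ("coding", "Write a function that reverses a string."),
    ("summarization", "Summarize: The quick brown fox jumps over the lazy dog."),
]

def run_verification(call_model, routing: dict) -> dict:
    """call_model(model_id, prompt) -> response text. Returns pass/fail per (class, model)."""
    results = {}
    for task_class, prompt in VERIFICATION_SET:
        model = routing[task_class]["primary"]
        reply = call_model(model, prompt)
        results[(task_class, model)] = bool(reply and reply.strip())
    return results
```

Record the returned results alongside the single change you made, so a quality drop maps to exactly one candidate cause.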
Save the final routing policy, rationale, and known tradeoffs in memory. Reuse proven policies instead of repeatedly rebuilding from scratch.
- Choosing one model for every task -> higher cost and unstable quality under varied workloads.
- Using a same-family-only fallback chain -> cascading failures during provider-specific incidents.
- Ignoring token limits for long inputs -> truncated responses and hidden quality loss.
- Changing routing and budgets simultaneously -> unclear root cause when quality drops.
- Running without verification prompts -> broken routing detected only after user-facing failures.
These endpoints are used only to discover model metadata and execute routed inference requests under explicit user task intent.

| Endpoint | Data Sent | Purpose |
| --- | --- | --- |
| https://openrouter.ai/api/v1/models | none, or auth header only | Discover current model catalog and metadata |
| https://openrouter.ai/api/v1/chat/completions | user prompt content and selected model id | Execute routed inference requests |

No other data is sent externally.
Data that leaves your machine:
- Prompt text and selected model metadata, sent to OpenRouter when inference is requested.

Data that stays local:
- Routing notes and preferences under ~/open-router/.
- Local environment variable references and verification logs.

This skill does NOT:
- Request raw API keys in chat.
- Store plaintext secrets in skill memory files.
- Modify files outside ~/open-router/ for its own state.
When you use this skill, prompt content is sent to OpenRouter for model execution. Only install it if you trust this service with your data.
Install with `clawhub install <slug>` if the user confirms:
- api: API request design, payload shaping, and response validation patterns
- auth: credential handling and auth troubleshooting workflows
- models: model comparison and selection guidance
- monitoring: runtime health checks and incident tracking practices
If useful: `clawhub star open-router`
Stay updated: `clawhub sync`