Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Configure and optimize OpenCode Zen free models with smart fallbacks for subtasks, heartbeat, and cron jobs. Use when setting up cost-effective AI model routing with automatic failover between free models.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Configure OpenCode Zen free models with intelligent fallbacks to optimize costs while maintaining reliability.

⚠️ Important: To use this skill, you need two API keys:

- OpenCode Zen API key - for OpenCode free models (MiniMax M2.1, Kimi K2.5, GLM 4.7, GPT 5 Nano)
- OpenRouter API key - for OpenRouter free models (Trinity Large and other OpenRouter providers)

Configure both keys in your OpenCode/Zen settings before applying these configurations.
Apply optimal free model configuration with provider diversification:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "opencode/minimax-m2.1-free",
        "fallbacks": [
          "openrouter/arcee-ai/trinity-large-preview:free",
          "opencode/kimi-k2.5-free"
        ]
      },
      "heartbeat": { "model": "opencode/glm-4.7-free" },
      "subagents": { "model": "opencode/kimi-k2.5-free" }
    }
  }
}
```
This skill uses models from two different providers, so you need both API keys configured:
OpenCode Zen API key. Required for:
- opencode/minimax-m2.1-free
- opencode/kimi-k2.5-free
- opencode/glm-4.7-free
- opencode/gpt-5-nano

Where to get: sign up at OpenCode Zen and generate an API key.
OpenRouter API key. Required for:
- openrouter/arcee-ai/trinity-large-preview:free
- any other OpenRouter free models you add

Where to get: sign up at OpenRouter.ai and generate an API key.
Add both keys to your OpenCode configuration:

```json
{
  "providers": {
    "opencode": { "api_key": "YOUR_OPENCODE_ZEN_API_KEY" },
    "openrouter": { "api_key": "YOUR_OPENROUTER_API_KEY" }
  }
}
```
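Before applying the rest of the configuration, it is worth confirming that both provider keys are present and no longer placeholders. A minimal sketch in Python — the `missing_provider_keys` helper is illustrative, not part of OpenCode:

```python
import json

# The placeholder strings from the sample config above.
PLACEHOLDERS = {"YOUR_OPENCODE_ZEN_API_KEY", "YOUR_OPENROUTER_API_KEY"}

def missing_provider_keys(config: dict) -> list:
    """Return provider names whose api_key is absent or still a placeholder."""
    missing = []
    for provider in ("opencode", "openrouter"):
        key = config.get("providers", {}).get(provider, {}).get("api_key", "")
        if not key or key in PLACEHOLDERS:
            missing.append(provider)
    return missing

# Parse the sample config as written; both keys are still placeholders.
config = json.loads("""
{
  "providers": {
    "opencode": { "api_key": "YOUR_OPENCODE_ZEN_API_KEY" },
    "openrouter": { "api_key": "YOUR_OPENROUTER_API_KEY" }
  }
}
""")
print(missing_provider_keys(config))  # → ['opencode', 'openrouter']
```

An empty result means both providers are configured and the fallback chain can actually cross providers.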
- If OpenCode models fail → tries the next OpenCode fallback or an OpenRouter model
- If OpenRouter models fail → tries the next OpenRouter or OpenCode fallback
- Configure both providers for maximum reliability
See models.md for detailed model comparisons, capabilities, and provider information.

| Task Type | Recommended Model | Rationale |
|---|---|---|
| Primary/General | MiniMax M2.1 Free | Best free model capability |
| Fallback 1 | Trinity Large Free | Different provider (OpenRouter) for rate limit resilience |
| Fallback 2 | Kimi K2.5 Free | General purpose, balance |
| Heartbeat | GLM 4.7 Free | Multilingual, cost-effective for frequent checks |
| Subtasks/Subagents | Kimi K2.5 Free | Balanced capability for secondary tasks |
| Model | ID | Best For |
|---|---|---|
| MiniMax M2.1 Free | opencode/minimax-m2.1-free | Complex reasoning, coding (Primary) |
| Trinity Large Free | openrouter/arcee-ai/trinity-large-preview:free | High-quality OpenRouter option (Fallback 1) |
| Kimi K2.5 Free | opencode/kimi-k2.5-free | General purpose, balance (Fallback 2) |
This version implements provider diversification to maximize resilience against rate limits and service disruptions:

```jsonc
"fallbacks": [
  "openrouter/arcee-ai/trinity-large-preview:free", // Different provider (OpenRouter)
  "opencode/kimi-k2.5-free"                         // Same provider as primary (OpenCode)
]
```

Why provider diversification matters:
- Rate limit isolation: if OpenCode experiences rate limits, OpenRouter models remain available (and vice versa)
- First fallback from a different provider: Trinity Large on OpenRouter ensures continuity even if all OpenCode models are rate-limited
- Maximum resilience: spreading across providers avoids a single point of failure

Fallback triggers:
- Rate limits exceeded
- Auth failures
- Timeouts
- Provider unavailability
- If OpenCode models fail → tries the OpenRouter fallback first (Trinity Large), then back to OpenCode (Kimi)
- If the OpenRouter model fails → tries the OpenCode fallback (Kimi)

This cross-provider approach ensures at least one model is usually available.
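The failover order above is simply the configured chain walked front to back, skipping entries whose provider is unavailable. A sketch of that loop — `call_model`, `ModelUnavailable`, and `run_with_fallbacks` are hypothetical stand-ins for the runtime's internal logic, wired here so every call fails and the full traversal order is visible:

```python
class ModelUnavailable(Exception):
    """Stand-in for rate limits, auth failures, timeouts, or provider outages."""

attempted = []  # records the order in which models were tried

def call_model(model_id: str, prompt: str) -> str:
    # Hypothetical transport: every call fails so we can observe the chain.
    attempted.append(model_id)
    raise ModelUnavailable(model_id)

def run_with_fallbacks(prompt: str, primary: str, fallbacks: list) -> str:
    """Try the primary model, then each fallback in configured order."""
    for model_id in [primary, *fallbacks]:
        try:
            return call_model(model_id, prompt)
        except ModelUnavailable:
            continue  # the next entry may live on a different provider
    raise RuntimeError("all configured models unavailable")

try:
    run_with_fallbacks(
        "ping",
        primary="opencode/minimax-m2.1-free",
        fallbacks=[
            "openrouter/arcee-ai/trinity-large-preview:free",  # different provider first
            "opencode/kimi-k2.5-free",
        ],
    )
except RuntimeError:
    print(attempted)  # OpenCode, then OpenRouter, then back to OpenCode
```

The traversal shows why the first fallback belongs on a different provider: an OpenCode-wide rate limit still leaves step two usable.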
"heartbeat": { "every": "30m", "model": "opencode/gpt-5-nano" } Use the cheapest model for frequent, lightweight checks.
"subagents": { "model": "opencode/kimi-k2.5-free" } Good balance for secondary tasks that need reasonable capability.
{ "agents": { "defaults": { "model": { "primary": "opencode/minimax-m2.1-free", "fallbacks": [ "openrouter/arcee-ai/trinity-large-preview:free", "opencode/kimi-k2.5-free" ] }, "models": { "opencode/minimax-m2.1-free": { "alias": "MiniMax M2.1" }, "opencode/kimi-k2.5-free": { "alias": "Kimi K2.5" }, "openrouter/arcee-ai/trinity-large-preview:free": { "alias": "Trinity Large" } }, "heartbeat": { "every": "30m", "model": "opencode/glm-4.7-free" }, "subagents": { "model": "opencode/kimi-k2.5-free" } } } }
Use the OpenClaw CLI:

```shell
openclaw config.patch --raw '{
  "agents": {
    "defaults": {
      "model": {
        "primary": "opencode/minimax-m2.1-free",
        "fallbacks": ["openrouter/arcee-ai/trinity-large-preview:free", "opencode/kimi-k2.5-free"]
      },
      "heartbeat": { "model": "opencode/glm-4.7-free" },
      "subagents": { "model": "opencode/kimi-k2.5-free" }
    }
  }
}'
```
- Provider diversification - always take your first fallback from a different provider (e.g., OpenRouter) so one provider's rate limits cannot affect all models
- Keep fallbacks minimal - 2-3 well-chosen fallbacks are better than many
- Match model to task - don't use MiniMax for simple checks
- Test fallback order - put more capable models first, with provider diversification
- Monitor usage - track which models get used most
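The diversification rule can be checked mechanically, since the provider is just the prefix of the model ID before the first slash. A minimal sketch — `first_fallback_diversified` is an illustrative helper, not an OpenClaw API:

```python
def provider_of(model_id: str) -> str:
    """Provider is the prefix before the first slash, e.g. 'opencode' or 'openrouter'."""
    return model_id.split("/", 1)[0]

def first_fallback_diversified(primary: str, fallbacks: list) -> bool:
    """True when the first fallback comes from a different provider than the primary."""
    return bool(fallbacks) and provider_of(fallbacks[0]) != provider_of(primary)

# The recommended chain from this skill passes the check.
print(first_fallback_diversified(
    "opencode/minimax-m2.1-free",
    ["openrouter/arcee-ai/trinity-large-preview:free", "opencode/kimi-k2.5-free"],
))  # → True
```

Running this against a candidate config before applying it catches the common mistake of stacking every fallback on the primary's provider.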
Authentication errors (401/403)?
- Check that you have both API keys configured: the OpenCode Zen key for OpenCode models, and the OpenRouter key for Trinity Large and other OpenRouter models
- Verify keys are valid and have not expired

Rate limits still occurring?
- Add provider diversification (ensure the first fallback is from a different provider)
- Consider reducing heartbeat frequency

Responses too slow?
- Move GPT 5 Nano higher in the fallback chain
- Use a simpler model for subtasks

Model not available?
- Check the model ID format: opencode/model-id-free or openrouter/provider/model:free
- Verify the model is still free (check models.md)
- Ensure you have the correct API key for the provider

OpenRouter models not working?
- Verify the OpenRouter API key is configured
- Check that your OpenRouter account has credits/access
- Some models may have additional access requirements
Complete reference of all free models with capabilities, providers, performance comparisons, and error handling.
Ready-to-use configuration templates for different use cases (minimal, complete, cost-optimized, performance-optimized).
Practical examples showing how to use this skill in real scenarios.