Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Pick the best AI model for any task using the Smart Spawn API. No plugin needed — just HTTP requests to ss.deeflect.com/api.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Pick the best AI model for any task. Call the API, get a model recommendation, spawn with it. No plugin required. Works with any OpenClaw instance or any HTTP client.
1. `GET ss.deeflect.com/api/pick?task=<description>&budget=<tier>`
2. Use the returned model ID in `sessions_spawn`
GET https://ss.deeflect.com/api/pick?task=build+a+react+dashboard&budget=medium

Response:

{
  "data": {
    "id": "anthropic/claude-opus-4.6",
    "name": "Claude Opus 4.6",
    "score": 86,
    "pricing": { "prompt": 5, "completion": 25 },
    "reason": "Best general model at medium budget ($0-5/M) — score: 86"
  }
}

Then spawn:

sessions_spawn(task="Build a React dashboard with auth", model="anthropic/claude-opus-4.6")
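Extracting the model ID from the `/api/pick` response is a one-liner. A minimal sketch, assuming the response body is shaped exactly like the documented example above:

```python
import json

# Response body shaped like the documented /api/pick example.
raw = """{
  "data": {
    "id": "anthropic/claude-opus-4.6",
    "name": "Claude Opus 4.6",
    "score": 86,
    "pricing": {"prompt": 5, "completion": 25},
    "reason": "Best general model at medium budget ($0-5/M) - score: 86"
  }
}"""

def model_id_from_pick(body: str) -> str:
    """Pull out the model ID to pass as sessions_spawn's model parameter."""
    return json.loads(body)["data"]["id"]

print(model_id_from_pick(raw))  # anthropic/claude-opus-4.6
```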
| Param | Required | Description |
|-------|----------|-------------|
| task | Yes | Task description or category: coding, reasoning, creative, vision, research, fast-cheap, general |
| budget | No | low ($0-1/M), medium ($0-5/M, default), high ($2-20/M), any |
| exclude | No | Comma-separated model IDs to skip |
| context | No | Tags like vision,long-context for routing boost |
GET https://ss.deeflect.com/api/recommend?task=coding&budget=low&count=3
Returns diverse models from different providers. Use for collective/parallel spawning.
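To fan the recommendations out into parallel spawns, map each returned model to one `sessions_spawn` argument set. A sketch under the assumption that `/api/recommend` returns a list under `data` with the same per-model fields as `/api/pick` (the model IDs below are placeholders, not guaranteed registry entries):

```python
import json

# Assumed /api/recommend response shape: a "data" list of models,
# each with the same fields as the /api/pick example.
raw = json.dumps({"data": [
    {"id": "anthropic/claude-opus-4.6", "score": 86},
    {"id": "openai/gpt-5.2", "score": 84},
    {"id": "example/placeholder-model", "score": 83},  # hypothetical ID
]})

def spawn_args(body: str, task: str) -> list[dict]:
    """One sessions_spawn argument set per recommended model."""
    return [{"task": task, "model": m["id"]} for m in json.loads(body)["data"]]

for call in spawn_args(raw, "coding"):
    print(call)
```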
GET https://ss.deeflect.com/api/compare?models=anthropic/claude-opus-4.6,openai/gpt-5.2
Side-by-side scores, pricing, and capabilities.
GET https://ss.deeflect.com/api/models?category=coding&sort=score&limit=10
Sort by score, cost, efficiency, or any category name.
POST https://ss.deeflect.com/api/decompose
{"task": "Build and deploy a SaaS app", "budget": "medium"}
Returns sequential steps with the optimal model per step.
POST https://ss.deeflect.com/api/swarm
{"task": "Research competitors and build pitch deck", "budget": "low"}
Returns a dependency graph of parallel tasks with models assigned.
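Both POST endpoints take the same JSON body. A minimal sketch that builds (but does not send) the request with the standard library, so you can inspect it before wiring it into your agent's HTTP layer:

```python
import json
from urllib.request import Request

def decompose_request(task: str, budget: str = "medium") -> Request:
    """Build the POST request for /api/decompose.
    /api/swarm takes the same JSON body; only the path differs."""
    body = json.dumps({"task": task, "budget": budget}).encode("utf-8")
    return Request(
        "https://ss.deeflect.com/api/decompose",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = decompose_request("Build and deploy a SaaS app")
print(req.get_method(), req.full_url)  # POST https://ss.deeflect.com/api/decompose
```

Sending it is then `urllib.request.urlopen(req)` or the equivalent in whatever HTTP client your agent runtime provides.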
For any task that needs a sub-agent:
1. Decide the budget: low for cheap/fast, medium for quality, high for the best available.
2. Call /api/pick with the task description.
3. Use the returned id as the model parameter in sessions_spawn.
4. If the task is complex, use /api/decompose or /api/swarm to break it into subtasks, then spawn each with its recommended model.
- API down → skip model selection and call sessions_spawn without a model (falls back to the default).
- No model found (404) → widen the budget to any and retry.
- Rate limited (429) → wait and retry, or fall back to the default.
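The fallback rules above can be sketched as one selection function. Here `fetch` is a hypothetical stand-in for whatever HTTP call your agent runtime provides — it should return `(status_code, body_dict)` or raise on network failure — and `None` means "spawn with the default model":

```python
import time

def pick_model(task, fetch, budget="medium", retries=2):
    """Return a model ID, or None to mean 'spawn with the default model'.
    `fetch(task, budget)` is a hypothetical HTTP helper, not part of the API."""
    for attempt in range(retries + 1):
        try:
            status, body = fetch(task, budget)
        except OSError:
            return None                  # API down: fall back to default
        if status == 200:
            return body["data"]["id"]
        if status == 404 and budget != "any":
            budget = "any"               # no model found: widen the budget
            continue
        if status == 429:
            time.sleep(2 ** attempt)     # rate limited: back off and retry
            continue
        return None                      # anything else: default model
    return None

# Stubbed fetch: first call is rate limited, second succeeds.
responses = iter([(429, {}), (200, {"data": {"id": "openai/gpt-5.2"}})])
print(pick_model("coding", lambda t, b: next(responses)))  # openai/gpt-5.2
```

Injecting `fetch` keeps the retry/widen logic testable without touching the network.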
GET https://ss.deeflect.com/api/status
Shows model count, data freshness, and source health. Data refreshes every 6 hours from 5 benchmark sources.
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.