
Friday Router

Austin's intelligent model router with fixed scoring, his preferred models, and OpenClaw integration

0 Downloads
0 Stars
0 Installs
0 Score
High Signal


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.
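
The extract-and-review steps can be sketched in Python. The archive and folder names are illustrative (a real package downloaded from Yavira will differ), and the block builds a stand-in ZIP first so the sketch runs self-contained:

```python
import zipfile
from pathlib import Path

# Illustrative names: a real package downloaded from Yavira will differ.
archive = Path("friday-router.zip")
dest = Path("friday-router-extracted")

# Stand-in archive so this sketch runs self-contained.
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("SKILL.md", "# Friday Router\n")

# Step 2: extract, then review SKILL.md before importing into OpenClaw.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)
print((dest / "SKILL.md").read_text())
```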

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
_meta.json, config.json, REVIEW-name-conformity.md, README.md, scripts/router.py, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.6.2

Documentation

Primary doc: SKILL.md (7 sections)

IntentRouter

Your AI's smart traffic director: precisely matching your OpenClaw tasks to the perfect LLM.

v1.7.0 (security-focused release): COMPLEX tier, absolute paths. Tested and working with OpenClaw TUI delegation. Removed gateway auth secret exposure and gateway management for improved security.

IntentRouter analyzes your tasks and directs them to the best LLM: MiniMax 2.5 for code, Kimi k2.5 for creative, Grok Fast for research. Eliminate guesswork; route with purpose.

Security improvements in v1.7.0:
  • Removed the gateway auth token/password from router output.
  • Gateway management removed; use the gateway-guard skill separately.
  • FACEPALM integration removed; use the FACEPALM skill separately.

Requirements: OpenRouter. All model IDs use the openrouter/... prefix; configure OpenClaw with an OpenRouter API key so one auth profile covers every tier.

Config access: this skill reads ONLY its own config.json file (located in the skill directory) for model configuration and routing rules. It does NOT read openclaw.json, does NOT access gateway tokens/passwords, and does NOT perform any gateway management operations. Router output contains only task, model, sessionTarget, and an optional label; no secrets or credentials.

Default / orchestrator model: Gemini 2.5 Flash (openrouter/google/gemini-2.5-flash), which is fast, cheap, and reliable at tool-calling. The router delegates tasks to tier-specific sub-agents (Kimi for creative, MiniMax 2.5 for code, etc.).
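The documented output contract (task, model, sessionTarget, optional label, nothing else) can be enforced with a small guard. This is an illustrative sketch, not part of the shipped router.py:

```python
# Fields the documentation says router output may contain, and nothing else.
ALLOWED_KEYS = {"task", "model", "sessionTarget", "label"}

def check_router_output(payload: dict) -> dict:
    """Reject router output carrying anything beyond the documented fields
    (e.g. leaked tokens or gateway details). Illustrative guard only."""
    extra = set(payload) - ALLOWED_KEYS
    if extra:
        raise ValueError(f"unexpected keys in router output: {sorted(extra)}")
    return payload

ok = check_router_output(
    {"task": "write a poem", "model": "openrouter/moonshotai/kimi-k2.5",
     "sessionTarget": "isolated"}
)
```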

Orchestrator flow (mandatory for task requests)

When you are the main agent (Gemini 2.5 Flash) and the user gives you a task (code, research, write, create, design, poem, story, fix, build, etc.), you must delegate it. You do not perform the task yourself; a sub-agent does.

Steps (all three required):
  1. Run IntentRouter to get spawn params:
     python3 workspace/skills/friday-router/scripts/router.py spawn --json "<user message>"
     Example output: {"task":"write a poem about Mai","model":"openrouter/moonshotai/kimi-k2.5","sessionTarget":"isolated"}
  2. Immediately call sessions_spawn with the router's output, using the exact model from the JSON. Example:
     sessions_spawn(task="write a poem about Mai", model="openrouter/moonshotai/kimi-k2.5", sessionTarget="isolated")
     Do not change the model value. If the router said openrouter/moonshotai/kimi-k2.5, pass exactly that.
  3. Forward the sub-agent's reply to the user. That reply IS the task output. Say "Using: Kimi k2.5" (the model that actually ran). Never say "Using: Claude Sonnet 4" or any model that did not run.

Error handling: if sessions_spawn returns an error (e.g. device_token_mismatch), tell the user delegation failed and suggest checking gateway status or using the gateway-guard skill. Do not do the task yourself.

Hard-stop rule: if sessions_spawn fails or is skipped, return only the delegation error and the next-step fix. Do not write the requested output directly.

No-classify execution rule: for real user tasks, do not execute via classify; classify is diagnostics only. Execution must use spawn --json followed by sessions_spawn.

Label gate: only print "Using: <model>" after a successful spawn. If there is no successful spawn, do not print a Using: label.

Output hygiene: never return internal orchestration metadata to the user (no session keys/IDs, transcript paths, runtime/token stats, or internal text such as "A subagent task … completed", "Findings:", "Stats:", "sessionKey", or "Summarize this naturally"). Strip those blocks and forward only clean user-facing content.

Exception: meta-questions ("what model are you?", "how does routing work?") you answer yourself.

Security note: this skill does NOT expose gateway auth secrets (tokens/passwords) in its output. Gateway management functionality has been removed; use the separate gateway-guard skill if gateway auth management is needed.
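The three-step flow and hard-stop rule can be sketched as follows. Here sessions_spawn is a hypothetical stub standing in for OpenClaw's real tool, and the router invocation is represented by the JSON it would emit:

```python
import json

def delegate(router_json: str, sessions_spawn):
    """Three-step flow: parse router output, spawn with the exact model,
    forward the sub-agent reply. On spawn failure, hard-stop: report the
    error and do not produce the task output directly."""
    params = json.loads(router_json)          # step 1: router's spawn params
    try:
        reply = sessions_spawn(               # step 2: exact model, unchanged
            task=params["task"],
            model=params["model"],
            sessionTarget=params["sessionTarget"],
        )
    except RuntimeError as err:
        return f"Delegation failed: {err}. Check gateway status or use gateway-guard."
    # step 3: forward the reply, labeling the model that actually ran
    return f"{reply}\n\nUsing: {params['model']}"

# Hypothetical stub standing in for OpenClaw's real sessions_spawn tool.
def fake_spawn(task, model, sessionTarget):
    return f"[{model} reply for: {task}]"

router_out = '{"task":"write a poem","model":"openrouter/moonshotai/kimi-k2.5","sessionTarget":"isolated"}'
```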

Model Selection (Austin's Prefs)

Use Case               | Primary (OpenRouter) | Fallback
Default / orchestrator | Gemini 2.5 Flash     | —
Fast/cheap             | Gemini 2.5 Flash     | Gemini 1.5 Flash, Haiku
Reasoning              | GLM-5                | MiniMax 2.5
Creative/Frontend      | Kimi k2.5            | —
Research               | Grok Fast            | —
Code/Engineering       | MiniMax 2.5          | Qwen2.5-Coder
Quality/Complex        | GLM 4.7 Flash        | GLM 4.7, Sonnet 4, GPT-4o
Vision/Images          | GPT-4o               | —

All model IDs use the openrouter/ prefix (e.g. openrouter/moonshotai/kimi-k2.5).

CLI

python scripts/router.py default                      # Show default model
python scripts/router.py classify "fix lint errors"   # Classify → tier + model
python scripts/router.py spawn --json "write a poem"  # JSON for sessions_spawn (no gateway secrets)
python scripts/router.py models                       # List all models

Note: gateway auth management is not included. Use the gateway-guard skill separately if needed.

sessions_spawn examples

Creative task (poem):
  router output: {"task":"write a poem","model":"openrouter/moonshotai/kimi-k2.5","sessionTarget":"isolated"}
  → sessions_spawn(task="write a poem", model="openrouter/moonshotai/kimi-k2.5", sessionTarget="isolated")

Code task (bug fix):
  router output: {"task":"fix the login bug","model":"openrouter/minimax/minimax-m2.5","sessionTarget":"isolated"}
  → sessions_spawn(task="fix the login bug", model="openrouter/minimax/minimax-m2.5", sessionTarget="isolated")

Research task:
  router output: {"task":"research best LLMs","model":"openrouter/x-ai/grok-4.1-fast","sessionTarget":"isolated"}
  → sessions_spawn(task="research best LLMs", model="openrouter/x-ai/grok-4.1-fast", sessionTarget="isolated")

Tier Detection

  • FAST: check, get, list, show, status, monitor, fetch, simple
  • REASONING: prove, logic, analyze, derive, math, step by step
  • CREATIVE: creative, write, story, design, UI, UX, frontend, website (website/frontend/landing projects → Kimi k2.5 only; do not use the CODE tier)
  • RESEARCH: research, find, search, lookup, web, information
  • CODE: code, function, debug, fix, implement, refactor, test, React, JWT (code/API only; not website builds)
  • QUALITY: complex, architecture, design, system, comprehensive
  • VISION: image, picture, photo, screenshot, visual
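A toy sketch of this keyword-based tier detection, assuming simple substring checks and a FAST fallback (the real logic ships in scripts/router.py and may differ). Dict order encodes priority here, with VISION promoted to first per the misclassification fix in the changelog:

```python
# Keyword lists copied from the tier table; the priority order is an
# assumption, except VISION, which the changelog says takes priority.
TIERS = {
    "VISION":    ["image", "picture", "photo", "screenshot", "visual"],
    "FAST":      ["check", "get", "list", "show", "status", "monitor", "fetch", "simple"],
    "REASONING": ["prove", "logic", "analyze", "derive", "math", "step by step"],
    "CREATIVE":  ["creative", "write", "story", "design", "ui", "ux", "frontend", "website"],
    "RESEARCH":  ["research", "find", "search", "lookup", "web", "information"],
    "CODE":      ["code", "function", "debug", "fix", "implement", "refactor", "test", "react", "jwt"],
    "QUALITY":   ["complex", "architecture", "design", "system", "comprehensive"],
}

def classify(message: str) -> str:
    """Return the first tier with a keyword hit; fall back to FAST."""
    text = message.lower()
    for tier, keywords in TIERS.items():  # dict order = priority
        if any(kw in text for kw in keywords):
            return tier
    return "FAST"
```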

What Changed from Original

Bug                                              | Fix
Simple indicators inverted (high match = complex) | Now correctly: a high simple-keyword match means the FAST tier
Agentic tasks not bumping tier                   | Multi-step tasks now properly bump to the CODE tier
Vision tasks misclassified                       | Vision keywords now take priority over other classifications
Code keywords not detected                       | Added React, JWT, API, and other common code terms
Confidence always low                            | Now varies appropriately with keyword match strength
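One plausible reading of the confidence fix, assuming confidence scales with the number of matched keywords (the scaling factor below is an arbitrary illustration; the real formula is in scripts/router.py):

```python
def confidence(message: str, keywords: list[str]) -> float:
    """Toy score: grows with the number of tier keywords found, capped at 1.0.
    The factor of 3 is an illustrative assumption, not the shipped formula."""
    text = message.lower()
    hits = sum(kw in text for kw in keywords)
    return min(1.0, 3 * hits / max(1, len(keywords)))
```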

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 Docs · 2 Config · 1 Scripts
  • SKILL.md Primary doc
  • README.md Docs
  • REVIEW-name-conformity.md Docs
  • scripts/router.py Scripts
  • _meta.json Config
  • config.json Config