Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Context-based skill auto-routing + federated skill composition. Analyzes user input to auto-select single or multiple skills and execute in order. First gate...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Meta system that analyzes natural language input to auto-select appropriate skill(s), determine order, and chain execution.
1. Scan only `skills/*/SKILL.md` frontmatter (trigger matching)
   - Quick match on the description + trigger fields
   - No full body reading → 83% token savings
2. Check the `run` field of the matched skill for a script path
   - `run: "./run.sh"` → `skills/{name}/run.sh`
   - `run: "./run.js"` → `skills/{name}/run.js`
3. Execute the script directly with exec:
   ```bash
   WORKSPACE=$HOME/.openclaw/workspace \
   EVENTS_DIR=$WORKSPACE/events \
   MEMORY_DIR=$WORKSPACE/memory \
   bash $WORKSPACE/skills/{name}/run.sh [args]
   ```
4. Agent processes the stdout result
   - Parse if JSON
   - Pass through if text
   - Check stderr on error
5. Generate events based on `events_out`
   - Create an `events/{type}-{date}.json` file
   - Subsequent skills consume it via `events_in`
6. Check hooks → trigger subsequent skills
   - `post: ["skill-a", "skill-b"]` → auto-execute
   - `on_error: ["notification-hub"]` → notify on error
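The frontmatter fields referenced in these steps might look like the following. This is a hypothetical example for illustration: the field names (`name`, `description`, `trigger`, `run`, `events_out`, `hooks`) come from the flow above, but the exact schema and values are assumptions.

```yaml
# skills/daily-report/SKILL.md frontmatter (hypothetical example)
---
name: daily-report
description: Compile a daily activity report from collected events
trigger: ["daily report", "today's summary"]
run: "./run.sh"
events_out: ["report-generated"]
hooks:
  post: ["notification-hub"]
  on_error: ["notification-hub"]
---
```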
```bash
# Extract only frontmatter from all skills
# (yq v4 needs --front-matter=extract to read YAML frontmatter from a Markdown file)
for skill in skills/*/SKILL.md; do
  yq --front-matter=extract eval '.name, .description, .trigger, .run' "$skill"
done
```
# User: "daily report" # β trigger match: daily-report # β Execute: cd $HOME/.openclaw/workspace WORKSPACE=$PWD \ EVENTS_DIR=$PWD/events \ MEMORY_DIR=$PWD/memory \ bash skills/daily-report/run.sh today # Agent formats stdout result and delivers to user
Before: SKILL.md 3000 chars Γ 40 = 120KB (~30K tokens) v2: SKILL.md 500 chars Γ 40 = 20KB (~5K tokens) Savings: 83% token reduction
OpenClaw already selects a single skill via description matching, but this skill adds:
- Complex intent detection: "Analyze competitors and make card news" → competitor-watch + copywriting + cardnews + insta-post
- Context-based auto-hooks: automatically determine follow-up skills when a skill executes
- Skill chain templates: pre-defined, frequently used combinations
"commit/push/git" β git-auto "DM/instagram message" β auto-reply "cost/tokens/how much" β tokenmeter "translate/to English" β translate "invoice/quote" β invoice-gen "code review/PR" β code-review "system status/health" β health-monitor "trends" β trend-radar "performance/reactions/likes" β performance-tracker "daily report" β daily-report "SEO audit" β seo-audit "brand tone" β brand-voice
| Trigger Pattern | Skill Chain | Description |
| --- | --- | --- |
| "create content/post" | seo-content-planner → copywriting → cardnews → insta-post | Full content pipeline |
| "analyze competitors and report" | competitor-watch → daily-report → mail | Research → report |
| "summarize this video as card news" | yt-digest → content-recycler → cardnews → insta-post | Video → content conversion |
| "weekly review" | self-eval + tokenmeter + performance-tracker → daily-report | Comprehensive review |
| "recycle content" | performance-tracker → content-recycler → cardnews | Repackage successful content |
| "review idea and execute" | think-tank (brainstorm) → decision-log → skill-composer | Ideation → decision → execution |
| "market research" | competitor-watch + trend-radar + data-scraper → daily-report | Full research |
| "release" | code-review → git-auto → release-discipline | Safe deployment |
| "morning routine" | health-monitor → tokenmeter → notification-hub → daily-report | Morning auto-check |
When skill A completes, its results are analyzed to auto-determine the next skill.

Auto-chain rules (if → then):
- IF competitor-watch detects an important change → THEN notification-hub(urgent) + include in daily-report
- IF tokenmeter exceeds $500/month → THEN notification-hub(urgent)
- IF code-review detects HIGH severity → THEN block commit + notification-hub
- IF a think-tank conclusion has an "immediate execution" action → THEN auto-record in decision-log
- IF cardnews generation completes → THEN confirm "post with insta-post?" (approval required)
- IF self-eval detects a repeated mistake → THEN trigger learning-engine
- IF performance-tracker finds successful content → THEN suggest content-recycler
- IF trend-radar detects a hot trend → THEN auto-suggest seo-content-planner
- IF mail detects an important email → THEN notification-hub(important)
- IF health-monitor detects an anomaly → THEN attempt auto-recovery + notification-hub(urgent)
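The tokenmeter rule above, sketched as a post-hook check. The $500 threshold comes from the rules; the result JSON shape (`month_spend_cents`) and the notification command are assumptions for illustration.

```shell
# check_tokenmeter: IF monthly spend exceeds $500 THEN notify urgently.
# Sketch only; field names and the notification mechanism are assumed.
check_tokenmeter() {
  result_json="$1"            # stdout from tokenmeter, assumed to be JSON
  limit_cents=50000           # $500.00 in cents, to keep the comparison integer-only
  spent_cents="$(printf '%s' "$result_json" \
    | sed -n 's/.*"month_spend_cents":\([0-9]*\).*/\1/p')"
  # Bail out if the expected field is missing
  [ -n "$spent_cents" ] || return 1
  if [ "$spent_cents" -gt "$limit_cents" ]; then
    # Stand-in for invoking the notification-hub skill
    echo "notification-hub urgent: monthly spend exceeded \$500"
  fi
}
```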
1. Receive user input
2. Classify intent (single vs complex)
3. If single → execute the skill immediately
4. If complex → compose a skill chain
   a. Skills without dependencies execute in parallel (sessions_spawn)
   b. Skills with dependencies execute sequentially (passing previous results via events/)
5. Check auto-chain rules on each skill completion
6. Auto-trigger additional skills if needed (or request approval)
7. Synthesize final results and respond
When skill-router activates, for all skills:
- pre-hook: input validation + security check
- post-hook: generate an events/ event + check chain rules
- on-error: error log + notification-hub
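A minimal wrapper showing how those three hooks could bracket a skill run. The hook script locations (`hooks/pre.sh` etc.) and the error-log path are illustrative assumptions, not a documented layout; only the `skills/{name}/run.sh` convention comes from the routing flow.

```shell
# run_with_hooks: pre-hook -> skill -> post-hook, with an on-error fallback.
run_with_hooks() {
  skill="$1"; shift
  ws="${WORKSPACE:-$HOME/.openclaw/workspace}"
  # pre-hook: input validation + security check (no-op if the script is absent)
  if [ -x "$ws/hooks/pre.sh" ]; then "$ws/hooks/pre.sh" "$skill" "$@"; fi
  if bash "$ws/skills/$skill/run.sh" "$@"; then
    # post-hook: generate the events/ entry + check chain rules
    if [ -x "$ws/hooks/post.sh" ]; then "$ws/hooks/post.sh" "$skill"; fi
    return 0
  else
    # on-error: error log + notification-hub
    echo "skill $skill failed" >> "$ws/error.log"
    if [ -x "$ws/hooks/on_error.sh" ]; then "$ws/hooks/on_error.sh" "$skill"; fi
    return 1
  fi
}
```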
```
[User Input]
    ↓
[skill-router] → intent classification
    ↓
TIER 1: Data Collection
    competitor-watch, data-scraper, trend-radar, tokenmeter, yt-digest
    ↓ events/
TIER 2: Analysis/Thinking
    think-tank, self-eval, seo-audit, code-review, performance-tracker
    ↓ events/
TIER 3: Production
    copywriting, cardnews, content-recycler, translate, invoice-gen
    ↓ events/
TIER 4: Deployment/Execution
    insta-post, mail, git-auto, release-discipline
    ↓ events/
TIER 5: Tracking/Learning
    daily-report, decision-log, learning-engine, notification-hub
```
- Always require approval before external actions (email send, SNS post, payment)
- Prevent infinite loops: stop after the same skill chain repeats 3 times
- Cost limit: max 5 subagents per chain
- Graceful stop on error + save partial results

Built by 무펭이 · Mupengism ecosystem skill
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.