Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive (see the sketch below)
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
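Since the install method is a manual import of an extracted archive, a minimal extraction sketch may help. This assumes the package ships as a .zip file; the paths are placeholders, not part of any package spec.

```python
# Minimal extraction sketch (assumption: the skill package is a .zip archive;
# both paths below are placeholders).
import zipfile
from pathlib import Path

archive = Path("downloads/skill-package.zip")   # hypothetical download location
dest = Path("extracted/skill-package")          # hypothetical working folder

dest.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)

# SKILL.md is the primary doc, so confirm it came out of the archive.
assert (dest / "SKILL.md").exists(), "SKILL.md missing from extracted package"
print(f"Extracted to {dest}")
```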
Test skills before using or publishing them: trial, compare, and evaluate in isolation, without affecting your environment.
Rather than working through the installation by hand, give the extracted package to your coding agent with a concrete install brief:
Install brief:
"I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."

Upgrade brief:
"I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
Two use cases:
- Try before commit: test-drive skills before installing.
- Evaluate before publish: verify quality before publishing.

Key principle: test in isolation. Never affect the user's environment.

References:
- Read sandbox.md: isolated testing environment
- Read compare.md: A/B comparison between skills
- Read evaluate.md: multi-agent quality evaluation
Trial a skill:

```
sessions_spawn(
    task="Test skill X: Load ONLY its SKILL.md, run [sample task], report quality",
    model="anthropic/claude-haiku"
)
```

Compare two skills:
- Run the same task through each (separate sub-agents).
- Present the outputs side by side (see the sketch below).
- Ask: "Which works better? Why?"
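To make the side-by-side step concrete, here is a small formatting sketch. The two output strings are assumed to come from the separate sub-agent runs above; the column width is arbitrary.

```python
# Side-by-side view of two sub-agent outputs (sketch; the outputs themselves
# are assumed to come from the two sessions_spawn runs described above).
import textwrap

def side_by_side(out_a: str, out_b: str, width: int = 38) -> str:
    cols_a = textwrap.wrap(out_a, width) or [""]
    cols_b = textwrap.wrap(out_b, width) or [""]
    height = max(len(cols_a), len(cols_b))
    cols_a += [""] * (height - len(cols_a))
    cols_b += [""] * (height - len(cols_b))
    header = f"{'Skill A':<{width}} | Skill B"
    rows = [f"{a:<{width}} | {b}" for a, b in zip(cols_a, cols_b)]
    return "\n".join([header, "-" * (2 * width + 3)] + rows)

print(side_by_side("Output from the first skill...", "Output from the second skill..."))
```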
Trial Mode (before installing):
- Spawn a sub-agent with ONLY the test skill.
- Run 2-3 representative tasks.
- Evaluate: does it help? Are the instructions clear?
- Decide: keep, pass, or try another.

Evaluation Mode (before publishing):
- Spawn specialized reviewers (see evaluate.md).
- Check structure, safety, usefulness.
- Synthesize findings (sketch below).
- Recommend improvements.
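As a sketch of the synthesis step, the snippet below collects reviewer findings into a single recommendation. The criteria names mirror the list above; the data structure itself is an assumption, not something defined by evaluate.md.

```python
# Synthesis sketch for Evaluation Mode (assumption: each specialized reviewer
# returns pass/fail per criterion; evaluate.md may define a richer format).
from dataclasses import dataclass

@dataclass
class Finding:
    criterion: str   # "structure", "safety", or "usefulness"
    passed: bool
    note: str

def synthesize(findings: list[Finding]) -> str:
    failures = [f for f in findings if not f.passed]
    if not failures:
        return "Recommend publish: all criteria passed."
    issues = "; ".join(f"{f.criterion}: {f.note}" for f in failures)
    return f"Recommend improvements before publishing ({issues})."

print(synthesize([
    Finding("structure", True, "SKILL.md well organized"),
    Finding("safety", False, "asks for credentials without warning"),
    Finding("usefulness", True, "clear value on sample tasks"),
]))
```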
⚠️ Never load a test skill into your main context.

Sub-agent approach (recommended):

```
sessions_spawn(
    task="You have ONE skill loaded: [skill content]. Test by doing: [task]",
    model="anthropic/claude-haiku"
)
```

A sketch for building this task string follows the checklist below.

- Complete isolation: the main session is unaffected.
- Natural cleanup: the sub-agent terminates, done.
- Cheap testing: use Haiku.

What to check:
- Does it activate correctly?
- Are the instructions clear?
- Is the token cost reasonable?
- Is the output quality acceptable?
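One way to assemble the isolated task string is to inline SKILL.md directly, so the sub-agent sees nothing else. A sketch, assuming the extracted folder from earlier; the path and sample task are placeholders.

```python
# Build the isolated test prompt by inlining SKILL.md (sketch; the folder path
# and sample task are placeholders).
from pathlib import Path

skill_md = Path("extracted/skill-package/SKILL.md").read_text()
sample_task = "Summarize this inbox thread into three action items."  # placeholder

task = (
    "You have ONE skill loaded:\n"
    f"{skill_md}\n\n"
    f"Test it by doing: {sample_task}\n"
    "Report: did the skill activate, were its instructions clear, "
    "and was the output quality acceptable?"
)
# Pass `task` to sessions_spawn(task=task, model="anthropic/claude-haiku").
```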
Troubleshooting:
- Skill requires credentials: ask the user for test credentials, or skip auth-dependent features.
- Skill not found: verify the slug with `npx clawhub info <slug>` before testing (scripted version below).
- Test fails mid-way: the sub-agent terminates cleanly. Review the logs, adjust the test task, retry.
- Skill has many auxiliary files: load SKILL.md first; reference the others only if needed during the test.

Test thoroughly. Install only after explicit user approval.
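If you want to script the slug check from the troubleshooting list, a thin subprocess wrapper around the same `npx clawhub info` command is enough. The slug value here is a placeholder.

```python
# Scripted version of the slug check above (the command comes straight from
# the troubleshooting note; "example-skill" is a placeholder slug).
import subprocess

slug = "example-skill"
result = subprocess.run(
    ["npx", "clawhub", "info", slug],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print(f"Skill not found: {slug!r}; fix the slug before testing.")
```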
- Category focus: workflow acceleration for inboxes, docs, calendars, planning, and execution loops.
- Why Yavira: the largest current source, with strong distribution and engagement signals.