Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Validates documentation usability by spawning context-free agents to complete tasks using only the docs, identifying gaps for improvement.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
A documentation quality methodology that spawns fresh agents to validate whether docs are actually usable.
Documentation written by the person who built the thing is almost always incomplete. They fill in gaps unconsciously. They assume context. They skip "obvious" steps. RTFM Testing fixes this by spawning a fresh agent with zero context and asking: can you complete this task using only the docs?
- Before publishing docs, READMEs, tutorials, or setup guides
- When users report confusion but you can't see why
- After major refactors, to validate the docs still work
- As part of CI for documentation-heavy projects
1. Identify the task – What should someone be able to do after reading the docs?
2. Bundle the docs – Collect all relevant documentation (and nothing else)
3. Spawn a fresh tester – Use the TESTER.md prompt with sessions_spawn
4. Analyze failures – Every confusion point is a doc bug
5. Fix and repeat – Update docs, respawn, retest until clean
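The fix-and-repeat loop above can be sketched as a small driver. This is a hypothetical illustration, not part of the skill: `coldStartScore` and `countGaps` are names invented here, the transcripts stand in for real tester sessions, and a literal `[GAP]` marker per finding is assumed (the actual format is defined in GAPS.md).

```typescript
// Count [GAP] reports in one tester transcript.
// Assumes each finding contains a literal "[GAP]" marker (see GAPS.md).
function countGaps(transcript: string): number {
  return transcript.split("\n").filter((line) => line.includes("[GAP]")).length;
}

// Walk successive test runs (fix docs between each) and report how many
// cycles it took to reach a clean run: the Cold Start Score.
function coldStartScore(runTranscripts: string[]): number {
  let cycles = 0;
  for (const transcript of runTranscripts) {
    cycles += 1;
    if (countGaps(transcript) === 0) {
      return cycles; // clean run: docs passed with no gaps
    }
  }
  return cycles; // never converged within the provided runs
}
```

In a real run, each transcript would come from a fresh `sessions_spawn` call rather than an in-memory array.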
```
sessions_spawn(
  task: "Complete the following task using ONLY the provided documentation. [TASK DESCRIPTION]\n\n---\n\n[PASTE DOCS HERE]",
  agentId: "default",
  label: "rtfm-test"
)
```

Or use the full TESTER.md prompt for more structured output.
- Cold Start Score – Number of spawn cycles until task completion (lower = better docs)
- Gap Count – Number of [GAP] reports per run
- Gap Categories – Missing steps, unclear language, wrong assumptions, missing prerequisites
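For the Gap Categories metric, a minimal tally might look like the sketch below. This is an assumption-laden example: it presumes findings are tagged like `[GAP:missing-step]`, which is one plausible shape for the format GAPS.md specifies, and `gapCategories` is a name invented here.

```typescript
// Tally gap findings by category from a tester transcript.
// Assumes findings are tagged "[GAP:category] description".
function gapCategories(transcript: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const match of transcript.matchAll(/\[GAP:([\w-]+)\]/g)) {
    const category = match[1];
    counts.set(category, (counts.get(category) ?? 0) + 1);
  }
  return counts;
}
```

Tracking these counts across retest cycles shows whether doc fixes are actually closing gaps or just shifting them between categories.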
- No hints – Don't help the tester. Let it fail.
- Literal reading – The tester must not infer or guess
- Docs only – No external knowledge, no "common sense"
- Failures are signal – Every stumble is actionable feedback
- SKILL.md – This file
- TESTER.md – System prompt for the fresh agent
- GAPS.md – Output format specification