Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Create documentation optimized for AI agent consumption. Use when writing SKILL.md files, README files, API docs, or any documentation that will be read by LLMs in context windows. Helps structure content for RAG retrieval, token efficiency, and the Hybrid Context Hierarchy.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Write documentation that AI agents can efficiently consume. Based on Vercel benchmarks and industry standards (AGENTS.md, llms.txt, CLAUDE.md).
Three-layer architecture for optimal agent performance:

- Inline (AGENTS.md): always in context, version-controlled with the codebase.
- On-demand references: fetched as needed, in 1K-5K token chunks. Examples: framework-specific guides, detailed style guides, API schemas.
- External retrieval: gated by allow-lists, edge cases only. Examples: latest library updates, Stack Overflow for obscure errors, third-party llms.txt.
Vercel Benchmark (2026):

| Approach | Pass Rate |
| --- | --- |
| Tool-based retrieval | 53% |
| Retrieval + prompting | 79% |
| Inline AGENTS.md | 100% |

Root cause: meta-cognitive failure. Agents don't know what they don't know; they assume training data is sufficient. Inline docs bypass this entirely.
An 8KB compressed index outperforms a 40KB full dump. Compress to:

- File paths (where code lives)
- Function signatures (names + types only)
- Negative constraints ("Do NOT use X")
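One way to build such an index is to parse source files and keep only the path and the signature lines, dropping every body. A minimal sketch using Python's standard `ast` module; `compress_module` is a hypothetical helper, not part of any published tool.

```python
import ast
from pathlib import Path

def compress_module(path: Path) -> list[str]:
    """Reduce a Python file to its path plus bare signatures,
    so agents see where code lives and what it is called
    without paying tokens for implementations."""
    tree = ast.parse(path.read_text())
    lines = [f"# {path}"]
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
    return lines
```

The same idea extends to classes and type aliases; the point is that names and shapes survive while bodies are discarded.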
RAG systems split at headers, so each section must be self-contained:

    ## Database Setup   ← chunk boundary
    Prerequisites: PostgreSQL 14+
    1. Create database...

Rules:

- Front-load key info (chunkers truncate)
- Use descriptive headers (agents search by header text)
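The header-split behavior described above can be sketched in a few lines. This is an illustrative splitter, not the implementation any particular RAG system uses; `chunk_by_h2` is a hypothetical name.

```python
def chunk_by_h2(doc: str) -> dict[str, str]:
    """Split a markdown doc at '## ' headers, mirroring how many
    RAG chunkers cut. Each value must read as a self-contained
    chunk; the header text becomes the retrieval key."""
    chunks: dict[str, str] = {}
    header = "Preamble"
    body: list[str] = []
    for line in doc.splitlines():
        if line.startswith("## "):
            if body:
                chunks[header] = "\n".join(body).strip()
            header = line[3:].strip()
            body = []
        else:
            body.append(line)
    if body:
        chunks[header] = "\n".join(body).strip()
    return chunks
```

Running your own docs through a splitter like this is a quick way to spot sections that lose their prerequisites at a chunk boundary.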
Agents can't autonomously browse. Each link = tool call + latency + potential failure.

| Approach | Token Load | Agent Success |
| --- | --- | --- |
| Full inline | ~12K | ✅ High |
| Links only | ~2K | ❌ Requires fetching |
| Hybrid | ~4K base | ✅ Best of both |
LLMs have U-shaped attention:

- Strong: start of context (primacy)
- Strong: end of context (recency)
- Weak: middle of context

Solution: put critical rules at the TOP of AGENTS.md. Governance first, details later.
Strip everything that isn't essential:

- No "Welcome to..." preambles
- No marketing text
- No changelogs in core docs

Formats like llms.txt and AGENTS.md mechanically increase SNR.
AGENTS.md is part of your codebase. Controlled, version-pinned.
External retrieval, by contrast, carries risks:

- Indirect prompt injection via hidden text
- SSRF risks if agents can browse freely
- Dependency on external uptime

Mitigation: domain allow-lists, human-in-the-loop for external retrieval.
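The allow-list mitigation can be as simple as a host check before any fetch. A minimal sketch; the domains in `ALLOWED_DOMAINS` are placeholders for whatever your project actually trusts.

```python
from urllib.parse import urlparse

# Placeholder allow-list; populate with domains your project trusts.
ALLOWED_DOMAINS = {"docs.python.org", "vercel.com"}

def fetch_allowed(url: str) -> bool:
    """Gate external retrieval behind a domain allow-list so an
    injected link cannot send the agent to an arbitrary host."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS
```

Checking `hostname` rather than doing a substring match on the URL avoids bypasses like `https://evil.example/docs.python.org`.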
- Pasting 50 pages → triggers "Lost in the Middle"
- "See external docs" → agents can't browse autonomously
- Generic advice → "Write clean code" (use specific constraints instead)
- TOC-only docs → indexes without content
- Trusting retrieval alone → 53% vs 100% pass rate
For detailed guidance on RAG optimization, multi-framework docs, and API templates, see references/advanced-patterns.md.
- Critical governance at TOP of doc
- Total inline context under 4K tokens
- Each H2 section self-contained
- No external links without inline summary
- Negative constraints explicit ("Do NOT...")
- File paths and signatures, not full code
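Several of these checklist items are mechanically checkable. A rough linter sketch, assuming the common ~4 characters-per-token heuristic; `lint_agents_md` is a hypothetical helper, not an existing tool.

```python
def lint_agents_md(doc: str, max_tokens: int = 4000) -> list[str]:
    """Flag checklist violations in an AGENTS.md draft.
    Token count is approximated as len(doc) / 4 characters."""
    problems: list[str] = []
    if len(doc) / 4 > max_tokens:
        problems.append("inline context likely exceeds 4K tokens")
    if "Do NOT" not in doc and "Do not" not in doc:
        problems.append("no explicit negative constraints")
    if not doc.lstrip().startswith("#"):
        problems.append("doc does not open with a header")
    return problems
```

A check like this fits naturally in CI, so the doc is linted on every change just like the code it governs.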