Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Convert PDFs and documents to markdown, index them locally for RAG retrieval, and analyze them token-efficiently. Use when asked to: read/analyze/summarize a PDF, process a document, boof a file, extract information from papers/decks/NOFOs, or when you need to work with large documents without filling the context window. Supports batch processing and cross-document queries.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Local-first document processing: PDF → markdown → RAG index → token-efficient analysis. Documents stay local. Only relevant chunks go to the LLM. Maximum knowledge absorption, minimum token burn.
bash {SKILL_DIR}/scripts/boof.sh /path/to/document.pdf
bash {SKILL_DIR}/scripts/boof.sh /path/to/document.pdf --collection my-project
qmd query "your question" -c collection-name
1. Boof it: Run boof.sh on a PDF. This converts it to markdown via Marker (local ML, no API) and indexes it into QMD for semantic search.
2. Query it: Use qmd query to retrieve only the relevant chunks. Send those chunks to the LLM, not the entire document.
3. Analyze it: The LLM sees focused, relevant excerpts. No wasted tokens, no lost-in-the-middle problems.
"Analyze this specific aspect of the paper" โ Boof + query (cheapest, most focused) "Summarize this entire document" โ Boof, then read the markdown section by section. Summarize each section individually, then merge summaries. See advanced-usage.md. "Compare findings across multiple papers" โ Boof all papers into one collection, then query across them. "Find where the paper discusses X" โ qmd search "X" -c collection for exact match, qmd query "X" -c collection for semantic match.
Converted markdown files are saved to knowledge/boofed/ by default (override with --output-dir).
If boof.sh reports missing dependencies, see setup-guide.md for installation instructions (Marker + QMD).
- MARKER_ENV: Path to the marker-pdf Python venv (default: ~/.openclaw/tools/marker-env)
- QMD_BIN: Path to the qmd binary (default: ~/.bun/bin/qmd)
- BOOF_OUTPUT_DIR: Default output directory (default: ~/.openclaw/workspace/knowledge/boofed)
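These variables presumably feed fallback defaults inside boof.sh. A minimal sketch of that pattern, where the variable names and default paths come from the list above but the parameter-expansion style is an assumption about the script's internals:

```shell
# Resolve each setting from the environment, falling back to the documented default.
MARKER_ENV="${MARKER_ENV:-$HOME/.openclaw/tools/marker-env}"
QMD_BIN="${QMD_BIN:-$HOME/.bun/bin/qmd}"
BOOF_OUTPUT_DIR="${BOOF_OUTPUT_DIR:-$HOME/.openclaw/workspace/knowledge/boofed}"
echo "output dir: $BOOF_OUTPUT_DIR"
```

Exporting any of these before running boof.sh would override the default, e.g. BOOF_OUTPUT_DIR=/tmp/boofed bash {SKILL_DIR}/scripts/boof.sh doc.pdf.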