Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Conduct open-ended research on a topic, building a living markdown document. Supports interactive and deep research modes.
Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Conduct open-ended research on a topic, building a living markdown document. The conversation is ephemeral; the document is what matters.
Activate when the user wants to:
- Research a topic, idea, or question
- Explore something before committing to building it
- Investigate options, patterns, or approaches
- Create a "research doc" or "investigation"
- Run deep async research on a complex topic
Each research topic gets its own folder:

```
~/.openclaw/workspace/research/<topic-slug>/
├── prompt.md      # Original research question/prompt
├── research.md    # Main findings (Parallel output or interactive notes)
├── research.pdf   # PDF export (when generated)
└── ...            # Any other related files (data, images, etc.)
```
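The layout above can be scaffolded from the shell. A minimal sketch; the slug rules here (lowercase, hyphen-separated) are an assumption, not something the skill specifies:

```shell
# Scaffold a research folder for a new topic.
# Slugification (lowercase, non-alphanumerics -> hyphens) is an assumed
# convention; adjust if the skill derives slugs differently.
topic="Vector databases for RAG"
slug=$(printf '%s' "$topic" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//; s/-*$//')
dir="$HOME/.openclaw/workspace/research/$slug"
mkdir -p "$dir"
printf '# %s\n' "$topic" > "$dir/prompt.md"   # original question goes here
: > "$dir/research.md"                        # findings accumulate here
echo "$dir"
```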
For topics you explore together in conversation. You search, synthesize, and update the doc in real-time.
For complex topics that need comprehensive investigation. Uses the Parallel AI API via the parallel-research CLI. Takes minutes to hours and returns detailed markdown reports.

When to use deep research:
- Market analysis, competitive landscape
- Technical deep-dives requiring extensive source gathering
- Multi-faceted questions that benefit from parallel exploration
- When the user says "deep research" or wants comprehensive coverage
For each exchange:
1. Do the research - Web search, fetch docs, explore code
2. Update the document - Add findings, move answered questions, add sources
3. Show progress - Note what was added (don't repeat everything)
4. Prompt next direction - End with a question or suggestion

Key behaviors:
- Update existing sections over creating new ones
- Use bullet points for findings; prose for summaries
- Note uncertainty ("seems like", "according to X", "unverified")
- Link to sources whenever possible
Every 5-10 exchanges, offer to:
- Write a "Current Understanding" summary
- Prune redundant findings
- Reorganize if unwieldy
- Check blind spots
When research is complete, update the status in research.md:
- "Status: Complete" → Done, stays in place as reference
- "Status: Ongoing" → Living doc, will be updated over time

If the research is specifically for building a project:
- Graduate to ~/specs/<project-name>.md as a project spec
- Or create a project directly based on findings
- Update status to "Status: Graduated → ~/specs/..."

Most research is just research; it doesn't need to become a spec. Only graduate if you're actually building something from it.
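Flipping the status line is a one-line edit. A minimal sketch, using a temporary demo file to stand in for the real research.md:

```shell
# Demo: update the status line in a research doc.
# A temp file stands in for research/<topic-slug>/research.md here.
doc=$(mktemp)
printf 'Status: Ongoing\n' > "$doc"

# Portable in-place edit via a temp file (avoids non-portable `sed -i`).
sed 's/^Status: Ongoing$/Status: Complete/' "$doc" > "$doc.new" && mv "$doc.new" "$doc"

cat "$doc"   # prints: Status: Complete
```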
```
parallel-research create "Your research question" --processor ultra --wait
```

Processor options: lite, base, core, pro, ultra (default), ultra2x, ultra4x, ultra8x. Add a -fast suffix for speed over depth: ultra-fast, pro-fast, etc.

Options:
- `-w, --wait` → Wait for completion and show result
- `-p, --processor` → Choose processor tier
- `-j, --json` → Raw JSON output
Deep research tasks take minutes to hours, so poll for results automatically rather than checking manually.

Options:
- OpenClaw users: See OPENCLAW.md for cron-based auto-check scheduling
- Other setups: Use any scheduler (cron, systemd timer, CI job) to periodically run parallel-research status <run_id> and parallel-research result <run_id> until complete
- Simple approach: Just use parallel-research create "..." --wait to block until done (works for shorter tasks)
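For scripted polling, a minimal sketch of a helper loop; the "completed" and "failed" substrings matched below are assumptions about what `parallel-research status` prints, so adjust them to the real output:

```shell
# Poll a deep-research run until it finishes, then print the result.
# The "completed"/"failed" substrings are assumed status text.
poll_run() {
  run_id=$1
  interval=${2:-300}   # seconds between checks (default 5 minutes)
  while :; do
    status=$(parallel-research status "$run_id")
    case $status in
      *completed*) parallel-research result "$run_id"; return 0 ;;
      *failed*)    echo "run $run_id failed" >&2;      return 1 ;;
    esac
    sleep "$interval"
  done
}

# Usage: poll_run <run_id> 60   # check every minute
```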
```
parallel-research status <run_id>
parallel-research result <run_id>
```
Create the research folder and save results:

```
~/.openclaw/workspace/research/<topic-slug>/
├── prompt.md      # Original question + run metadata
└── research.md    # Full Parallel output
```

prompt.md should include:

```
# <Topic Title>

> <Original research question>

**Run ID:** <run_id>
**Processor:** <processor>
**Started:** <date>
**Completed:** <date>
```

research.md contains the full Parallel output, plus any follow-up notes.
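Saving the run metadata can be sketched with a heredoc; the topic, run id, processor, and dates below are all placeholders:

```shell
# Write prompt.md with run metadata (every value here is a placeholder).
dir="$HOME/.openclaw/workspace/research/example-topic"
mkdir -p "$dir"
cat > "$dir/prompt.md" <<'EOF'
# Example Topic

> What are the trade-offs between approaches to example topic?

**Run ID:** run_abc123
**Processor:** ultra
**Started:** 2025-01-01
**Completed:** 2025-01-02
EOF
grep -c '^\*\*' "$dir/prompt.md"   # counts the metadata lines; prints 4
```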
All PDFs go in the research folder; never save to tmp/. Whether using export-pdf, the browser pdf action, or any other method, the output path must be research/<topic-slug>/.

Use the export-pdf script to convert research docs to PDF:

```
export-pdf ~/.openclaw/workspace/research/<topic-slug>/research.md
# Creates: ~/.openclaw/workspace/research/<topic-slug>/research.pdf
```

For browser-generated PDFs (e.g. saving a webpage as PDF): browser pdf → save to research/<topic-slug>/<descriptive-name>.pdf

Note: Tables render as stacked rows (PyMuPDF limitation). Acceptable for research docs.
"new research: <topic>" - Start interactive research doc "deep research: <topic>" - Start async deep research "show doc" / "show research" - Display current research file "summarize" - Synthesis checkpoint "graduate" - Move research to next phase "archive" - Mark as complete reference "export pdf" - Export to PDF "check research" - Check status of pending deep research tasks
- Atomic findings - One insight per bullet
- Link everything - Sources, docs, repos
- Capture context - Why did we look at this?
- Note confidence - Use qualifiers when uncertain
- Date important findings - Especially for fast-moving topics
See SETUP.md for first-time installation of:
- parallel-research CLI
- PDF export tools (pandoc, PyMuPDF)
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.