Requirements
- Target platform: OpenClaw
- Install method: manual import
- Extraction: extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
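The manual-import flow above can be sketched in a few shell commands. This is a minimal illustration using a throwaway archive; the real package name and your skills directory will differ.

```shell
# Stand-in for the downloaded package (illustrative paths only).
mkdir -p /tmp/pkg && echo "# Vibe Research" > /tmp/pkg/SKILL.md
tar -czf /tmp/skill.tar.gz -C /tmp pkg

# Extraction step: unpack the archive into your skills directory.
mkdir -p /tmp/skills
tar -xzf /tmp/skill.tar.gz -C /tmp/skills

# Primary doc: read SKILL.md before (or instead of) installing by hand.
cat /tmp/skills/pkg/SKILL.md
```

From here, hand the extracted folder to your agent rather than following the steps in SKILL.md yourself.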
Conduct AI-led research with autonomous literature review, hypothesis generation, analysis, and synthesis while human provides vision.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
User has a research question or knowledge gap. Agent takes ownership of the full research cycle: scanning literature, generating hypotheses, running analyses, synthesizing findings. The human provides direction and oversight; the agent executes.
| Topic | File |
| --- | --- |
| Research pipeline | pipeline.md |
| Risk mitigation | risks.md |
- Traditional research: human-led, human-executed
- Deep research: human-led, AI-assisted
- Vibe research: human-directed, AI-led

The human sets the question and validates outputs. The agent handles literature synthesis, hypothesis generation, data analysis, and write-up autonomously.
Agent executes the complete pipeline:
1. Gap identification → What's unknown or contested?
2. Literature synthesis → Scan, summarize, cross-reference sources
3. Hypothesis generation → Propose testable claims
4. Analysis design → Define methodology
5. Execution → Run analyses, gather data
6. Synthesis → Write findings with citations
- Human provides: research question, domain constraints, success criteria
- Agent handles: reading papers, connecting ideas, running experiments, drafting
- Human validates: key decisions, final outputs, methodology choices
- Cite every claim: source, page, quote
- Show the reasoning chain for hypotheses
- Log all analytical steps for reproducibility
- Flag confidence levels (high/medium/low)
Don't wait for instructions. When analyzing a topic:
- Identify contradictions in the literature
- Spot under-explored areas
- Suggest follow-up experiments if results are ambiguous
- Pull additional sources when context is insufficient
- Only claim what sources support
- Distinguish "Source X says..." from "I infer..."
- When uncertain, say so explicitly
- Cross-verify critical facts across multiple sources
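The citation and epistemic-honesty rules above amount to a record schema for every claim the agent emits. This is a hypothetical sketch; the field names and `render` format are assumptions, not a defined interface.

```python
from dataclasses import dataclass
from typing import Literal, Optional


@dataclass
class Claim:
    """One claim with the citation and confidence metadata the rules require."""
    text: str                                      # the claim itself
    source: str                                    # e.g. paper title or URL
    quote: str                                     # verbatim supporting quote
    confidence: Literal["high", "medium", "low"]   # flagged confidence level
    page: Optional[int] = None                     # page the quote appears on
    kind: Literal["sourced", "inferred"] = "sourced"  # "Source X says..." vs "I infer..."

    def render(self) -> str:
        prefix = "Source says" if self.kind == "sourced" else "I infer"
        return (f"{prefix}: {self.text} "
                f"[{self.source}, p.{self.page}] (confidence: {self.confidence})")
```

Keeping `kind` explicit forces the agent to separate what a source states from what it inferred, which is the core of the honesty rule.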
- Treating AI output as ground truth → always require human validation of key findings
- Skipping methodology transparency → document every step for reproducibility
- Overwhelming the human with raw output → synthesize into actionable insights
- Losing the human's analytical skills → keep them engaged in critical thinking