Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Autonomous web research agent that performs multi-step searches, follows links, extracts data, and synthesizes findings into structured reports. Use when asked to research a topic, find information across multiple sources, compare options, gather market data, compile lists, or answer questions requiring deep web investigation beyond a single search.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
1. Parse the query → Break the user's request into 2-5 specific search queries that cover different angles of the topic.
2. Search phase → Execute searches using web_search. Rate limit: max 3 searches, then assess before continuing.
3. Deep dive phase → For promising results, use web_fetch to extract full content. Prioritize:
   - Primary sources over aggregators
   - Recent content over old (check dates)
   - Authoritative domains over random blogs
4. Cross-reference → Compare findings across sources. Flag contradictions. Note consensus.
5. Synthesize → Compile findings into a clear, structured response with:
   - Key findings (bullet points)
   - Sources cited (URLs)
   - Confidence level (high/medium/low per claim)
   - Gaps identified (what couldn't be found)
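The phases above can be sketched as a short loop. This is an illustrative sketch only: `web_search` and `web_fetch` are placeholder stubs standing in for the agent's real tools, and the query angles and report fields are assumptions, not part of the package.

```python
def web_search(query):
    # Stub: the real tool returns ranked results for the query.
    return [{"url": "https://example.com/" + query.replace(" ", "-"),
             "date": "2024-01-01"}]

def web_fetch(url, max_chars=5000):
    # Stub: the real tool returns up to max_chars of page content.
    return ("content of " + url)[:max_chars]

def research(topic):
    # 1. Parse the query into a few angled sub-queries (angles are illustrative).
    queries = [topic + " overview", topic + " comparison", topic + " criticism"]

    # 2. Search phase: cap at 3 searches before reassessing.
    results = []
    for q in queries[:3]:
        results.extend(web_search(q))

    # 3. Deep dive: fetch full content for promising results.
    pages = {r["url"]: web_fetch(r["url"]) for r in results}

    # 4-5. Cross-reference and synthesize into a structured report.
    return {
        "key_findings": ["Finding drawn from " + u for u in pages],
        "sources": list(pages),
        "confidence": {u: "medium" for u in pages},
        "gaps": [],
    }
```

The real agent replaces the stubs with tool calls and does the cross-referencing itself; the point is the fixed phase order and the structured report shape.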
Search → verify across 2+ sources → report with citations.
Search each option separately → fetch detail pages → build comparison table → recommend.
Search name + context → fetch LinkedIn/company pages → cross-reference news → compile profile.
Search with specific technical terms → fetch documentation/guides → distill steps.
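The comparison pattern above ends by building a table from per-option detail fetches. A minimal sketch of that last step, assuming the details have already been gathered into a dict (option names and fields here are made up for illustration):

```python
def comparison_table(options):
    # options: {option_name: {field: value}} collected from detail-page fetches.
    fields = sorted({f for attrs in options.values() for f in attrs})
    header = "| Option | " + " | ".join(fields) + " |"
    divider = "|" + "---|" * (len(fields) + 1)
    rows = [
        "| " + name + " | "
        + " | ".join(str(attrs.get(f, "?")) for f in fields) + " |"
        for name, attrs in sorted(options.items())
    ]
    return "\n".join([header, divider] + rows)
```

Missing fields render as "?" so gaps in the research stay visible rather than silently dropping a column.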
- Max 10 searches per task to avoid rate limits and token waste.
- Max 5 page fetches → be selective about which URLs to deep-dive.
- Always include source URLs so the user can verify.
- If a search returns nothing useful, rephrase and retry once before moving on.
- For time-sensitive info, use the freshness parameter (pd/pw/pm/py).
- Prefer web_fetch with maxChars: 5000 to keep context manageable.
- If the task is massive, suggest breaking it into sub-tasks or spawning sub-agents.
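The budget and retry-once rules above can be enforced mechanically. A sketch under stated assumptions: `SearchBudget` and `search_with_retry` are hypothetical helpers, not part of the package, and the search function is passed in rather than being a real tool call.

```python
class SearchBudget:
    # Tracks the per-task caps described above (10 searches, 5 fetches).
    def __init__(self, max_searches=10, max_fetches=5):
        self.searches_left = max_searches
        self.fetches_left = max_fetches

    def can_search(self):
        return self.searches_left > 0

    def record_search(self):
        if self.searches_left <= 0:
            raise RuntimeError("search budget exhausted; consider sub-tasks")
        self.searches_left -= 1

def search_with_retry(budget, search_fn, query, rephrase):
    # Try the query once; if nothing useful, rephrase and retry once, then move on.
    budget.record_search()
    results = search_fn(query)
    if not results and budget.can_search():
        budget.record_search()
        results = search_fn(rephrase(query))
    return results
```

Charging the retry against the same budget keeps the hard cap intact: a rephrased retry is never free, so a task can't exceed 10 searches by retrying.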
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.