## Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Transforms a natural-language query into a detailed research report by searching, filtering, fetching, and synthesizing relevant web content with source citations.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
**Fresh install:**

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

**Upgrade:**

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Multi-stage deep intelligence pipeline (Search → Filter → Fetch → Synthesize).
Tell OpenClaw: "Install the deep-scout skill." The agent will handle the installation and configuration automatically.

If you prefer the terminal, run:

```
clawhub install deep-scout
```
```
/deep-scout "Your research question" [--depth 5] [--freshness pw] [--country US] [--style report]
```
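For example, a hypothetical invocation like `/deep-scout "How are EU regulators approaching AI model licensing?" --depth 3 --freshness pm --style bullets` fully fetches up to three sources from the past month and returns a bulleted summary.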
| Flag | Default | Description |
|---|---|---|
| `--depth N` | 5 | Number of URLs to fully fetch (1–10) |
| `--freshness` | pw | pd=past day, pw=past week, pm=past month, py=past year |
| `--country` | US | 2-letter country code for Brave search |
| `--language` | en | 2-letter language code |
| `--search-count` | 8 | Total results to collect before filtering |
| `--min-score` | 4 | Minimum relevance score to keep (0–10) |
| `--style` | report | One of: `report`, `comparison`, `bullets`, `timeline` |
| `--dimensions` | auto | Comparison dimensions (comma-separated, for `--style comparison`) |
| `--output FILE` | stdout | Write report to file |
| `--no-browser` | – | Disable browser fallback |
| `--no-firecrawl` | – | Disable Firecrawl fallback |
When this skill is invoked, execute the following four-stage pipeline:
### Stage 1: Search

Call `web_search` with:

```
query: <user query>
count: <search_count>
country: <country>
search_lang: <language>
freshness: <freshness>
```

Collect `title`, `url`, and `snippet` for each result. If fewer than 3 results are returned, retry with `freshness: "py"` (relaxed).
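Summarized as a sketch: `web_search` is the tool named above, while the wrapper function and `params` dict are hypothetical glue.

```python
# Illustrative sketch of Stage 1. `web_search` is the tool named above;
# the wrapper and the `params` dict are hypothetical.
def run_search(query: str, params: dict) -> list[dict]:
    def call(freshness: str) -> list:
        return web_search(
            query=query,
            count=params["search_count"],
            country=params["country"],
            search_lang=params["language"],
            freshness=freshness,
        )

    results = call(params["freshness"])
    if len(results) < 3:
        # Too few hits: relax the freshness window to "past year" and retry once.
        results = call("py")
    # Keep only the fields later stages need.
    return [{"title": r["title"], "url": r["url"], "snippet": r["snippet"]}
            for r in results]
```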
### Stage 2: Filter

Load `prompts/filter.txt` and replace the template variables:

- `{{query}}` → the user's query
- `{{freshness}}` → freshness param
- `{{min_score}}` → min_score param
- `{{results_json}}` → JSON array of search results

Call the LLM with this prompt and parse the returned JSON array. Keep only results where `keep: true`, sort by score descending, and take the top `depth` URLs as the fetch list. Deduplication: at most 2 results per root domain (already handled in the filter prompt).
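Concretely, the substitution-and-filter step might look like this sketch, where `call_llm` is a hypothetical helper that returns the model's raw completion:

```python
# Sketch of Stage 2. `call_llm` is a hypothetical helper returning the
# model's raw text completion for a filled prompt.
import json
from pathlib import Path

def filter_results(query: str, results: list[dict], params: dict) -> list[str]:
    prompt = Path("prompts/filter.txt").read_text()
    substitutions = {
        "{{query}}": query,
        "{{freshness}}": params["freshness"],
        "{{min_score}}": str(params["min_score"]),
        "{{results_json}}": json.dumps(results),
    }
    for var, value in substitutions.items():
        prompt = prompt.replace(var, value)

    scored = json.loads(call_llm(prompt))        # expects a JSON array back
    kept = [r for r in scored if r.get("keep")]  # keep: true only
    kept.sort(key=lambda r: r["score"], reverse=True)
    return [r["url"] for r in kept[: params["depth"]]]
```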
### Stage 3: Fetch

For each URL in the filtered list:

- **Tier 1 – web_fetch (fast):** Call `web_fetch(url)`. If the content length is >= 200 chars, accept and trim to `max_chars_per_source`.
- **Tier 2 – Firecrawl (deep/JS):** If Tier 1 fails or returns < 200 chars, run `scripts/firecrawl-wrap.sh <url> <max_chars>`. Accept the output unless it is `FIRECRAWL_UNAVAILABLE` or `FIRECRAWL_EMPTY`.
- **Tier 3 – Browser (last resort):** If Tier 2 fails, call `browser(action="open", url=url)` and then `browser(action="snapshot")`. Load `prompts/browser-extract.txt`, substitute `{{query}}` and `{{max_chars_per_source}}`, and call the LLM with the snapshot content plus the extraction prompt. Accept the output unless it starts with `FETCH_FAILED:`.

If all tiers fail, use the original snippet from the Stage 1 search results and mark it `[snippet only]`. Store the results as a `{ url: extracted_content }` dict. A sketch of the fallback chain appears below.
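A rough sketch of that chain, where `web_fetch` and `browser` are the tools named above and `call_llm` and `render_prompt` are hypothetical helpers:

```python
# Rough sketch of the Stage 3 fallback chain. `web_fetch` and `browser`
# are the tools named above; `call_llm` and `render_prompt` are
# hypothetical helpers for LLM calls and template substitution.
import subprocess

def fetch_one(url: str, query: str, max_chars: int) -> str | None:
    # Tier 1: fast fetch; accept anything with at least 200 chars.
    try:
        text = web_fetch(url)
        if text and len(text) >= 200:
            return text[:max_chars]
    except Exception:
        pass

    # Tier 2: Firecrawl wrapper; its sentinel strings mean "move on".
    out = subprocess.run(
        ["scripts/firecrawl-wrap.sh", url, str(max_chars)],
        capture_output=True, text=True,
    ).stdout.strip()
    if out and out not in ("FIRECRAWL_UNAVAILABLE", "FIRECRAWL_EMPTY"):
        return out

    # Tier 3: browser snapshot + LLM extraction.
    browser(action="open", url=url)
    snapshot = browser(action="snapshot")
    prompt = render_prompt("prompts/browser-extract.txt",
                           query=query, max_chars_per_source=max_chars)
    extracted = call_llm(prompt + "\n\n" + snapshot)
    if not extracted.startswith("FETCH_FAILED"):
        return extracted

    return None  # caller falls back to the Stage 1 snippet, marked [snippet only]
```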
### Stage 4: Synthesize

Choose the prompt template based on `--style`:

- `report` / `bullets` / `timeline` → `prompts/synthesize-report.txt`
- `comparison` → `prompts/synthesize-comparison.txt`

Replace the template variables:

- `{{query}}` → user query
- `{{today}}` → current date (YYYY-MM-DD)
- `{{language}}` → language param
- `{{source_count}}` → number of successfully fetched sources
- `{{dimensions_or_auto}}` → dimensions param (or "auto")
- `{{fetched_content_blocks}}` → built as:

```
[Source 1] (url1)
<content>
---
[Source 2] (url2)
<content>
```

Call the LLM with the filled prompt; its output is the final report. If `--output FILE` is set, write the report to that file; otherwise, print it to the channel.
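The one assembly step with real structure is `{{fetched_content_blocks}}`. A minimal sketch, assuming the `{url: extracted_content}` dict produced by Stage 3:

```python
# Minimal sketch: join Stage 3 results into the numbered source blocks
# the synthesis templates expect, separated by bare "---" lines.
def build_source_blocks(fetched: dict[str, str]) -> str:
    blocks = [
        f"[Source {i}] ({url})\n{content}"
        for i, (url, content) in enumerate(fetched.items(), start=1)
    ]
    return "\n---\n".join(blocks)
```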
Defaults are in config.yaml. Override via CLI flags above.
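Spelled out as a sketch, assuming `config.yaml` stores the flag defaults as top-level keys and unset CLI flags arrive as `None`:

```python
# Hypothetical precedence sketch: config.yaml supplies defaults and any
# explicitly set CLI flag overrides them. Assumes PyYAML is installed.
import yaml

def resolve_params(cli_overrides: dict) -> dict:
    with open("config.yaml") as f:
        defaults = yaml.safe_load(f)
    set_flags = {k: v for k, v in cli_overrides.items() if v is not None}
    return {**defaults, **set_flags}
```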
```
skills/deep-scout/
├── SKILL.md                      # This file (agent instructions)
├── config.yaml                   # Default parameter values
├── prompts/
│   ├── filter.txt                # Stage 2: relevance scoring prompt
│   ├── synthesize-report.txt     # Stage 4: report/bullets/timeline synthesis
│   ├── synthesize-comparison.txt # Stage 4: comparison table synthesis
│   └── browser-extract.txt       # Stage 3: browser snapshot extraction
├── scripts/
│   ├── run.sh                    # CLI entrypoint (emits pipeline actions)
│   └── firecrawl-wrap.sh         # Firecrawl CLI wrapper with fallback handling
└── examples/
    └── openclaw-acquisition.md   # Example output: OpenClaw M&A intelligence
```
| Scenario | Handling |
|---|---|
| All fetch attempts fail | Use snippet from Stage 1; mark `[snippet only]` |
| Search returns 0 results | Retry with `freshness: py`; error if still 0 |
| Firecrawl not installed | `firecrawl-wrap.sh` outputs `FIRECRAWL_UNAVAILABLE`; skip silently |
| Browser tool unavailable | Skip Tier 3; proceed with available content |
| LLM synthesis exceeds context | Trim sources proportionally, prioritizing high-score sources |
| Rate limit on Brave API | Wait 2s, retry once |
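Most rows above are pure control flow; the only one with timing is the Brave rate limit. A sketch of that single-retry policy, where `RateLimitError` is a hypothetical exception type:

```python
# Sketch of the rate-limit row above: wait 2 seconds, retry exactly once.
# `RateLimitError` is a hypothetical exception for a 429 from Brave.
import time

def with_rate_limit_retry(do_search):
    try:
        return do_search()
    except RateLimitError:
        time.sleep(2)       # single fixed backoff
        return do_search()  # a second failure propagates to the caller
```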
See `examples/openclaw-acquisition.md` for a full sample report.

Deep Scout v0.1.0 · OpenClaw Skills · clawhub: `deep-scout`