Tencent SkillHub Β· Developer Tools

Deep Scout

Transforms a natural-language query into a detailed research report by searching, filtering, fetching, and synthesizing relevant web content with source citations.

skill · openclawclawhub · Free
0 downloads · 0 stars · 0 installs · Score 0 · High Signal


Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: README.md, SKILL.md, clawhub.json, config.yaml, examples/openclaw-acquisition.md, prompts/browser-extract.txt

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 0.1.4

Documentation

Primary doc: SKILL.md (14 sections)

deep-scout

Multi-stage deep intelligence pipeline (Search β†’ Filter β†’ Fetch β†’ Synthesize).

1. Ask OpenClaw (Recommended)

Tell OpenClaw: "Install the deep-scout skill." The agent will handle the installation and configuration automatically.

2. Manual Installation (CLI)

If you prefer the terminal, run: clawhub install deep-scout

πŸš€ Usage

/deep-scout "Your research question" [--depth 5] [--freshness pw] [--country US] [--style report]

Options

| Flag | Default | Description |
| --- | --- | --- |
| --depth N | 5 | Number of URLs to fully fetch (1–10) |
| --freshness | pw | pd=past day, pw=past week, pm=past month, py=past year |
| --country | US | 2-letter country code for Brave search |
| --language | en | 2-letter language code |
| --search-count | 8 | Total results to collect before filtering |
| --min-score | 4 | Minimum relevance score to keep (0–10) |
| --style | report | report \| comparison \| bullets \| timeline |
| --dimensions | auto | Comparison dimensions (comma-separated, for --style comparison) |
| --output FILE | stdout | Write report to file |
| --no-browser | — | Disable browser fallback |
| --no-firecrawl | — | Disable Firecrawl fallback |
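If you want to experiment with the flag set outside the agent, the table above can be mirrored with a small argparse sketch. This is an illustration only: the shipped scripts/run.sh may parse its arguments differently, and the defaults here are simply copied from the Options table.

```python
import argparse

def build_parser():
    # Defaults and choices copied from the Options table above (hypothetical parser).
    p = argparse.ArgumentParser(prog="deep-scout")
    p.add_argument("query", help="Natural-language research question")
    p.add_argument("--depth", type=int, default=5)
    p.add_argument("--freshness", choices=["pd", "pw", "pm", "py"], default="pw")
    p.add_argument("--country", default="US")
    p.add_argument("--language", default="en")
    p.add_argument("--search-count", type=int, default=8)
    p.add_argument("--min-score", type=int, default=4)
    p.add_argument("--style", default="report",
                   choices=["report", "comparison", "bullets", "timeline"])
    p.add_argument("--dimensions", default="auto")
    p.add_argument("--output", default=None)  # None means print to stdout
    p.add_argument("--no-browser", action="store_true")
    p.add_argument("--no-firecrawl", action="store_true")
    return p

args = build_parser().parse_args(
    ["What changed in OpenClaw 2.0?", "--depth", "3", "--style", "bullets"]
)
```

Unspecified flags fall back to the table defaults, so `args.freshness` stays "pw" in this example.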

πŸ› οΈ Pipeline β€” Agent Loop Instructions

When this skill is invoked, execute the following four-stage pipeline:

Stage 1: SEARCH

Call web_search with:
  • query: <user query>
  • count: <search_count>
  • country: <country>
  • search_lang: <language>
  • freshness: <freshness>

Collect title, url, and snippet for each result. If fewer than 3 results are returned, retry with freshness: "py" (relaxed).
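The Stage 1 retry rule can be sketched as follows. Here `web_search` is a stand-in for the agent's search tool, assumed to return a list of dicts with title, url, and snippet; the function name and signature are illustrative, not part of the skill.

```python
def search_with_relaxed_retry(web_search, query, search_count=8, country="US",
                              language="en", freshness="pw"):
    """Stage 1 sketch: if fewer than 3 hits come back, retry once with
    the relaxed freshness window 'py' (past year)."""
    results = web_search(query=query, count=search_count, country=country,
                         search_lang=language, freshness=freshness)
    if len(results) < 3 and freshness != "py":
        results = web_search(query=query, count=search_count, country=country,
                             search_lang=language, freshness="py")
    # Keep only the fields the later stages need.
    return [{"title": r["title"], "url": r["url"], "snippet": r["snippet"]}
            for r in results]
```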

Stage 2: FILTER

Load prompts/filter.txt and replace the template variables:
  • {{query}} → the user's query
  • {{freshness}} → freshness param
  • {{min_score}} → min_score param
  • {{results_json}} → JSON array of search results

Call the LLM with this prompt and parse the returned JSON array. Keep only results where keep: true, sort by score descending, and take the top depth URLs as the fetch list. Deduplication: at most 2 results per root domain (already handled in the filter prompt).
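The keep/sort/dedup rule can be expressed as a short function. This mirrors the behavior described above (the shipped filter prompt enforces the per-domain cap itself); the input shape — dicts with url, score, and keep — is assumed from the prompt's JSON output.

```python
from urllib.parse import urlparse

def select_fetch_list(scored, depth=5, per_domain=2):
    """Stage 2 sketch: keep rows marked keep=True, sort by score descending,
    cap results per root domain, and take the top `depth` URLs."""
    kept = sorted((r for r in scored if r.get("keep")),
                  key=lambda r: r["score"], reverse=True)
    seen, fetch_list = {}, []
    for r in kept:
        domain = urlparse(r["url"]).netloc
        if seen.get(domain, 0) < per_domain:
            seen[domain] = seen.get(domain, 0) + 1
            fetch_list.append(r["url"])
        if len(fetch_list) == depth:
            break
    return fetch_list
```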

Stage 3: FETCH (Tiered Escalation)

For each URL in the filtered list:
  • Tier 1 — web_fetch (fast): call web_fetch(url). If content length >= 200 chars, accept and trim to max_chars_per_source.
  • Tier 2 — Firecrawl (deep/JS): if Tier 1 fails or returns < 200 chars, run scripts/firecrawl-wrap.sh <url> <max_chars>. If the output is neither FIRECRAWL_UNAVAILABLE nor FIRECRAWL_EMPTY, accept.
  • Tier 3 — Browser (last resort): if Tier 2 fails, call browser(action="open", url=url), then browser(action="snapshot"). Load prompts/browser-extract.txt, substitute {{query}} and {{max_chars_per_source}}, and call the LLM with the snapshot content plus the extraction prompt. If the output is not "FETCH_FAILED:...", accept.

If all tiers fail, use the original snippet from the Stage 1 search results and mark it [snippet only]. Store the results as a { url: extracted_content } dict.
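The tiered escalation above can be sketched as one function. The three callables are placeholders for web_fetch, the firecrawl-wrap.sh wrapper, and the browser-snapshot extraction; each is assumed to return text (or a sentinel/None on failure), which is a simplification of the real tool calls.

```python
def fetch_with_escalation(url, snippet, web_fetch, firecrawl, browser_extract,
                          max_chars=4000):
    """Stage 3 sketch of the tiered fallback; max_chars stands in for
    max_chars_per_source."""
    text = web_fetch(url)                       # Tier 1: fast fetch
    if text and len(text) >= 200:
        return text[:max_chars]
    text = firecrawl(url, max_chars)            # Tier 2: Firecrawl for JS-heavy pages
    if text and text not in ("FIRECRAWL_UNAVAILABLE", "FIRECRAWL_EMPTY"):
        return text[:max_chars]
    text = browser_extract(url)                 # Tier 3: browser snapshot + LLM extract
    if text and not text.startswith("FETCH_FAILED"):
        return text[:max_chars]
    return f"[snippet only] {snippet}"          # All tiers failed: fall back to snippet
```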

Stage 4: SYNTHESIZE

Choose the prompt template based on --style:
  • report / bullets / timeline → prompts/synthesize-report.txt
  • comparison → prompts/synthesize-comparison.txt

Replace the template variables:
  • {{query}} → user query
  • {{today}} → current date (YYYY-MM-DD)
  • {{language}} → language param
  • {{source_count}} → number of successfully fetched sources
  • {{dimensions_or_auto}} → dimensions param (or "auto")
  • {{fetched_content_blocks}} → built as:
    [Source 1] (url1)
    <content>
    ---
    [Source 2] (url2)
    <content>

Call the LLM with the filled prompt; the output is the final report. If --output FILE is set, write the report to that file. Otherwise, print it to the channel.
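Assembling {{fetched_content_blocks}} from the Stage 3 dict is mechanical; a minimal sketch, assuming the { url: extracted_content } dict from Stage 3:

```python
def build_content_blocks(sources):
    """Stage 4 sketch: join fetched sources in the
    '[Source N] (url)' / content / '---' layout the synthesis prompts expect."""
    blocks = []
    for i, (url, content) in enumerate(sources.items(), start=1):
        blocks.append(f"[Source {i}] ({url})\n{content}")
    return "\n---\n".join(blocks)
```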

βš™οΈ Configuration

Defaults are in config.yaml. Override via CLI flags above.
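As an illustration, the defaults above might live in config.yaml in a shape like the following. The key names here are assumed to mirror the CLI flags; check the shipped file for the authoritative keys.

```yaml
# Hypothetical config.yaml sketch — keys assumed, not copied from the package.
depth: 5
freshness: pw
country: US
language: en
search_count: 8
min_score: 4
style: report
max_chars_per_source: 4000
```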

πŸ“‚ Project Structure

skills/deep-scout/
├── SKILL.md                      ← This file (agent instructions)
├── config.yaml                   ← Default parameter values
├── prompts/
│   ├── filter.txt                ← Stage 2: relevance scoring prompt
│   ├── synthesize-report.txt     ← Stage 4: report/bullets/timeline synthesis
│   ├── synthesize-comparison.txt ← Stage 4: comparison table synthesis
│   └── browser-extract.txt       ← Stage 3: browser snapshot extraction
├── scripts/
│   ├── run.sh                    ← CLI entrypoint (emits pipeline actions)
│   └── firecrawl-wrap.sh         ← Firecrawl CLI wrapper with fallback handling
└── examples/
    └── openclaw-acquisition.md   ← Example output: OpenClaw M&A intelligence

πŸ”§ Error Handling

| Scenario | Handling |
| --- | --- |
| All fetch attempts fail | Use snippet from Stage 1; mark [snippet only] |
| Search returns 0 results | Retry with freshness: py; error if still 0 |
| Firecrawl not installed | firecrawl-wrap.sh outputs FIRECRAWL_UNAVAILABLE; skip silently |
| Browser tool unavailable | Skip Tier 3; proceed with available content |
| LLM synthesis exceeds context | Trim sources proportionally, prioritizing high-score sources |
| Rate limit on Brave API | Wait 2s, retry once |
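The rate-limit row ("wait 2s, retry once") can be sketched as a wrapper. Here `call` stands in for the Brave search invocation, and a throttled response is assumed to surface as a RuntimeError; the real tool's error shape may differ.

```python
import time

def call_with_single_retry(call, *args, **kwargs):
    """Sketch of the rate-limit rule: on failure, wait 2s and retry
    exactly once; a second failure propagates to the caller."""
    try:
        return call(*args, **kwargs)
    except RuntimeError:
        time.sleep(2)
        return call(*args, **kwargs)
```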

πŸ“‹ Example Outputs

See examples/openclaw-acquisition.md for a full sample report.

Deep Scout v0.1.0 · OpenClaw Skills · clawhub: deep-scout

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 docs · 2 config · 1 asset
  • SKILL.md Primary doc
  • examples/openclaw-acquisition.md Docs
  • README.md Docs
  • clawhub.json Config
  • config.yaml Config
  • prompts/browser-extract.txt Assets