Tencent SkillHub · Productivity

Openclaw Research Tool

Search the web using LLMs via OpenRouter. Use for current web data, API docs, market research, news, fact-checking, or any question that benefits from live i...



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 0.1.5

Documentation

Primary doc: SKILL.md (14 sections)

OpenClaw Research Tool

Web search for OpenClaw agents, powered by OpenRouter. Ask questions in natural language, get accurate answers with cited sources. Defaults to GPT-5.2, which excels at documentation lookups and citation-heavy research.

Note: even low-effort queries may take a minute or more to complete. High/xhigh reasoning can take 10+ minutes depending on complexity. This is normal: the model is searching the web, reading pages, and synthesizing an answer.

Recommended: run research-tool in a sub-agent so your main session stays responsive:

sessions_spawn task:"research-tool 'your query here'"

⚠️ Never set a timeout on exec when running research-tool. Queries routinely take 1-10+ minutes. Use yieldMs to background it, then poll, but do NOT set timeout or the process will be killed mid-search.

The :online model suffix gives any model live web access: it searches the web, reads pages, cites URLs, and synthesizes an answer.

Install

cargo install openclaw-search-tool

Requires the OPENROUTER_API_KEY env var. Get a key at https://openrouter.ai/keys
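A one-time shell setup for the required key might look like this; the key value below is a placeholder to replace with your own from openrouter.ai/keys:

```shell
# Add to your shell profile (e.g. ~/.bashrc); the key value is a placeholder.
export OPENROUTER_API_KEY="sk-or-v1-..."
```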

Quick start

research-tool "What are the x.com API rate limits?"
research-tool "How do I set reasoning effort parameters on OpenRouter?"

From an OpenClaw agent

# Best: run in a sub-agent (main session stays responsive)
sessions_spawn task:"research-tool 'your query here'"

# Or via exec: NEVER set timeout, use yieldMs to background
exec command:"research-tool 'your query'" yieldMs:5000
# then poll the session until complete
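The background-and-poll pattern above can be sketched in plain shell. `sessions_spawn`, `exec`, and `yieldMs` are OpenClaw's; everything else here is a generic illustration that uses `sleep` as a stand-in for a long-running research-tool query:

```shell
# Stand-in for a long research-tool run: background it, poll, then read output.
long_query() { sleep 1; echo "answer with citations"; }

long_query > /tmp/research_out.txt 2> /tmp/research_err.txt &
pid=$!

# Poll until the backgrounded process exits; note there is no hard timeout,
# mirroring the "do NOT set timeout" guidance above.
while kill -0 "$pid" 2>/dev/null; do
  sleep 0.2
done

cat /tmp/research_out.txt
```

The key point is that the caller stays free while the query runs, and the result is collected only once the process has finished on its own.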

--effort, -e (default: low)

Controls how much the model reasons before answering. Higher effort means better analysis but slower responses and more tokens.

research-tool --effort low "What year was Rust 1.0 released?"
research-tool --effort medium "Explain how OpenRouter routes requests to different model providers"
research-tool --effort high "Compare tradeoffs between Opus 4.6 and gpt-5.3-codex for programming"
research-tool --effort xhigh "Deep analysis of React Server Components vs traditional SSR approaches"

Level    Speed        When to use
low      ~1-3 min     Quick fact lookups, simple questions
medium   ~2-5 min     Standard research, moderate analysis
high     ~3-10 min    Deep analysis with careful reasoning
xhigh    ~5-20+ min   Maximum reasoning, complex multi-source synthesis

Can also be set via the RESEARCH_EFFORT env var.
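The env-var fallback can be pictured as shell-style precedence. Which wins when both the flag and RESEARCH_EFFORT are set is not documented here, so the flag-wins ordering below is an assumption:

```shell
# Assumed precedence: --effort flag > RESEARCH_EFFORT env var > built-in "low".
RESEARCH_EFFORT="medium"   # env var set by the user
flag_effort="high"         # simulating `--effort high` on the command line

effort="${flag_effort:-${RESEARCH_EFFORT:-low}}"
echo "$effort"
```

With the flag unset (flag_effort empty), the same expansion falls back to the env var, and then to "low".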

--model, -m (default: openai/gpt-5.2:online)

Which model to use. Defaults to GPT-5.2 with the :online suffix because it excels at questions where citations and accurate documentation lookups matter. The :online suffix enables live web search and works with any model on OpenRouter.

# Default: GPT-5.2 with web search (great for docs and cited answers)
research-tool "current weather in San Francisco"

# Claude with web search
research-tool -m "anthropic/claude-sonnet-4-20250514:online" "Summarize recent changes to the OpenAI API"

# GPT-5.2 without web search (training data only)
research-tool -m "openai/gpt-5.2" "Explain the React Server Components architecture"

# Any OpenRouter model
research-tool -m "google/gemini-2.5-pro:online" "Compare React vs Svelte in 2026"

Can also be set via the RESEARCH_MODEL env var.

--system, -s

Override the system prompt to give the model a specific persona or instructions.

research-tool -s "You are a senior infrastructure engineer" "Best practices for zero-downtime Kubernetes deployments"
research-tool -s "You are a Rust systems programmer" "Best async patterns for WebSocket servers"

--stdin

Read the query from stdin. Useful for long or multiline queries.

echo "Explain the OpenRouter model routing architecture" | research-tool --stdin
cat detailed-prompt.txt | research-tool --stdin
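--stdin pairs naturally with a heredoc for multiline queries. The sketch below uses a `cat` stand-in for `research-tool --stdin` to show the query arriving intact, lines and all:

```shell
# `cat -` stands in for `research-tool --stdin`: both read the query from stdin.
read_query() { cat -; }

query=$(read_query <<'EOF'
Explain the OpenRouter model routing architecture.
Context: I already run a self-hosted proxy; focus on fallback behavior.
EOF
)
printf '%s\n' "$query" | wc -l
```

Quoting the heredoc delimiter ('EOF') keeps the query literal, so no variable expansion mangles a long prompt.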

--max-tokens (default: 12800)

Maximum tokens in the response.

--timeout (optional, no default)

No timeout by default โ€” queries run until the model finishes. Set this only if you need a hard upper bound (e.g. --timeout 300).

Output format

stdout: response text only (markdown with citations); pipe-friendly
stderr: progress status, reasoning traces, and token usage

🔍 Researching with openai/gpt-5.2:online (effort: high)...
✅ Connected - waiting for response...
[response text on stdout]
📊 Tokens: 4470 prompt + 184 completion = 4654 total | ⏱ 5s
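Because the answer and the progress chatter go to different streams, the answer can be captured or piped cleanly. A stand-in command illustrates the pattern (the function name and messages are illustrative, not the tool's actual output):

```shell
# Stand-in that, like research-tool, emits progress on stderr and the
# answer on stdout.
fake_research() {
  echo "Researching..." >&2
  echo "Answer text with [citation](https://example.com)"
}

# Capture only the answer; progress stays on stderr (silenced here).
answer=$(fake_research 2>/dev/null)
echo "$answer"
```

The same shape works for the real tool, e.g. `research-tool "query" > answer.md` keeps the saved file free of status lines.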

Status indicators

๐Ÿ” Researching... โ€” request sent to OpenRouter โœ… Connected โ€” waiting for response... โ€” server accepted the request, model is searching/thinking โณ 15s... โณ 30s... โ€” elapsed time ticks (only in interactive terminals, not in agent exec) โŒ Connection to OpenRouter failed โ€” couldn't reach OpenRouter (network issue) โŒ Connection to OpenRouter lost โ€” connection dropped while waiting. Retry?

Tips for better results

  • Write in natural language. "What are the best practices for Rust error handling and when should you use anyhow vs thiserror?" works better than keyword-style queries.
  • Provide maximum context. The model starts from zero: include background, what you already know, and all related sub-questions. Detailed prompts massively outperform vague ones.
  • Use effort levels appropriately: low for quick facts, high for real research, xhigh only for complex multi-source analysis.
  • Use -s for domain expertise. A specific persona produces noticeably better domain-specific answers.

Cost

~$0.01–0.05 per query. Token usage is printed to stderr after each query.

Category context

Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 doc
  • SKILL.md Primary doc