Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Web search across 7 engines in parallel with browser impersonation. Use when the agent needs current information from the web — news, documentation, recent e...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Metasearch CLI — queries Google, DuckDuckGo, Brave, Yahoo, Mojeek, Startpage, and Presearch in parallel. Uses curl_cffi for browser impersonation. Results like a browser, speed like an API.
- You need current/recent information not in your training data
- You need to verify facts or find sources
- You need to discover URLs, documentation, or code repositories
- The user asks about recent events, releases, or news
```shell
pip install webserp
```

No API keys, no configuration. Just install and search.
```shell
# Search all 7 engines (default)
webserp "how to deploy docker containers"

# Search specific engines
webserp "python async tutorial" --engines google,brave,duckduckgo

# Limit results per engine
webserp "rust vs go" --max-results 5

# Show which engines succeeded/failed
webserp "test query" --verbose

# Set per-engine timeout
webserp "query" --timeout 15

# Use a proxy
webserp "query" --proxy "socks5://127.0.0.1:1080"
```
| Flag | Description | Default |
|------|-------------|---------|
| `-e, --engines` | Comma-separated engine list | all |
| `-n, --max-results` | Max results per engine | 10 |
| `--timeout` | Per-engine timeout (seconds) | 10 |
| `--proxy` | Proxy URL for all requests | none |
| `--verbose` | Show engine status on stderr | false |
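When driving webserp from a script rather than a shell, the flags above map directly onto an argv list. A minimal sketch, assuming webserp is on PATH; `build_webserp_cmd` is a hypothetical helper (not part of the package), and only the flags documented in the table are used:

```python
import subprocess

def build_webserp_cmd(query, engines=None, max_results=None,
                      timeout=None, proxy=None, verbose=False):
    """Assemble a webserp argv list from the documented flags.

    `build_webserp_cmd` is an illustrative helper, not part of webserp.
    """
    cmd = ["webserp", query]
    if engines:
        cmd += ["--engines", ",".join(engines)]
    if max_results is not None:
        cmd += ["--max-results", str(max_results)]
    if timeout is not None:
        cmd += ["--timeout", str(timeout)]
    if proxy:
        cmd += ["--proxy", proxy]
    if verbose:
        cmd.append("--verbose")
    return cmd

# Usage sketch: JSON arrives on stdout, engine status (if --verbose) on stderr.
# result = subprocess.run(build_webserp_cmd("rust vs go", max_results=5),
#                         capture_output=True, text=True)
```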
JSON to stdout (SearXNG-compatible):

```json
{
  "query": "deployment issue",
  "number_of_results": 42,
  "results": [
    {
      "title": "How to fix Docker deployment issues",
      "url": "https://example.com/docker-fix",
      "content": "Common Docker deployment problems and solutions...",
      "engine": "google"
    }
  ],
  "suggestions": [],
  "unresponsive_engines": []
}
```

Parse with jq or any JSON parser. The `results` array contains `title`, `url`, `content`, and `engine` for each result. `unresponsive_engines` lists any engines that failed with the error reason.
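Beyond jq, the output is plain JSON, so the standard library handles it. A minimal sketch of consuming the schema above; the sample payload mirrors the document's example, and `top_urls`/`failed_engines` are illustrative helpers, not part of webserp:

```python
import json

# Sample payload mirroring the schema documented above.
sample = json.dumps({
    "query": "deployment issue",
    "number_of_results": 42,
    "results": [
        {
            "title": "How to fix Docker deployment issues",
            "url": "https://example.com/docker-fix",
            "content": "Common Docker deployment problems and solutions...",
            "engine": "google",
        }
    ],
    "suggestions": [],
    "unresponsive_engines": [],
})

def top_urls(payload, limit=3):
    """Return up to `limit` result URLs from webserp's JSON output."""
    data = json.loads(payload)
    return [r["url"] for r in data["results"][:limit]]

def failed_engines(payload):
    """Engines that did not respond, per `unresponsive_engines`."""
    return json.loads(payload)["unresponsive_engines"]
```

In practice `payload` would be the captured stdout of a webserp run rather than the inline sample.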
- Use `--max-results 5` to keep output concise when you just need a few links
- Use `--engines google,brave` to target specific engines for faster results
- Use `--verbose` (writes to stderr) to see which engines responded — the JSON on stdout is unaffected
- Results are deduplicated by URL across engines — you won't get the same link twice
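The cross-engine dedup behavior described above can be sketched as keeping the first result seen per URL. This is an illustration of the documented behavior, not webserp's actual implementation:

```python
def dedupe_by_url(results):
    """Keep the first result seen for each URL, preserving order."""
    seen = set()
    unique = []
    for r in results:
        if r["url"] not in seen:
            seen.add(r["url"])
            unique.append(r)
    return unique
```

First-seen-wins means a link returned by several engines is attributed to whichever engine surfaced it first in the merged stream.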
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.