Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
CLI web search and page fetcher for LLM agents. Search DuckDuckGo/Brave/Bing/Google, fetch pages as markdown, and extract links; single binary, no browser required.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Web search and page fetcher for AI agents. Single binary, no browser needed. Fetches pages with browser-like TLS fingerprints for reliable access. Use for: web searches, fetching page content as markdown, extracting links, and gathering information from the web.
```
ghostfetch "your search query"   # Search DuckDuckGo (default)
ghostfetch "query" -e brave      # Search with Brave
ghostfetch "query" -e google     # Search with Google
ghostfetch "query" -e bing       # Search with Bing
ghostfetch "query" -n 5          # Limit to 5 results
ghostfetch "query" --json        # JSON output with metadata
```

Search engines: duckduckgo (default), brave, bing, google
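If an agent consumes search results programmatically, the `--json` output is the natural entry point. The exact schema is not documented here, so the `results` array and `title`/`url` field names below are assumptions for illustration; a minimal Python sketch of parsing such output:

```python
import json

def top_urls(json_text: str, n: int = 5) -> list[str]:
    # Assumed schema: a top-level "results" array whose items carry a
    # "url" field. The real --json layout may differ; adjust the keys.
    results = json.loads(json_text).get("results", [])
    return [r["url"] for r in results[:n]]

# Hypothetical sample in the assumed shape:
sample = (
    '{"results": ['
    '{"title": "Tokio", "url": "https://tokio.rs"},'
    '{"title": "async-std", "url": "https://async.rs"}'
    ']}'
)
print(top_urls(sample, 1))  # → ['https://tokio.rs']
```

Feeding the real `ghostfetch "query" --json` output through the same function only requires matching the keys to the actual schema.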
```
ghostfetch fetch https://example.com                   # Fetch page (raw HTML)
ghostfetch fetch https://example.com -m                # Fetch as markdown (reader mode, preferred)
ghostfetch fetch https://example.com --markdown-full   # Full page as markdown (not just main content)
ghostfetch fetch https://example.com --json            # JSON with body, status, headers, cookies
ghostfetch fetch https://example.com --raw             # Raw HTML without processing
ghostfetch fetch url1 url2 url3 -p 3                   # Fetch multiple URLs in parallel
```

Always use -m (markdown mode) when reading page content: it extracts the main content and converts it to clean markdown, saving tokens compared with raw HTML.
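When driving ghostfetch from a script, the multi-URL form maps directly onto a subprocess argv list. A small sketch that builds the command shown above (the URLs are placeholders; the actual run is commented out because it needs the binary on PATH):

```python
import subprocess  # used by the commented-out run line below

def fetch_markdown_cmd(urls: list[str], parallel: int = 3) -> list[str]:
    # Argv mirroring `ghostfetch fetch url1 url2 ... -m -p N`:
    # fetch each page as markdown, up to `parallel` at a time.
    return ["ghostfetch", "fetch", *urls, "-m", "-p", str(parallel)]

cmd = fetch_markdown_cmd(["https://example.com", "https://example.org"])
# subprocess.run(cmd, capture_output=True, text=True)  # only where ghostfetch is installed
print(cmd)
```

Keeping the argv construction in one place makes it easy to bolt on flags like `--json` or `-t` later.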
```
ghostfetch links https://example.com                 # Extract all links from page
ghostfetch links https://example.com -f "github"     # Filter links by regex pattern
ghostfetch links https://example.com --json          # JSON output
```
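Since `-f` takes a regex, a pattern can be sanity-checked locally before handing it to ghostfetch. A quick Python check against a made-up link list, mimicking what the filter would keep:

```python
import re

# Hypothetical links standing in for `ghostfetch links` output:
links = [
    "https://github.com/neothelobster/ghostfetch",
    "https://example.com/about",
    "https://github.com/tokio-rs/tokio",
]
pattern = re.compile("github")  # same pattern you would pass to -f
matching = [link for link in links if pattern.search(link)]
print(matching)  # keeps only the two github.com links
```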
| Flag | Short | Default | What it does |
|---|---|---|---|
| `--engine` | `-e` | duckduckgo | Search engine to use |
| `--results` | `-n` | 10 | Number of search results |
| `--markdown` | `-m` | false | Convert to markdown (reader mode) |
| `--markdown-full` | | false | Full page markdown (not just main content) |
| `--json` | `-j` | false | JSON output with metadata |
| `--raw` | | false | Raw HTML output |
| `--max-parallel` | `-p` | 5 | Max parallel fetches |
| `--filter` | `-f` | | Filter links by regex |
| `--timeout` | `-t` | 30s | Request timeout |
| `--browser` | `-b` | chrome | Browser fingerprint: chrome, firefox |
| `--no-cookies` | | false | Disable cookie persistence |
| `--follow` | `-L` | true | Follow redirects |
| `--verbose` | `-v` | false | Print request/response details |
| `--captcha-service` | | | Captcha service: 2captcha, anticaptcha |
| `--captcha-key` | | | Captcha service API key |
| I want to... | Use this |
|---|---|
| Search the web | `ghostfetch "query"` |
| Search with specific engine | `ghostfetch "query" -e brave` |
| Read a web page | `ghostfetch fetch <url> -m` |
| Read multiple pages at once | `ghostfetch fetch url1 url2 url3 -m -p 3` |
| Find links on a page | `ghostfetch links <url>` |
| Find specific links | `ghostfetch links <url> -f "pattern"` |
| Get structured data | `ghostfetch fetch <url> --json` |
```
ghostfetch "rust async runtime comparison 2026" -n 5
ghostfetch fetch https://tokio.rs -m

ghostfetch fetch https://api.example.com/data --json

ghostfetch links https://awesome-list.com -f "github.com"
```
The ghostfetch binary must be in your PATH. Build from source:

```
git clone https://github.com/neothelobster/ghostfetch.git
cd ghostfetch
go build -o ghostfetch .
cp ghostfetch ~/.openclaw/workspace/tools/
```

Or run the included setup.sh, which clones at a pinned commit with verification. Requires Go 1.21+ to build. No runtime dependencies.
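After the `cp` step, the binary is only reachable if the tools directory is actually on PATH. A small Python check of a PATH-style string; the expanded tools path below is a hypothetical example, and the POSIX `:` separator is assumed:

```python
def path_has(dirpath: str, path_value: str) -> bool:
    # True when dirpath appears as its own entry in a POSIX PATH string.
    return dirpath in path_value.split(":")

tools = "/home/user/.openclaw/workspace/tools"  # hypothetical expansion of ~/.openclaw/workspace/tools
print(path_has(tools, "/usr/bin:" + tools + ":/bin"))  # → True
print(path_has(tools, "/usr/bin:/bin"))                # → False
```

In practice you would call `path_has(tools, os.environ["PATH"])` and append the directory if it is missing.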
- Read-only tool: output goes to stdout only, no file write capability
- No custom headers or POST bodies, so it cannot leak secrets to external endpoints
- No data is stored except optional cookie jars (disabled with --no-cookies)
- All network requests go directly from your machine; no proxy or third-party service
- The setup script clones from GitHub at a pinned commit with verification
- Source code: https://github.com/neothelobster/ghostfetch
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.