Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Neural web search, similar-page discovery, and URL content fetching via the Exa AI search engine. USE WHEN: user asks to search the web, find articles/repos/...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Use this skill to search the web, find similar pages, or fetch page contents via the Exa AI search engine: fast, neural, and certificate-aware. The skill invokes a native Rust binary (bin/exa-search) via Bash. Run install.sh once to build it.
USE when:
- Searching the web for articles, documentation, repos, papers, tools, people, companies
- Finding recent news or announcements (use livecrawl: "fallback" or "always" for recency)
- Fetching full text of a known URL without browser automation
- Finding pages similar to a reference URL (competitor analysis, alternative tools)
- Any web lookup that isn't a specific tweet or video

NOT for:
- Fetching tweets/X posts: use the fxtwitter skill (Exa can't fetch tweet URLs)
- Downloading video/audio: use yt-dlp
- Scraping dynamic or Cloudflare-protected pages: use scrapling
- Local file/code search: use rg, find, or grep
- Querying structured APIs (GitHub, weather, etc.): use their dedicated skills
- When the query is already a direct URL with known content: prefer the get_contents action
- EXA_API_KEY set in ~/.openclaw/workspace/.env (get one at exa.ai)
- Rust installed (rustup): only needed for the one-time build
- bin/exa-search binary present (run bash install.sh to build)
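The prerequisites above can be verified with a short preflight script. This is a sketch, not part of the skill: the paths come from this doc, and the script only reports what is missing.

```shell
#!/usr/bin/env sh
# Preflight sketch for the exa-search skill (paths taken from this doc).
ENV_FILE="$HOME/.openclaw/workspace/.env"
BIN="$HOME/.openclaw/workspace/skills/exa-search/bin/exa-search"

# Key present in the workspace .env?
if grep -qE '^EXA_API_KEY=' "$ENV_FILE" 2>/dev/null; then
  KEY_STATUS="found"
else
  KEY_STATUS="missing (add it to $ENV_FILE)"
fi

# Binary built and executable?
if [ -x "$BIN" ]; then
  BIN_STATUS="present"
else
  BIN_STATUS="missing (run: bash install.sh)"
fi

echo "EXA_API_KEY: $KEY_STATUS"
echo "binary: $BIN_STATUS"
```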
```shell
echo '{"query":"your query here","num_results":5}' \
  | EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"') \
    ~/.openclaw/workspace/skills/exa-search/bin/exa-search
```

Full params:

```json
{
  "query": "rust async programming",
  "num_results": 5,
  "type": "neural",
  "livecrawl": "never",
  "include_domains": ["github.com", "docs.rs"],
  "exclude_domains": ["reddit.com"],
  "start_published_date": "2025-01-01",
  "end_published_date": "2026-12-31",
  "category": "research paper",
  "use_autoprompt": true,
  "text": { "max_characters": 2000 },
  "highlights": { "num_sentences": 3, "highlights_per_url": 2 },
  "summary": { "query": "key takeaways" }
}
```

type options: auto (default) · neural · keyword

livecrawl options:
- "never": fastest (~300-600ms), pure cached index. Best for reference material, docs, courses.
- "fallback": use cache, crawl live if not cached. Good default.
- "preferred": prefer live crawl. Slower but fresher.
- "always": always crawl live. For breaking news or rapidly-changing pages.
Find pages similar to a given URL:

```shell
echo '{"action":"find_similar","url":"https://example.com","num_results":5}' \
  | EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"') \
    ~/.openclaw/workspace/skills/exa-search/bin/exa-search
```

Params: same contents options as search (text, highlights, summary, livecrawl).
Fetch full contents for one or more URLs:

```shell
echo '{"action":"get_contents","urls":["https://example.com","https://other.com"],"text":{"max_characters":1000}}' \
  | EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"') \
    ~/.openclaw/workspace/skills/exa-search/bin/exa-search
```
All actions return JSON on stdout:

```json
{
  "ok": true,
  "action": "search",
  "results": [
    {
      "url": "https://...",
      "title": "...",
      "score": 0.87,
      "author": "...",
      "published_date": "2026-01-15",
      "image": "https://...",
      "favicon": "https://...",
      "text": "...",
      "highlights": ["..."],
      "summary": "..."
    }
  ],
  "formatted": "## [Title](url)\n..."
}
```

On error:

```json
{ "ok": false, "error": "..." }
```

The formatted field is ready-to-use markdown; you can send it directly to the user.
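Because every action returns the same envelope, a caller can branch on ok before touching the rest. A minimal sketch, assuming python3 is available for JSON parsing; the inline RESPONSE is a sample, not real binary output:

```shell
# Branch on "ok", then use "formatted" (sample response, not a real API call).
RESPONSE='{"ok": true, "action": "search", "results": [], "formatted": "## [Title](url)"}'

OK=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["ok"])')
if [ "$OK" = "True" ]; then
  # Success: "formatted" is markdown, ready to show the user.
  FORMATTED=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["formatted"])')
  printf '%s\n' "$FORMATTED"
else
  # Failure: surface the error message on stderr.
  printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin).get("error",""))' >&2
fi
```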
| Mode | Avg | Peak |
| --- | --- | --- |
| livecrawl: "never" (instant) | ~440ms | 308ms |
| Default (no livecrawl) | ~927ms | 629ms |

~18.7× faster than Exa MCP at peak.
```shell
EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"')
```

Or export once at the top of a longer workflow:

```shell
export EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"')
echo '{"query":"..."}' | ~/.openclaw/workspace/skills/exa-search/bin/exa-search
```
```shell
EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"')
echo '{"query":"...","num_results":5,"livecrawl":"never"}' \
  | EXA_API_KEY="$EXA_API_KEY" ~/.openclaw/workspace/skills/exa-search/bin/exa-search
```

The formatted field in the output is ready-to-use markdown; send it directly to the user.
| Situation | livecrawl | type |
| --- | --- | --- |
| Docs, tutorials, courses, reference material | "never" | "neural" |
| General research (people, tools, concepts, companies) | "never" | "neural" |
| Exact function names, error messages, package names | "never" | "keyword" |
| Recent releases, changelogs, GitHub repos | "fallback" | "auto" |
| News or announcements from the last 1-2 weeks | "fallback" | "neural" |
| Breaking news, live prices, today's events | "always" | "neural" |
| Unsure | "fallback" | "auto" |

Default when in doubt: livecrawl: "never", type: "neural" (fastest, works for 80% of searches).
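The "default when in doubt" row can be wrapped in a tiny helper. exa_query is hypothetical (not shipped with the skill); it assumes EXA_API_KEY is already set in the environment and does no JSON escaping of the query:

```shell
# Hypothetical helper applying the defaults above: livecrawl "never", type "neural".
# Assumes EXA_API_KEY is set; queries containing double quotes would break the JSON.
exa_query() {
  printf '{"query":"%s","num_results":5,"type":"neural","livecrawl":"never"}' "$1" \
    | EXA_API_KEY="$EXA_API_KEY" ~/.openclaw/workspace/skills/exa-search/bin/exa-search
}
# Usage: exa_query "rust async programming"
```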
- search (default): use for any information retrieval from a query string.
- find_similar: use when you have a URL and want more like it (related articles, alternative tools, similar repos, competing products).

  ```shell
  echo '{"action":"find_similar","url":"https://...","num_results":5,"livecrawl":"never"}' | EXA_API_KEY="$EXA_API_KEY" ...
  ```

- get_contents: use when you have a specific URL and need its full text (docs pages, blog posts, GitHub READMEs, papers). Faster than search when the URL is already known.

  ```shell
  echo '{"action":"get_contents","urls":["https://..."],"text":{"max_characters":3000}}' | EXA_API_KEY="$EXA_API_KEY" ...
  ```
When writing reports, summaries, or comparisons of multiple results, request highlights or a summary per result:

```json
{
  "query": "...",
  "num_results": 5,
  "highlights": { "num_sentences": 3, "highlights_per_url": 2 },
  "summary": { "query": "key takeaways for a developer" }
}
```

Note: highlights/summary add latency (~200-500ms extra). Only request them when you actually need them.
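With summaries requested, the per-result summary fields fold neatly into a markdown bullet list for a report. A sketch using python3 on an inline sample response (not real output):

```shell
# Turn per-result summaries into a markdown bullet list (sample data inline).
RESP='{"ok":true,"results":[{"url":"https://a.example","title":"A","summary":"First."},{"url":"https://b.example","title":"B","summary":"Second."}]}'

# One "- [title](url): summary" line per result.
SUMMARIES=$(printf '%s' "$RESP" | python3 -c '
import json, sys
for r in json.load(sys.stdin)["results"]:
    print("- [{}]({}): {}".format(r["title"], r["url"], r["summary"]))
')
printf '%s\n' "$SUMMARIES"
```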