Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Tavily AI search platform with 5 modes: Search (web/news/finance), Extract (URL content), Crawl (website crawling), Map (sitemap discovery), and Research (deep research with citations). Use for: web search with LLM answers, content extraction, site crawling, deep research.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
AI-powered web search platform with 5 modes: Search, Extract, Crawl, Map, and Research.
TAVILY_API_KEY environment variable
| Env Variable | Default | Description |
|--------------|---------|-------------|
| `TAVILY_API_KEY` | – | Required. Tavily API key |

Set in OpenClaw config:

```json
{ "env": { "TAVILY_API_KEY": "tvly-..." } }
```
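Outside of the OpenClaw config, the key can also be exported in the shell before running the script directly. A minimal sketch (the key value below is a placeholder, not a real key):

```shell
# Export the API key for the current shell session (placeholder value).
export TAVILY_API_KEY="tvly-your-key-here"

# Confirm it is visible to child processes such as the Python CLI.
python3 -c 'import os; print("set" if os.environ.get("TAVILY_API_KEY") else "missing")'
```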
```
python3 skills/tavily/lib/tavily_search.py <command> "query" [options]
```
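From another Python program, the same CLI can be driven via `subprocess`. This is a sketch, assuming the script path shown above and that the CLI writes results to stdout with a nonzero exit code on failure:

```python
import subprocess

SCRIPT = "skills/tavily/lib/tavily_search.py"  # path assumed from the docs

def build_cmd(command, query, *extra_args):
    """Build the argv list for one CLI invocation."""
    return ["python3", SCRIPT, command, query, *extra_args]

def tavily(command, query, *extra_args):
    """Run the CLI and return its stdout; raises on a nonzero exit code."""
    result = subprocess.run(build_cmd(command, query, *extra_args),
                            capture_output=True, text=True, timeout=60)
    result.check_returncode()
    return result.stdout

# Example (requires TAVILY_API_KEY to be set):
# print(tavily("search", "latest AI news", "--answer"))
```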
General-purpose web search with an optional LLM-synthesized answer.

```
python3 lib/tavily_search.py search "query" [options]
```

Examples:

```bash
# Basic search
python3 lib/tavily_search.py search "latest AI news"

# With LLM answer
python3 lib/tavily_search.py search "what is quantum computing" --answer

# Advanced depth (better results, 2 credits)
python3 lib/tavily_search.py search "climate change solutions" --depth advanced

# Time-filtered
python3 lib/tavily_search.py search "OpenAI announcements" --time week

# Domain filtering
python3 lib/tavily_search.py search "machine learning" --include-domains arxiv.org,nature.com

# Country boost
python3 lib/tavily_search.py search "tech startups" --country US

# With raw content and images
python3 lib/tavily_search.py search "solar energy" --raw --images -n 10

# JSON output
python3 lib/tavily_search.py search "bitcoin price" --json
```

Output format (text):

```
Answer: <LLM-synthesized answer if --answer>

Results:
1. Result Title
   https://example.com/article
   Content snippet from the page...

2. Another Result
   https://example.com/other
   Another snippet...
```
Search optimized for news articles. Sets topic=news.

```
python3 lib/tavily_search.py news "query" [options]
```

Examples:

```bash
python3 lib/tavily_search.py news "AI regulation"
python3 lib/tavily_search.py news "Israel tech" --time day --answer
python3 lib/tavily_search.py news "stock market" --time week -n 10
```
Search optimized for financial data and news. Sets topic=finance.

```
python3 lib/tavily_search.py finance "query" [options]
```

Examples:

```bash
python3 lib/tavily_search.py finance "NVIDIA stock analysis"
python3 lib/tavily_search.py finance "cryptocurrency market trends" --time month
python3 lib/tavily_search.py finance "S&P 500 forecast 2026" --answer
```
Extract readable content from one or more URLs.

```
python3 lib/tavily_search.py extract URL [URL...] [options]
```

Parameters:

- `urls`: One or more URLs to extract (positional args)
- `--depth basic|advanced`: Extraction depth
- `--format markdown|text`: Output format (default: markdown)
- `--query "text"`: Rerank extracted chunks by relevance to query

Examples:

```bash
# Extract single URL
python3 lib/tavily_search.py extract "https://example.com/article"

# Extract multiple URLs
python3 lib/tavily_search.py extract "https://url1.com" "https://url2.com"

# Advanced extraction with relevance reranking
python3 lib/tavily_search.py extract "https://arxiv.org/paper" --depth advanced --query "transformer architecture"

# Text format output
python3 lib/tavily_search.py extract "https://example.com" --format text
```

Output format:

```
URL: https://example.com/article
─────────────────────────────────
<Extracted content in markdown/text>

URL: https://another.com/page
─────────────────────────────────
<Extracted content>
```
Crawl a website starting from a root URL, following links.

```
python3 lib/tavily_search.py crawl URL [options]
```

Parameters:

- `url`: Root URL to start crawling
- `--depth basic|advanced`: Crawl depth
- `--max-depth N`: Maximum link depth to follow (default: 2)
- `--max-breadth N`: Maximum pages per depth level (default: 10)
- `--limit N`: Maximum total pages (default: 10)
- `--instructions "text"`: Natural language crawl instructions
- `--select-paths p1,p2`: Only crawl these path patterns
- `--exclude-paths p1,p2`: Skip these path patterns
- `--format markdown|text`: Output format

Examples:

```bash
# Basic crawl
python3 lib/tavily_search.py crawl "https://docs.example.com"

# Focused crawl with instructions
python3 lib/tavily_search.py crawl "https://docs.python.org" --instructions "Find all asyncio documentation" --limit 20

# Crawl specific paths only
python3 lib/tavily_search.py crawl "https://example.com" --select-paths "/blog,/docs" --max-depth 3
```

Output format:

```
Crawled 5 pages from https://docs.example.com

Page 1: https://docs.example.com/intro
─────────────────────────────────
<Content>

Page 2: https://docs.example.com/guide
─────────────────────────────────
<Content>
```
Discover all URLs on a website (sitemap).

```
python3 lib/tavily_search.py map URL [options]
```

Parameters:

- `url`: Root URL to map
- `--max-depth N`: Depth to follow (default: 2)
- `--max-breadth N`: Breadth per level (default: 20)
- `--limit N`: Maximum URLs (default: 50)

Examples:

```bash
# Map a site
python3 lib/tavily_search.py map "https://example.com"

# Deep map
python3 lib/tavily_search.py map "https://docs.python.org" --max-depth 3 --limit 100
```

Output format:

```
Sitemap for https://example.com (42 URLs found):
1. https://example.com/
2. https://example.com/about
3. https://example.com/blog
...
```
Comprehensive AI-powered research on a topic with citations.

```
python3 lib/tavily_search.py research "query" [options]
```

Parameters:

- `query`: Research question
- `--model mini|pro|auto`: Research model (default: auto)
  - `mini`: Faster, cheaper
  - `pro`: More thorough
  - `auto`: Let Tavily decide
- `--json`: JSON output (supports structured output schema)

Examples:

```bash
# Basic research
python3 lib/tavily_search.py research "Impact of AI on healthcare in 2026"

# Pro model for thorough research
python3 lib/tavily_search.py research "Comparison of quantum computing approaches" --model pro

# JSON output
python3 lib/tavily_search.py research "Electric vehicle market analysis" --json
```

Output format:

```
Research: Impact of AI on healthcare in 2026

<Comprehensive research report with citations>

Sources:
[1] https://source1.com
[2] https://source2.com
...
```
| Option | Applies To | Description | Default |
|--------|-----------|-------------|---------|
| `--depth basic\|advanced` | search, news, finance, extract | Search/extraction depth | basic |
| `--time day\|week\|month\|year` | search, news, finance | Time range filter | none |
| `-n NUM` | search, news, finance | Max results (0-20) | 5 |
| `--answer` | search, news, finance | Include LLM answer | off |
| `--raw` | search, news, finance | Include raw page content | off |
| `--images` | search, news, finance | Include image URLs | off |
| `--include-domains d1,d2` | search, news, finance | Only these domains | none |
| `--exclude-domains d1,d2` | search, news, finance | Exclude these domains | none |
| `--country XX` | search, news, finance | Boost country results | none |
| `--json` | all | Structured JSON output | off |
| `--format markdown\|text` | extract, crawl | Content format | markdown |
| `--query "text"` | extract | Relevance reranking query | none |
| `--model mini\|pro\|auto` | research | Research model | auto |
| `--max-depth N` | crawl, map | Max link depth | 2 |
| `--max-breadth N` | crawl, map | Max pages per level | 10/20 |
| `--limit N` | crawl, map | Max total pages/URLs | 10/50 |
| `--instructions "text"` | crawl | Natural language instructions | none |
| `--select-paths p1,p2` | crawl | Include path patterns | none |
| `--exclude-paths p1,p2` | crawl | Exclude path patterns | none |
- Missing API key: Clear error message with setup instructions.
- 401 Unauthorized: Invalid API key.
- 429 Rate Limit: Rate limit exceeded; try again later.
- Network errors: Descriptive error with cause.
- No results: Clean "No results found." message.
- Timeout: 30-second timeout on all HTTP requests.
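For unattended use, rate-limit errors are worth retrying with backoff. A sketch of a retry wrapper, assuming the CLI exits nonzero on failure and mentions "429" in its output when rate limited, as the error list above suggests:

```python
import subprocess
import time

def run_with_retry(cmd, attempts=3, base_delay=2.0):
    """Run a CLI command, retrying on rate limiting with exponential backoff.

    Assumes a nonzero exit code on failure and a "429" marker in the
    error output when rate limited (per the error-handling list above).
    """
    for attempt in range(attempts):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        if "429" in (result.stderr + result.stdout) and attempt < attempts - 1:
            time.sleep(base_delay * 2 ** attempt)  # 2s, 4s, ...
            continue
        raise RuntimeError(result.stderr.strip() or "command failed")
```

Usage would look like `run_with_retry(["python3", "lib/tavily_search.py", "search", "latest AI news"])`.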
| API | Basic | Advanced |
|---------|-----------------|------------------|
| Search | 1 credit | 2 credits |
| Extract | 1 credit/URL | 2 credits/URL |
| Crawl | 1 credit/page | 2 credits/page |
| Map | 1 credit | 1 credit |
| Research | Varies by model | – |
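As a worked example of the pricing table, a small helper that estimates total credits for the per-unit APIs (Research is omitted since its cost varies by model):

```python
# Per-unit credit costs from the pricing table above.
# "units" means URLs for extract, pages for crawl, and 1 for search/map.
CREDITS = {
    ("search", "basic"): 1, ("search", "advanced"): 2,
    ("extract", "basic"): 1, ("extract", "advanced"): 2,
    ("crawl", "basic"): 1, ("crawl", "advanced"): 2,
    ("map", "basic"): 1, ("map", "advanced"): 1,
}

def estimate_credits(api, depth="basic", units=1):
    """Estimate total credits as per-unit cost times number of units."""
    return CREDITS[(api, depth)] * units

# e.g. crawling 10 pages at advanced depth costs 2 * 10 = 20 credits
```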
```
bash skills/tavily/install.sh
```
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.