Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Build RAG (Retrieval Augmented Generation) pipelines with web search and LLMs. Tools: Tavily Search, Exa Search, Exa Answer, Claude, GPT-4, Gemini via OpenRo...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Build RAG (Retrieval Augmented Generation) pipelines via the inference.sh CLI.
```shell
curl -fsSL https://cli.inference.sh | sh && infsh login

# Simple RAG: Search + LLM
SEARCH=$(infsh app run tavily/search-assistant --input '{"query": "latest AI developments 2024"}')
infsh app run openrouter/claude-sonnet-45 --input "{
  \"prompt\": \"Based on this research, summarize the key trends: $SEARCH\"
}"
```

Install note: The install script only detects your OS/architecture, downloads the matching binary from dist.inference.sh, and verifies its SHA-256 checksum. No elevated permissions or background processes. Manual install & verification available.
RAG combines:
- Retrieval: Fetch relevant information from external sources
- Augmentation: Add retrieved context to the prompt
- Generation: LLM generates a response using the context

This produces more accurate, up-to-date, and verifiable AI responses.
[User Query] -> [Web Search] -> [LLM with Context] -> [Answer]
[Query] -> [Multiple Searches] -> [Aggregate] -> [LLM Analysis] -> [Report]
[URLs] -> [Content Extraction] -> [Chunking] -> [LLM Summary] -> [Output]
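The first pattern above can be sketched end-to-end in bash. The `search` and `generate` functions below are stubs standing in for the real `infsh app run` calls, so the retrieve-augment-generate flow is visible without the CLI installed:

```shell
#!/bin/bash
# Sketch of: [User Query] -> [Web Search] -> [LLM with Context] -> [Answer]
# search/generate are stubs standing in for `infsh app run` calls.

search() {            # stand-in for: infsh app run tavily/search-assistant
  printf 'results for: %s' "$1"
}

generate() {          # stand-in for: infsh app run openrouter/claude-sonnet-45
  printf 'answer grounded in (%s)' "$1"
}

QUERY="latest AI developments"
CONTEXT=$(search "$QUERY")      # retrieval
ANSWER=$(generate "$CONTEXT")   # augmentation + generation
echo "$ANSWER"
```

Swapping the stubs for the real app invocations gives the full example shown later in this document.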
| Tool | App ID | Best For |
|------|--------|----------|
| Tavily Search | tavily/search-assistant | AI-powered search with answers |
| Exa Search | exa/search | Neural search, semantic matching |
| Exa Answer | exa/answer | Direct factual answers |
| Tool | App ID | Best For |
|------|--------|----------|
| Tavily Extract | tavily/extract | Clean content from URLs |
| Exa Extract | exa/extract | Analyze web content |
| Model | App ID | Best For |
|-------|--------|----------|
| Claude Sonnet 4.5 | openrouter/claude-sonnet-45 | Complex analysis |
| Claude Haiku 4.5 | openrouter/claude-haiku-45 | Fast processing |
| GPT-4o | openrouter/gpt-4o | General purpose |
| Gemini 2.5 Pro | openrouter/gemini-25-pro | Long context |
```shell
# 1. Search for information
SEARCH_RESULT=$(infsh app run tavily/search-assistant --input '{
  "query": "What are the latest breakthroughs in quantum computing 2024?"
}')

# 2. Generate grounded response
infsh app run openrouter/claude-sonnet-45 --input "{
  \"prompt\": \"You are a research assistant. Based on the following search results, provide a comprehensive summary with citations. Search Results: $SEARCH_RESULT Provide a well-structured summary with source citations.\"
}"
```
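One caveat worth flagging (my observation, not from the skill docs): interpolating `$SEARCH_RESULT` straight into a double-quoted JSON string breaks if the results contain quotes or newlines. A minimal bash-only escaper for JSON string values can guard against this:

```shell
#!/bin/bash
# Minimal JSON string escaper (bash-only sketch; covers backslash, quote,
# newline, tab -- not every control character in the JSON spec).
json_escape() {
  local s=$1
  s=${s//\\/\\\\}      # backslash first, so later escapes aren't doubled
  s=${s//\"/\\\"}      # double quote
  s=${s//$'\n'/\\n}    # newline
  s=${s//$'\t'/\\t}    # tab
  printf '%s' "$s"
}

SEARCH_RESULT=$'results with "quotes"\nsecond line'   # sample input
SAFE=$(json_escape "$SEARCH_RESULT")
echo "$SAFE"
# then: infsh app run openrouter/claude-sonnet-45 --input "{ \"prompt\": \"Summarize: $SAFE\" }"
```

If `jq` is available, `jq -n --arg` is a more robust way to build the payload.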
```shell
# Search multiple sources
TAVILY=$(infsh app run tavily/search-assistant --input '{"query": "electric vehicle market trends 2024"}')
EXA=$(infsh app run exa/search --input '{"query": "EV market analysis latest reports"}')

# Combine and analyze
infsh app run openrouter/claude-sonnet-45 --input "{
  \"prompt\": \"Analyze these research results and identify common themes and contradictions. Source 1 (Tavily): $TAVILY Source 2 (Exa): $EXA Provide a balanced analysis with sources.\"
}"
```
```shell
# 1. Extract content from specific URLs
CONTENT=$(infsh app run tavily/extract --input '{
  "urls": [
    "https://example.com/research-paper",
    "https://example.com/industry-report"
  ]
}')

# 2. Analyze extracted content
infsh app run openrouter/claude-sonnet-45 --input "{
  \"prompt\": \"Analyze these documents and extract key insights: $CONTENT Provide: 1. Key findings 2. Data points 3. Recommendations\"
}"
```
```shell
# Claim to verify
CLAIM="AI will replace 50% of jobs by 2030"

# 1. Search for evidence
EVIDENCE=$(infsh app run tavily/search-assistant --input "{
  \"query\": \"$CLAIM evidence studies research\"
}")

# 2. Verify claim
infsh app run openrouter/claude-sonnet-45 --input "{
  \"prompt\": \"Fact-check this claim: '$CLAIM' Based on the following evidence: $EVIDENCE Provide: 1. Verdict (True/False/Partially True/Unverified) 2. Supporting evidence 3. Contradicting evidence 4. Sources\"
}"
```
```shell
# Use Exa Answer for direct factual questions
infsh app run exa/answer --input '{
  "question": "What is the current market cap of NVIDIA?"
}'
```
```shell
# Bad: Too vague
"AI news"

# Good: Specific and contextual
"latest developments in large language models January 2024"
```
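One way to keep queries specific without hardcoding a date is to append the current month and year at run time; a small sketch (my suggestion, not part of the skill):

```shell
#!/bin/bash
# Anchor a topic to a recency window using the current month/year,
# so the search query stays specific without manual date updates.
TOPIC="latest developments in large language models"
QUERY="$TOPIC $(date +'%B %Y')"
echo "$QUERY"
# e.g. "latest developments in large language models January 2024"
```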
```shell
# Summarize long search results before sending to LLM
SEARCH=$(infsh app run tavily/search-assistant --input '{"query": "..."}')

# If too long, summarize first
SUMMARY=$(infsh app run openrouter/claude-haiku-45 --input "{
  \"prompt\": \"Summarize these search results in bullet points: $SEARCH\"
}")

# Then use summary for analysis
infsh app run openrouter/claude-sonnet-45 --input "{
  \"prompt\": \"Based on this research summary, provide insights: $SUMMARY\"
}"
```
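Summarizing every result adds an extra model call; a cheap length check can skip it when the results already fit. This is a sketch under my own assumptions (the 4000-character threshold is arbitrary, tune it to your model's context window), with a placeholder where the haiku summarization call would go:

```shell
#!/bin/bash
# Only pay for a summarization pass when the results are long.
MAX_CHARS=4000                       # arbitrary threshold (assumption)
SEARCH="...search results text..."   # placeholder for real search output

if [ ${#SEARCH} -gt "$MAX_CHARS" ]; then
  # real pipeline: summarize $SEARCH with openrouter/claude-haiku-45 first
  CONTEXT="(summarized results)"
else
  CONTEXT=$SEARCH                    # short enough to pass through unchanged
fi
echo "$CONTEXT"
```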
Always ask the LLM to cite sources:

```shell
infsh app run openrouter/claude-sonnet-45 --input '{
  "prompt": "... Always cite sources in [Source Name](URL) format."
}'
```
```shell
# First pass: broad search
INITIAL=$(infsh app run tavily/search-assistant --input '{"query": "topic overview"}')

# Second pass: dive deeper based on findings
DEEP=$(infsh app run tavily/search-assistant --input '{"query": "specific aspect from initial search"}')
```
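The two-pass pattern generalizes to a loop where each pass narrows the query using the previous results. A runnable sketch with `search` stubbed in place of the real `infsh app run tavily/search-assistant` call (in a real pipeline, an LLM would pick the next angle):

```shell
#!/bin/bash
# Iterative refinement: each pass feeds the previous results back into
# the next query. `search` is a stub for the real Tavily call.
search() { printf 'findings(%s)' "$1"; }

QUERY="topic overview"
for PASS in 1 2 3; do
  RESULT=$(search "$QUERY")
  QUERY="deeper: $RESULT"   # stub for LLM-chosen follow-up query
done
echo "$RESULT"
```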
```shell
#!/bin/bash
# research.sh - Reusable research function
research() {
  local query="$1"

  # Search
  local results=$(infsh app run tavily/search-assistant --input "{\"query\": \"$query\"}")

  # Analyze
  infsh app run openrouter/claude-haiku-45 --input "{
    \"prompt\": \"Summarize: $results\"
  }"
}

research "your query here"
```
```shell
# Web search tools
npx skills add inference-sh/skills@web-search

# LLM models
npx skills add inference-sh/skills@llm-models

# Content pipelines
npx skills add inference-sh/skills@ai-content-pipeline

# Full platform skill
npx skills add inference-sh/skills@inference-sh
```

Browse all apps: `infsh app list`
- Adding Tools to Agents - Agent tool integration
- Building a Research Agent - Full guide