Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Conduct deep research using Perplexity Agent API with web search, reasoning, and multi-model analysis. Use when the user needs current information, market re...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Research assistant powered by Perplexity Agent API with web search and reasoning capabilities.
The Perplexity client is available at scripts/perplexity_client.py in this skill folder.

Default model: openai/gpt-5.2 (GPT latest)

Key capabilities:
- Web search for current information
- High reasoning effort for deep analysis
- Multi-model comparison
- Streaming responses
- Cost tracking
Use for comprehensive analysis requiring web search and reasoning:

```python
# Import from skill scripts folder
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent / "scripts"))

from perplexity_client import PerplexityClient

client = PerplexityClient()
result = client.research_query(
    query="Your research question here",
    model="openai/gpt-5.2",
    reasoning_effort="high",
    max_tokens=2000,
)
if "error" not in result:
    print(result["answer"])
    print(f"Tokens: {result['tokens']}, Cost: ${result['cost']}")
```
Use for time-sensitive or current information:

```python
result = client.search_query(
    query="Your question about current events",
    model="openai/gpt-5.2",
    max_tokens=1000,
)
```
Use when output quality is critical:

```python
results = client.compare_models(
    query="Your question",
    models=[
        "openai/gpt-5.2",
        "anthropic/claude-3-5-sonnet",
        "google/gemini-2.0-flash",
    ],
    max_tokens=300,
)
for result in results:
    if "error" not in result:
        print(f"\n{result['model']}: {result['answer']}")
```
Use for better UX with lengthy analysis:

```python
client.stream_query(
    query="Your question",
    model="openai/gpt-5.2",
    use_search=True,
    max_tokens=2000,
)
```
When conducting research:
- Initial exploration: use research_query() with web search enabled
- Validate findings: compare key insights across models with compare_models()
- Deep dive: use streaming for detailed analysis on specific aspects
- Cost-aware: monitor token usage and costs in results
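The staged workflow above can be sketched as a small dispatch table. This is a hypothetical planner, not part of the client: the stage names and default parameters are assumptions, while the method names and argument names come from the examples in this document.

```python
# Hypothetical planner mapping each research stage to a client call.
# Stage names and defaults are illustrative assumptions.
STAGES = {
    "explore": ("research_query", {"reasoning_effort": "high", "max_tokens": 2000}),
    "validate": ("compare_models", {
        "models": ["openai/gpt-5.2", "anthropic/claude-3-5-sonnet"],
        "max_tokens": 300,
    }),
    "deep_dive": ("stream_query", {"use_search": True, "max_tokens": 2000}),
}

def plan_call(stage, query):
    """Return the client method name and kwargs for a workflow stage."""
    method, defaults = STAGES[stage]
    return method, {"query": query, **defaults}
```

You would then look up the method on the client, e.g. `getattr(client, method)(**kwargs)`, keeping the stage policy in one place.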
Default: openai/gpt-5.2 (latest GPT model)

Alternative models:
- anthropic/claude-3-5-sonnet - strong reasoning, balanced performance
- google/gemini-2.0-flash - fast, cost-effective
- meta/llama-3.3-70b - open-source alternative

Switch models based on:
- Quality needs (GPT-5.2 for best results)
- Speed requirements (Gemini Flash for quick answers)
- Cost constraints (compare costs in results)
Control analysis depth with reasoning_effort:
- "low" - quick answers, minimal reasoning
- "medium" - balanced reasoning (default for most queries)
- "high" - deep analysis, comprehensive research (recommended for research)
Ensure PERPLEXITY_API_KEY is set:

```shell
export PERPLEXITY_API_KEY='your_api_key_here'
```

Or create a .env file in the skill's scripts/ directory:

```shell
PERPLEXITY_API_KEY=your_api_key_here
```
All methods return error information:

```python
result = client.research_query("Your question")
if "error" in result:
    print(f"Error: {result['error']}")
    # Handle error appropriately
else:
    # Process successful result
    print(result["answer"])
```
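Because every method signals failure through an "error" key rather than an exception, transient failures can be retried with a generic wrapper. This is a hypothetical helper built on that convention, not part of the client; the parameter names are assumptions.

```python
import time

# Hypothetical retry wrapper around the documented error convention:
# any client method returning a dict with an "error" key can be wrapped.
def query_with_retry(query_fn, *args, retries=2, delay=1.0, **kwargs):
    result = query_fn(*args, **kwargs)
    for _ in range(retries):
        if "error" not in result:
            break
        time.sleep(delay)  # back off before retrying
        result = query_fn(*args, **kwargs)
    return result
```

Usage would look like `query_with_retry(client.research_query, "Your question")`, with the final result still checked for an "error" key in case all attempts failed.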
- Use max_tokens to limit response length
- Start with lower reasoning effort, increase if needed
- Use search_query() instead of research_query() for simpler questions
- Monitor costs via the result["cost"] field
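Monitoring spend across many calls is easier with a running tally. The sketch below is a hypothetical tracker, assuming only the "cost" and "tokens" result fields this document describes; the budget logic is an illustrative assumption.

```python
# Hypothetical cost tracker accumulating the "cost" and "tokens"
# fields reported on each successful result.
class CostTracker:
    def __init__(self, budget_usd=1.00):
        self.budget_usd = budget_usd
        self.spent = 0.0
        self.tokens = 0

    def record(self, result):
        """Add one result's usage; failed results carry no usage."""
        if "error" in result:
            return
        self.spent += float(result.get("cost", 0.0))
        self.tokens += int(result.get("tokens", 0))

    def over_budget(self):
        return self.spent > self.budget_usd
```

Calling `tracker.record(result)` after each query and checking `tracker.over_budget()` before the next one gives a simple guard against runaway costs.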
```python
client = PerplexityClient()

# Market analysis
result = client.research_query(
    query="Analyze recent developments in AI chip market and key competitors",
    reasoning_effort="high",
)

# Company deep dive
result = client.search_query(
    query="Latest earnings report for NVIDIA Q4 2025"
)

# Multi-model validation
results = client.compare_models(
    query="What are the biggest risks in the semiconductor industry?",
    models=["openai/gpt-5.2", "anthropic/claude-3-5-sonnet"],
)
```
```python
# Current trends with web search
result = client.research_query(
    query="Emerging trends in sustainable investing and ESG adoption rates",
    reasoning_effort="high",
    max_tokens=2000,
)

# Stream for real-time updates
client.stream_query(
    query="Latest developments in quantum computing commercialization",
    use_search=True,
)
```
```python
# Build context across multiple queries
messages = [
    {"role": "user", "content": "What is the current state of fusion energy?"},
    {"role": "assistant", "content": "...previous response..."},
    {"role": "user", "content": "Which companies are leading in this space?"},
]
result = client.conversation(
    messages=messages,
    use_search=True,
)
```
- Default to research_query() for most research tasks; it combines web search with high reasoning
- Use streaming for user-facing applications to show progress
- Compare models for critical decisions or when quality is paramount
- Set reasonable max_tokens: 1000 for summaries, 2000+ for deep analysis
- Track costs via result["cost"] and result["tokens"]
- Handle errors gracefully: always check for an "error" key in results
See reference.md for complete API documentation, or scripts/perplexity_client.py for:
- Full method signatures
- Additional parameters
- CLI usage examples
- Implementation details
Run from the skill directory:

```shell
# Research mode
python scripts/perplexity_client.py research "Your question"

# Web search
python scripts/perplexity_client.py search "Your question"

# Streaming
python scripts/perplexity_client.py stream "Your question"

# Compare models
python scripts/perplexity_client.py compare "Your question"
```