Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Autonomous research system that runs 4 AI models in parallel, each applying relevant analytical frameworks, then cross-validates and merges findings into a comprehensive cited report.
```
User Question
      │
      ▼
┌─ Phase 0: Framework Selection ──┐
│ Identify best-practice          │
│ framework(s) for this question  │
└────────────────┬────────────────┘
     ┌──────┬────┴─────┬──────┐
     ▼      ▼          ▼      ▼
  Gemini    o3       Opus   MiniMax
  2.5 Pro            4      M2.5
 (search  (deep   (nuance  (China/
  heavy)  logic) +balance) alt view)
     └──────┴────┬─────┴──────┘
                 ▼
  Phase 5: Merge & Cross-Validate
                 │
                 ▼
         Final Report (PDF)
```
Before researching, ask: "Is there a best-practice framework for answering this type of question?"
| Question Type | Frameworks to Apply |
|---------------|---------------------|
| Competitive strategy | Porter's Five Forces, 7 Powers (Helmer), Schwerpunkt/High Ground (Packy), SWOT |
| Market entry / sizing | TAM/SAM/SOM, Blue Ocean Strategy, Jobs-to-be-Done |
| Business model evaluation | Business Model Canvas, Unit Economics, Ramp vs Route test (point solution vs platform?) |
| Investment / valuation | DCF, Comparable Analysis, Venture method, Power Law thesis |
| Product strategy | JTBD, Kano Model, Value Prop Canvas, Hook Model |
| Growth / GTM | AARRR Pirate Metrics, Bullseye Framework, STP (Segmentation-Targeting-Positioning) |
| Technology assessment | Gartner Hype Cycle, Wardley Maps, Build vs Buy matrix |
| Risk analysis | Pre-Mortem, FMEA, Scenario Planning |
| Organizational / ops | OKR analysis, RACI, Theory of Constraints |
| Pricing | Van Westendorp, Conjoint, Value-based pricing framework |
| Industry analysis | Value Chain Analysis, Industry Lifecycle, Winner-Takes-More thesis |
| Person / hiring | Track Record Analysis, Reference Triangle, Founder-Market Fit |

If a framework applies:
- Include it in the prompt to each model
- Structure each model's analysis around the framework's components
- The final report should explicitly reference which framework(s) were used and why

If no standard framework applies:
- State "No standard framework identified - using first-principles analysis"
- Each model reasons from first principles with explicit assumptions stated
Break the topic into 5-8 research sub-questions. Think like an investigative journalist: What are the key facts? What are different perspectives/sources? What's the timeline/history? What data/evidence exists? What are the unknowns or controversies?
Spawn 4 sub-agents using sessions_spawn, each with a different model:
- Model 1: gemini (google/gemini-2.5-pro) - search-heavy, broad coverage
- Model 2: o3 (openai/o3) - deep logical reasoning, contrarian
- Model 3: opus (anthropic/claude-opus-4-6) - nuanced, balanced synthesis
- Model 4: minimax (minimax/MiniMax-M2.5) - alternative perspectives, China/grey-area
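The fan-out can be sketched as a plain data structure. Note this is an illustrative assumption, not the skill's actual code: `sessions_spawn` is an OpenClaw tool whose exact call shape is not documented here beyond `mode="run"`, and the `build_spawn_calls` helper is hypothetical.

```python
# Hypothetical sketch of the fan-out; the real sessions_spawn tool's
# parameters may differ (mode="run" is the only argument stated in the doc).
AGENTS = [
    {"name": "gemini",  "model": "google/gemini-2.5-pro",    "role": "search-heavy, broad coverage"},
    {"name": "o3",      "model": "openai/o3",                "role": "deep logical reasoning, contrarian"},
    {"name": "opus",    "model": "anthropic/claude-opus-4-6", "role": "nuanced, balanced synthesis"},
    {"name": "minimax", "model": "minimax/MiniMax-M2.5",     "role": "alternative perspectives"},
]

def build_spawn_calls(question: str) -> list[dict]:
    """Build one spawn payload per agent; all four run in parallel."""
    return [
        {"tool": "sessions_spawn", "mode": "run",
         "model": a["model"],
         "prompt": f"{a['role']}. Research question: {question}"}
        for a in AGENTS
    ]
```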
- Gemini: "You are the primary search engine. Cast the widest net. Find obscure sources others would miss. Prioritize data and numbers."
- o3: "You are the deep reasoner. Challenge assumptions. Look for logical flaws in conventional wisdom. Apply the framework with maximum rigor. If the consensus is wrong, explain why."
- Opus: "You are the synthesizer. Balance multiple perspectives fairly. Identify nuance others miss. Connect dots across disciplines."
- MiniMax: "You are the alternative-perspective agent. Consider non-Western viewpoints, grey areas, unconventional strategies. What would a Chinese entrepreneur or contrarian investor do differently?"
All 4 models run in parallel via sessions_spawn with mode="run". Do NOT poll in a loop; they auto-announce when done.
Save each model's output:
- memory/research/[topic]-gemini-[date].md
- memory/research/[topic]-o3-[date].md
- memory/research/[topic]-opus-[date].md
- memory/research/[topic]-minimax-[date].md
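A minimal sketch of how those filenames could be derived; the `output_paths` helper and the slug rule are illustrative assumptions, not part of the package:

```python
from datetime import date
from pathlib import Path

# Illustrative helper: builds the per-model output paths the skill
# expects under memory/research/ (slugging rule is an assumption).
def output_paths(topic: str, models=("gemini", "o3", "opus", "minimax")):
    today = date.today().isoformat()        # e.g. 2025-01-31
    slug = topic.lower().replace(" ", "-")  # "AI Agents" -> "ai-agents"
    return [Path("memory/research") / f"{slug}-{m}-{today}.md" for m in models]
```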
This is the most critical phase. The primary agent (you) must:
Create a matrix of key claims and which models agree/disagree:

| Claim | Gemini | o3 | Opus | MiniMax | Confidence |
|-------|--------|----|------|---------|------------|
| [claim 1] | ✓ | ✓ | ✓ | ✗ | High (3/4) |
| [claim 2] | ✓ | ✗ | ✓ | ✓ | High (3/4) |
| [claim 3] | ✓ | ✗ | ✓ | ✗ | Medium (2/4) |
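The confidence column follows directly from the vote count. A minimal sketch, assuming the thresholds stated later in the quality standards (High at 3-4 agreeing models, Medium at 2, Low at 1):

```python
# Map a row of the agreement matrix to a confidence label.
# Thresholds assumed from the skill's quality standards.
def confidence(votes: dict[str, bool]) -> str:
    agree = sum(votes.values())
    total = len(votes)
    if agree >= 3:
        return f"High ({agree}/{total})"
    if agree == 2:
        return f"Medium ({agree}/{total})"
    return f"Low ({agree}/{total})"
```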
For each disagreement:
- Identify the root cause (different data? different logic? different framework interpretation?)
- Check which model has the stronger source
- If genuinely uncertain, present both sides in the final report
- Map findings back to the framework structure
- Ensure every framework component has been addressed
- Note which components had strong consensus vs. disagreement
From experience, models commonly get wrong:
- Platform-specific limits (posting frequency, API limits)
- Pricing (especially for niche tools, often 10-30x off)
- Regulatory details
- Recency of data

Verify any quantitative claim that only one model makes.
```markdown
# [Topic] - Deep Research Report

**Framework Used**: [Name] - [why this framework]
**Models**: Gemini 2.5 Pro, o3, Opus 4, MiniMax M2.5
**Date**: [date]
**Total Searches**: [count across all models]

## Executive Summary
3-5 sentence overview. Note consensus level.

## Framework Analysis
### [Framework Component 1]
Analysis with model consensus noted. [1][2]
### [Framework Component 2]
...

## Key Findings (Beyond Framework)
Discoveries that don't fit neatly into the framework.

## Model Disagreements
Where models diverged and why.

## Agreement Matrix
[The table from 5a]

## Data & Evidence
Tables, numbers, comparisons.

## Risks / Unknowns
What we couldn't confirm. Low-confidence areas.

## Conclusion & Recommendations
Actionable takeaways ranked by confidence.

## Sources
[1] Title - URL
[2] ...
```
- Save the final report to memory/research/[topic]-final-[date].md
- Generate a PDF via pymupdf and save it to ~/.openclaw/media/outbound/
- Send the PDF to the user via the message tool
- Minimum sources: 15 unique URLs per model (60+ total across 4 models)
- Source diversity: no more than 3 citations from the same domain per model
- Freshness: prefer sources < 6 months old; flag older data
- Cross-validation: key claims must appear in 2+ models' findings
- Framework compliance: every framework component must be addressed
- Confidence scoring: High (3-4 models agree + strong sources), Medium (2 models or weak sources), Low (1 model or no source)
- No hallucination: every factual claim must have a source
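The two mechanical standards (minimum unique URLs, per-domain citation cap) can be checked programmatically. A sketch with thresholds taken from the list above; `check_sources` itself is an illustrative helper, not part of the package:

```python
from collections import Counter
from urllib.parse import urlparse

# Check one model's citation list against the quality standards:
# >= 15 unique URLs, and no more than 3 citations from any single domain.
def check_sources(urls: list[str]) -> list[str]:
    problems = []
    unique = set(urls)
    if len(unique) < 15:
        problems.append(f"only {len(unique)} unique URLs (need 15)")
    domains = Counter(urlparse(u).netloc for u in unique)
    for domain, n in domains.items():
        if n > 3:
            problems.append(f"{n} citations from {domain} (max 3)")
    return problems  # empty list means the standards are met
```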
- Frameworks: DCF, Comparable Analysis, Power Law
- Check SEC/regulatory filings and earnings transcripts
- Include key metrics (revenue, margins, P/E, debt)
- See references/financial-research.md
- Frameworks: Porter's Five Forces, TAM/SAM/SOM, 7 Powers
- Competitive landscape, key players, market share
- Apply the Winner-Takes-More thesis where relevant
- Frameworks: Schwerpunkt/High Ground, Business Model Canvas, JTBD
- Identify the constraint, the scarce asset, and the expansion path
- Compare to historical precedents (Rockefeller, Ramp, etc.)
- Frameworks: Wardley Maps, Build vs Buy, Gartner Hype Cycle
- Architecture, benchmarks, alternatives matrix
- Community sentiment (GitHub, HN, Reddit)