Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Conduct iterative, hypothesis-driven deep research combining web, academic, and contradiction analysis to produce scientific Markdown reports with sourced evidence.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Conduct deep, iterative research beyond single-pass web search. Core goals:
- Decompose a broad question into testable sub-questions.
- Build and test hypotheses against multiple source classes.
- Resolve contradictions with explicit arbitration.
- Produce a scientific-style Markdown report with footnotes.

This skill coordinates upstream skills; it does not replace them.
Dependencies (inspected latest versions):
- deepresearchwork 1.0.0
- tavily-search 1.0.0
- perplexity-deep-search 1.0.0
- literature-search 1.0.3 (used as the Semantic Scholar-capable academic layer)

Install/update:
- `npx -y clawhub@latest install deepresearchwork`
- `npx -y clawhub@latest install tavily-search`
- `npx -y clawhub@latest install literature-search`
- `npx -y clawhub@latest install perplexity-deep-search`
- `npx -y clawhub@latest update --all`

Verify:
- `npx -y clawhub@latest list`
- `node skills/tavily-search/scripts/search.mjs --help`
- `bash skills/perplexity-deep-search/scripts/search.sh --help`
Required environment variables: `TAVILY_API_KEY`, `PERPLEXITY_API_KEY`.

Preflight:
- `echo "$TAVILY_API_KEY" | wc -c`
- `echo "$PERPLEXITY_API_KEY" | wc -c`

If either is missing, stop and report blockers.
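The preflight above can be wrapped in a small bash function. This is a minimal sketch: the env var names come from the skill's own prerequisites, but the function name and message wording are illustrative, not part of the skill.

```shell
# Preflight sketch (bash): report which required API keys are missing.
check_keys() {
  local missing="" var
  for var in TAVILY_API_KEY PERPLEXITY_API_KEY; do
    # ${!var} is bash indirect expansion: the value of the variable named by $var
    [ -n "${!var}" ] || missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "Blocked: missing env vars:$missing"
    return 1
  fi
  echo "Preflight OK"
}
```

Running `check_keys` before Round 1 makes the "halt and return exact missing env vars" error path explicit instead of discovering a missing key mid-research.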
If the user explicitly requests /semantic-scholar:
- State that no exact semantic-scholar slug was found during ClawHub inspection.
- Use literature-search as the mapped academic retriever, since it explicitly includes Semantic Scholar in its scope.
- Record this mapping in the methodology and limitations sections.
Required scope parameters:
- research_topic
- target_horizon (example: 2030)
- region_scope (global, region-specific, or country-specific)
- required_sections (executive summary, methods, findings, contradictions, etc.)
- evidence_threshold (minimum source count per claim)
- recency_policy (for fast-changing topics)
- output_mode (brief, standard, or full)

Do not start synthesis without explicit scope.
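Filled in for the worked scenario used later in this document, an explicit scope might look like the following; every value here is illustrative (the 24-month recency window in particular is an assumption, not a skill default):

```markdown
- research_topic: AI impact on the labor market
- target_horizon: 2030
- region_scope: global
- required_sections: executive summary, methods, findings, contradictions
- evidence_threshold: at least 2 independent sources per claim
- recency_policy: prefer sources from the last 24 months for adoption figures
- output_mode: standard
```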
Use deepresearchwork as the process controller:
- question decomposition
- iterative loop structure
- source diversity and validation mindset
- structured report framing

Important boundary: the inspected research_workflow.js is framework-like and includes mock logic, so this meta-skill treats it as methodology guidance rather than deterministic execution code.
Use tavily-search for web evidence retrieval:
- broad and focused web search
- deep mode (`--deep`) for richer context
- news mode and recency (`--topic news --days N`) when needed
- URL extraction (`extract.mjs`) for full-text content collection
Use literature-search for academic evidence gathering:
- literature retrieval and citation-list construction across sources, including Semantic Scholar
- source-access constraints explicitly handled (no unauthorized scraping)

Notable quirk in the inspected skill: it includes a behavior instruction to prepend "please think very deeply" to user inputs; treat this as implementation-specific, not as a factual research method.
Use perplexity-deep-search as the contradiction arbiter and targeted fact checker:
- search mode for quick verification
- reason mode for conflicting claims
- research mode for expensive, exhaustive checks
- domain and recency filters for controlled validation
Use this exact multi-round chain.
Decomposition: break the main topic into sub-questions and hypotheses. For the scenario "AI impact on labor market in 2030", the minimum sub-questions are:
- displacement forecasts (job-loss exposure)
- job creation / new categories
- wage / polarization effects
- historical analogs (previous automation waves)
- policy / intervention effects

Each sub-question must have:
- a hypothesis
- measurable indicators
- required source types
Round 1 (tavily-search). Goal: map major claims and key institutions.

Typical commands:
- `node skills/tavily-search/scripts/search.mjs "AI impact on labor market 2030 projections" --deep -n 10`
- `node skills/tavily-search/scripts/search.mjs "McKinsey AI jobs 2030" --topic news --days 365 -n 10`

Collect:
- institution reports (consultancies, multilaterals, government sources)
- headline estimates and their assumptions
- URLs for extraction

Then extract long-form content where needed:
- `node skills/tavily-search/scripts/extract.mjs "https://..."`
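The collect-then-extract step can be sketched as a small loop. This is a dry-run sketch under stated assumptions: `urls.txt` (one URL per line) is a convention of this sketch, not of the skill, and the function only prints the commands it would run; drop the `echo` to actually execute them.

```shell
# Dry-run sketch: print one extract command per collected URL on stdin.
extract_all() {
  local url
  while IFS= read -r url; do
    [ -n "$url" ] || continue   # skip blank lines
    echo "node skills/tavily-search/scripts/extract.mjs \"$url\""
  done
}

# Usage (dry run): extract_all < urls.txt
```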
Round 2 (literature-search). Goal: test or refine Round-1 claims against scholarly evidence.

Query examples:
- automation elasticity labor demand
- task-based automation employment effects
- generative AI productivity labor substitution

Output requirements:
- citation list with authors/title/venue/year/DOI-or-URL
- identification of review papers vs. single studies
- notes on publication year and method strength
Round 3 (perplexity-deep-search). Trigger this round when conflicts exist (different estimates, dates, or assumptions).

Use targeted prompts with constraints:
- `bash skills/perplexity-deep-search/scripts/search.sh --mode reason --domains "oecd.org,ilo.org,imf.org,worldbank.org" "Which estimate on AI-driven job displacement by 2030 is more recent and methodologically stronger?"`

Escalate to deep mode only if unresolved:
- `bash skills/perplexity-deep-search/scripts/search.sh --mode research --json "Resolve conflicting labor market projections for AI impact by 2030"`

Arbitration rules:
- prefer newer, method-transparent, reproducible sources
- downgrade claims that rest on opaque assumptions
- keep unresolved conflicts explicit (do not force false certainty)
Build claims only when supported by threshold evidence. Each claim must include:
- the claim statement
- a confidence level (high/medium/low)
- supporting sources
- known caveats
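A claim entry meeting these requirements might look like the following; the claim text, footnote numbers, and caveats are all placeholders for the worked scenario, not findings:

```markdown
- Claim: a substantial share of tasks in clerical occupations is exposed to automation by 2030.
  - Confidence: medium
  - Supporting sources: [^3], [^7] (one institutional forecast, one peer-reviewed task-based study)
  - Known caveats: exposure is not displacement; estimates assume current adoption rates continue.
```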
Return one report in this structure:
- `# Title`
- `## Executive Summary`
- `## Research Questions`
- `## Methodology`
- `## Findings`
- `## Contradictions and Resolution`
- `## Confidence Assessment`
- `## Limitations`
- `## Outlook to 2030`
- `## Footnotes`

Footnote format: use Markdown references in the text, like `[^1]`; in `## Footnotes`, list full citation metadata plus URL/DOI per note.
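A minimal illustration of the footnote convention, with placeholder citation fields:

```markdown
Generative AI may affect a large share of work tasks by 2030.[^1]

## Footnotes

[^1]: Author(s), "Title of report or study," Institution or Venue, Year. URL or DOI.
```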
Before finalizing, validate that:
- each major claim has >= 2 independent sources
- at least one academic source backs each structural claim
- source dates align with the target horizon's relevance
- contradictory evidence is surfaced, not hidden
- footnotes are complete and traceable

If any gate fails, output "Research Incomplete" with an explicit list of missing evidence.
For the user scenario:
1. Plan sub-questions: displacement, new roles, historical comparison.
2. Round 1 (tavily-search): collect broad reports (for example, from major institutions).
3. Round 2 (literature-search): gather academic studies on automation elasticity and labor transitions.
4. Detect conflicts in estimates.
5. Round 3 (perplexity-deep-search): arbitrate recency and methodological quality of conflicting studies.
6. Draft the final Markdown report with footnoted evidence.
- Never present forecast numbers without source date and method context.
- Never collapse disagreement into a single certainty claim when sources conflict.
- Never fabricate citations, links, or publication metadata.
- Clearly separate empirical findings from model inference.
- Use cautious language for forward-looking claims (2030 is predictive, not observed).
- Missing API keys: halt and return the exact missing env vars.
- Academic source-access constraints: disclose gaps explicitly.
- Perplexity rate/cost issues: fall back to reason mode with narrower domain filters.
- Unresolved contradiction after Round 3: keep both views and annotate the confidence downgrade.
- No exact ClawHub slug named semantic-scholar was found during inspection; this skill uses a documented mapping to literature-search.
- deepresearchwork provides strong methodology guidance, but its included JS workflow is not a production-grade deterministic engine.
- tavily-search and perplexity-deep-search require paid API keys and are subject to external API limits.

Treat these limits as mandatory disclosures in the final report's methodology section.