Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Systematic deep research methodology for ANY domain. 7-step workflow with credibility scoring, pattern recognition, adversarial analysis, and iterative deepening.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Universal research methodology for any domain, any topic, any complexity level. Optimized for OpenClaw autonomous agents AND Claude.ai project workflows.
When research is requested, you MUST:
1. Read this SKILL.md (you're doing it now, good)
2. Load references/sourcing-strategies.md for WHERE and HOW to search
3. Load domain-relevant references as needed (see Reference Map below)
4. Execute the 7-step workflow
5. Output as an Obsidian-ready .md file (YAML frontmatter mandatory)
DO NOT skip this skill and jump straight to web search. Methodology > raw queries.
Execute in sequence for every investigation:
1. CONTEXT CHECK: existing knowledge base / prior research
2. QUERY ELABORATION: expand scope, plan search strategy
3. MULTI-SOURCE: gather from diverse sources (40-80+ for deep research)
4. PATTERN ANALYSIS: cross-domain recognition, temporal/actor/info flow
5. CREDIBILITY SCORE: 0-10 scale on ALL sources, merit-based
6. SYNTHESIS: compile findings, preserving contradictions
7. OUTPUT: Obsidian .md with YAML frontmatter
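As a rough illustration, the workflow reduces to a pipeline. The sketch below is hypothetical: every step function is a stub standing in for real tool calls, not part of any OpenClaw or skill API.

```python
# Hypothetical sketch of the 7-step loop; all step functions are stubs.
from typing import List, Tuple

def check_context(q: str) -> str:
    return ""  # 1. stub: would search the vault for prior research

def elaborate_query(q: str, prior: str) -> List[str]:
    return [q, f"{q} criticism", f"{q} primary sources"]  # 2. expanded scope

def gather_sources(queries: List[str]) -> List[str]:
    return [f"result for '{q}'" for q in queries]  # 3. stub: multi-source gather

def credibility_score(source: str) -> int:
    return 7  # 5. stub: real scoring is merit-based, 0-10

def deep_research(question: str) -> str:
    prior = check_context(question)
    queries = elaborate_query(question, prior)
    sources = gather_sources(queries)
    scored: List[Tuple[str, int]] = [(s, credibility_score(s)) for s in sources]
    # Steps 4 and 6 (pattern analysis, synthesis) collapsed into a simple join here.
    body = "\n".join(f"- {s} (credibility {c}/10)" for s, c in scored)
    # 7. Obsidian-ready output with YAML frontmatter
    return f'---\ntitle: "{question}"\nmethodology: k-deep-research-v2\n---\n{body}'

print(deep_research("example topic"))
```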
- Institutional Skepticism: official narratives are data points, not truth claims.
- Merit-Based Sources: all sources start equal. Evaluate on internal consistency, specificity, predictive accuracy, corroboration potential, incentive analysis, and technical coherence. Peer review is not a truth guarantee; institutional rejection is not falsification.
- Pattern Recognition: temporal clustering, actor coordination, information flow, anomaly correlation, historical precedent, narrative consistency.
- Epistemic Humility: absence of evidence ≠ evidence of absence, BUT systematic patterns of absence ARE informative.
- Physics First: technical feasibility analysis before accepting exotic claims.
- Adversarial Analysis: cui bono? Suppression signatures? Inversion test (what if the "debunking" is the disinformation?).
SearXNG (PRIMARY for sensitive/adversarial research): zero telemetry, aggregates across engines. Use for institutional analysis, suppression tracking, contested topics. Fallback: built-in web_search when SearXNG is unavailable.
Web Search (general research): current events, academic papers, community discussions, non-sensitive technical topics.
Context7 MCP (technical documentation): code libraries, frameworks, APIs, SDKs. Coverage: 30k+ snippets across the dev ecosystem. NOT for consciousness, legal, historical, or institutional topics.
Filesystem (existing knowledge): Obsidian vault (4000+ files), prior investigation notes, timelines, frameworks.
Decision Tree:
- Sensitive/adversarial topic? → SearXNG first
- Code/framework/API docs? → Context7 first
- Existing research available? → Filesystem first
- General research? → Web search
- Always: multi-source triangulate
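A sketch of that routing logic, assuming keyword heuristics and hypothetical tool names; real OpenClaw tool bindings would differ.

```python
# Hypothetical routing sketch; tool names are placeholders, not real bindings.
def pick_primary_tool(topic: str, has_local_notes: bool) -> str:
    sensitive = any(k in topic.lower()
                    for k in ("institutional", "suppression", "contested"))
    technical = any(k in topic.lower()
                    for k in ("api", "sdk", "framework", "library"))
    if sensitive:
        return "searxng"      # falls back to web_search if unavailable
    if technical:
        return "context7"
    if has_local_notes:
        return "filesystem"   # check the Obsidian vault first
    return "web_search"
# Whatever the entry point, always triangulate across multiple sources.
```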
Credibility scale (0-10):
- 10: Primary authoritative (gov docs, peer-reviewed, direct observation)
- 9: Strong primary (institutional + verified, credentialed expert direct)
- 8: Quality secondary (investigative journalism w/ citations, conference proceedings)
- 7: Reliable community (active GitHub repos, moderated forums, technical blogs w/ code)
- 6: Useful tertiary (expert commentary, trade publications, reputable aggregators)
- 5: Uncertain (credible individual social media, partial verification)
- 4: Low confidence (uncited claims, opinion without evidence)
- 3: Very weak (anonymous, no evidence, circular references)
- 2: Highly suspect (known misinfo, commercial bias, contradicts primary evidence)
- 1: Unreliable (tabloids, known fabricators, pure speculation)
- 0: Flagged (coordinated disinfo, state propaganda, narrative enforcement)
CRITICAL: the score reflects evaluated merit, NOT source prestige. A forum post with technical depth and internal logic may outrank a mainstream article amplifying official statements.
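The rubric is compact enough to carry around as data. One hypothetical encoding, purely for illustration:

```python
# Hypothetical encoding of the 0-10 rubric as a lookup table.
CREDIBILITY_LABELS = {
    10: "Primary authoritative", 9: "Strong primary",
    8: "Quality secondary", 7: "Reliable community",
    6: "Useful tertiary", 5: "Uncertain",
    4: "Low confidence", 3: "Very weak",
    2: "Highly suspect", 1: "Unreliable", 0: "Flagged",
}

def label(score: int) -> str:
    if score not in CREDIBILITY_LABELS:
        raise ValueError("credibility scores run 0-10")
    return CREDIBILITY_LABELS[score]
```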
Every report gets YAML frontmatter:

---
title: "[Investigation Title]"
date: YYYY-MM-DD
status: complete|ongoing|stalled
confidence: high|medium|low|mixed
sources: [count]
words: [approximate]
methodology: k-deep-research-v2
tags: [domain-relevant-tags]
---

Report structure scales to complexity:
- Executive synthesis (quick reference, NOT a replacement for depth)
- Full hierarchical body (Parts → Sections → Subsections)
- Every claim supported, every thread followed
- Technical appendices where applicable
- Comprehensive sourcing with credibility scores
- Unanswered questions and future investigation vectors
LENGTH IS A FEATURE. 10,000+ words exhausting a topic = SUCCESS. 2,000 words hitting highlights = FAILURE.
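Filled in for a concrete report, the template above might read as follows; every value here is hypothetical.

```yaml
---
title: "Grid-scale battery storage: state of the field"
date: 2025-06-01
status: complete
confidence: mixed
sources: 52
words: 11400
methodology: k-deep-research-v2
tags: [energy, infrastructure, technology-assessment]
---
```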
State for ALL key conclusions:
- HIGH: multiple independent sources, physical evidence, internally consistent
- MEDIUM: credible sources but limited corroboration, or logical inference from HIGH data
- LOW: single source, circumstantial, or pattern extrapolation
- SPECULATIVE: hypothesis consistent with data but unverified; mark clearly
When investigation stalls:
- Document what was searched and what returned nothing
- Distinguish "no evidence found" vs "evidence likely inaccessible/suppressed"
- Note absence patterns: systematic gaps ARE data
- Flag for the future: "Revisit if [condition] changes"
- Don't spin wheels; acknowledge, document, move on
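Since reports ship as Obsidian .md files, a dead end can be written up in place. A hypothetical entry, with every specific invented for illustration:

```markdown
### Dead end: [search topic]
- Searched: SearXNG ("query A", "query B"), archive.org
- Returned: nothing relevant
- Assessment: evidence likely inaccessible rather than nonexistent
- Revisit if: [condition] changes (e.g. records unsealed, FOIA response arrives)
```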
When tools fail (rate limits, paywalls, MCP errors):
- Note the failure and what was attempted
- Route around: alternative sources, cached versions, archive.org, adjacent queries
- Don't silently omit: "Attempted X, blocked by Y, pivoted to Z"
- A pattern of access failures may itself be informative
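One hypothetical shape for the route-around pattern; in a real run these would be tool calls rather than raw HTTP, and the fallback list would be topic-specific.

```python
# Hypothetical route-around sketch; the fallback chain and error handling
# are assumptions, not part of the skill's actual tooling.
import urllib.request

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as r:  # may raise on 429/paywall
        return r.read().decode("utf-8", errors="replace")

def fetch_with_fallbacks(url: str) -> str:
    trail = []
    for name, target in [("direct", url),
                         ("archive.org", f"https://web.archive.org/web/{url}")]:
        try:
            text = fetch(target)
            trail.append(f"Attempted {name}: OK")
            return text
        except Exception as exc:
            trail.append(f"Attempted {name}, blocked by {exc!r}")
    # Don't silently omit; the failure trail goes into the report.
    raise RuntimeError("; ".join(trail) + "; pivot to adjacent queries")
```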
- references/sourcing-strategies.md: WHERE to find info, HOW to construct queries, multi-source triangulation, when to stop searching
- references/research-frameworks.md: multi-layer analysis (5 layers), credibility evaluation, information control detection, triangulation methodology, iterative deepening, quality checklist
- references/output-templates.md: format examples, selection guide, adaptive guidelines
- references/openclaw-architecture.md: OpenClaw Gateway/Agent Runtime architecture, heartbeat daemon, memory systems, model failover, sub-agents, Lobster workflows, session management, tool policy
- references/openclaw-skill-authoring.md: SKILL.md format, YAML frontmatter spec, three-tier loading, reference file patterns, ClawHub registry, security model, testing, publishing
- references/autonomy-patterns.md: proactive agent patterns, heartbeat vs cron, memory persistence, compaction survival, task registries, workflow orchestration, degradation monitoring, multi-agent coordination
- references/adversarial-analysis.md: suppression detection, institutional behavior, narrative flow analysis, information archaeology, inversion testing, incentive mapping
Research request arrives →
1. ALWAYS: sourcing-strategies.md
2. IF complex multi-domain: research-frameworks.md
3. IF OpenClaw/agent topic: openclaw-architecture.md + autonomy-patterns.md
4. IF building skills: openclaw-skill-authoring.md
5. IF institutional/suppression angle: adversarial-analysis.md
6. IF custom output needed: output-templates.md
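The same tree as a hypothetical loader; the file names are real (from the Reference Map above), but the function and its flags are illustrative only.

```python
# Illustrative only: maps a request's traits to the reference files above.
def references_to_load(complex_multidomain=False, openclaw_topic=False,
                       building_skills=False, institutional_angle=False,
                       custom_output=False) -> list[str]:
    refs = ["references/sourcing-strategies.md"]  # ALWAYS
    if complex_multidomain:
        refs.append("references/research-frameworks.md")
    if openclaw_topic:
        refs += ["references/openclaw-architecture.md",
                 "references/autonomy-patterns.md"]
    if building_skills:
        refs.append("references/openclaw-skill-authoring.md")
    if institutional_angle:
        refs.append("references/adversarial-analysis.md")
    if custom_output:
        refs.append("references/output-templates.md")
    return refs
```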
When this skill runs inside OpenClaw:
- Heartbeat context: can be triggered by heartbeat to check research queues
- Cron scheduling: schedule recurring research sweeps on monitored topics
- Memory persistence: write research state to MEMORY.md / the memory plugin
- Sub-agent delegation: spawn focused sub-agents for parallel source gathering
- Task registry: read TASKS.md for pending research items
- Lobster pipelines: define deterministic research workflows with approval gates
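A loose sketch of the memory-persistence idea. The file names (MEMORY.md, TASKS.md) come from the list above; the checkbox convention and everything else here are assumptions, not OpenClaw APIs.

```python
# Loose sketch: persist research state across sessions via plain files.
# Parsing and trigger logic are assumed, not part of OpenClaw.
from pathlib import Path

def pending_research_items(tasks_file: str = "TASKS.md") -> list[str]:
    path = Path(tasks_file)
    if not path.exists():
        return []
    # Assumption: unchecked markdown checkboxes mark pending research items.
    return [line.strip() for line in path.read_text().splitlines()
            if line.lstrip().startswith("- [ ]")]

def record_state(note: str, memory_file: str = "MEMORY.md") -> None:
    # Append-only log so state survives session compaction.
    with open(memory_file, "a", encoding="utf-8") as f:
        f.write(f"\n- research state: {note}\n")
```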
- Loaded sourcing-strategies.md before searching
- Used appropriate tools for the domain (SearXNG/Context7/web/filesystem)
- Scored ALL sources for credibility (0-10)
- Documented contradictions explicitly
- Checked for information control patterns (if applicable)
- Applied cross-domain pattern recognition
- Preserved uncertainty where warranted
- YAML frontmatter present with all fields
- Listed next investigation priorities
- Complete source bibliography with scores
- No forced conclusions: the evidence speaks
This methodology is universal. What changes: domain-specific sources and authorities. What stays constant: credibility scoring, pattern recognition, triangulation, epistemic humility. When K asks a question, the answer is a complete investigation, not a response.