Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Turn raw data into decisions with statistical rigor, proper methodology, and awareness of analytical pitfalls.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Use this skill when the user needs to analyze, explain, or visualize data from SQL, spreadsheets, notebooks, dashboards, exports, or ad hoc tables. Use it for KPI debugging, experiment readouts, funnel or cohort analysis, anomaly reviews, executive reporting, and quality checks on metrics or query logic. Prefer this skill over generic coding or spreadsheet help when the hard part is analytical judgment: metric definition, comparison design, interpretation, or recommendation. User asks about: analyzing data, finding patterns, understanding metrics, testing hypotheses, cohort analysis, A/B testing, churn analysis, or statistical significance.
Analysis without a decision is just arithmetic. Always clarify: What would change if this analysis shows X vs Y?
Before touching data:
- What decision is this analysis supporting?
- What would change your mind? (the real question)
- What data do you actually have vs what you wish you had?
- What timeframe is relevant?
- Sample size sufficient? (small N = wide confidence intervals)
- Comparison groups fair? (same time period, similar conditions)
- Multiple comparisons? (20 tests = 1 "significant" by chance)
- Effect size meaningful? (statistically significant != practically important)
- Uncertainty quantified? ("12-18% lift" not just "15% lift")
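The last check above can be made concrete with a small sketch: report a lift as an interval, not a point estimate. The counts below are hypothetical, and the Wald interval is one simple choice among several.

```python
import math

def lift_interval(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% Wald interval for the absolute lift of B over A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    lift = p_b - p_a
    return lift - z * se, lift + z * se

# Hypothetical A/B counts: 12.0% vs 14.0% conversion.
low, high = lift_interval(conv_a=480, n_a=4000, conv_b=560, n_b=4000)
print(f"lift: {low:.1%} to {high:.1%}")  # a range, not a single "2% lift"
```

Shrink the sample sizes and the interval widens visibly, which is exactly the "small N = wide confidence intervals" point.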
This skill does not require local folders, persistent memory, or setup state. Use the included reference files as lightweight guides:
- metric-contracts.md for KPI definitions and caveats
- chart-selection.md for visual choice and chart anti-patterns
- decision-briefs.md for stakeholder-facing outputs
- pitfalls.md and techniques.md for analytical rigor and method choice
Load only the smallest relevant file to keep context focused.

| Topic | File |
| --- | --- |
| Metric definition contracts | metric-contracts.md |
| Visual selection and chart anti-patterns | chart-selection.md |
| Decision-ready output formats | decision-briefs.md |
| Failure modes to catch early | pitfalls.md |
| Method selection by question type | techniques.md |
Identify the decision owner, the question that could change a decision, and the deadline before doing analysis. If no decision would change, reframe the request before computing anything.
Define entity, grain, numerator, denominator, time window, timezone, filters, exclusions, and source of truth. If any of those are ambiguous, state the ambiguity explicitly before presenting results.
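One way to keep those fields explicit is a small contract object. This is a hypothetical sketch: the field names mirror the list above, not any fixed schema in metric-contracts.md, and the example metric is invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricContract:
    name: str
    entity: str            # what one unit of the metric represents
    grain: str             # daily / weekly / per-session ...
    numerator: str
    denominator: str
    time_window: str
    timezone: str
    filters: tuple = ()
    exclusions: tuple = ()
    source_of_truth: str = ""

# Hypothetical metric; every ambiguous choice is now written down.
weekly_active_rate = MetricContract(
    name="weekly_active_rate",
    entity="user",
    grain="weekly",
    numerator="users with >= 1 session in the ISO week",
    denominator="users active in the prior 28 days",
    time_window="ISO week",
    timezone="UTC",
    exclusions=("internal accounts", "known bots"),
    source_of_truth="events.sessions",
)
```

Making the contract frozen discourages silently editing a definition in place, which is the "reusing a KPI name" failure mode described under pitfalls.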
Keep query logic, cleanup assumptions, and analytical conclusions distinguishable. Never hide business assumptions inside SQL, formulas, or notebook code without naming them in the write-up.
Select charts based on the analytical question: trend, comparison, distribution, relationship, composition, funnel, or cohort retention. Do not add charts that make the deck look fuller but do not change the decision.
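That question-to-chart mapping can be sketched as a lookup; the default choices below are illustrative assumptions, and chart-selection.md remains the authoritative guide.

```python
# Illustrative defaults: one chart type per analytical question.
CHART_BY_QUESTION = {
    "trend": "line chart",
    "comparison": "grouped bar chart",
    "distribution": "histogram",
    "relationship": "scatter plot",
    "composition": "stacked bar chart",
    "funnel": "ordered bar chart with raw counts",
    "cohort retention": "one retention line per cohort",
}

def chart_for(question_type: str) -> str:
    # Unknown question type -> no chart yet; clarify the question first.
    return CHART_BY_QUESTION.get(question_type, "no chart: clarify the question first")
```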
Every output should include the answer, evidence, confidence, caveats, and recommended next action. If the output is going to a stakeholder, translate the method into business implications instead of leading with technical detail.
Segment by obvious confounders, compare the right baseline, quantify uncertainty, and check sensitivity to exclusions or time windows. Strong-looking numbers without robustness checks are not decision-ready.
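A sensitivity check can be as simple as recomputing one metric under a few reasonable variants; the rows and exclusion rules below are hypothetical.

```python
def conversion(rows, exclude_bots=True, min_day=1):
    # Recompute the same metric under a given cleanup/exclusion rule.
    kept = [r for r in rows
            if r["day"] >= min_day and not (exclude_bots and r["bot"])]
    return sum(r["converted"] for r in kept) / len(kept)

# Tiny hypothetical dataset.
rows = [
    {"day": 1, "bot": False, "converted": 1},
    {"day": 1, "bot": True,  "converted": 1},
    {"day": 2, "bot": False, "converted": 0},
    {"day": 3, "bot": False, "converted": 1},
]

variants = {
    "baseline": conversion(rows),
    "keep_bots": conversion(rows, exclude_bots=False),
    "drop_day_1": conversion(rows, min_day=2),
}
spread = max(variants.values()) - min(variants.values())
print(variants, f"spread={spread:.2f}")
```

If the spread across variants is large relative to the claimed effect, the conclusion is not robust to the cleanup assumptions and should be downgraded.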
Block or downgrade conclusions when sample size is weak, the source is unreliable, definitions drifted, or confounding is unresolved. It is better to say "we don't know yet" than to produce false confidence.
- Reusing a KPI name after changing numerator, denominator, or exclusions -> trend comparisons become invalid.
- Comparing daily, weekly, and monthly grains in one chart -> movement looks real but is mostly aggregation noise.
- Showing percentages without underlying counts -> leadership overreacts to tiny denominators.
- Using a pretty chart instead of the right chart -> the output looks polished but hides the actual decision signal.
- Hunting for interesting cuts after seeing the result -> narrative follows chance instead of evidence.
- Shipping automated reports without metric owners or caveats -> bad numbers spread faster than they can be corrected.
- Treating observational patterns as causal proof -> action plans get built on correlation alone.
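The cut-hunting pitfall is just arithmetic: with no real effects at all, the chance of at least one "significant" result grows quickly with the number of cuts examined.

```python
# With 20 independent tests at alpha = 0.05 and no true effects,
# the chance of at least one false positive is 1 - 0.95^20.
alpha, m = 0.05, 20
p_any_false_positive = 1 - (1 - alpha) ** m
print(f"{p_any_false_positive:.0%}")  # roughly 64%

# Bonferroni correction: spend the error budget across all m comparisons.
alpha_per_test = alpha / m
```

Bonferroni is conservative; the point is simply that the threshold must account for how many cuts were examined, including the ones that did not make it into the deck.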
| Question type | Approach | Key output |
| --- | --- | --- |
| "Is X different from Y?" | Hypothesis test | p-value + effect size + CI |
| "What predicts Z?" | Regression/correlation | Coefficients + R² + residual check |
| "How do users behave over time?" | Cohort analysis | Retention curves by cohort |
| "Are these groups different?" | Segmentation | Profiles + statistical comparison |
| "What's unusual?" | Anomaly detection | Flagged points + context |

For technique details and when to use each, see techniques.md.
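As one sketch of the "Is X different from Y?" row, a permutation test reports an effect size alongside the p-value. The data is hypothetical and the method choice should still follow techniques.md.

```python
import random

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of means."""
    rng = random.Random(seed)
    observed = sum(y) / len(y) - sum(x) / len(x)   # effect size: mean diff
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[len(x):]) / len(y)
                - sum(pooled[:len(x)]) / len(x))
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm  # effect size, two-sided p-value

# Hypothetical per-user measurements for two groups.
x = [12, 14, 11, 13, 12, 15, 13, 14]
y = [15, 17, 14, 16, 18, 15, 16, 17]
effect, p = permutation_test(x, y)
print(f"effect={effect:.2f}, p={p:.4f}")
```

Reporting both numbers keeps "statistically significant" and "practically important" visibly separate, as the checklist above requires.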
- Lead with the insight, not the methodology
- Quantify uncertainty - ranges, not point estimates
- State limitations - what this analysis can't tell you
- Recommend next steps - what would strengthen the conclusion
- User wants to "prove" a predetermined conclusion
- Sample size too small for reliable inference
- Data quality issues that invalidate analysis
- Confounders that can't be controlled for
This skill makes no external network requests.

| Endpoint | Data Sent | Purpose |
| --- | --- | --- |
| None | None | N/A |

No data is sent externally.
Data that leaves your machine: Nothing by default.
Data that stays local: Nothing by default.

This skill does NOT:
- Access undeclared external endpoints.
- Store credentials or raw exports in hidden local memory files.
- Create or depend on local folder systems for persistence.
- Create automations or background jobs without explicit user confirmation.
- Rewrite its own instruction source files.
Install with clawhub install <slug> if the user confirms:
- sql - query design and review for reliable data extraction.
- csv - cleanup and normalization for tabular inputs before analysis.
- dashboard - implementation patterns for KPI visualization layers.
- report - structured stakeholder-facing deliverables after analysis.
- business-intelligence - KPI systems and operating cadence beyond one-off analysis.
If useful: clawhub star data-analysis
Stay updated: clawhub sync
Data access, storage, extraction, analysis, reporting, and insight generation.
Largest current source with strong distribution and engagement signals.