Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Capture, normalize, and report metrics across any domain with reusable dimensions, programmable formulas, and scalable reporting workflows.
Instead of installing manually, hand the extracted package to your coding agent with a concrete install brief.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
On first use, read setup.md for integration behavior and memory initialization.
Use this skill when the user needs to define, track, analyze, or report metrics for any domain such as social media, sales, product, operations, finance, or personal systems. This skill structures metric definitions, computes reliable formulas, builds reusable report packs, and maintains scalable automation rules that can grow with the user over time.
Working memory lives in ~/metrics/. See memory-template.md for base structure and status behavior.

```
~/metrics/
├── memory.md      # HOT: goals, active metrics, reporting cadence
├── registry/      # WARM: metric contracts and dimension dictionaries
├── formulas/      # WARM: formula specs with version history
├── reports/       # WARM: report outputs by cadence and stakeholder
├── automations/   # WARM: scheduled checks and alert policies
└── archive/       # COLD: retired metrics and old report cycles
```
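The layout above can be bootstrapped with a few lines of standard-library Python. This is an illustrative sketch, not part of the skill itself; the folder names come straight from the tree, and `init_memory` is a hypothetical helper name.

```python
from pathlib import Path

# Subdirectories from the ~/metrics/ memory layout; memory.md starts empty.
SUBDIRS = ["registry", "formulas", "reports", "automations", "archive"]

def init_memory(base: Path) -> list[str]:
    """Create the working-memory tree and return the entries it contains."""
    base.mkdir(parents=True, exist_ok=True)
    for name in SUBDIRS:
        (base / name).mkdir(exist_ok=True)
    (base / "memory.md").touch()
    return sorted(p.name for p in base.iterdir())
```

Using `mkdir(exist_ok=True)` makes the bootstrap idempotent, so re-running it never clobbers an existing installation.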
Load only the file needed for the current task to keep context focused.

| Topic | File |
| --- | --- |
| Setup and integration | setup.md |
| Memory schema | memory-template.md |
| Metric contract design | metric-registry.md |
| Formula design and governance | formula-playbook.md |
| Report cadences and templates | reporting-pack.md |
| Automation and alerting patterns | automation-patterns.md |
| Data validation and quality gates | data-quality.md |
Every metric must have one clear contract: business meaning, numerator, denominator, source tables, update latency, and owner. Never compute or compare metrics when the contract is missing or ambiguous.
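One way to make the contract rule enforceable is to model it as a record and refuse to compute when any field is blank. A minimal sketch; `MetricContract` and `is_complete` are hypothetical names, and the fields mirror the contract elements listed above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricContract:
    name: str
    meaning: str              # business meaning
    numerator: str
    denominator: str
    source_tables: tuple[str, ...]
    update_latency: str       # e.g. "daily by 06:00 UTC"
    owner: str

def is_complete(c: MetricContract) -> bool:
    # Never compute or compare when the contract is missing or ambiguous.
    return all([c.name, c.meaning, c.numerator, c.denominator,
                c.source_tables, c.update_latency, c.owner])
```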
Raw events are evidence. Metrics are interpreted aggregates. Keep them separate. Store and reason in this order:
1. Raw signal
2. Normalized base metric
3. Derived metric
4. Decision recommendation
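The layering can be sketched as a small pipeline where raw events are never mutated and each layer is a pure function of the one below it. The event shape and function names here are illustrative assumptions, not part of the skill:

```python
# Raw signal: untouched evidence.
raw_events = [
    {"user": "a", "action": "click"},
    {"user": "a", "action": "view"},
    {"user": "b", "action": "view"},
]

def base_metric(events, action):
    # Normalized base metric: count of one action type.
    return sum(1 for e in events if e["action"] == action)

def derived_metric(events):
    # Derived metric: click-through rate = clicks / views.
    views = base_metric(events, "view")
    return base_metric(events, "click") / views if views else 0.0
```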
When users ask for "the same metric but by X", add a dimension instead of creating a duplicate metric. Common high-value dimensions:
- Time grain
- Source/channel
- Segment/persona
- Geography
- Product or workflow stage
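In code, "add a dimension" means grouping one metric by a key rather than defining a second metric. A sketch under assumed row shapes (`signups_by` is a hypothetical helper):

```python
from collections import defaultdict

rows = [
    {"channel": "email",  "signups": 3},
    {"channel": "search", "signups": 5},
    {"channel": "email",  "signups": 2},
]

def signups_by(rows, dimension):
    # One metric, cut by any dimension key present on the rows.
    totals = defaultdict(int)
    for r in rows:
        totals[r[dimension]] += r["signups"]
    return dict(totals)
```

The same function serves "signups by channel", "by geography", or "by segment" with no duplicate metric definition.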
Formulas evolve. Comparability fails when formula changes are not tracked. For every formula update, store:
- version
- change reason
- impact expectation
- backfill policy
- first report date with new logic
A report is incomplete unless each section ties to a decision owner and explicit next action. Minimum output block for every report:
- What changed
- Why it changed
- What to do now
- Who owns the action
- When to review again
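The minimum block doubles as a completeness gate: reject any report missing a section. A sketch with assumed key names (`is_report_complete` is hypothetical):

```python
# Section keys mirror the minimum output block above.
REQUIRED_SECTIONS = (
    "what_changed", "why_it_changed", "what_to_do_now",
    "action_owner", "review_date",
)

def is_report_complete(report: dict) -> bool:
    # Every section must be present and non-empty.
    return all(report.get(k) for k in REQUIRED_SECTIONS)
```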
Alerts without response rules create noise. Each threshold must include:
- trigger condition
- severity level
- owner
- first response action
- escalation condition
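Bundling the response rules into the threshold itself guarantees no alert fires without an owner attached. A minimal sketch; `AlertPolicy` and `evaluate` are illustrative names:

```python
from dataclasses import dataclass

@dataclass
class AlertPolicy:
    trigger: float        # threshold value
    severity: str         # e.g. "warning" or "critical"
    owner: str
    first_response: str   # first action the owner takes
    escalation: str       # condition under which the alert escalates

def evaluate(value: float, policy: AlertPolicy):
    # Fire only past the threshold; the alert carries its owner and
    # first response so it is actionable rather than noise.
    if value > policy.trigger:
        return {"severity": policy.severity, "owner": policy.owner,
                "do_first": policy.first_response}
    return None
```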
Build reusable templates for daily, weekly, monthly, and campaign reports so the system can scale across teams and domains. Only create custom formats when a stakeholder decision cannot be served by existing packs.
- Mixing different metric definitions under one name -> trend lines become invalid.
- Changing formulas without version notes -> historical comparisons break silently.
- Reporting totals without segment cuts -> root causes remain hidden.
- Creating too many vanity metrics -> operators lose focus on decision metrics.
- Sending alerts without action ownership -> teams ignore notifications.
- Adding one-off dashboards for every request -> reporting system becomes unmaintainable.
Data that leaves your machine:
- None by default.

Data that stays local:
- Metrics context and definitions under ~/metrics/.
- Formula versions, report logs, and alert policies stored locally.

This skill does NOT:
- Access files outside ~/metrics/ for memory storage.
- Send metrics to third-party APIs by default.
- Create background automations without explicit user confirmation.
Install with `clawhub install <slug>` if the user confirms:
- analytics — metric analysis patterns and interpretation workflows.
- dashboard — KPI visualization design and reporting layouts.
- report — structured reporting outputs for stakeholders.
- sql — query generation for metric extraction pipelines.
- excel-xlsx — spreadsheet-based metric operations and exports.
If useful: `clawhub star metrics`
Stay updated: `clawhub sync`
Data access, storage, extraction, analysis, reporting, and insight generation.