Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Track and visualize your agent's operational metrics. Record API calls, task completions, uptime, errors, and custom counters. Generate text-based dashboards...
The download link for this item is known to redirect to a listing or homepage instead of returning a package file. Because no direct package file is available, use the source page and any extracted docs to guide the install.
I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.
I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.
Track your agent's operational health. Record events, count things, measure durations, and generate reports.
Agents run 24/7 but have no way to answer basic questions: How many tasks did I complete? What's my error rate? How long do API calls take? Which skills do I use most? Without metrics, you're flying blind.
python3 {baseDir}/scripts/metrics.py record --name api_calls --value 1 --tags '{"provider": "openrouter", "model": "gpt-4"}'
python3 {baseDir}/scripts/metrics.py timer --name task_duration --seconds 12.5 --tags '{"task": "scan_skill"}'
python3 {baseDir}/scripts/metrics.py counter --name posts_published --increment 1
python3 {baseDir}/scripts/metrics.py error --name moltbook_verify_fail --message "Challenge solver returned wrong answer"
python3 {baseDir}/scripts/metrics.py dashboard
python3 {baseDir}/scripts/metrics.py view --period day
python3 {baseDir}/scripts/metrics.py view --name api_calls --period week
python3 {baseDir}/scripts/metrics.py export --format json > metrics.json
python3 {baseDir}/scripts/metrics.py export --format csv > metrics.csv
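The JSON export makes the data easy to post-process with ordinary tooling. A minimal sketch, assuming the export is a JSON array of event objects with name and value fields (the actual export schema is not documented here):

```python
# Sketch: tally exported metrics by name. Assumes metrics.json holds a
# JSON array of objects like {"name": ..., "value": ..., "tags": {...}};
# the real export schema may differ.
import json
from collections import Counter

with open("metrics.json") as f:
    events = json.load(f)

totals = Counter()
for event in events:
    # Treat a missing value as a single occurrence.
    totals[event["name"]] += event.get("value", 1)

for name, total in totals.most_common():
    print(f"{name}: {total}")
```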
The text-based dashboard shows:
- Uptime since first metric recorded
- Total events today
- Top metrics by count
- Error rate
- Average durations for timed operations
- Custom counter values
Metric types:
- counter: things you count (posts published, skills scanned, comments made)
- timer: things you measure in seconds (API response time, task duration)
- event: things that happened (errors, deployments, restarts)
- gauge: current values (karma, budget remaining, queue depth)
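A stored record for each type presumably carries the type, a name, a value or duration, a timestamp, and optional tags. The shapes below are an illustrative guess at what those records might look like, not the skill's documented schema:

```python
# Illustrative record shapes for the four metric types. Every field name
# here is an assumption, not the skill's documented schema.
import json
from datetime import datetime, timezone

now = datetime.now(timezone.utc).isoformat()

records = [
    {"type": "counter", "name": "posts_published", "value": 1, "ts": now},
    {"type": "timer", "name": "task_duration", "seconds": 12.5, "ts": now,
     "tags": {"task": "scan_skill"}},
    {"type": "event", "name": "moltbook_verify_fail", "ts": now,
     "message": "Challenge solver returned wrong answer"},
    {"type": "gauge", "name": "queue_depth", "value": 42, "ts": now},
]

print(json.dumps(records, indent=2))
```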
Metrics are stored in ~/.openclaw/metrics/ as daily JSON files. Lightweight, no database required.
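Because storage is plain daily JSON files, other tools can read the data directly without going through the CLI. A minimal sketch, assuming one JSON array per day in a file named YYYY-MM-DD.json (the file naming convention is an assumption):

```python
# Sketch: count today's events by reading the daily JSON file directly.
# Assumes files are named YYYY-MM-DD.json and each contains a JSON array;
# the skill's actual layout may differ.
import json
from datetime import date
from pathlib import Path

metrics_dir = Path.home() / ".openclaw" / "metrics"
today_file = metrics_dir / f"{date.today().isoformat()}.json"

if today_file.exists():
    events = json.loads(today_file.read_text())
    print(f"{len(events)} events recorded today")
else:
    print("No metrics recorded yet today")
```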
Works with the compliance audit trail: log metrics events alongside audit entries for full operational visibility.
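For example, each time an audit entry is written, a matching metric can be recorded with a tag linking the two; this uses the documented record flags, but the audit_events name and audit_id tag key are illustrative:

python3 {baseDir}/scripts/metrics.py record --name audit_events --value 1 --tags '{"audit_id": "example-123"}'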