Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Track and visualize your agent's operational metrics. Record API calls, task completions, uptime, errors, and custom counters. Generate text-based dashboards...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Track your agent's operational health. Record events, count things, measure durations, and generate reports.
Agents run 24/7 but have no way to answer basic questions: How many tasks did I complete? What's my error rate? How long do API calls take? Which skills do I use most? Without metrics, you're flying blind.
python3 {baseDir}/scripts/metrics.py record --name api_calls --value 1 --tags '{"provider": "openrouter", "model": "gpt-4"}'
python3 {baseDir}/scripts/metrics.py timer --name task_duration --seconds 12.5 --tags '{"task": "scan_skill"}'
python3 {baseDir}/scripts/metrics.py counter --name posts_published --increment 1
python3 {baseDir}/scripts/metrics.py error --name moltbook_verify_fail --message "Challenge solver returned wrong answer"
python3 {baseDir}/scripts/metrics.py dashboard
python3 {baseDir}/scripts/metrics.py view --period day
python3 {baseDir}/scripts/metrics.py view --name api_calls --period week
python3 {baseDir}/scripts/metrics.py export --format json > metrics.json
python3 {baseDir}/scripts/metrics.py export --format csv > metrics.csv
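If you only have the JSON export in hand, converting it to CSV yourself takes a few lines. A minimal sketch, assuming the export is a flat list of JSON objects (the field layout is an assumption, not the skill's documented schema):

```python
import csv
import json

def json_to_csv(json_path, csv_path):
    """Flatten a JSON export (a list of metric objects) into CSV rows."""
    with open(json_path) as f:
        rows = json.load(f)
    # Union of all keys, so rows with different fields still fit one header
    fields = sorted({key for row in rows for key in row})
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)  # missing keys become empty cells
```

DictWriter fills missing keys with an empty string by default, so heterogeneous entries (counters without tags, timers with them) land in one consistent table.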
The text-based dashboard shows:
- Uptime since first metric recorded
- Total events today
- Top metrics by count
- Error rate
- Average durations for timed operations
- Custom counter values
- counter → Things you count (posts published, skills scanned, comments made)
- timer → Things you measure in seconds (API response time, task duration)
- event → Things that happened (errors, deployments, restarts)
- gauge → Current values (karma, budget remaining, queue depth)
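The four types above can share one JSON event shape. A minimal sketch of what a stored entry might look like; the field names (`kind`, `name`, `value`, `tags`, `ts`) are assumptions for illustration, not the skill's actual schema:

```python
import time

def make_metric(kind, name, value=1, tags=None):
    """Build one metric entry; kind is counter, timer, event, or gauge."""
    assert kind in {"counter", "timer", "event", "gauge"}
    return {
        "kind": kind,
        "name": name,
        "value": value,      # count, seconds, or current level
        "tags": tags or {},  # free-form labels like provider or task
        "ts": time.time(),   # Unix timestamp of the observation
    }

# One entry per type, mirroring the table above
entries = [
    make_metric("counter", "posts_published"),
    make_metric("timer", "task_duration", 12.5, {"task": "scan_skill"}),
    make_metric("event", "restart"),
    make_metric("gauge", "queue_depth", 7),
]
```

A single tagged shape like this keeps every metric queryable the same way, regardless of type.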
Metrics are stored in ~/.openclaw/metrics/ as daily JSON files. Lightweight, no database required.
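Because each day is a plain JSON file, a short script can fold the directory into summary stats without going through the skill's CLI. A hedged sketch, assuming each file holds a list of entries with `kind` and `name` fields and that errors are stored with kind `"error"` (all assumptions about the on-disk layout):

```python
import json
from collections import Counter
from pathlib import Path

def summarize(metrics_dir):
    """Aggregate daily JSON metric files into counts and an error rate."""
    counts, errors, total = Counter(), 0, 0
    for path in sorted(Path(metrics_dir).glob("*.json")):
        for entry in json.loads(path.read_text()):
            counts[entry["name"]] += 1
            total += 1
            if entry.get("kind") == "error":  # assumed error marker
                errors += 1
    return {
        "total": total,
        "top": counts.most_common(3),           # busiest metrics first
        "error_rate": errors / total if total else 0.0,
    }
```

Flat daily files make this kind of ad-hoc analysis trivial, which is the main payoff of skipping a database.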
Works with the compliance audit trail: log metrics events alongside audit entries for full operational visibility.