Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Track and analyze OpenClaw token usage across main, cron, and sub-agent sessions with category, client, model, and tool attribution. Use when the user asks where tokens are being spent, wants daily/weekly token reports, needs per-session drilldowns, or is planning token-cost optimizations and needs evidence from transcript data.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Use this skill to produce token usage reports from local OpenClaw data. It parses session transcripts (.jsonl), session metadata, and cron definitions, then reports usage by category, client, tool, model, and top token consumers.
Run: $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d
Basic report:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d

Focus on selected breakdowns:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 1d \
  --breakdown tools,category,client

Analyze one session:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --session agent:main:cron:d3d76f7a-7090-41c3-bb19-e2324093f9b1

Export JSON:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 30d \
  --format json \
  --output $OPENCLAW_WORKSPACE/token-usage/token-usage-30d.json

Persist daily snapshot:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 1d \
  --save

This writes JSON to: $OPENCLAW_WORKSPACE/token-usage/daily/YYYY-MM-DD.json
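The --period flag takes window strings such as 7d, 1d, and 30d. A minimal sketch of how such a value might be interpreted follows; the real CLI's accepted units are not documented here, so support for anything beyond day windows (e.g. `h`) is an assumption:

```python
from datetime import timedelta

def parse_period(period: str) -> timedelta:
    """Parse a --period value like '7d' or '30d' into a timedelta.

    Hypothetical helper: the shipped tool may accept different units
    or reject hour windows entirely.
    """
    unit = period[-1]
    value = int(period[:-1])
    units = {"d": timedelta(days=1), "h": timedelta(hours=1)}
    if unit not in units:
        raise ValueError(f"unsupported period unit: {unit!r}")
    return value * units[unit]
```

For example, `parse_period("7d")` yields a seven-day window, which the reporter would subtract from the current time to pick which transcripts to scan.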
Sessions index:
$OPENCLAW_DATA_DIR/agents/main/sessions/sessions.json

Session transcripts:
$OPENCLAW_DATA_DIR/agents/main/sessions/*.jsonl

Cron definitions:
$OPENCLAW_DATA_DIR/cron/jobs.json

The parser reads assistant usage fields for token counts and uses tool-call records for attribution.
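A transcript is a JSON Lines file, one record per event. A minimal sketch of summing usage from one transcript, assuming assistant records carry a `usage` object with `input_tokens`/`output_tokens` fields (the actual field names in OpenClaw transcripts may differ):

```python
import json
from pathlib import Path

def sum_transcript_tokens(transcript: Path) -> int:
    """Sum token usage across one .jsonl session transcript.

    Assumption: usage lives at record["message"]["usage"] or
    record["usage"]; records without usage (tool results, system
    events) are skipped.
    """
    total = 0
    for line in transcript.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        message = record.get("message")
        usage = (message or {}).get("usage") if isinstance(message, dict) else None
        usage = usage or record.get("usage")
        if usage:
            total += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
    return total
```

Running this over every `*.jsonl` under the sessions directory and keying results by session ID is essentially the aggregation step the reporter performs before grouping by category, client, tool, and model.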
Tool token attribution is heuristic: an assistant message's tokens are split across the tool calls made in that message. A session's totalTokens may come from either the session index metadata or the sum of transcript usage records; the larger of the two is used. Client detection is rules-based (personal, bonsai, mixed, unknown) and relies on path, domain, and email markers.
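To make the attribution heuristic concrete, here is a sketch that splits one message's tokens across its tool calls, assuming an even split with the remainder going to the earliest calls so the parts sum back to the total. The even split and the function name are illustrative assumptions, not the skill's actual implementation:

```python
def attribute_tool_tokens(message_tokens: int, tool_calls: list[str]) -> dict[str, int]:
    """Divide one assistant message's tokens across its tool calls.

    Assumption: an even split, with leftover tokens assigned to the
    first calls. Distinct call names are assumed; repeated names would
    need per-call IDs instead of a plain dict.
    """
    if not tool_calls:
        return {}
    share, remainder = divmod(message_tokens, len(tool_calls))
    return {
        name: share + (1 if index < remainder else 0)
        for index, name in enumerate(tool_calls)
    }
```

For instance, attributing 10 tokens over three calls gives 4/3/3, so per-tool totals always reconcile with the message-level usage they came from.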
Run:
python3 $OPENCLAW_SKILLS_DIR/skill-creator/scripts/quick_validate.py \
  $OPENCLAW_SKILLS_DIR/token-counter
See: references/classification-rules.md for category/client detection logic and keyword mapping.