โ† All skills
Tencent SkillHub ยท Data Analysis

Token Watch

Track and analyze token usage and costs across AI providers with budget alerts, model cost comparison, optimization tips, and local data storage.



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
LICENSE.md, README.md, manifest.yaml, SKILL.md, tokenwatch.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of working through the steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.2.3

Documentation

Primary doc: SKILL.md (30 sections)

TokenWatch

Track, analyze, and optimize token usage and costs across AI providers. Set budgets, get alerts, compare models, and reduce your spend.

Free and open-source (MIT License) · Zero dependencies · Works locally · No API keys required

Why This Skill?

After OpenAI's acquisition of OpenClaw, token costs are the #1 concern for power users. This skill gives you full visibility into what you're spending, where it's going, and exactly how to reduce it.

Problems it solves:

  • You don't know how much you're spending until the bill arrives
  • No way to compare costs across providers before choosing a model
  • No alerts when you're approaching your budget
  • No actionable suggestions for reducing spend

1. Record Usage & Auto-Calculate Costs

```python
from tokenwatch import TokenWatch

monitor = TokenWatch()
monitor.record_usage(
    model="claude-haiku-4-5-20251001",
    input_tokens=1200,
    output_tokens=400,
    task_label="summarize article"
)
# ✅ Recorded: $0.0032  (1,200 × $1.00/1M input + 400 × $5.00/1M output)
```

2. Auto-Record from API Responses

```python
from tokenwatch import record_from_anthropic_response, record_from_openai_response

# Anthropic
response = client.messages.create(model="claude-haiku-4-5-20251001", ...)
record_from_anthropic_response(monitor, response, task_label="my task")

# OpenAI
response = client.chat.completions.create(model="gpt-4o-mini", ...)
record_from_openai_response(monitor, response, task_label="my task")
```

3. Set Budgets with Alerts

```python
monitor.set_budget(
    daily_usd=1.00,
    weekly_usd=5.00,
    monthly_usd=15.00,
    per_call_usd=0.10,
    alert_at_percent=80.0  # Alert at 80% of budget
)
# ✅ Budget set: daily=$1.0, weekly=$5.0, monthly=$15.0
# 🚨 BUDGET ALERT fires automatically when a threshold is crossed
```

4. Dashboard

```python
print(monitor.format_dashboard())
```

```
💰 SPENDING SUMMARY
Today: $0.0042 (4 calls, 13,600 tokens)
Week:  $0.0231 (18 calls, 67,200 tokens)
Month: $0.1847 (92 calls, 438,000 tokens)

📋 BUDGET STATUS
Daily:   [████░░░░░░░░░░░░░░░░] 42% $0.0042 / $1.00 ✅
Monthly: [███████░░░░░░░░░░░░░] 37% $0.1847 / $0.50 ⚠️

💡 OPTIMIZATION TIPS
🔴 Swap Opus → Sonnet for non-reasoning tasks (save ~$8.20/mo)
🟡 High avg cost/call on gpt-4o - reduce prompt length
```

5. Compare Models Before Calling

```python
# For 2000 input + 500 output tokens:
for m in monitor.compare_models(2000, 500)[:6]:
    print(f"{m['model']:<42} ${m['cost_usd']:.6f}")
```

```
gemini-2.5-flash                           $0.000300
mistral-small-2501                         $0.000350
gpt-4o-mini                                $0.000600
claude-haiku-4-5-20251001                  $0.003600
mistral-large-2501                         $0.007000
gemini-2.5-pro                             $0.007500
```

6. Estimate Before You Call

```python
estimate = monitor.estimate_cost(
    "claude-sonnet-4-5-20250929", input_tokens=5000, output_tokens=1000
)
print(f"Estimated cost: ${estimate['estimated_cost_usd']:.6f}")
```

7. Optimization Suggestions

```python
suggestions = monitor.get_optimization_suggestions()
for s in suggestions:
    savings = s.get("estimated_monthly_savings_usd", 0)
    print(f"[{s['priority'].upper()}] {s['message']}")
    if savings:
        print(f"  → Save ~${savings:.2f}/month")
```

8. Export Reports

```python
monitor.export_report("monthly_report.json", period="month")
```

Supported Models (Feb 2026)

41 models across 10 providers, updated Feb 16, 2026.

| Provider | Model | Input/1M | Output/1M |
| --- | --- | --- | --- |
| Anthropic | claude-opus-4-6 | $5.00 | $25.00 |
| Anthropic | claude-opus-4-5 | $5.00 | $25.00 |
| Anthropic | claude-sonnet-4-5-20250929 | $3.00 | $15.00 |
| Anthropic | claude-haiku-4-5-20251001 | $1.00 | $5.00 |
| OpenAI | gpt-5.2-pro | $21.00 | $168.00 |
| OpenAI | gpt-5.2 | $1.75 | $14.00 |
| OpenAI | gpt-5 | $1.25 | $10.00 |
| OpenAI | gpt-4.1 | $2.00 | $8.00 |
| OpenAI | gpt-4.1-mini | $0.40 | $1.60 |
| OpenAI | gpt-4.1-nano | $0.10 | $0.40 |
| OpenAI | o3 | $10.00 | $40.00 |
| OpenAI | o4-mini | $1.10 | $4.40 |
| Google | gemini-3-pro | $2.00 | $12.00 |
| Google | gemini-3-flash | $0.50 | $3.00 |
| Google | gemini-2.5-pro | $1.25 | $10.00 |
| Google | gemini-2.5-flash | $0.30 | $2.50 |
| Google | gemini-2.5-flash-lite | $0.10 | $0.40 |
| Google | gemini-2.0-flash | $0.10 | $0.40 |
| Mistral | mistral-large-2411 | $2.00 | $6.00 |
| Mistral | mistral-medium-3 | $0.40 | $2.00 |
| Mistral | mistral-small | $0.10 | $0.30 |
| Mistral | mistral-nemo | $0.02 | $0.10 |
| Mistral | devstral-2 | $0.40 | $2.00 |
| xAI | grok-4 | $3.00 | $15.00 |
| xAI | grok-3 | $3.00 | $15.00 |
| xAI | grok-4.1-fast | $0.20 | $0.50 |
| Kimi | kimi-k2.5 | $0.60 | $3.00 |
| Kimi | kimi-k2 | $0.60 | $2.50 |
| Kimi | kimi-k2-turbo | $1.15 | $8.00 |
| Qwen | qwen3.5-plus | $0.11 | $0.44 |
| Qwen | qwen3-max | $0.40 | $1.60 |
| Qwen | qwen3-vl-32b | $0.91 | $3.64 |
| DeepSeek | deepseek-v3.2 | $0.14 | $0.28 |
| DeepSeek | deepseek-r1 | $0.55 | $2.19 |
| DeepSeek | deepseek-v3 | $0.27 | $1.10 |
| Meta | llama-4-maverick | $0.27 | $0.85 |
| Meta | llama-4-scout | $0.18 | $0.59 |
| Meta | llama-3.3-70b | $0.23 | $0.40 |
| MiniMax | minimax-m2.5 | $0.30 | $1.20 |
| MiniMax | minimax-m1 | $0.43 | $1.93 |
| MiniMax | minimax-text-01 | $0.20 | $1.10 |

To add a custom model, add it to the PROVIDER_PRICING dict at the top of tokenwatch.py.
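The per-call arithmetic behind this table can be sketched in a few lines. The dict shape below is illustrative only; the actual PROVIDER_PRICING structure in tokenwatch.py may differ.

```python
# Prices are USD per 1M tokens, taken from the table above.
# NOTE: this PRICING dict is a sketch, not the real PROVIDER_PRICING layout.
PRICING = {
    "claude-haiku-4-5-20251001": {"input": 1.00, "output": 5.00},
    "gpt-4.1-mini": {"input": 0.40, "output": 1.60},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one call, given per-1M-token rates."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

cost = call_cost("claude-haiku-4-5-20251001", 1200, 400)
print(f"${cost:.4f}")  # $0.0032
```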

TokenWatch(storage_path)

Initialize monitor. Data stored in .tokenwatch/ by default.
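As a rough sketch of what local-only storage under .tokenwatch/ could look like: the file name (usage.jsonl), the JSON-lines format, and the append_record helper below are all assumptions for illustration, not the actual tokenwatch internals.

```python
import json
from pathlib import Path

def append_record(storage_dir: str, record: dict) -> Path:
    """Append one usage record as a JSON line under storage_dir.
    Hypothetical helper; the real on-disk format may differ."""
    path = Path(storage_dir)
    path.mkdir(parents=True, exist_ok=True)
    log = path / "usage.jsonl"
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return log

append_record(".tokenwatch", {"model": "gpt-4.1-mini", "cost_usd": 0.0006})
```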

record_usage(model, input_tokens, output_tokens, task_label, session_id)

Record a single API call. Returns TokenUsageRecord with calculated cost.

set_budget(daily_usd, weekly_usd, monthly_usd, per_call_usd, alert_at_percent)

Configure spending limits. Alerts fire automatically when thresholds are crossed.
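The alert_at_percent check reduces to a one-line comparison; budget_alert below is a hypothetical helper sketching that logic, not the actual tokenwatch implementation.

```python
def budget_alert(spent_usd: float, limit_usd: float,
                 alert_at_percent: float = 80.0) -> bool:
    """True once spend has crossed alert_at_percent of the limit."""
    return spent_usd >= limit_usd * alert_at_percent / 100.0

print(budget_alert(0.85, 1.00))  # True: 85% of a $1.00 daily budget
```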

get_spend(period)

Get aggregated spend. Period: "today", "week", "month", "all", or "YYYY-MM-DD".

get_spend_by_model(period)

Spending breakdown by model, sorted by cost descending.

get_spend_by_provider(period)

Spending breakdown by provider.
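Both breakdowns amount to a group-and-sort over stored records. The sketch below shows the per-model case with illustrative records; field names mirror the usage example earlier but are assumptions about the stored schema.

```python
from collections import defaultdict

records = [
    {"model": "gpt-4.1-mini", "cost_usd": 0.0006},
    {"model": "claude-haiku-4-5-20251001", "cost_usd": 0.0032},
    {"model": "gpt-4.1-mini", "cost_usd": 0.0009},
]

def spend_by_model(records):
    """Total cost per model, sorted by cost descending."""
    totals = defaultdict(float)
    for r in records:
        totals[r["model"]] += r["cost_usd"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for model, usd in spend_by_model(records):
    print(f"{model:<30} ${usd:.4f}")
```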

compare_models(input_tokens, output_tokens)

Compare costs across all known models. Returns list sorted cheapest first.

estimate_cost(model, input_tokens, output_tokens)

Estimate cost before making a call.

get_optimization_suggestions()

Analyze usage and return ranked suggestions with estimated monthly savings.

format_dashboard()

Human-readable spending dashboard with budget bars and tips.

export_report(output_file, period)

Export full report to JSON.

record_from_anthropic_response(monitor, response, task_label)

Helper to auto-record from Anthropic SDK response object.

record_from_openai_response(monitor, response, task_label)

Helper to auto-record from OpenAI SDK response object.

Privacy & Security

  • ✅ Zero telemetry: no data sent anywhere
  • ✅ Local-only storage: everything lives in .tokenwatch/ on your machine
  • ✅ No API keys required: the monitor itself needs no credentials
  • ✅ No authentication: no accounts or logins needed
  • ✅ Full transparency: MIT licensed, source code included

[1.2.3] - 2026-02-16

  • 📋 Updated SKILL.md model table to match code: 41 models across 10 providers

[1.2.0] - 2026-02-16

  • ✨ Added DeepSeek, Meta Llama, and MiniMax providers
  • ✨ Expanded to 41 models across 10 providers
  • ✨ Updated all Anthropic/OpenAI/Google/Mistral pricing to Feb 2026 rates

[1.1.0] - 2026-02-16

  • ✨ Added xAI Grok, Kimi (Moonshot), and Qwen (Alibaba)
  • ✨ Expanded to 32 models across 7 providers

[1.0.0] - 2026-02-16

  • ✨ Initial release: TokenWatch
  • ✨ Pricing table for 11 models across 5 providers
  • ✨ Budget alerts: daily, weekly, monthly, per-call thresholds
  • ✨ Model cost comparison, cost estimation, optimization suggestions
  • ✨ Auto-hooks for Anthropic and OpenAI response objects
  • ✨ Dashboard, JSON export, local-only storage, MIT licensed

Last Updated: February 16, 2026
Current Version: 1.2.3
Status: Active & Community-Maintained
© 2026 UnisAI Community

Category context

Data access, storage, extraction, analysis, reporting, and insight generation.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 Docs · 1 Script · 1 Config
  • SKILL.md Primary doc
  • LICENSE.md Docs
  • README.md Docs
  • tokenwatch.py Scripts
  • manifest.yaml Config