Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Prevents LLM API 429 errors by estimating tokens, tracking quotas, throttling requests, detecting duplicates, caching responses, and automatically falling back to alternate models.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Version: 1.5.0
Author: Aoineco & Co.
License: MIT
Tags: rate-limit, 429, token-management, cost-optimization, llm-guard, high-performance
Prevents LLM API 429 (Rate Limit / Resource Exhausted) errors by intercepting requests before they're sent. Designed for users on free/low-cost API plans who need maximum intelligence per dollar. Core philosophy: "Intelligence is measured not by how much you spend, but by how little you need."
When using LLM APIs (especially Google Gemini Flash with its 1M TPM limit):
- Large documents (docx, PDFs) can consume the entire minute quota in one request
- Failed requests still count toward token usage
- Retry loops after 429 errors waste more tokens → a death spiral
- There is no built-in way to detect runaway/duplicate requests
| Feature | Description |
| --- | --- |
| Pre-flight Token Estimation | Estimates token count before the API call (CJK-aware, no tiktoken dependency) |
| Real-time Quota Tracking | Tracks per-model per-minute token usage with a sliding window |
| Smart Throttle | Auto-waits when quota > 80%, blocks at > 95% |
| Duplicate Detection | Blocks identical requests within a 60s window (3+ = runaway) |
| Response Caching | Caches successful responses for duplicate requests |
| Auto Model Fallback | Switches to a cheaper/available model when the primary is exhausted |
| 429 Error Parser | Extracts the exact retry delay from Google/Anthropic error responses |
| Batch vs. Mistake Detection | Distinguishes intentional bulk processing from error loops |
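The estimator itself isn't shown in this excerpt. As a rough sketch of how a CJK-aware count can work without tiktoken (the character ranges and the ~4-characters-per-token ratio below are assumptions, not the package's actual constants):

```python
# Hypothetical sketch, not the package's actual estimator.
# CJK scripts tokenize at roughly one token per character; Latin text
# averages about four characters per token.
def estimate_tokens(text: str) -> int:
    cjk = sum(
        1 for ch in text
        if "\u4e00" <= ch <= "\u9fff"   # CJK Unified Ideographs
        or "\u3040" <= ch <= "\u30ff"   # Japanese kana
        or "\uac00" <= ch <= "\ud7af"   # Korean hangul
    )
    other = len(text) - cjk
    return cjk + other // 4 + 1
```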
Pre-configured quotas for:
- gemini-3-flash (1M TPM)
- gemini-3-pro (2M TPM)
- claude-haiku (50K TPM)
- claude-sonnet (200K TPM)
- claude-opus (200K TPM)
- gpt-4o (800K TPM)
- deepseek (1M TPM)

Custom quotas can be added for any model.
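Each quota is enforced against a per-minute sliding window. A minimal sketch of how such a tracker can work (the class name and mechanics are assumptions; the package's internals may differ):

```python
import time
from collections import deque

class SlidingQuota:
    """Illustrative sliding-window TPM tracker (assumed design)."""

    def __init__(self, tpm_limit: int):
        self.tpm_limit = tpm_limit
        self.events: deque[tuple[float, int]] = deque()  # (timestamp, tokens)

    def record(self, tokens: int) -> None:
        self.events.append((time.monotonic(), tokens))

    def used_this_minute(self) -> int:
        cutoff = time.monotonic() - 60
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()  # evict usage older than 60s
        return sum(tokens for _, tokens in self.events)

    def usage_pct(self) -> float:
        return 100.0 * self.used_this_minute() / self.tpm_limit
```

A throttle layer would then compare usage_pct() against the 80% (wait) and 95% (block) thresholds from the feature table.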
```python
import time

from token_guard import TokenGuard

guard = TokenGuard()

# Before every API call:
decision = guard.check(prompt_text, model="gemini-3-flash")

if decision.action == "proceed":
    response = call_your_api(prompt_text)
    guard.record_usage(decision.estimated_tokens, model="gemini-3-flash")
    guard.cache_response(prompt_text, response)
elif decision.action == "wait":
    time.sleep(decision.wait_seconds)  # then retry
elif decision.action == "fallback":
    response = call_your_api(prompt_text, model=decision.fallback_model)
elif decision.action == "block":
    print(f"Blocked: {decision.reason}")

# If you get a 429 error anyway:
guard.record_429("gemini-3-flash", retry_delay=53.0)
```
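record_429 takes the retry delay the provider reported. If you need to pull that value out of a raw error body yourself, a hedged sketch assuming Google's RPC-style 429 payload (the error.details / RetryInfo field names come from common Google error bodies, not from this package):

```python
import json
import re

def parse_retry_delay(body: str) -> float | None:
    """Extract retryDelay (e.g. "53s") from a Google-style 429 body."""
    try:
        details = json.loads(body)["error"].get("details", [])
    except (ValueError, KeyError):
        return None
    for d in details:
        if d.get("@type", "").endswith("RetryInfo"):
            m = re.match(r"([\d.]+)s", d.get("retryDelay", ""))
            if m:
                return float(m.group(1))
    return None
```

When parsing fails, fall back to a conservative default delay before retrying.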
Add to your agent's config or use it as middleware:

```yaml
skills:
  - token-guard
```

The agent can invoke TokenGuard before any LLM API call to prevent quota exhaustion.
```
token-guard/
├── SKILL.md            # This file
└── scripts/
    └── token_guard.py  # Main engine (zero external dependencies)
```
{ "models": { "gemini-3-flash": { "tpm_limit": 1000000, "used_this_minute": 750000, "remaining": 250000, "usage_pct": "75.0%", "status": "๐ข OK" } }, "stats": { "total_checks": 42, "tokens_saved": 128000, "blocks": 3, "fallbacks": 2 } }
Pure Python 3.10+. No pip install needed. No tiktoken, no external API calls. Designed for the $7 Bootstrap Protocol: every byte counts.