← All skills
Tencent SkillHub · Developer Tools

API Rate Limiting

Rate limiting algorithms, implementation strategies, HTTP conventions, tiered limits, distributed patterns, and client-side handling. Use when protecting APIs from abuse, implementing usage tiers, or configuring gateway-level throttling.


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
README.md, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief rather than walking through the steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (13 sections)

Algorithms

| Algorithm | Accuracy | Burst Handling | Best For |
| --- | --- | --- | --- |
| Token Bucket | High | Allows controlled bursts | API rate limiting, traffic shaping |
| Leaky Bucket | High | Smooths bursts entirely | Steady-rate processing, queues |
| Fixed Window | Low | Allows edge bursts (2x) | Simple use cases, prototyping |
| Sliding Window Log | Very High | Precise control | Strict compliance, billing-critical |
| Sliding Window Counter | High | Good approximation | Production APIs (best tradeoff) |

Fixed window problem: a user sends the full limit at 11:59 and again at 12:01, doubling the effective rate across the window boundary. Sliding window variants fix this.

Token Bucket

Bucket holds tokens up to capacity. Tokens refill at a fixed rate. Each request consumes one token.

```python
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_rate = refill_rate  # tokens per second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_refill
        # Refill lazily based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Sliding Window Counter

Hybrid of fixed window and sliding window log: weights the previous window's count by how much of it still overlaps the sliding window.

```python
import time

def sliding_window_allow(key: str, limit: int, window_sec: int) -> bool:
    now = time.time()
    current_window = int(now // window_sec)
    position_in_window = (now % window_sec) / window_sec
    # get_count/increment_count read and bump per-window counters in your store.
    prev_count = get_count(key, current_window - 1)
    curr_count = get_count(key, current_window)
    estimated = prev_count * (1 - position_in_window) + curr_count
    if estimated >= limit:
        return False
    increment_count(key, current_window)
    return True
```
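Of the algorithms in the table above, the leaky bucket has no snippet; here is a minimal single-process sketch for comparison. The class name and fields are illustrative for this example, not a standard API:

```python
import time

class LeakyBucket:
    """Queue of fixed capacity that drains (leaks) at a constant rate,
    turning bursts into a steady outflow."""

    def __init__(self, capacity: int, leak_rate: float):
        self.capacity = capacity    # max queued requests
        self.leak_rate = leak_rate  # requests drained per second
        self.water = 0.0            # current queue depth
        self.last_leak = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Drain the bucket for the time elapsed since the last check.
        self.water = max(0.0, self.water - (now - self.last_leak) * self.leak_rate)
        self.last_leak = now
        if self.water < self.capacity:
            self.water += 1
            return True
        return False
```

Unlike the token bucket, a full leaky bucket rejects every new request until the drain frees capacity, which is why the table pairs it with steady-rate processing rather than bursty API traffic.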

Implementation Options

| Approach | Scope | Best For |
| --- | --- | --- |
| In-memory | Single server | Zero latency, no dependencies |
| Redis (INCR + EXPIRE) | Distributed | Multi-instance deployments |
| API Gateway | Edge | No code, built-in dashboards |
| Middleware | Per-service | Fine-grained per-user/endpoint control |

Use gateway-level limiting as an outer defense plus application-level limiting for fine-grained control.

HTTP Headers

Always return rate limit info, even on successful requests:

```http
RateLimit-Limit: 1000
RateLimit-Remaining: 742
RateLimit-Reset: 1625097600
Retry-After: 30
```

| Header | When to Include |
| --- | --- |
| RateLimit-Limit | Every response |
| RateLimit-Remaining | Every response |
| RateLimit-Reset | Every response |
| Retry-After | 429 responses only |
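As a sketch, the header logic reduces to a small pure function. The function name, parameters, and the 30-second default here are illustrative assumptions, not a specific framework API:

```python
def rate_limit_headers(limit: int, used: int, reset_epoch: int,
                       throttled: bool, retry_after: int = 30) -> dict:
    """Build the standard rate-limit headers for any response.
    Retry-After is attached only when the request was throttled (429)."""
    headers = {
        "RateLimit-Limit": str(limit),
        "RateLimit-Remaining": str(max(0, limit - used)),
        "RateLimit-Reset": str(reset_epoch),
    }
    if throttled:
        headers["Retry-After"] = str(retry_after)
    return headers
```

Clamping Remaining at zero avoids leaking negative counts when a burst overshoots the limit between checks.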

429 Response Body

```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Maximum 1000 requests per hour.",
    "retry_after": 30,
    "limit": 1000,
    "reset_at": "2025-07-01T12:00:00Z"
  }
}
```

Never return 500 or 503 for rate limiting; 429 is the correct status code.

Rate Limit Tiers

Apply limits at multiple granularities:

| Scope | Key | Example Limit | Purpose |
| --- | --- | --- | --- |
| Per-IP | Client IP | 100 req/min | Abuse prevention |
| Per-User | User ID | 1000 req/hr | Fair usage |
| Per-API-Key | API key | 5000 req/hr | Service-to-service |
| Per-Endpoint | Route + key | 60 req/min on /search | Protect expensive ops |

Tiered pricing:

| Tier | Rate Limit | Burst | Cost |
| --- | --- | --- | --- |
| Free | 100 req/hr | 10 | $0 |
| Pro | 5,000 req/hr | 100 | $49/mo |
| Enterprise | 100,000 req/hr | 2,000 | Custom |

Evaluate from most specific to least specific: per-endpoint > per-user > per-IP.
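That evaluation order can be sketched as a loop over scopes from most specific to least specific, where any denial short-circuits. `DenyList`, `evaluate_limits`, and the request fields are illustrative stand-ins, not a prescribed API:

```python
class DenyList:
    """Illustrative stand-in for a real limiter: denies listed keys."""
    def __init__(self, denied: set):
        self.denied = denied

    def allow(self, key: str) -> bool:
        return key not in self.denied

def evaluate_limits(request: dict, limiters: dict) -> bool:
    """Check scopes from most specific to least specific; any denial wins."""
    checks = [
        ("endpoint", f"{request['route']}:{request['api_key']}"),
        ("user", request["user_id"]),
        ("ip", request["ip"]),
    ]
    for scope, key in checks:
        limiter = limiters.get(scope)
        if limiter is not None and not limiter.allow(key):
            return False
    return True
```

Checking the narrowest scope first means an expensive endpoint's limit can trip before the broader per-user or per-IP budgets are even consulted.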

Distributed Rate Limiting

Redis-based pattern for consistent limiting across instances:

```python
import time

def redis_rate_limit(redis, key: str, limit: int, window: int) -> bool:
    pipe = redis.pipeline()
    now = time.time()
    # One counter per fixed window; the key expires after two windows.
    window_key = f"rl:{key}:{int(now // window)}"
    pipe.incr(window_key)
    pipe.expire(window_key, window * 2)
    results = pipe.execute()
    return results[0] <= limit
```

Atomic Lua script (prevents race conditions):

```lua
local key = KEYS[1]
local limit = tonumber(ARGV[1])
local window = tonumber(ARGV[2])
local current = redis.call('INCR', key)
if current == 1 then
  redis.call('EXPIRE', key, window)
end
return current <= limit and 1 or 0
```

Never do a separate GET then SET; the gap between the two calls allows overcounting.

API Gateway Configuration

NGINX:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            limit_req_status 429;
        }
    }
}
```

Kong:

```yaml
plugins:
  - name: rate-limiting
    config:
      minute: 60
      hour: 1000
      policy: redis
      redis_host: redis.internal
```

Client-Side Handling

Clients must handle 429 gracefully:

```typescript
async function fetchWithRetry(url: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res;
    const retryAfter = res.headers.get('Retry-After');
    // Respect Retry-After when present; otherwise exponential backoff with jitter.
    const base = Math.min(1000 * 2 ** attempt, 30000);
    const delay = retryAfter
      ? parseInt(retryAfter, 10) * 1000
      : base / 2 + Math.random() * (base / 2);
    await new Promise(r => setTimeout(r, delay));
  }
  throw new Error('Rate limit exceeded after retries');
}
```

  • Always respect Retry-After when present
  • Use exponential backoff with jitter when absent
  • Implement request queuing for batch operations

Monitoring

Track these metrics:

  • Rate limit hit rate: % of requests returning 429 (alert if >5% sustained)
  • Near-limit warnings: requests where remaining < 10% of limit
  • Top offenders: keys/IPs hitting limits most frequently
  • Limit headroom: how close normal traffic is to the ceiling
  • False positives: legitimate users being rate limited
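The first two metrics reduce to simple ratios over counters you already export. These helper names mirror the list above but the thresholds and signatures are otherwise illustrative:

```python
def hit_rate(total: int, throttled: int) -> float:
    """Fraction of requests answered 429; alert if sustained above 0.05."""
    return throttled / total if total else 0.0

def near_limit(remaining: int, limit: int) -> bool:
    """Flag requests whose remaining quota is under 10% of the limit."""
    return remaining < 0.1 * limit
```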

Anti-Patterns

| Anti-Pattern | Fix |
| --- | --- |
| Application-only limiting | Always combine with infrastructure-level limits |
| No retry guidance | Always include Retry-After header on 429 |
| Inconsistent limits | Same endpoint, same limits across services |
| No burst allowance | Allow controlled bursts for legitimate traffic |
| Silent dropping | Always return 429 so clients can distinguish throttling from errors |
| Global single counter | Per-endpoint counters to protect expensive operations |
| Hard-coded limits | Use configuration, not code constants |

NEVER Do

  • NEVER rate limit health check endpoints: monitoring systems will false-alarm
  • NEVER use client-supplied identifiers as the sole rate limit key: trivially spoofed
  • NEVER return 200 OK when rate limiting: clients must know they were throttled
  • NEVER set limits without measuring actual traffic first: you'll block legitimate users or set limits too high to matter
  • NEVER share counters across unrelated tenants: noisy neighbor problem
  • NEVER skip rate limiting on internal APIs: misbehaving internal services can take down shared infrastructure
  • NEVER implement rate limiting without logging: you need visibility to tune limits and detect abuse

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 Docs
  • SKILL.md Primary doc
  • README.md Docs