Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Monitor the Claude API for outages and latency spikes with rich Telegram alerts. Status monitoring, latency probes, and automatic recovery notifications.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Monitor the Anthropic/Claude API for outages and latency spikes. Sends rich alerts to Telegram - no agent tokens consumed for status checks.
- Polls status.claude.com every 15 minutes via cron
- Alerts with incident name, latest update text, and per-component status
- Tags incidents as "(not our model)" if e.g. Haiku is affected but you use Sonnet
- Sends an all-clear on recovery
- Zero token cost
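The incident-tagging step above can be sketched as follows. This is a hypothetical reconstruction, not the skill's actual code: it assumes status.claude.com is a standard Statuspage site whose `/api/v2/summary.json` returns an `incidents` list with `name` and `status` fields, and that matching is a simple case-insensitive substring check against `MONITOR_MODEL`.

```python
# Hypothetical sketch of the status-check tagging logic; latency-probe.py
# and the real status script may differ in detail.
MONITOR_MODEL = "sonnet"  # from claude-watchdog.env

def summarize_incidents(summary: dict, monitor_model: str = MONITOR_MODEL) -> list[str]:
    """Build one alert line per open incident, tagging incidents that
    do not mention the model we actually use."""
    lines = []
    for incident in summary.get("incidents", []):
        name = incident["name"]
        tag = "" if monitor_model.lower() in name.lower() else " (not our model)"
        lines.append(f"{name}{tag} - {incident['status']}")
    return lines

# Example payload in the assumed Statuspage summary shape:
sample = {
    "incidents": [
        {"name": "Elevated error rates on Claude 3.5 Haiku", "status": "investigating"},
    ]
}
print(summarize_incidents(sample))
# -> ['Elevated error rates on Claude 3.5 Haiku (not our model) - investigating']
```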
- Sends a minimal request through OpenClaw's local gateway every 15 minutes
- Measures real end-to-end latency to the Anthropic API
- Maintains a rolling baseline (median of last 20 samples)
- Alerts with 🟡/🟠/🔴 severity based on spike magnitude
- Sends an all-clear when latency recovers
- ~$0.000001 per probe
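The rolling-baseline idea above can be shown in a few lines. This is a minimal sketch under stated assumptions (window size 20 and a 2.5x spike multiplier, matching the defaults documented later), not the probe's actual implementation:

```python
from statistics import median

BASELINE_WINDOW = 20     # rolling sample window (the skill's documented default)
ALERT_MULTIPLIER = 2.5   # assumed: alert when latency exceeds 2.5x the baseline

def update_baseline(samples: list[float], latency: float) -> list[float]:
    """Append the new probe latency, keeping only the last BASELINE_WINDOW samples."""
    return (samples + [latency])[-BASELINE_WINDOW:]

def is_spike(samples: list[float], latency: float) -> bool:
    """Compare the latest latency against the rolling median baseline."""
    baseline = median(samples)
    return latency > ALERT_MULTIPLIER * baseline

history = [3.0, 3.2, 2.9, 3.1, 3.3]  # median baseline = 3.1s
assert not is_spike(history, 3.4)    # normal response time
assert is_spike(history, 12.3)       # ~4x the baseline -> spike
```

Using a median rather than a mean keeps one slow outlier from dragging the baseline up and masking a real incident.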
Run the interactive setup script:

```
bash /path/to/skills/claude-watchdog/scripts/setup.sh
```

You'll need:

- Telegram Bot Token - from @BotFather
- Telegram Chat ID - send a message to your bot, then check https://api.telegram.org/bot<TOKEN>/getUpdates
- OpenClaw Gateway Token - run:

  ```
  python3 -c "from pathlib import Path; import json; print(json.load(open(Path.home() / '.openclaw/openclaw.json'))['gateway']['auth']['token'])"
  ```

- Gateway Port - default 18789

The setup script writes config, installs cron jobs, and runs an initial check.

To uninstall (removes cron jobs, optionally config/state):

```
bash /path/to/skills/claude-watchdog/scripts/setup.sh --uninstall
```
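For context on what the bot token and chat ID are used for: the Telegram Bot API exposes a `sendMessage` method at `https://api.telegram.org/bot<TOKEN>/sendMessage`. The sketch below shows that delivery path with stdlib-only code; the token and chat ID are placeholders, and the skill's own scripts may structure this differently:

```python
import json
import urllib.parse
import urllib.request

def build_send_url(token: str) -> str:
    """URL of the Telegram Bot API sendMessage method for this bot token."""
    return f"https://api.telegram.org/bot{token}/sendMessage"

def send_alert(token: str, chat_id: str, text: str) -> None:
    """POST an alert message to the configured chat (network call)."""
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    with urllib.request.urlopen(build_send_url(token), data=data, timeout=10) as resp:
        json.load(resp)  # Telegram replies with JSON; raises if the body is malformed

# Example (placeholder credentials; would perform a real network request):
# send_alert("123456:ABC-DEF", "987654321", "Anthropic API - High Latency Detected")
```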
Stored in ~/.openclaw/skills/claude-watchdog/claude-watchdog.env. To reconfigure, either re-run setup.sh or edit this file directly; changes take effect on the next cron run (within 15 minutes).

```
TELEGRAM_BOT_TOKEN=...
TELEGRAM_CHAT_ID=...
OPENCLAW_GATEWAY_TOKEN=...
OPENCLAW_GATEWAY_PORT=18789
MONITOR_MODEL=sonnet
PROBE_MODEL=openclaw
PROBE_AGENT_ID=main
```

| Variable | Default | Description |
| --- | --- | --- |
| TELEGRAM_BOT_TOKEN | (required) | Telegram bot token from @BotFather |
| TELEGRAM_CHAT_ID | (required) | Target chat for alerts |
| OPENCLAW_GATEWAY_TOKEN | (required) | Auth token for the local OpenClaw gateway |
| OPENCLAW_GATEWAY_PORT | 18789 | Port the OpenClaw gateway listens on |
| MONITOR_MODEL | sonnet | Model name to match in status incidents (e.g. "sonnet", "haiku") |
| PROBE_MODEL | openclaw | Model alias sent to the gateway for latency probes; openclaw uses the gateway's default model routing |
| PROBE_AGENT_ID | main | Value of the x-openclaw-agent-id header sent with probes |
| FILTER_KEYWORDS | (none) | Comma-separated keywords to filter out of status alerts (e.g. "skills,Artifacts,Memory"); empty = receive all alerts |

Scripts also accept these as environment variables (the env file takes priority).
The env file contains sensitive tokens (Telegram bot token, gateway token). The setup script sets permissions to 600 (owner-only read/write). If you create or edit the file manually, ensure restricted permissions:

```
chmod 600 ~/.openclaw/skills/claude-watchdog/claude-watchdog.env
```
Status incident:

```
🔴 Anthropic Status: Partially Degraded Service

🟠 Elevated error rates on Claude 3.5 Haiku (not our model)
Status: Investigating
Update: "We are investigating increased error rates..."

Components:
🟠 API: partial outage

https://status.claude.com
```

Latency spike:

```
🟡 Anthropic API - High Latency Detected

Current: 12.3s
Baseline: 3.1s (median of last 19 samples)
Ratio: 4.0x

Slow responses are expected right now.
```

Recovery:

```
✅ Anthropic API - Latency Back to Normal

Current: 2.8s
Baseline: 3.1s
Was: 12.3s when alert fired
```
All state and log files are stored in ~/.openclaw/skills/claude-watchdog/:

| File | Purpose |
| --- | --- |
| claude-watchdog-status.json | Status check state |
| claude-watchdog-latency.json | Latency probe state & samples |
| claude-watchdog-status.log | Status check log |
| claude-watchdog-latency.log | Latency probe log |
Edit constants at the top of latency-probe.py:

| Constant | Default | Meaning |
| --- | --- | --- |
| ALERT_MULTIPLIER | 2.5 | Alert if latency > N× baseline median |
| ALERT_HARD_FLOOR | 10.0s | Always alert above this absolute threshold |
| RECOVER_MULTIPLIER | 1.5 | Clear alert when below N× baseline |
| BASELINE_WINDOW | 20 | Rolling sample window size |
| BASELINE_MIN_SAMPLES | 5 | Minimum samples before alerting starts |
| PROBE_TIMEOUT | 45s | Give up on the probe after this long |
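To see how these constants interact, here is a hypothetical re-implementation of the decision logic they drive; latency-probe.py's actual code may differ. The lower recovery multiplier gives hysteresis, so latency hovering around the alert threshold does not produce alternating alert/all-clear messages:

```python
from statistics import median

# Defaults from the table above; treat this as a sketch, not the real script.
ALERT_MULTIPLIER = 2.5
ALERT_HARD_FLOOR = 10.0
RECOVER_MULTIPLIER = 1.5
BASELINE_MIN_SAMPLES = 5

def next_state(alerting: bool, samples: list[float], latency: float) -> bool:
    """Return the new alert state given rolling samples and the latest probe."""
    if len(samples) < BASELINE_MIN_SAMPLES:
        return False  # not enough history to judge yet
    baseline = median(samples)
    if not alerting:
        # Enter alert on a relative spike or the absolute hard floor.
        return latency > ALERT_MULTIPLIER * baseline or latency > ALERT_HARD_FLOOR
    # Leave alert only once latency drops below the lower recovery bound.
    return latency >= RECOVER_MULTIPLIER * baseline

history = [3.0, 3.2, 2.9, 3.1, 3.3]        # median baseline = 3.1s
assert next_state(False, history, 12.3)     # spike -> alert
assert next_state(True, history, 6.0)       # still above 1.5x -> stay alerted
assert not next_state(True, history, 2.8)   # recovered -> clear, send all-clear
```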
- Python 3.10+ (stdlib only, no pip dependencies)
- OpenClaw gateway running locally
- Telegram bot with access to the target chat