Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Comprehensive SRE platform enabling SLO definition, reliability assessment, incident response, chaos engineering, and error budget management without externa...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Complete Site Reliability Engineering system: from SLO definition through incident response, chaos engineering, and operational excellence. Zero dependencies.
Before building anything, assess where you are.
```yaml
service:
  name: ""
  tier: ""                   # critical | important | standard | experimental
  owner_team: ""
  oncall_rotation: ""
  dependencies:
    upstream: []             # services we call
    downstream: []           # services that call us
  data_classification: ""    # public | internal | confidential | restricted
  deployment_frequency: ""   # daily | weekly | biweekly | monthly
  architecture: ""           # monolith | microservice | serverless | hybrid
  language: ""
  infra: ""                  # k8s | ECS | Lambda | VM | bare-metal
  traffic_pattern: ""        # steady | diurnal | spiky | seasonal
  peak_rps: 0
  storage_gb: 0
  monthly_cost_usd: 0
```
| Dimension | 1 (Ad-hoc) | 3 (Defined) | 5 (Optimized) | Score |
|---|---|---|---|---|
| SLOs | No SLOs defined | SLOs exist, reviewed quarterly | Data-driven SLOs, auto error budgets | |
| Monitoring | Basic health checks | Golden signals + dashboards | Full observability, anomaly detection | |
| Incident Response | No runbooks, hero culture | Documented process, postmortems | Automated detection, structured ICS | |
| Automation | Manual deployments | CI/CD pipeline, some automation | Self-healing, auto-scaling, GitOps | |
| Chaos Engineering | No testing | Basic failure injection | Continuous chaos in production | |
| Capacity Planning | Reactive scaling | Quarterly forecasting | Predictive auto-scaling | |
| Toil Management | >50% toil | Toil tracked, reduction plans | <25% toil, systematic elimination | |
| On-Call Health | Burnout, 24/7 individuals | Rotation exists, escalation paths | Balanced load, <2 pages/shift | |

Score interpretation:
- 8-16: Firefighting mode. Start with SLOs + incident process.
- 17-24: Foundation built. Add chaos engineering + toil reduction.
- 25-32: Maturing. Optimize error budgets + capacity planning.
- 33-40: Advanced. Focus on predictive reliability + culture.
| Service Type | Primary SLI | Secondary SLIs |
|---|---|---|
| API/Backend | Request success rate | Latency p50/p95/p99, throughput |
| Frontend/Web | Page load (LCP) | FID/INP, CLS, error rate |
| Data Pipeline | Freshness | Correctness, completeness, throughput |
| Storage | Durability | Availability, latency |
| Streaming | Processing latency | Throughput, ordering, data loss rate |
| Batch Job | Success rate | Duration, SLA compliance |
| ML Model | Prediction latency | Accuracy drift, feature freshness |
```yaml
sli:
  name: "request_success_rate"
  description: "Proportion of valid requests served successfully"
  type: "availability"   # availability | latency | quality | freshness
  measurement:
    good_events: "HTTP responses with status < 500"
    total_events: "All HTTP requests excluding health checks"
    source: "load balancer access logs"
    aggregation: "sum(good) / sum(total) over rolling 28-day window"
  exclusions:
    - "Health check endpoints (/healthz, /readyz)"
    - "Synthetic monitoring traffic"
    - "Requests from blocked IPs"
    - "4xx responses (client errors)"
```
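To make the aggregation concrete, here is a minimal Python sketch that applies this SLI definition to a batch of request records. The record fields (`path`, `status`, `synthetic`) and the function name are illustrative assumptions, not part of the package.

```python
# Minimal sketch: computing the request_success_rate SLI from request records.
EXCLUDED_PATHS = {"/healthz", "/readyz"}

def request_success_rate(requests: list[dict]) -> float:
    """good / total over the window, applying the exclusions above."""
    valid = [
        r for r in requests
        if r["path"] not in EXCLUDED_PATHS and not r.get("synthetic", False)
    ]
    if not valid:
        return 1.0  # no valid traffic in the window: treat as meeting the SLI
    good = sum(1 for r in valid if r["status"] < 500)  # 4xx still counts as good
    return good / len(valid)

requests = [
    {"path": "/api/v1/payments", "status": 200},
    {"path": "/api/v1/payments", "status": 404},  # client error, still "good"
    {"path": "/api/v1/payments", "status": 504},  # server error, "bad"
    {"path": "/healthz", "status": 200},          # excluded
]
print(f"SLI: {request_success_rate(requests):.4f}")  # 0.6667
```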
| Nines | Uptime % | Downtime/month | Appropriate for |
|---|---|---|---|
| 2 nines | 99% | 7h 18m | Internal tools, dev environments |
| 2.5 | 99.5% | 3h 39m | Non-critical services, backoffice |
| 3 nines | 99.9% | 43m 50s | Standard production services |
| 3.5 | 99.95% | 21m 55s | Important customer-facing services |
| 4 nines | 99.99% | 4m 23s | Critical services, payments, auth |
| 5 nines | 99.999% | 26s | Life-safety, financial clearing |

Rules for setting targets:
- Start lower than you think: you can always tighten
- SLO < SLA (always have buffer, typically 0.1-0.5% margin)
- Internal SLO < External SLO (catch problems before customers do)
- Each nine costs ~10x more to achieve
- If you can't measure it, you can't SLO it
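The downtime column follows directly from the target: allowed downtime = (1 - target) * window length. A quick sketch of that arithmetic, assuming the average calendar month (365.25 / 12 days) that the table uses:

```python
# Allowed downtime per month for a given availability target.
AVG_MONTH_MIN = 365.25 / 12 * 24 * 60   # ~43,830 minutes per average month

def downtime_per_month(target_pct: float) -> str:
    minutes = (1 - target_pct / 100) * AVG_MONTH_MIN
    if minutes < 1:
        return f"{minutes * 60:.0f}s"
    if minutes < 60:
        return f"{minutes:.1f}m"
    hours, rem = divmod(minutes, 60)
    return f"{int(hours)}h {rem:.0f}m"

for target in (99.0, 99.5, 99.9, 99.95, 99.99, 99.999):
    print(f"{target}% -> {downtime_per_month(target)}")
# 99.0% -> 7h 18m ... 99.999% -> 26s (matches the table, modulo rounding)
```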
```yaml
slo:
  service: ""
  sli: ""
  target: 99.9              # percentage
  window: "28d"             # rolling window
  error_budget: 0.1         # 100% - target
  error_budget_minutes: 40  # per 28-day window
  burn_rate_alerts:
    - name: "fast_burn"
      burn_rate: 14.4       # exhausts budget in ~2 days
      short_window: "5m"
      long_window: "1h"
      severity: "page"
    - name: "medium_burn"
      burn_rate: 6.0        # exhausts budget in ~4.7 days
      short_window: "30m"
      long_window: "6h"
      severity: "page"
    - name: "slow_burn"
      burn_rate: 1.0        # exhausts budget in 28 days
      short_window: "6h"
      long_window: "3d"
      severity: "ticket"
  review_cadence: "monthly"
  owner: ""
  stakeholders: []
  escalation_when_budget_exhausted:
    - "Halt non-critical deployments"
    - "Redirect engineering to reliability work"
    - "Escalate to VP Engineering if no improvement in 48h"
```
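Burn rate is the ratio of actual to budgeted error rate, so time-to-exhaustion is simply window / burn_rate; a burn rate of 1.0 spends the budget in exactly one window. A short sketch of the arithmetic behind the alert tiers above (the function name is an assumption):

```python
WINDOW_DAYS = 28

def hours_to_exhaustion(burn_rate: float, window_days: int = WINDOW_DAYS) -> float:
    """How long until the error budget is fully spent at this burn rate."""
    return window_days * 24 / burn_rate

for name, rate in [("fast_burn", 14.4), ("medium_burn", 6.0), ("slow_burn", 1.0)]:
    days = hours_to_exhaustion(rate) / 24
    print(f"{name:12s} burn_rate={rate:>5}  exhausts budget in {days:.1f} days")
# fast_burn    ~1.9 days, medium_burn ~4.7 days, slow_burn 28.0 days
```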
```yaml
error_budget_policy:
  service: ""
  budget_states:
    healthy:
      condition: "remaining_budget > 50%"
      actions:
        - "Normal development velocity"
        - "Feature work prioritized"
        - "Chaos experiments allowed"
    warning:
      condition: "remaining_budget 25-50%"
      actions:
        - "Increase monitoring scrutiny"
        - "Review recent changes for risk"
        - "Limit risky deployments to business hours"
        - "No chaos experiments"
    critical:
      condition: "remaining_budget 0-25%"
      actions:
        - "Feature freeze: reliability work only"
        - "All deployments require SRE approval"
        - "Mandatory rollback plan for every change"
        - "Daily error budget review"
    exhausted:
      condition: "remaining_budget <= 0"
      actions:
        - "Complete deployment freeze"
        - "All engineering redirected to reliability"
        - "VP Engineering notified"
        - "Postmortem required for budget exhaustion"
        - "Freeze maintained until budget recovers to 10%"
  exceptions:
    - "Security patches always allowed"
    - "Regulatory compliance changes always allowed"
    - "Data loss prevention always allowed"
  reset: "Rolling 28-day window (no manual resets)"
```
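A minimal sketch of how the budget states above could be evaluated from the remaining-budget percentage (thresholds taken from the policy; the function name is an assumption):

```python
def budget_state(remaining_pct: float) -> str:
    """Map remaining error budget (%) to the policy states defined above."""
    if remaining_pct <= 0:
        return "exhausted"
    if remaining_pct <= 25:
        return "critical"
    if remaining_pct <= 50:
        return "warning"
    return "healthy"

assert budget_state(72) == "healthy"
assert budget_state(40) == "warning"
assert budget_state(10) == "critical"
assert budget_state(-3) == "exhausted"
```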
Track weekly:

| Metric | Current | Trend | Status |
|---|---|---|---|
| Budget remaining (%) | | | 🟢 / 🟡 / 🔴 |
| Budget consumed this week | | | |
| Burn rate (1h / 6h / 24h) | | | |
| Incidents consuming budget | | | |
| Top error contributor | | | |
| Projected exhaustion date | | | |
| Signal | What to Measure | Alert When |
|---|---|---|
| Latency | p50, p95, p99 response time | p99 > 2x baseline for 5 min |
| Traffic | Requests/sec, concurrent users | >30% drop (indicates upstream issue) OR >50% spike |
| Errors | 5xx rate, timeout rate, exception rate | Error rate > SLO burn rate threshold |
| Saturation | CPU, memory, disk, connections, queue depth | >80% sustained for 10 min |
For every resource, track:
- Utilization: % of capacity used (0-100%)
- Saturation: queue depth / wait time (0 = no waiting)
- Errors: error count / error rate
For every service, track:
- Rate: requests per second
- Errors: failed requests per second
- Duration: latency distribution
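As a concrete illustration, a small Python sketch computing RED metrics from a batch of request records. The record fields (`ok`, `duration_ms`) and the nearest-rank percentile choice are assumptions, not prescribed by the package.

```python
def red_metrics(requests: list[dict], period_seconds: float) -> dict:
    """Rate, Errors, Duration for one measurement period (requests must be non-empty)."""
    durations = sorted(r["duration_ms"] for r in requests)
    failed = sum(1 for r in requests if not r["ok"])
    p95_index = max(0, int(len(durations) * 0.95) - 1)
    return {
        "rate_rps": len(requests) / period_seconds,
        "error_rps": failed / period_seconds,
        "p50_ms": durations[len(durations) // 2],
        "p95_ms": durations[p95_index],
    }

sample = [{"ok": i % 20 != 0, "duration_ms": 50 + i} for i in range(100)]
print(red_metrics(sample, period_seconds=60))
```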
- Every alert must have a runbook link: no exceptions
- Every alert must be actionable: if you can't act on it, delete it
- Symptoms over causes: alert on "users can't check out", not "database CPU high"
- Multi-window, multi-burn-rate: avoid single-threshold alerts (see the sketch below)
- Page only for customer impact: everything else is a ticket
- Alert fatigue = death: review alert volume monthly; target <5 pages/week per service
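A hedged sketch of the multi-window, multi-burn-rate condition: page only when both the long and the short window exceed the tier's burn rate, so a brief spike does not page on its own and an already-recovered problem stops paging. The function name is an assumption.

```python
def should_page(short_window_burn: float, long_window_burn: float,
                threshold: float) -> bool:
    """Multi-window, multi-burn-rate condition for one alert tier.

    The long window proves the problem is sustained; the short window
    proves it is still happening right now.
    """
    return short_window_burn >= threshold and long_window_burn >= threshold

# fast_burn tier from the SLO spec above (threshold 14.4)
print(should_page(short_window_burn=20.0, long_window_burn=16.0, threshold=14.4))  # True
print(should_page(short_window_burn=2.0,  long_window_burn=16.0, threshold=14.4))  # False: recovering
```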
| Severity | Response Time | Notification | Examples |
|---|---|---|---|
| P0/Page | <5 min | PagerDuty + phone | SLO burn rate critical, data loss, security breach |
| P1/Urgent | <30 min | Slack + PagerDuty | Degraded service, elevated errors, capacity warning |
| P2/Ticket | Next business day | Ticket auto-created | Slow burn, non-critical component down |
| P3/Log | Weekly review | Dashboard only | Informational, trend detection |
{ "timestamp": "2026-02-17T11:24:00.000Z", "level": "error", "service": "payment-api", "trace_id": "abc123", "span_id": "def456", "message": "Payment processing failed", "error_type": "TimeoutException", "error_message": "Gateway timeout after 30s", "http_method": "POST", "http_path": "/api/v1/payments", "http_status": 504, "duration_ms": 30012, "customer_id": "cust_xxx", "payment_id": "pay_yyy", "amount_cents": 4999, "retry_count": 2, "environment": "production", "host": "payment-api-7b4d9-xk2p1", "region": "us-east-1" }
| | Impact: 1 User | Impact: <25% Users | Impact: >25% Users | Impact: All Users |
|---|---|---|---|---|
| Core function down | SEV3 | SEV2 | SEV1 | SEV1 |
| Degraded performance | SEV4 | SEV3 | SEV2 | SEV1 |
| Non-core feature down | SEV4 | SEV3 | SEV3 | SEV2 |
| Cosmetic/minor | SEV4 | SEV4 | SEV3 | SEV3 |

Auto-escalation triggers:
- Any data loss → SEV1 minimum
- Security breach with PII → SEV1
- Revenue-impacting → SEV1 or SEV2
- SLA breach imminent → auto-escalate one level
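The matrix reduces to a simple lookup plus the escalation overrides. A sketch, where the dictionary keys are paraphrases of the row and column labels and only the data-loss override is shown:

```python
# Severity matrix lookup. Impact buckets: "1_user", "lt_25pct", "gt_25pct", "all".
SEVERITY = {
    "core_down":     {"1_user": 3, "lt_25pct": 2, "gt_25pct": 1, "all": 1},
    "degraded":      {"1_user": 4, "lt_25pct": 3, "gt_25pct": 2, "all": 1},
    "non_core_down": {"1_user": 4, "lt_25pct": 3, "gt_25pct": 3, "all": 2},
    "cosmetic":      {"1_user": 4, "lt_25pct": 4, "gt_25pct": 3, "all": 3},
}

def classify(failure: str, impact: str, data_loss: bool = False) -> str:
    sev = SEVERITY[failure][impact]
    if data_loss:          # auto-escalation: any data loss is SEV1 minimum
        sev = min(sev, 1)
    return f"SEV{sev}"

print(classify("degraded", "gt_25pct"))                # SEV2
print(classify("cosmetic", "1_user", data_loss=True))  # SEV1
```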
| Role | Responsibility | Assigned |
|---|---|---|
| Incident Commander (IC) | Owns resolution, makes decisions, manages timeline | |
| Communications Lead | Status updates, stakeholder comms, customer-facing | |
| Operations Lead | Hands-on-keyboard, executing fixes | |
| Subject Matter Expert | Deep knowledge of affected system | |
| Scribe | Documenting timeline, actions, decisions | |

IC Rules:
- IC does NOT debug; IC coordinates
- IC makes final decisions when team disagrees
- IC can escalate severity at any time
- IC owns handoff if rotation changes
- IC calls end-of-incident
DETECT → TRIAGE → RESPOND → MITIGATE → RESOLVE → REVIEW

Step 1: DETECT (0-5 min)
- Alert fires OR user report received
- On-call acknowledges within SLA
- Quick assessment: is this real? What severity?

Step 2: TRIAGE (5-15 min)
- Classify severity using matrix above
- Assign IC and roles
- Open incident channel (#inc-YYYY-MM-DD-title)
- Post initial status update
- Start timeline document

Step 3: RESPOND (15 min - ongoing)
- IC briefs team: "Here's what we know, here's what we don't"
- Operations Lead begins investigation
- Check: recent deployments? Config changes? Dependency issues?
- Parallel investigation tracks if needed
- 15-minute check-ins for SEV1, 30-min for SEV2

Step 4: MITIGATE (ASAP)
- Priority: STOP THE BLEEDING
- Options (fastest first):
  - Rollback last deployment
  - Feature flag disable
  - Traffic shift / failover
  - Scale up / circuit breaker
  - Manual data fix
- Mitigated ≠ Resolved: a temporary fix is OK
- Update status: "Impact mitigated, root cause investigation ongoing"

Step 5: RESOLVE
- Root cause identified and fixed
- Verification: SLIs back to normal for 30+ minutes
- All-clear communicated
- IC declares incident resolved

Step 6: REVIEW (within 5 business days)
- Blameless postmortem written
- Action items assigned with owners and deadlines
- Postmortem review meeting
- Action items tracked to completion
Initial notification (internal):

🔴 INCIDENT: [Title]
Severity: SEV[X]
Impact: [Who/what is affected]
Status: Investigating
IC: [Name]
Channel: #inc-[date]-[slug]
Next update: [time]

Customer-facing status:

[Service] - Investigating increased error rates
We are currently investigating reports of [symptom]. Some users may experience [user-visible impact]. Our team is actively working on a resolution. We will provide an update within [time].

Resolution notification:

✅ RESOLVED: [Title]
Duration: [X hours Y minutes]
Impact: [Summary]
Root cause: [One sentence]
Postmortem: [Link] (within 5 business days)
```yaml
postmortem:
  title: ""
  date: ""
  severity: ""        # SEV1-4
  duration: ""        # total incident duration
  authors: []
  reviewers: []
  status: "draft"     # draft | in-review | final

  summary: |
    One paragraph: what happened, what was the impact, how was it resolved.

  impact:
    users_affected: 0
    duration_minutes: 0
    revenue_impact_usd: 0
    slo_budget_consumed_pct: 0
    data_loss: false
    customer_tickets: 0

  timeline:
    - time: ""
      event: ""
    # Chronological, every significant event
    # Include detection time, escalation, mitigation attempts

  root_cause: |
    Technical explanation of WHY it happened.
    Go deep: surface causes are not root causes.

  contributing_factors:
    - ""  # What made it worse or delayed resolution?

  detection:
    how_detected: ""  # alert | user report | manual check
    time_to_detect_minutes: 0
    could_have_detected_sooner: ""

  resolution:
    how_resolved: ""
    time_to_mitigate_minutes: 0
    time_to_resolve_minutes: 0

  what_went_well:
    - ""  # Explicitly call out what worked
  what_went_wrong:
    - ""
  where_we_got_lucky:
    - ""  # Things that could have made it worse

  action_items:
    - id: "AI-001"
      type: ""        # prevent | detect | mitigate | process
      description: ""
      owner: ""
      priority: ""    # P0 | P1 | P2
      deadline: ""
      status: "open"  # open | in-progress | done
      ticket: ""
```
Five Whys (simple incidents):

1. Why did users see errors? → API returned 500s
2. Why did API return 500s? → Database connection pool exhausted
3. Why was pool exhausted? → Long-running query held connections
4. Why was query long-running? → Missing index on new column
5. Why was index missing? → Migration didn't include index; no query performance review in CI

→ Root cause: No automated query performance check in deployment pipeline
→ Action: Add query plan analysis to CI for migration PRs

Fishbone / Ishikawa (complex incidents). Categories to investigate:
- People: Training? Fatigue? Communication?
- Process: Runbook? Escalation? Change management?
- Technology: Bug? Config? Capacity? Dependency?
- Environment: Network? Cloud provider? Third party?
- Monitoring: Detection gap? Alert fatigue? Dashboard gap?
- Testing: Test coverage? Load testing? Chaos testing?

Contributing Factor Categories:

| Category | Questions |
|---|---|
| Trigger | What change or event started it? |
| Propagation | Why did it spread? Why wasn't it contained? |
| Detection | Why wasn't it caught earlier? |
| Resolution | What slowed the fix? |
| Process | What process gaps contributed? |
1. Timeline walk-through (15 min)
   - Author presents chronology
   - Attendees add context ("I remember seeing X at this point")
2. Root cause deep-dive (15 min)
   - Do we agree on root cause?
   - Are there additional contributing factors?
3. Action item review (20 min)
   - Are these the RIGHT actions?
   - Are they prioritized correctly?
   - Do owners agree on deadlines?
4. Process improvements (10 min)
   - Could we have detected this sooner?
   - Could we have resolved this faster?
   - What would have prevented this entirely?
| Level | Name | Activities |
|---|---|---|
| 0 | None | No chaos testing |
| 1 | Exploratory | Manual fault injection in staging |
| 2 | Systematic | Scheduled chaos experiments in staging |
| 3 | Production | Controlled chaos in production (Game Days) |
| 4 | Continuous | Automated chaos in production with safety controls |
```yaml
experiment:
  name: ""
  hypothesis: "When [fault], the system will [expected behavior]"
  steady_state:
    metrics:
      - name: ""
        baseline: ""
        acceptable_range: ""
  method:
    fault_type: ""     # network | compute | storage | dependency | data
    target: ""         # which service/component
    blast_radius: ""   # single pod | single AZ | percentage of traffic
    duration: ""
  safety:
    abort_conditions:
      - "SLO burn rate exceeds 10x"
      - "Customer-visible errors detected"
      - "Alert fires that we didn't expect"
    rollback_plan: ""
    required_approvals: []
  results:
    outcome: ""        # confirmed | disproved | inconclusive
    observations: []
    action_items: []
```
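A minimal sketch of how a runner might enforce the abort conditions during the fault window. The inject/remove/abort hooks are placeholders you would wire to your own tooling; none of them are an API shipped with this skill.

```python
import time

def run_experiment(inject_fault, remove_fault, abort_condition_hit,
                   duration_s: float, poll_s: float = 5.0) -> str:
    """Inject a fault, watch abort conditions, always clean up.

    All three callables are caller-supplied hooks (assumed, not provided
    by the package); abort_condition_hit() should return True when any
    abort condition from the experiment spec is met.
    """
    inject_fault()
    deadline = time.monotonic() + duration_s
    try:
        while time.monotonic() < deadline:
            if abort_condition_hit():
                return "aborted"        # rollback still runs in finally
            time.sleep(poll_s)
        return "completed"
    finally:
        remove_fault()                  # rollback/cleanup runs on every path

outcome = run_experiment(
    inject_fault=lambda: print("adding 200ms latency to DB calls"),
    remove_fault=lambda: print("removing fault"),
    abort_condition_hit=lambda: False,
    duration_s=1, poll_s=0.2,
)
print(outcome)  # completed
```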
| Category | Experiment | Validates |
|---|---|---|
| Network | Add 200ms latency to DB calls | Timeout handling, circuit breakers |
| Network | Drop 5% of packets to downstream | Retry logic, error handling |
| Network | DNS resolution failure | Caching, fallback, error messages |
| Compute | Kill random pod every 10 min | Auto-restart, load balancing |
| Compute | CPU stress to 95% on 1 node | Auto-scaling, graceful degradation |
| Compute | Fill disk to 95% | Disk monitoring, log rotation, alerts |
| Storage | Increase DB latency 5x | Connection pool handling, timeouts |
| Storage | Simulate cache failure (Redis down) | Cache-aside pattern, DB fallback |
| Dependency | Block external API (payment provider) | Circuit breaker, queuing, retry |
| Dependency | Return 429s from auth service | Rate limit handling, backoff |
| Data | Clock skew on subset of nodes | Timestamp handling, ordering |
| Scale | 10x traffic spike over 5 minutes | Auto-scaling speed, queue depth |
PRE-GAME (1 week before):
- [ ] Experiment designed and reviewed
- [ ] Steady-state metrics identified
- [ ] Abort conditions defined
- [ ] All participants briefed
- [ ] Rollback plan tested in staging
- [ ] Stakeholders notified

GAME DAY:
- [ ] Verify steady state (15 min baseline)
- [ ] Announce in #engineering: "Chaos Game Day starting"
- [ ] Inject fault
- [ ] Observe and document
- [ ] If abort condition hit → rollback immediately
- [ ] Run for planned duration
- [ ] Remove fault
- [ ] Verify recovery to steady state

POST-GAME (same day):
- [ ] Results documented
- [ ] Surprises noted
- [ ] Action items created
- [ ] Share findings in team meeting
Definition: Work that is manual, repetitive, automatable, tactical, without enduring value, and scales linearly with service growth.
```yaml
toil_item:
  name: ""
  category: ""               # deployment | scaling | config | data | access | monitoring | recovery
  frequency: ""              # daily | weekly | monthly | per-incident
  time_per_occurrence_min: 0
  occurrences_per_month: 0
  total_hours_per_month: 0
  teams_affected: []
  automation_difficulty: ""  # low | medium | high
  automation_value: 0        # hours saved per month
  priority_score: 0          # value / difficulty
```
| | Low Effort | Medium Effort | High Effort |
|---|---|---|---|
| High Value (>10 hrs/mo) | DO FIRST | DO SECOND | PLAN |
| Med Value (2-10 hrs/mo) | DO SECOND | PLAN | EVALUATE |
| Low Value (<2 hrs/mo) | QUICK WIN | SKIP | SKIP |
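A sketch of the priority_score arithmetic from the inventory template, using a simple numeric mapping for automation_difficulty; the mapping values are an assumption, so tune them to your team.

```python
# priority_score = automation_value (hours saved/month) / automation_difficulty.
DIFFICULTY = {"low": 1, "medium": 2, "high": 4}   # assumed weights

def priority_score(item: dict) -> float:
    return item["automation_value"] / DIFFICULTY[item["automation_difficulty"]]

toil = [
    {"name": "manual deploys", "automation_value": 20, "automation_difficulty": "medium"},
    {"name": "cert renewals",  "automation_value": 3,  "automation_difficulty": "low"},
    {"name": "data fixes",     "automation_value": 12, "automation_difficulty": "high"},
]
for item in sorted(toil, key=priority_score, reverse=True):
    print(f"{item['name']:15s} score={priority_score(item):.1f}")
```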
- Manual deployments → CI/CD pipeline + GitOps
- Access provisioning → Self-service + auto-approval for low-risk
- Certificate renewals → Auto-renewal (cert-manager, Let's Encrypt)
- Scaling decisions → HPA + predictive auto-scaling
- Log investigation → Structured logging + correlation + dashboards
- Data fixes → Self-service admin tools + validation at ingestion
- Config changes → Config-as-code + automated rollout
- Incident response → Automated runbooks for known issues
- Capacity reporting → Automated dashboards + forecasting
- On-call triage → Noise reduction + auto-remediation for known patterns
Target: <25% of SRE time spent on toil. Track monthly. If above 25%, prioritize automation over all feature work.
```yaml
capacity_model:
  service: ""
  bottleneck_resource: ""    # CPU | memory | storage | connections | bandwidth
  current_state:
    peak_utilization_pct: 0
    headroom_pct: 0
    cost_per_month_usd: 0
  growth_forecast:
    metric: ""               # MAU | requests/sec | storage_gb
    current: 0
    monthly_growth_pct: 0
    projected_6mo: 0
    projected_12mo: 0
  scaling_strategy:
    type: ""                 # horizontal | vertical | hybrid
    auto_scaling: true
    min_instances: 0
    max_instances: 0
    scale_up_threshold: 80   # % utilization
    scale_down_threshold: 30
    cooldown_seconds: 300
  cost_projection:
    current_monthly: 0
    projected_6mo_monthly: 0
    projected_12mo_monthly: 0
```
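The growth_forecast fields compound month over month. A sketch of the projection, assuming compound growth (the usual model for traffic and storage); the example numbers are illustrative only.

```python
def project(current: float, monthly_growth_pct: float, months: int) -> float:
    """Compound monthly growth: current * (1 + g)^months."""
    return current * (1 + monthly_growth_pct / 100) ** months

peak_rps = 1200        # illustrative current peak
growth = 8             # illustrative % growth per month
print(f"6 months:  {project(peak_rps, growth, 6):.0f} rps")   # ~1904
print(f"12 months: {project(peak_rps, growth, 12):.0f} rps")  # ~3022
```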
| Frequency | Action |
|---|---|
| Daily | Review auto-scaling events, check for anomalies |
| Weekly | Review utilization trends, spot-check headroom |
| Monthly | Update growth model, review cost projections |
| Quarterly | Full capacity review, budget planning, architecture check |
| Pre-launch | Load test to 2x expected peak, verify scaling |
| Scenario | Method | Duration | Target |
|---|---|---|---|
| Baseline | Steady load at current peak | 30 min | Establish metrics |
| Growth | 2x current peak | 15 min | Verify scaling works |
| Spike | 10x normal in 60 seconds | 5 min | Circuit breakers hold |
| Soak | 1.5x normal load | 4 hours | No memory leaks or degradation |
| Stress | Ramp until failure | Until break | Find actual limits |
| Metric | Healthy | Warning | Critical |
|---|---|---|---|
| Pages per shift | <2 | 2-5 | >5 |
| Off-hours pages | <1/week | 1-3/week | >3/week |
| Time to acknowledge | <5 min | 5-15 min | >15 min |
| Time to mitigate | <30 min | 30-60 min | >60 min |
| False positive rate | <10% | 10-30% | >30% |
| Escalation rate | <20% | 20-40% | >40% |
| On-call satisfaction | >4/5 | 3-4/5 | <3/5 |
- Minimum rotation size: 5 people (one week on, four weeks off)
- No back-to-back weeks unless team is too small (fix the team size)
- Follow-the-sun for global teams (no one pages at 3 AM if avoidable)
- Primary + secondary on-call always
- Handoff document at rotation change: open issues, recent deploys, known risks
- Compensation: on-call pay, time off in lieu, or equivalent
```yaml
runbook:
  title: ""
  alert_name: ""    # exact alert that triggers this
  last_updated: ""
  owner: ""

  overview: |
    What this alert means in plain English.

  impact: |
    What users/systems are affected and how.

  diagnosis:
    - step: "Check service health"
      command: ""
      expected: ""
      if_unexpected: ""
    - step: "Check recent deployments"
      command: ""
      expected: ""
      if_unexpected: "Rollback: [command]"
    - step: "Check dependencies"
      command: ""
      expected: ""
      if_unexpected: ""

  mitigation:
    - option: "Rollback"
      when: "Recent deployment suspected"
      steps: []
    - option: "Scale up"
      when: "Traffic spike"
      steps: []
    - option: "Failover"
      when: "Single component failure"
      steps: []

  escalation:
    after_minutes: 30
    contact: ""
    context_to_provide: ""
```
1. SLO Status (5 min)
   - Budget remaining per service
   - Any burn rate alerts this week?
2. Incident Review (10 min)
   - Incidents this week: count, severity, duration
   - Open postmortem action items: status check
3. On-Call Health (5 min)
   - Pages this week (total, off-hours, false positives)
   - Any on-call feedback?
4. Reliability Work (10 min)
   - Automation shipped this week
   - Toil reduced (hours saved)
   - Chaos experiments run
   - Capacity concerns
```yaml
monthly_report:
  period: ""
  slo_summary:
    services_meeting_slo: 0
    services_breaching_slo: 0
    worst_performing: ""
  incidents:
    total: 0
    by_severity: { SEV1: 0, SEV2: 0, SEV3: 0, SEV4: 0 }
    mttr_minutes: 0
    mttd_minutes: 0
    repeat_incidents: 0
  error_budget:
    services_in_healthy: 0
    services_in_warning: 0
    services_in_critical: 0
    services_exhausted: 0
  toil:
    hours_spent: 0
    hours_automated_away: 0
    pct_of_sre_time: 0
  on_call:
    total_pages: 0
    off_hours_pages: 0
    false_positive_pct: 0
    avg_ack_time_min: 0
  action_items:
    open: 0
    completed_this_month: 0
    overdue: 0
  highlights: []
  concerns: []
  next_month_priorities: []
```
Before any new service goes to production:

| Category | Check | Status |
|---|---|---|
| SLOs | SLIs defined and measured | |
| SLOs | SLO targets set with stakeholder agreement | |
| SLOs | Error budget policy documented | |
| Monitoring | Golden signals dashboarded | |
| Monitoring | Alerting configured with runbooks | |
| Monitoring | Structured logging implemented | |
| Monitoring | Distributed tracing enabled | |
| Incidents | On-call rotation established | |
| Incidents | Escalation paths documented | |
| Incidents | Runbooks for top 5 failure modes | |
| Capacity | Load tested to 2x expected peak | |
| Capacity | Auto-scaling configured and tested | |
| Capacity | Resource limits set (CPU, memory) | |
| Resilience | Graceful degradation implemented | |
| Resilience | Circuit breakers for dependencies | |
| Resilience | Retry with exponential backoff | |
| Resilience | Timeout configured for all external calls | |
| Deploy | Rollback tested and documented | |
| Deploy | Canary/blue-green deployment ready | |
| Deploy | Feature flags for risky features | |
| Security | Authentication and authorization | |
| Security | Secrets in vault (not env vars) | |
| Security | Dependencies scanned | |
| Data | Backup and restore tested | |
| Data | Data retention policy defined | |
| Docs | Architecture diagram current | |
| Docs | API documentation published | |
| Docs | Operational runbook complete | |
```yaml
auto_remediation:
  - trigger: "pod_crash_loop"
    condition: "restart_count > 3 in 10 min"
    action: "Delete pod, let scheduler reschedule"
    escalate_if: "Still crashing after 3 auto-remediations"
  - trigger: "disk_usage_high"
    condition: "disk_usage > 85%"
    action: "Run log cleanup script, archive old data"
    escalate_if: "Still above 85% after cleanup"
  - trigger: "connection_pool_exhausted"
    condition: "available_connections = 0"
    action: "Kill idle connections, increase pool temporarily"
    escalate_if: "Pool exhausted again within 1 hour"
  - trigger: "certificate_expiring"
    condition: "days_until_expiry < 14"
    action: "Trigger cert renewal"
    escalate_if: "Renewal fails"
```
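A hedged sketch of the evaluation loop these rules imply: check the condition, run the action, and escalate once the escalate_if threshold is reached. The condition/action/escalate hooks and the Rule structure are placeholders, not part of the package.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str
    condition: Callable[[], bool]   # returns True while remediation is needed
    action: Callable[[], None]      # the automated fix
    max_attempts: int               # escalate after this many attempts
    attempts: int = 0

def evaluate(rule: Rule, escalate: Callable[[str], None]) -> None:
    """One evaluation pass for a single auto-remediation rule."""
    if not rule.condition():
        rule.attempts = 0           # condition cleared: reset the counter
        return
    if rule.attempts >= rule.max_attempts:
        escalate(f"{rule.trigger}: still firing after {rule.attempts} auto-remediations")
        return
    rule.action()
    rule.attempts += 1

crash_loop = Rule(
    trigger="pod_crash_loop",
    condition=lambda: True,         # placeholder: wire to the restart_count metric
    action=lambda: print("deleting pod"),
    max_attempts=3,
)
for _ in range(5):
    evaluate(crash_loop, escalate=lambda msg: print("PAGE:", msg))
```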
| Strategy | Complexity | RTO | Cost |
|---|---|---|---|
| Active-passive | Low | Minutes | 1.5x |
| Active-active read | Medium | Seconds | 1.8x |
| Active-active full | High | Near-zero | 2-3x |
| Cell-based | Very high | Per-cell | 2-4x |

Decision guide:
- SLO < 99.9% → Single region with good backups
- SLO 99.9-99.95% → Active-passive with automated failover
- SLO > 99.95% → Active-active (read or full)
- SLO > 99.99% → Cell-based architecture
Healthy signals:
- Postmortems are blameless and well-attended
- Error budgets are respected (feature freeze actually happens)
- On-call is shared fairly and compensated
- Toil is tracked and reducing quarter-over-quarter
- Chaos experiments happen regularly
- Teams own their reliability (not just SRE)

Warning signs:
- "Hero culture": the same person always saves the day
- Postmortems are blame-focused or skipped
- Error budget exhaustion doesn't change behavior
- On-call is dreaded, same 2 people always paged
- "We'll fix reliability after this feature ships" (always)
- SRE team is just an ops team with a new name
| Dimension | Weight | 0-2 | 3-4 | 5 |
|---|---|---|---|---|
| SLO Coverage | 20% | No SLOs | SLOs for critical services | All services with SLOs, error budgets, reviews |
| Monitoring | 15% | Basic health checks | Golden signals + dashboards | Full observability stack + anomaly detection |
| Incident Response | 15% | Ad-hoc, no process | ICS roles, runbooks, postmortems | Structured ICS, blameless culture, action tracking |
| Automation | 15% | Manual everything | CI/CD + some automation | Self-healing, GitOps, <25% toil |
| Chaos Engineering | 10% | None | Staging experiments | Continuous production chaos with safety |
| Capacity Planning | 10% | Reactive | Quarterly forecasting | Predictive, auto-scaling, cost-optimized |
| On-Call Health | 10% | Burnout, hero culture | Fair rotation, <5 pages/shift | Balanced, compensated, <2 pages/shift |
| Documentation | 5% | Nothing written | Runbooks exist | Complete, current, tested runbooks |
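The weights sum to 100%, so the overall maturity is a weighted average of the per-dimension scores (0-5). A sketch of the arithmetic; the dictionary keys and example scores are illustrative.

```python
WEIGHTS = {
    "slo_coverage": 0.20, "monitoring": 0.15, "incident_response": 0.15,
    "automation": 0.15, "chaos_engineering": 0.10, "capacity_planning": 0.10,
    "on_call_health": 0.10, "documentation": 0.05,
}

def weighted_maturity(scores: dict) -> float:
    """Weighted average of 0-5 scores using the scorecard weights above."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

scores = {"slo_coverage": 3, "monitoring": 4, "incident_response": 3,
          "automation": 2, "chaos_engineering": 1, "capacity_planning": 2,
          "on_call_health": 3, "documentation": 2}
print(f"Overall maturity: {weighted_maturity(scores):.2f} / 5")  # 2.65
```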
"Assess reliability for [service]" โ Run maturity assessment "Define SLOs for [service]" โ Walk through SLI selection + SLO setting "Check error budget for [service]" โ Calculate current budget status "Start incident for [description]" โ Create incident channel, assign IC, begin workflow "Write postmortem for [incident]" โ Generate structured postmortem "Plan chaos experiment for [service]" โ Design experiment with hypothesis "Audit toil for [team]" โ Inventory and prioritize toil "Review on-call health" โ Analyze page volume, satisfaction, fairness "Production readiness review for [service]" โ Run full checklist "Monthly reliability report" โ Generate comprehensive report "Design runbook for [alert]" โ Create structured runbook "Plan capacity for [service] growing at [X%]" โ Build capacity model