Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Unified log search across Loki, Elasticsearch, and CloudWatch. Natural language queries translated to LogQL, ES DSL, or CloudWatch filter patterns. Read-only...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install brief:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade brief:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Search logs across Loki, Elasticsearch/OpenSearch, and AWS CloudWatch from a single interface. Ask in plain English; the skill translates to the right query language.

⚠️ Sensitive Data Warning: Logs frequently contain PII, secrets, tokens, passwords, and other sensitive data. Never cache, store, or repeat raw log content beyond the current conversation. Treat all log output as confidential.
This skill activates when the user mentions:

- "search logs", "find in logs", "log search", "check the logs"
- "Loki", "LogQL", "logcli"
- "Elasticsearch logs", "Kibana", "OpenSearch"
- "CloudWatch logs", "AWS logs", "log groups"
- "error logs", "find errors", "what happened in [service]"
- "tail logs", "follow logs", "live logs"
- "log backends", "which log sources", "log indices", "log labels"
- Incident triage involving log analysis
- "log-dive" explicitly
```yaml
permissions:
  exec: true     # Required to run backend scripts
  read: true     # Read script files
  write: false   # Never writes files; logs may contain secrets
  network: true  # Queries remote log backends
```
"Find error logs from the checkout service in the last 30 minutes" "Search for timeout exceptions across all services" "What log backends do I have configured?" "List available log indices in Elasticsearch" "Show me the labels available in Loki" "Tail the payment-service logs" "Find all 5xx errors in CloudWatch for api-gateway" "Correlate errors between user-service and payment-service" "What happened in production between 2pm and 3pm today?"
Each backend uses environment variables. Users may have one, two, or all three configured.
Loki:

| Variable | Required | Description |
| --- | --- | --- |
| LOKI_ADDR | Yes | Loki server URL (e.g., http://loki.internal:3100) |
| LOKI_TOKEN | No | Bearer token for authentication |
| LOKI_TENANT_ID | No | Multi-tenant header (X-Scope-OrgID) |
Elasticsearch / OpenSearch:

| Variable | Required | Description |
| --- | --- | --- |
| ELASTICSEARCH_URL | Yes | Base URL (e.g., https://es.internal:9200) |
| ELASTICSEARCH_TOKEN | No | `Basic <base64>` or `Bearer <token>` for auth |
AWS CloudWatch:

| Variable | Required | Description |
| --- | --- | --- |
| AWS_PROFILE or AWS_ACCESS_KEY_ID | Yes | Standard AWS credentials |
| AWS_REGION | Yes | AWS region for CloudWatch |
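A minimal shell setup covering all three backends, with hypothetical endpoints and credential paths; export only the variables for the backends you actually use:

```bash
# Loki (hypothetical internal endpoint)
export LOKI_ADDR="http://loki.internal:3100"
export LOKI_TOKEN="$(cat ~/.config/loki/token)"  # optional bearer token
export LOKI_TENANT_ID="prod"                     # optional X-Scope-OrgID header

# Elasticsearch / OpenSearch
export ELASTICSEARCH_URL="https://es.internal:9200"
export ELASTICSEARCH_TOKEN="Bearer ${ES_API_TOKEN}"  # or "Basic <base64>"

# AWS CloudWatch (standard credential resolution)
export AWS_PROFILE="prod"
export AWS_REGION="us-east-1"
```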
Follow this sequence:
Step 1: Check available backends

Run the backends check to see what's configured:

```bash
bash <skill_dir>/scripts/log-dive.sh backends
```

Parse the JSON output. If no backends are configured, tell the user which environment variables to set.
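A minimal sketch of scripting this check. The JSON shape emitted by the `backends` subcommand is an assumption here; adjust the `jq` paths to whatever the script actually prints:

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path (<skill_dir> above)

# Capture the backend report; the shape below is assumed, not documented.
out="$(bash "$SKILL_DIR/scripts/log-dive.sh" backends)"
echo "$out" | jq .

# Hypothetical shape: {"loki": true, "elasticsearch": false, "cloudwatch": true}
if [ "$(echo "$out" | jq '[to_entries[] | select(.value)] | length')" -eq 0 ]; then
  echo "No backends configured. Set LOKI_ADDR, ELASTICSEARCH_URL, or AWS credentials."
fi
```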
Step 2: Translate the request into a query

This is the critical step. Convert the user's natural language request into the appropriate backend-specific query, using the query language reference below. For ALL backends, pass the query through the dispatcher:

```bash
# Search across all configured backends
bash <skill_dir>/scripts/log-dive.sh search --query '<QUERY>' [OPTIONS]

# Search a specific backend
bash <skill_dir>/scripts/log-dive.sh search --backend loki --query '{app="checkout"} |= "error"' --since 30m --limit 200
bash <skill_dir>/scripts/log-dive.sh search --backend elasticsearch --query '{"query":{"bool":{"must":[{"match":{"message":"error"}},{"match":{"service":"checkout"}}]}}}' --index 'app-logs-*' --since 30m --limit 200
bash <skill_dir>/scripts/log-dive.sh search --backend cloudwatch --query '"ERROR" "checkout"' --log-group '/ecs/checkout-service' --since 30m --limit 200
```
Step 3: Discover what's available (optional)

Before searching, you may need to discover what's available:

```bash
# Loki: list labels and label values
bash <skill_dir>/scripts/log-dive.sh labels --backend loki
bash <skill_dir>/scripts/log-dive.sh labels --backend loki --label app

# Elasticsearch: list indices
bash <skill_dir>/scripts/log-dive.sh indices --backend elasticsearch

# CloudWatch: list log groups
bash <skill_dir>/scripts/log-dive.sh indices --backend cloudwatch
```
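Discovery output can feed query construction directly. A sketch, assuming the `labels` subcommand prints one label value per line (another output-format assumption):

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path

# Enumerate app label values in Loki, then search the ones that look relevant.
for app in $(bash "$SKILL_DIR/scripts/log-dive.sh" labels --backend loki --label app); do
  case "$app" in
    checkout*|payment*)  # hypothetical services of interest
      bash "$SKILL_DIR/scripts/log-dive.sh" search --backend loki \
        --query "{app=\"$app\"} |= \"error\"" --since 30m --limit 100
      ;;
  esac
done
```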
Step 4: Tail logs live (optional)

```bash
bash <skill_dir>/scripts/log-dive.sh tail --backend loki --query '{app="checkout"}'
bash <skill_dir>/scripts/log-dive.sh tail --backend cloudwatch --log-group '/ecs/checkout-service'
```

Tail runs for a limited time (default 30s) and streams results.
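Since tail streams to stdout for a bounded window, ordinary shell filters compose with it; a sketch assuming plain log lines on stdout:

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path

# Watch the checkout stream for the default 30s window, surfacing only
# lines that mention errors or timeouts.
bash "$SKILL_DIR/scripts/log-dive.sh" tail --backend loki \
  --query '{app="checkout"}' | grep -Ei 'error|timeout'
```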
Step 5: Analyze and summarize

After receiving log output, you MUST:

- Identify unique error types: group similar errors, count occurrences (see the sketch below).
- Find the root cause: look for the earliest error, trace dependency chains.
- Correlate across services: if errors in service A mention service B, note the dependency.
- Build a timeline: order events chronologically.
- Summarize actionably, e.g., "The checkout service started returning 500s at 14:23 because the database connection pool was exhausted (max 10 connections, 10 in use). The pool exhaustion was triggered by a slow query in the inventory service."

NEVER dump raw log output to the user. Always summarize, extract patterns, and present structured findings.
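The grouping step can be approximated mechanically before you summarize. A sketch, assuming `search` emits JSON Lines with a `message` field (an assumption about the script's output format):

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path

# Collapse raw error lines into counted signatures: take the first three
# tokens of each message as a rough error "type", then count occurrences.
bash "$SKILL_DIR/scripts/log-dive.sh" search --backend loki \
  --query '{app="checkout"} |= "error"' --since 30m --limit 200 \
  | jq -r '.message' 2>/dev/null \
  | awk '{ print $1, $2, $3 }' | sort | uniq -c | sort -rn | head
```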
When the conversation is happening in a Discord channel:

- Send a compact incident summary first (backend, query intent, top error types, root-cause hypothesis), then ask if the user wants full detail.
- Keep the first response under ~1200 characters and avoid dumping raw log lines in the first message.
- If Discord components are available, include quick actions: Show Error Timeline, Show Top Error Patterns, Run Related Service Query. If components are not available, provide the same follow-ups as a numbered list.
- Prefer short follow-up chunks (<=15 lines per message) when sharing timelines or grouped findings.
LogQL has two parts: a stream selector and a filter pipeline.

Stream selectors:

```
{app="myapp"}                      # exact match
{namespace="prod", app=~"api-.*"}  # regex match
{app!="debug"}                     # negative match
```

Filter pipeline (chained after the selector):

```
{app="myapp"} |= "error"        # line contains "error"
{app="myapp"} != "healthcheck"  # line does NOT contain
{app="myapp"} |~ "error|warn"   # regex match on line
{app="myapp"} !~ "DEBUG|TRACE"  # negative regex
```

Structured metadata (parsed logs):

```
{app="myapp"} | json                  # parse JSON logs
{app="myapp"} | json | status >= 500  # filter by parsed field
{app="myapp"} | logfmt                # parse logfmt
{app="myapp"} | regexp `(?P<ip>\d+\.\d+\.\d+\.\d+)`  # regex extract
```

Common patterns:

- Errors in service: `{app="checkout"} |= "error" | json | level="error"`
- HTTP 5xx: `{app="api"} | json | status >= 500`
- Slow requests: `{app="api"} | json | duration > 5s`
- Stack traces: `{app="myapp"} |= "Exception" |= "at "`
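Putting the reference together, a composed query through the dispatcher: parse JSON, then filter on parsed fields. Field names like `status` and `duration` are assumptions about your log schema:

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path

# 5xx responses slower than 2s in the api service, last 15 minutes.
bash "$SKILL_DIR/scripts/log-dive.sh" search --backend loki \
  --query '{app="api"} | json | status >= 500 | duration > 2s' \
  --since 15m --limit 200
```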
Elasticsearch DSL:

Simple match:

```json
{"query": {"match": {"message": "error"}}}
```

Boolean query (AND/OR):

```json
{
  "query": {
    "bool": {
      "must": [
        {"match": {"message": "error"}},
        {"match": {"service.name": "checkout"}}
      ],
      "must_not": [
        {"match": {"message": "healthcheck"}}
      ]
    }
  },
  "sort": [{"@timestamp": "desc"}],
  "size": 200
}
```

Time range filter:

```json
{
  "query": {
    "bool": {
      "must": [{"match": {"message": "timeout"}}],
      "filter": [
        {"range": {"@timestamp": {"gte": "now-30m", "lte": "now"}}}
      ]
    }
  }
}
```

Wildcard / regex:

```json
{"query": {"regexp": {"message": "error.*timeout"}}}
```

Common patterns:

- Errors in service: `{"query":{"bool":{"must":[{"match":{"message":"error"}},{"match":{"service.name":"checkout"}}]}}}`
- HTTP 5xx: `{"query":{"range":{"http.status_code":{"gte":500}}}}`
- Aggregate by field: use `"aggs"`, but prefer simple queries for agent use.
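Multi-line DSL is awkward to quote inline on the dispatcher command line; one workable pattern is to build it in a shell variable first (index name and field names are placeholders):

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path

# Compose the DSL in a variable to avoid one-line quoting mistakes.
# read -d '' returns nonzero at EOF, hence the || true.
read -r -d '' ES_QUERY <<'JSON' || true
{"query":{"bool":{
  "must":[{"match":{"message":"timeout"}}],
  "filter":[{"range":{"@timestamp":{"gte":"now-30m","lte":"now"}}}]
}}}
JSON

bash "$SKILL_DIR/scripts/log-dive.sh" search --backend elasticsearch \
  --query "$ES_QUERY" --index 'app-logs-*' --limit 200
```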
CloudWatch filter patterns:

Simple text match:

```
"ERROR"             # contains ERROR
"ERROR" "checkout"  # contains ERROR AND checkout
```

JSON filter patterns:

```
{ $.level = "error" }                            # JSON field match
{ $.statusCode >= 500 }                          # numeric comparison
{ $.duration > 5000 }                            # duration threshold
{ $.level = "error" && $.service = "checkout" }  # compound
```

Negation and wildcards:

```
?"ERROR" ?"timeout"  # ERROR OR timeout (any term)
-"healthcheck"       # does NOT contain (use with other terms)
```

Common patterns:

- Errors: `"ERROR"`
- Errors in service: `{ $.level = "error" && $.service = "checkout" }`
- HTTP 5xx: `{ $.statusCode >= 500 }`
- Exceptions: `"Exception" "at "`
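The compound pattern through the dispatcher looks like this; `$.level` and `$.service` assume your log events are JSON documents with those fields:

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path

# Error-level events from the checkout service in the last hour.
# Single quotes keep the shell from expanding $.level and $.service.
bash "$SKILL_DIR/scripts/log-dive.sh" search --backend cloudwatch \
  --query '{ $.level = "error" && $.service = "checkout" }' \
  --log-group '/ecs/checkout-service' --since 1h --limit 200
```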
Incident triage: Check backends → search for errors in the affected service → search upstream/downstream services → correlate → build timeline → recommend actions (see the sketch after these workflows).

Slow-request investigation: Search for slow requests (duration > 5s) → identify common patterns → check for database slow queries → check for external service timeouts.

Deploy verification: Search for errors in the deployed service since deploy time → compare the error rate with the pre-deploy period → flag new error types.
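A sketch of the incident-triage workflow as a single pass, with placeholder service names; in practice you would branch on what each step returns:

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path
svc="checkout"             # affected service (placeholder)
upstream="payment-service" # suspected dependency (placeholder)

# 1. Confirm which backends are reachable.
bash "$SKILL_DIR/scripts/log-dive.sh" backends

# 2. Errors in the affected service over the incident window.
bash "$SKILL_DIR/scripts/log-dive.sh" search --backend loki \
  --query "{app=\"$svc\"} |= \"error\"" --since 1h --limit 200

# 3. Errors in the dependency over the same window, for correlation.
bash "$SKILL_DIR/scripts/log-dive.sh" search --backend loki \
  --query "{app=\"$upstream\"} |= \"error\"" --since 1h --limit 200
```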
- Read-only: This skill can only search and read logs. It cannot delete, modify, or create log entries.
- Output size: The default limit is 200 entries. Log output is pre-filtered to reduce token consumption. For larger investigations, use multiple targeted queries rather than one broad query (see the sketch after this list).
- Network access: Log backends must be reachable from the machine running OpenClaw.
- No streaming aggregation: For complex aggregations (percentiles, rates), consider using your backend's native UI (Grafana, Kibana, CloudWatch Logs Insights).
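One way to stay inside the 200-entry limit is to fan a broad question out into targeted per-service queries; the service names here are hypothetical:

```bash
SKILL_DIR="${SKILL_DIR:-$HOME/skills/log-dive}"  # hypothetical install path

# Instead of one broad '{env="prod"} |= "error"' query, scope per service.
for svc in checkout payment inventory; do
  echo "== $svc =="
  bash "$SKILL_DIR/scripts/log-dive.sh" search --backend loki \
    --query "{app=\"$svc\"} |= \"error\"" --since 30m --limit 50
done
```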
| Error | Cause | Fix |
| --- | --- | --- |
| "No backends configured" | No env vars set | Set LOKI_ADDR, ELASTICSEARCH_URL, or configure AWS CLI |
| "logcli not found" | logcli not installed | Install from https://grafana.com/docs/loki/latest/tools/logcli/ |
| "aws: command not found" | AWS CLI not installed | Install from https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html |
| "curl: command not found" | curl not installed | apt install curl or brew install curl |
| "jq: command not found" | jq not installed | apt install jq or brew install jq |
| "connection refused" | Backend unreachable | Check URL, VPN, firewall rules |
| "401 Unauthorized" | Bad credentials | Check LOKI_TOKEN, ELASTICSEARCH_TOKEN, or AWS credentials |