Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Logs/metrics → Python statistics → LLM interpretation → Notion reports. Use when: generating daily/weekly/monthly operational insights from AI system logs, p...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Data-driven insights from operational logs: collect → stats → LLM interpretation → Notion.
```
collect (Python stats only)
├── Langfuse OTEL traces/scores/observations
├── OpenClaw/gateway logs
├── Git activity
└── Control plane scores
        ↓
build_*_data_packet()        ← all stats computed in Python before LLM call
        ↓
call_claude(system_prompt, structured_json)   ← LLM interprets, doesn't compute
        ↓
write_*_reflection() → Notion
```

See references/architecture.md for full design rationale.
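The pipeline can be sketched in a few lines of Python. The function names mirror the diagram, but the packet fields (`latency_ms`, `error`) are illustrative assumptions, not the real schema:

```python
from statistics import mean

def build_daily_data_packet(traces: list[dict]) -> dict:
    """All numbers are computed here, in Python, before any LLM call."""
    latencies = [t["latency_ms"] for t in traces]
    return {
        "n_traces": len(traces),
        "mean_latency_ms": round(mean(latencies), 1) if latencies else None,
        "error_rate": (
            sum(1 for t in traces if t.get("error")) / len(traces)
            if traces else None
        ),
    }

def call_claude(system_prompt: str, packet: dict) -> str:
    # Placeholder: the real engine sends `packet` as structured JSON and asks
    # the model to interpret the precomputed stats, never to aggregate them.
    raise NotImplementedError
```

The key property is that every number the LLM sees already exists in the packet; the model's job is interpretation only.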
```bash
# Install deps
pip install anthropic requests pyyaml

# Configure
cp scripts/config/analyst.yaml.example config/analyst.yaml
# Edit config/analyst.yaml — set Langfuse URL, Notion IDs, model choices

# Dry run (local Ollama, no Notion write)
python3 scripts/src/engine.py --mode daily --dry-run

# Print data packet + prompt to stdout (for agent consumption, no API calls)
python3 scripts/src/engine.py --mode daily --data-only

# Live run
python3 scripts/src/engine.py --mode daily
python3 scripts/src/engine.py --mode weekly
python3 scripts/src/engine.py --mode monthly
```
```
ANTHROPIC_API_KEY=sk-ant-...             # Anthropic API key
NOTION_API_KEY=secret_...                # Notion integration token
LANGFUSE_BASE_URL=http://localhost:3100  # Langfuse server URL
LANGFUSE_PUBLIC_KEY=pk-lf-...            # Langfuse public key
LANGFUSE_SECRET_KEY=sk-lf-...            # Langfuse secret key
NOTION_ROOT_PAGE_ID=<uuid>               # Root Notion page for reports
NOTION_DAILY_DB_ID=<uuid>                # Notion database for daily entries
```

Or configure in config/analyst.yaml.
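Since each setting can come from the environment or from config/analyst.yaml, a resolution order has to be picked. A minimal sketch, assuming the common convention that environment variables win over file values (`load_setting` is a hypothetical helper, not part of the package):

```python
import os

def load_setting(key: str, yaml_config: dict, default=None):
    """Resolve a setting: env var first, then YAML value, then default.

    Assumed precedence: 1. environment variable  2. config/analyst.yaml
    3. caller-supplied default. YAML keys are assumed lowercase.
    """
    return os.environ.get(key, yaml_config.get(key.lower(), default))
```

For example, `load_setting("LANGFUSE_BASE_URL", cfg)` would return the env var if set, otherwise `cfg["langfuse_base_url"]`.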
- Stats before LLM — Python computes all numbers. The LLM interprets, doesn't aggregate.
- Citation-enforcing prompts — system prompts require every claim to cite a specific number.
- No hallucinated trends — fewer than 7 data points → report "insufficient data (n=X)".
- Dry-run mode — uses local Ollama (free) to preview output; skips the Notion write.
- Data-only mode — outputs the full data packet + prompts for agent/subagent use.
```xml
<!-- ~/Library/LaunchAgents/com.yourname.insight-engine-daily.plist -->
<key>StartCalendarInterval</key>
<dict>
  <key>Hour</key><integer>23</integer>
  <key>Minute</key><integer>0</integer>
</dict>
<key>ProgramArguments</key>
<array>
  <string>/usr/bin/python3</string>
  <string>/path/to/insight-engine/scripts/src/engine.py</string>
  <string>--mode</string><string>daily</string>
</array>
```
Add a collector in scripts/src/collectors/:
1. Create my_source.py with a fetch_*() function returning a plain dict.
2. Import and call it in build_daily_data_packet() in engine.py.
3. Reference the new key in prompts/daily_analyst.md under "Data sources".
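A collector following those steps might look like this. The `/metrics` endpoint and the `events` schema are invented for illustration; only the "fetch_*() returning a plain dict" contract comes from the text:

```python
# scripts/src/collectors/my_source.py — hypothetical example collector
import json
import urllib.request

def summarize_events(raw: dict) -> dict:
    """Reduce raw API output to the precomputed stats the packet expects."""
    events = raw.get("events", [])
    return {
        "n_events": len(events),
        "n_failed": sum(1 for e in events if e.get("status") == "failed"),
    }

def fetch_my_source(base_url: str) -> dict:
    """Fetch from a hypothetical /metrics endpoint and return a plain dict."""
    with urllib.request.urlopen(f"{base_url}/metrics") as resp:
        return summarize_events(json.load(resp))
```

Keeping the summarization separate from the fetch makes the stats logic testable without a live endpoint, in line with the "stats before LLM" rule.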
- references/architecture.md — full design rationale and layer descriptions
- scripts/prompts/daily_analyst.md — system prompt with citation rules
- scripts/config/analyst.yaml.example — config template