Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Add causal reasoning to agent actions. Trigger on ANY high-level action with observable outcomes - emails, messages, calendar changes, file operations, API calls, notifications, reminders, purchases, deployments. Use for planning interventions, debugging failures, predicting outcomes, backfilling historical data for analysis, or answering "what happens if I do X?" Also trigger when reviewing past actions to understand what worked/failed and why.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
A lightweight causal layer for predicting action outcomes, not by pattern-matching correlations, but by modeling interventions and counterfactuals.
Every action must be representable as an explicit intervention on a causal model, with predicted effects + uncertainty + a falsifiable audit trail. Plans must be causally valid, not just plausible.
Trigger this skill on ANY high-level action, including but not limited to:

| Domain | Actions to log |
|---|---|
| Communication | Send email, send message, reply, follow-up, notification, mention |
| Calendar | Create/move/cancel meeting, set reminder, RSVP |
| Tasks | Create/complete/defer task, set priority, assign |
| Files | Create/edit/share document, commit code, deploy |
| Social | Post, react, comment, share, DM |
| Purchases | Order, subscribe, cancel, refund |
| System | Config change, permission grant, integration setup |

Also trigger when:
- Reviewing outcomes → "Did that email get a reply?" → log outcome, update estimates
- Debugging failures → "Why didn't this work?" → trace causal graph
- Backfilling history → "Analyze my past emails/calendar" → parse logs, reconstruct actions
- Planning → "Should I send now or later?" → query causal model
Don't start from zero. Parse existing logs to reconstruct past actions + outcomes.
```bash
# Extract sent emails with reply status
gog gmail list --sent --after 2024-01-01 --format json > /tmp/sent_emails.json

# For each sent email, check if reply exists
python3 scripts/backfill_email.py /tmp/sent_emails.json
```
```bash
# Extract past events with attendance
gog calendar list --after 2024-01-01 --format json > /tmp/events.json

# Reconstruct: did meeting happen? was it moved? attendee count?
python3 scripts/backfill_calendar.py /tmp/events.json
```
```bash
# Parse message history for send/reply patterns
wacli search --after 2024-01-01 --from me --format json > /tmp/wa_sent.json
python3 scripts/backfill_messages.py /tmp/wa_sent.json
```
```python
# For any historical data source:
for record in historical_data:
    action_event = {
        "action": infer_action_type(record),
        "context": extract_context(record),
        "time": record["timestamp"],
        "pre_state": reconstruct_pre_state(record),
        "post_state": extract_post_state(record),
        "outcome": determine_outcome(record),
        "backfilled": True  # Mark as reconstructed
    }
    append_to_log(action_event)
```
Every executed action emits a structured event:

```json
{
  "action": "send_followup",
  "domain": "email",
  "context": {"recipient_type": "warm_lead", "prior_touches": 2},
  "time": "2025-01-26T10:00:00Z",
  "pre_state": {"days_since_last_contact": 7},
  "post_state": {"reply_received": true, "reply_delay_hours": 4},
  "outcome": "positive_reply",
  "outcome_observed_at": "2025-01-26T14:00:00Z",
  "backfilled": false
}
```

Store in `memory/causal/action_log.jsonl`.
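A minimal sketch of how an agent might append such an event to the log. The `log_action` helper and its defaults are illustrative, not part of the package; only the field names come from the schema above.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("memory/causal/action_log.jsonl")

def log_action(action, domain, context, pre_state, backfilled=False):
    """Append a structured action event; outcome fields are filled in later."""
    event = {
        "action": action,
        "domain": domain,
        "context": context,
        "time": datetime.now(timezone.utc).isoformat(),
        "pre_state": pre_state,
        "post_state": None,          # updated once the outcome is observed
        "outcome": None,
        "outcome_observed_at": None,
        "backfilled": backfilled,
    }
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return event
```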
Start with 10-30 observable variables per domain.

Email domain:
- send_time → reply_prob
- subject_style → open_rate
- recipient_type → reply_prob
- followup_count → reply_prob (diminishing)
- time_since_last → reply_prob

Calendar domain:
- meeting_time → attendance_rate
- attendee_count → slip_risk
- conflict_degree → reschedule_prob
- buffer_time → focus_quality

Messaging domain:
- response_delay → conversation_continuation
- message_length → response_length
- time_of_day → response_prob
- platform → response_delay

Task domain:
- due_date_proximity → completion_prob
- priority_level → completion_speed
- task_size → deferral_risk
- context_switches → error_rate

Store graph definitions in `memory/causal/graphs/`.
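The package does not prescribe a serialization format for graph definitions, so here is one possible sketch: the email-domain edges above stored as a JSON edge list. The file name `email.json` and the dict layout are assumptions.

```python
import json
from pathlib import Path

# Email-domain causal edges from the list above, as a simple edge list.
email_graph = {
    "domain": "email",
    "edges": [
        {"cause": "send_time",       "effect": "reply_prob"},
        {"cause": "subject_style",   "effect": "open_rate"},
        {"cause": "recipient_type",  "effect": "reply_prob"},
        {"cause": "followup_count",  "effect": "reply_prob", "note": "diminishing"},
        {"cause": "time_since_last", "effect": "reply_prob"},
    ],
}

graph_path = Path("memory/causal/graphs/email.json")
graph_path.parent.mkdir(parents=True, exist_ok=True)
graph_path.write_text(json.dumps(email_graph, indent=2))
```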
For each "knob" (intervention variable), estimate treatment effects:

```
# Pseudo: effect of morning vs evening sends
effect = mean(reply_prob | send_time=morning) - mean(reply_prob | send_time=evening)
uncertainty = std_error(effect)
```

Use simple regression or propensity matching first. Graduate to do-calculus when graphs are explicit and identification is needed.
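A concrete version of that difference-in-means estimate, computed over the action log. It assumes the treatment variable (e.g. send_time bucketed into morning/evening) is recorded in each event's `context` and the outcome in `post_state`; the `estimate_effect` helper itself is illustrative.

```python
import json
import math
from pathlib import Path

def estimate_effect(log_path, treatment, value_a, value_b, outcome):
    """Mean outcome under value_a minus under value_b, with a rough standard error."""
    groups = {value_a: [], value_b: []}
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        t = (event.get("context") or {}).get(treatment)
        y = (event.get("post_state") or {}).get(outcome)
        if t in groups and y is not None:
            groups[t].append(float(y))  # booleans become 0.0 / 1.0
    a, b = groups[value_a], groups[value_b]
    if not a or not b:
        return None                      # not enough data to compare
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / max(len(xs) - 1, 1)
    effect = mean(a) - mean(b)
    std_error = math.sqrt(var(a) / len(a) + var(b) / len(b))
    return {"effect": effect, "std_error": std_error, "n": (len(a), len(b))}

# e.g. estimate_effect("memory/causal/action_log.jsonl",
#                      "send_time", "morning", "evening", "reply_received")
```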
Before executing actions:
1. Identify intervention variable(s)
2. Query causal model for expected outcome distribution
3. Compute expected utility + uncertainty bounds
4. If uncertainty > threshold OR expected harm > threshold → refuse or escalate to user
5. Log prediction for later validation
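A minimal sketch of that gate, assuming the prediction is a dict with `expected_utility`, `uncertainty`, and `expected_harm`, and the thresholds come from config.yaml. The `max_expected_harm` key is an assumed extension; the quickstart config only defines `max_uncertainty` and `min_expected_utility`.

```python
def decide(prediction, config):
    """Return 'execute', 'skip', 'refuse', or 'escalate' for a predicted action."""
    t = config["thresholds"]
    if prediction["uncertainty"] > t["max_uncertainty"]:
        return "escalate"                                  # too uncertain: ask the user
    if prediction.get("expected_harm", 0.0) > t.get("max_expected_harm", 0.0):
        return "refuse"                                    # predicted harm exceeds tolerance
    if prediction["expected_utility"] < t["min_expected_utility"]:
        return "skip"                                      # not worth acting on
    return "execute"
```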
BEFORE executing:
1. Log pre_state
2. If enough historical data: query model for expected outcome
3. If high uncertainty or risk: confirm with user

AFTER executing:
1. Log action + context + time
2. Set reminder to check outcome (if not immediate)

WHEN outcome observed:
1. Update action log with post_state + outcome
2. Re-estimate treatment effects if enough new data
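One way the "when outcome observed" step could look, as a sketch: attach the observed outcome to the most recent matching event. The `record_outcome` helper is illustrative, and rewriting the whole file is a simplification over an append-only design.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("memory/causal/action_log.jsonl")

def record_outcome(action, post_state, outcome):
    """Fill in post_state/outcome on the newest event for this action that lacks one."""
    events = [json.loads(l) for l in LOG_PATH.read_text().splitlines() if l.strip()]
    for event in reversed(events):                         # newest first
        if event["action"] == action and event.get("outcome") is None:
            event["post_state"] = post_state
            event["outcome"] = outcome
            event["outcome_observed_at"] = datetime.now(timezone.utc).isoformat()
            break
    LOG_PATH.write_text("\n".join(json.dumps(e) for e in events) + "\n")
```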
1. User request → identify candidate actions
2. For each action:
   a. Map to intervention(s) on causal graph
   b. Predict P(outcome | do(action))
   c. Estimate uncertainty
   d. Compute expected utility
3. Rank by expected utility, filter by safety
4. Execute best action, log prediction
5. Observe outcome, update model
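A sketch of steps 2-4 of that loop. `predict_outcome` stands in for whatever query the causal model exposes, and the threshold filter mirrors the safety gate above; both the function names and the prediction shape are assumptions.

```python
def plan(candidates, predict_outcome, config):
    """Rank candidate actions by expected utility under do(action), filtering unsafe ones."""
    t = config["thresholds"]
    scored = []
    for action in candidates:
        pred = predict_outcome(action)   # assumed: {'expected_utility': ..., 'uncertainty': ...}
        if pred["uncertainty"] > t["max_uncertainty"]:
            continue                     # safety filter: too uncertain to act on
        if pred["expected_utility"] < t["min_expected_utility"]:
            continue                     # not worth executing
        scored.append((pred["expected_utility"], action, pred))
    if not scored:
        return None                      # nothing clears the bar; fall back to asking the user
    scored.sort(key=lambda s: s[0], reverse=True)
    _, best_action, best_pred = scored[0]
    return {"action": best_action, "prediction": best_pred}  # execute, then log the prediction
```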
1. Identify failed outcome
2. Trace back through causal graph
3. For each upstream node:
   a. Was the value as expected?
   b. Did the causal link hold?
4. Identify broken link(s)
5. Compute minimal intervention set that would have prevented failure
6. Log counterfactual for learning
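A sketch of one level of that traceback: check the direct parents of the failed node against expected values, using the edge-list graph layout from the earlier sketch. Recursing further upstream and computing the minimal intervention set are left out; the tolerance and function name are illustrative.

```python
def trace_failure(graph, failed_node, observed, expected, tolerance=0.2):
    """Return direct upstream nodes whose observed values diverged from expectations."""
    broken = []
    for edge in graph["edges"]:
        if edge["effect"] != failed_node:
            continue                               # only walk direct parents of the failure
        cause = edge["cause"]
        obs, exp = observed.get(cause), expected.get(cause)
        if obs is None or exp is None:
            continue                               # no data recorded for this node
        if abs(obs - exp) > tolerance:             # the upstream value was off
            broken.append({"node": cause, "observed": obs, "expected": exp})
    return broken                                  # candidate broken links to log as counterfactuals
```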
```bash
# 1. Create the infrastructure
mkdir -p memory/causal/graphs memory/causal/estimates

# 2. Initialize config
cat > memory/causal/config.yaml << 'EOF'
domains:
  - email
  - calendar
  - messaging
  - tasks
thresholds:
  max_uncertainty: 0.3
  min_expected_utility: 0.1
protected_actions:
  - delete_email
  - cancel_meeting
  - send_to_new_contact
  - financial_transaction
EOF

# 3. Backfill one domain (start with email)
python3 scripts/backfill_email.py

# 4. Estimate initial effects
python3 scripts/estimate_effect.py --treatment send_time --outcome reply_received --values morning,evening
```
Define "protected actions" that require explicit user approval:

```yaml
protected_actions:
  - delete_email
  - cancel_meeting
  - send_to_new_contact
  - financial_transaction
thresholds:
  max_uncertainty: 0.3      # don't act if P(outcome) uncertainty > 30%
  min_expected_utility: 0.1 # don't act if expected gain < 10%
```
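A minimal sketch of enforcing that list by reading the quickstart config.yaml. It assumes PyYAML is available; the `requires_approval` helper is illustrative.

```python
import yaml  # PyYAML, assumed available

def requires_approval(action, config_path="memory/causal/config.yaml"):
    """True when the action is listed under protected_actions in the safety config."""
    with open(config_path) as f:
        config = yaml.safe_load(f)
    return action in (config.get("protected_actions") or [])

# With the quickstart config: requires_approval("cancel_meeting") -> True
```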
- `memory/causal/action_log.jsonl` → all logged actions with outcomes
- `memory/causal/graphs/` → domain-specific causal graph definitions
- `memory/causal/estimates/` → learned treatment effects
- `memory/causal/config.yaml` → safety thresholds and protected actions
- See references/do-calculus.md for formal intervention semantics
- See references/estimation.md for treatment effect estimation methods
Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.