โ† All skills
Tencent SkillHub ยท Productivity

Causal Inference

Add causal reasoning to agent actions. Trigger on ANY high-level action with observable outcomes - emails, messages, calendar changes, file operations, API calls, notifications, reminders, purchases, deployments. Use for planning interventions, debugging failures, predicting outcomes, backfilling historical data for analysis, or answering "what happens if I do X?" Also trigger when reviewing past actions to understand what worked/failed and why.


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: SKILL.md, references/do-calculus.md, references/estimation.md, scripts/backfill_calendar.py, scripts/backfill_email.py, scripts/backfill_messages.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 0.2.0

Documentation

Primary doc: SKILL.md (19 sections)

Causal Inference

A lightweight causal layer for predicting action outcomes, not by pattern-matching correlations, but by modeling interventions and counterfactuals.

Core Invariant

Every action must be representable as an explicit intervention on a causal model, with predicted effects, uncertainty, and a falsifiable audit trail. Plans must be causally valid, not just plausible.

When to Trigger

Trigger this skill on ANY high-level action, including but not limited to:

| Domain | Actions to log |
| --- | --- |
| Communication | Send email, send message, reply, follow-up, notification, mention |
| Calendar | Create/move/cancel meeting, set reminder, RSVP |
| Tasks | Create/complete/defer task, set priority, assign |
| Files | Create/edit/share document, commit code, deploy |
| Social | Post, react, comment, share, DM |
| Purchases | Order, subscribe, cancel, refund |
| System | Config change, permission grant, integration setup |

Also trigger when:
  • Reviewing outcomes — "Did that email get a reply?" → log outcome, update estimates
  • Debugging failures — "Why didn't this work?" → trace causal graph
  • Backfilling history — "Analyze my past emails/calendar" → parse logs, reconstruct actions
  • Planning — "Should I send now or later?" → query causal model

Backfill: Bootstrap from Historical Data

Don't start from zero. Parse existing logs to reconstruct past actions + outcomes.

Email Backfill

```shell
# Extract sent emails with reply status
gog gmail list --sent --after 2024-01-01 --format json > /tmp/sent_emails.json

# For each sent email, check if reply exists
python3 scripts/backfill_email.py /tmp/sent_emails.json
```

Calendar Backfill

```shell
# Extract past events with attendance
gog calendar list --after 2024-01-01 --format json > /tmp/events.json

# Reconstruct: did meeting happen? was it moved? attendee count?
python3 scripts/backfill_calendar.py /tmp/events.json
```

Message Backfill (WhatsApp/Discord/Slack)

```shell
# Parse message history for send/reply patterns
wacli search --after 2024-01-01 --from me --format json > /tmp/wa_sent.json
python3 scripts/backfill_messages.py /tmp/wa_sent.json
```

Generic Backfill Pattern

```python
# For any historical data source:
for record in historical_data:
    action_event = {
        "action": infer_action_type(record),
        "context": extract_context(record),
        "time": record["timestamp"],
        "pre_state": reconstruct_pre_state(record),
        "post_state": extract_post_state(record),
        "outcome": determine_outcome(record),
        "backfilled": True,  # Mark as reconstructed
    }
    append_to_log(action_event)
```

A. Action Log (required)

Every executed action emits a structured event:

```json
{
  "action": "send_followup",
  "domain": "email",
  "context": {"recipient_type": "warm_lead", "prior_touches": 2},
  "time": "2025-01-26T10:00:00Z",
  "pre_state": {"days_since_last_contact": 7},
  "post_state": {"reply_received": true, "reply_delay_hours": 4},
  "outcome": "positive_reply",
  "outcome_observed_at": "2025-01-26T14:00:00Z",
  "backfilled": false
}
```

Store in memory/causal/action_log.jsonl.
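A minimal sketch of the logging helpers, using only the standard library. The log path comes from this skill's layout; the function names `append_to_log` and `load_log` echo the generic backfill pattern but are not fixed by the package:

```python
import json
from pathlib import Path

LOG_PATH = Path("memory/causal/action_log.jsonl")

def append_to_log(event: dict) -> None:
    """Append one action event as a single JSON line."""
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(event) + "\n")

def load_log() -> list[dict]:
    """Read the full action log back into memory."""
    if not LOG_PATH.exists():
        return []
    with LOG_PATH.open() as f:
        return [json.loads(line) for line in f if line.strip()]
```

Append-only JSON Lines keeps writes cheap and makes backfilled records (`"backfilled": true`) trivially distinguishable from live ones when re-estimating effects.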

B. Causal Graphs (per domain)

Start with 10-30 observable variables per domain.

Email domain:
  • send_time → reply_prob
  • subject_style → open_rate
  • recipient_type → reply_prob
  • followup_count → reply_prob (diminishing)
  • time_since_last → reply_prob

Calendar domain:
  • meeting_time → attendance_rate
  • attendee_count → slip_risk
  • conflict_degree → reschedule_prob
  • buffer_time → focus_quality

Messaging domain:
  • response_delay → conversation_continuation
  • message_length → response_length
  • time_of_day → response_prob
  • platform → response_delay

Task domain:
  • due_date_proximity → completion_prob
  • priority_level → completion_speed
  • task_size → deferral_risk
  • context_switches → error_rate

Store graph definitions in memory/causal/graphs/.
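One minimal way to encode these edges is a parent-to-children adjacency dict. The variable names below come from the email domain above; the dict layout itself is an assumption, not a format the package mandates:

```python
# Edges as parent -> children; mirrors the email-domain graph above.
EMAIL_GRAPH: dict[str, list[str]] = {
    "send_time": ["reply_prob"],
    "subject_style": ["open_rate"],
    "recipient_type": ["reply_prob"],
    "followup_count": ["reply_prob"],
    "time_since_last": ["reply_prob"],
}

def parents(graph: dict[str, list[str]], node: str) -> list[str]:
    """Return the direct causes of `node` in the graph."""
    return [p for p, children in graph.items() if node in children]

print(parents(EMAIL_GRAPH, "reply_prob"))
# -> ['send_time', 'recipient_type', 'followup_count', 'time_since_last']
```

The `parents` lookup is what the estimation step needs: the direct causes of an outcome are the adjustment candidates when estimating a knob's effect.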

C. Estimation

For each "knob" (intervention variable), estimate treatment effects:

```
# Pseudo: effect of morning vs evening sends
effect = mean(reply_prob | send_time=morning) - mean(reply_prob | send_time=evening)
uncertainty = std_error(effect)
```

Use simple regression or propensity matching first. Graduate to do-calculus when graphs are explicit and identification is needed.
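A dependency-free sketch of that difference-in-means estimate. The field names (`send_time`, `reply_received`) follow the action-log format; the toy events and the naive pooled standard error are illustrative:

```python
from statistics import mean, stdev

def effect_estimate(events: list[dict], treatment: str,
                    a: str, b: str, outcome: str) -> tuple[float, float]:
    """Difference in mean outcome between treatment values a and b,
    plus a naive standard error of that difference."""
    ya = [e[outcome] for e in events if e.get(treatment) == a]
    yb = [e[outcome] for e in events if e.get(treatment) == b]
    effect = mean(ya) - mean(yb)
    # SE of a difference in means: sqrt(var_a/n_a + var_b/n_b)
    se = (stdev(ya) ** 2 / len(ya) + stdev(yb) ** 2 / len(yb)) ** 0.5
    return effect, se

events = [
    {"send_time": "morning", "reply_received": 1},
    {"send_time": "morning", "reply_received": 1},
    {"send_time": "morning", "reply_received": 0},
    {"send_time": "evening", "reply_received": 1},
    {"send_time": "evening", "reply_received": 0},
    {"send_time": "evening", "reply_received": 0},
]
effect, se = effect_estimate(events, "send_time",
                             "morning", "evening", "reply_received")
# effect = 1/3: morning replies 2/3 of the time vs 1/3 for evening
```

This naive comparison is only unbiased when treatment assignment is as-good-as-random; with confounding, graduate to the adjustment methods in references/estimation.md.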

D. Decision Policy

Before executing actions:
  1. Identify intervention variable(s)
  2. Query causal model for expected outcome distribution
  3. Compute expected utility + uncertainty bounds
  4. If uncertainty > threshold OR expected harm > threshold → refuse or escalate to user
  5. Log prediction for later validation

On Every Action

BEFORE executing:
  1. Log pre_state
  2. If enough historical data: query model for expected outcome
  3. If high uncertainty or risk: confirm with user

AFTER executing:
  1. Log action + context + time
  2. Set reminder to check outcome (if not immediate)

WHEN outcome observed:
  1. Update action log with post_state + outcome
  2. Re-estimate treatment effects if enough new data

Planning an Action

  1. User request → identify candidate actions
  2. For each action:
     a. Map to intervention(s) on causal graph
     b. Predict P(outcome | do(action))
     c. Estimate uncertainty
     d. Compute expected utility
  3. Rank by expected utility, filter by safety
  4. Execute best action, log prediction
  5. Observe outcome, update model
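Steps 3-4 amount to a rank-and-filter over causally scored candidates. A sketch, where the candidate dicts and the 0.3 uncertainty cutoff (the config.yaml default) are illustrative:

```python
MAX_UNCERTAINTY = 0.3  # safety filter, per config.yaml defaults

def rank_actions(candidates: list[dict]) -> list[dict]:
    """Drop unsafe candidates, then sort best-first by expected utility."""
    safe = [c for c in candidates if c["uncertainty"] <= MAX_UNCERTAINTY]
    return sorted(safe, key=lambda c: c["expected_utility"], reverse=True)

candidates = [
    {"action": "send_now",   "expected_utility": 0.2, "uncertainty": 0.1},
    {"action": "send_later", "expected_utility": 0.5, "uncertainty": 0.2},
    {"action": "cold_blast", "expected_utility": 0.9, "uncertainty": 0.6},
]
best = rank_actions(candidates)[0]["action"]  # -> 'send_later'
```

Note that `cold_blast` is filtered out despite the highest expected utility: safety thresholds apply before ranking, not after.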

Debugging a Failure

  1. Identify failed outcome
  2. Trace back through causal graph
  3. For each upstream node:
     a. Was the value as expected?
     b. Did the causal link hold?
  4. Identify broken link(s)
  5. Compute minimal intervention set that would have prevented failure
  6. Log counterfactual for learning
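Step 2 can be sketched as an ancestor walk over the same parent-to-children adjacency encoding used for the graphs; the `deal_closed` node and the example graph are illustrative, and the walk assumes an acyclic graph:

```python
def ancestors(graph: dict[str, list[str]], node: str) -> set[str]:
    """All upstream nodes that can causally influence `node`.
    Assumes the graph is acyclic."""
    direct = {p for p, children in graph.items() if node in children}
    found = set(direct)
    for p in direct:
        found |= ancestors(graph, p)
    return found

GRAPH = {
    "send_time": ["reply_prob"],
    "recipient_type": ["reply_prob"],
    "reply_prob": ["deal_closed"],
}
print(sorted(ancestors(GRAPH, "deal_closed")))
# -> ['recipient_type', 'reply_prob', 'send_time']
```

The resulting ancestor set is the checklist for step 3: each node in it gets the "was the value as expected, did the link hold?" interrogation.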

Quick Start: Bootstrap Today

```shell
# 1. Create the infrastructure
mkdir -p memory/causal/graphs memory/causal/estimates

# 2. Initialize config
cat > memory/causal/config.yaml << 'EOF'
domains:
  - email
  - calendar
  - messaging
  - tasks
thresholds:
  max_uncertainty: 0.3
  min_expected_utility: 0.1
protected_actions:
  - delete_email
  - cancel_meeting
  - send_to_new_contact
  - financial_transaction
EOF

# 3. Backfill one domain (start with email)
python3 scripts/backfill_email.py

# 4. Estimate initial effects
python3 scripts/estimate_effect.py --treatment send_time --outcome reply_received --values morning,evening
```

Safety Constraints

Define "protected variables" that require explicit user approval:

```yaml
protected:
  - delete_email
  - cancel_meeting
  - send_to_new_contact
  - financial_transaction
thresholds:
  max_uncertainty: 0.3      # don't act if P(outcome) uncertainty > 30%
  min_expected_utility: 0.1 # don't act if expected gain < 10%
```

Files

  • memory/causal/action_log.jsonl — all logged actions with outcomes
  • memory/causal/graphs/ — domain-specific causal graph definitions
  • memory/causal/estimates/ — learned treatment effects
  • memory/causal/config.yaml — safety thresholds and protected variables

References

  • See references/do-calculus.md for formal intervention semantics.
  • See references/estimation.md for treatment effect estimation methods.

Category context

Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 docs · 3 scripts
  • SKILL.md Primary doc
  • references/do-calculus.md Docs
  • references/estimation.md Docs
  • scripts/backfill_calendar.py Scripts
  • scripts/backfill_email.py Scripts
  • scripts/backfill_messages.py Scripts