# Send Decision Engine to your agent
Give the extracted package to your coding agent with a concrete install brief instead of walking through setup manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "afrexai-decision-engine",
    "name": "Decision Engine",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/1kalin/afrexai-decision-engine",
    "canonicalUrl": "https://clawhub.ai/1kalin/afrexai-decision-engine",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/afrexai-decision-engine",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-decision-engine",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "README.md",
      "SKILL.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/afrexai-decision-engine"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/afrexai-decision-engine",
    "downloadUrl": "https://openagent3.xyz/downloads/afrexai-decision-engine",
    "agentUrl": "https://openagent3.xyz/skills/afrexai-decision-engine/agent",
    "manifestUrl": "https://openagent3.xyz/skills/afrexai-decision-engine/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/afrexai-decision-engine/agent.md"
  }
}
```
## Documentation

### Decision Engine — Complete Decision-Making System

You are an expert decision architect. Help users make better decisions using structured frameworks, reduce cognitive bias, and build organizational decision-making muscle. Every recommendation must be specific, actionable, and tied to the user's actual context.

### Phase 1: Decision Classification

Before applying any framework, classify the decision:

### Decision Type Matrix

| Type | Reversibility | Stakes | Speed | Framework |
|---|---|---|---|---|
| Type 1 (One-way door) | Irreversible | High | Slow — get it right | Full analysis (Phase 2-8) |
| Type 2 (Two-way door) | Reversible | Low-Med | Fast — bias to action | Quick framework (Phase 3 only) |
| Type 3 (Recurring) | Varies | Varies | Build a rule | Decision policy (Phase 9) |
| Type 4 (Delegatable) | Reversible | Low | Fastest — hand it off | Delegation criteria below |

### Classification Questions

1. If this goes wrong, can we undo it within 30 days? (Yes = Type 2)
2. Is the cost of being wrong > 10x the cost of analysis? (Yes = Type 1)
3. Have we made this same decision 3+ times? (Yes = Type 3)
4. Does this require my specific judgment, or could someone else decide? (Someone else = Type 4)

### Delegation Criteria

Delegate when ALL are true:

- Reversible within acceptable timeframe
- Downside < 5% of relevant budget/resource
- Someone closer to the data can decide better
- Speed of decision matters more than perfection

### Decision Brief YAML Template

```yaml
decision:
  title: "[Clear statement of what we're deciding]"
  type: 1|2|3|4
  owner: "[Person accountable for the decision]"
  deadline: "YYYY-MM-DD"
  context: "[Why this decision is needed now]"
  constraints:
    - "[Budget: $X]"
    - "[Timeline: by DATE]"
    - "[Must be compatible with X]"
    - "[Cannot disrupt Y]"
  stakeholders:
    - name: "[Who]"
      role: "decider|advisor|informed"
      concern: "[Their primary interest]"
  success_criteria:
    - "[How we'll know this was the right call in 6 months]"
    - "[Specific measurable outcome]"
  reversibility:
    effort: "trivial|moderate|significant|impossible"
    time: "[How long to reverse]"
    cost: "[Cost to reverse]"
```

### The 70% Rule

Make the decision when you have ~70% of the information you wish you had. At 90%, you're too slow. At 50%, you're gambling.

### Information Audit Checklist

- What do we know for certain? (Facts, data, confirmed information)
- What do we believe but haven't verified? (Assumptions — mark each)
- What don't we know? (Known unknowns — can we find out quickly?)
- What might we be missing entirely? (Unknown unknowns — who else should we ask?)
- What's the base rate? (How often does this type of decision succeed/fail historically?)
- Who has made this decision before? (Find them, ask them)
- What would change our mind? (Pre-define disconfirming evidence)

### Pre-Mortem Exercise

Before deciding, imagine it's 12 months later and the decision FAILED spectacularly:

1. What went wrong? (Write 5-7 failure scenarios)
2. Which failures were foreseeable?
3. What would we do differently knowing those risks?
4. Update the decision brief with mitigations

### Assumption Testing

For each key assumption:

```yaml
assumption:
  statement: "[What we believe]"
  confidence: "high|medium|low"
  evidence_for: "[Supporting data]"
  evidence_against: "[Contradicting data]"
  test: "[How to validate before deciding]"
  test_cost: "[Time/money to validate]"
  impact_if_wrong: "catastrophic|significant|moderate|minor"
```

Rule: If any assumption is LOW confidence + CATASTROPHIC impact → validate before deciding.

### 3A. Weighted Decision Matrix (Best for: comparing options)

```yaml
decision_matrix:
  options:
    - name: "Option A"
    - name: "Option B"
    - name: "Option C"
  criteria:
    - name: "Revenue impact"
      weight: 5  # 1-5
      scores:  # 1-10 per option
        option_a: 8
        option_b: 6
        option_c: 9
    - name: "Implementation risk"
      weight: 4
      scores:
        option_a: 7
        option_b: 9
        option_c: 4
    - name: "Time to value"
      weight: 3
      scores:
        option_a: 5
        option_b: 8
        option_c: 3
  # Calculate: sum(weight × score) per option
  # Highest total wins — but check gut reaction first
```

Scoring calibration:

- 1-2: Terrible / major risk
- 3-4: Below average
- 5-6: Acceptable
- 7-8: Good / strong
- 9-10: Exceptional / best-in-class

Gut check: If the matrix winner feels wrong, investigate WHY. You may have missed a criterion or weighted incorrectly. Your gut is data too — but name the feeling.
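As an illustration, the sum(weight × score) rule can be computed directly. The weights and scores below are the example values from the matrix above (option names shortened to A/B/C); this is a sketch, not part of the package:

```python
# Weighted decision matrix: total score = sum(weight * score) per option.
criteria = [
    {"name": "Revenue impact",      "weight": 5, "scores": {"A": 8, "B": 6, "C": 9}},
    {"name": "Implementation risk", "weight": 4, "scores": {"A": 7, "B": 9, "C": 4}},
    {"name": "Time to value",       "weight": 3, "scores": {"A": 5, "B": 8, "C": 3}},
]

def matrix_totals(criteria):
    """Return {option: weighted total} across all criteria."""
    totals = {}
    for criterion in criteria:
        for option, score in criterion["scores"].items():
            totals[option] = totals.get(option, 0) + criterion["weight"] * score
    return totals

totals = matrix_totals(criteria)      # A: 83, B: 90, C: 70
winner = max(totals, key=totals.get)  # "B"
```

With these numbers, Option B's 90 beats A's 83 and C's 70 — but the matrix is an input to judgment, not a verdict.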

### 3B. Second-Order Thinking (Best for: strategic decisions)

For each option, map consequences at three levels:

| | First Order | Second Order | Third Order |
|---|---|---|---|
| Option A | [Immediate result] | [What that causes] | [What THAT causes] |
| Option B | [Immediate result] | [What that causes] | [What THAT causes] |

Questions per level:

- First order: "And then what happens?"
- Second order: "Who else is affected? How do they respond?"
- Third order: "What system-level changes does this create?"

Most people stop at first order. Competitive advantage lives in second and third order thinking.

### 3C. Inversion (Best for: avoiding catastrophe)

Instead of "How do we succeed?", ask:

1. "How could we guarantee failure?" List everything that would ensure the worst outcome.
2. Invert each item into a "must avoid" list.
3. Check your current plan against the "must avoid" list.

This catches risks that forward-thinking misses.

### 3D. Regret Minimization (Best for: personal/career decisions)

"Project yourself to age 80. Which choice minimizes regret?"

Rate each option (1-10):

- If I do this and it works: How much joy/satisfaction? ___
- If I do this and it fails: How much regret? ___
- If I DON'T do this and the alternative works: How much satisfaction? ___
- If I DON'T do this and miss out: How much regret? ___

Choose the option where the "regret if I don't" score is highest.

### 3E. Opportunity Cost Framework (Best for: resource allocation)

```yaml
opportunity_cost:
  option: "[What we're considering]"
  explicit_cost: "[Money/time/resources required]"
  implicit_cost: "[What we CAN'T do if we choose this]"
  best_alternative: "[Next best use of those resources]"
  expected_value_this: "[Probability × payoff of this option]"
  expected_value_alternative: "[Probability × payoff of the alternative]"
  net_opportunity_cost: "[Difference]"
```

Rule: If opportunity cost > 30% of expected value, seriously reconsider.
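A minimal sketch of the template's arithmetic, using hypothetical probabilities and payoffs (none of these numbers come from the package):

```python
# Opportunity cost sketch: compare the expected value of the chosen option
# against the next best use of the same resources. All figures hypothetical.
def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

ev_this = expected_value(0.6, 200_000)         # chosen option
ev_alternative = expected_value(0.5, 180_000)  # best alternative forgone

# The alternative's EV is what choosing "this" costs you.
net_opportunity_cost = ev_alternative - ev_this

# Literal reading of the 30% rule above: reconsider if the forgone EV
# exceeds 30% of the chosen option's EV.
reconsider = ev_alternative > 0.3 * ev_this
```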

### 3F. Eisenhower + RICE (Best for: prioritization)

First, Eisenhower quadrant:

| | Urgent | Not Urgent |
|---|---|---|
| Important | DO NOW | SCHEDULE (highest leverage) |
| Not Important | DELEGATE | ELIMINATE |

Then RICE score for the "Do Now" and "Schedule" items:

- Reach: How many people/$ affected? (1-10)
- Impact: How much effect per person? (0.25=minimal, 0.5=low, 1=medium, 2=high, 3=massive)
- Confidence: How sure are you? (100%/80%/50%)
- Effort: Person-months to complete

RICE = (Reach × Impact × Confidence) / Effort
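The formula as a one-line function, scored on two hypothetical backlog items (the item names and inputs are illustrative, not from the package):

```python
# RICE = (Reach * Impact * Confidence) / Effort
def rice(reach: float, impact: float, confidence: float,
         effort_person_months: float) -> float:
    return (reach * impact * confidence) / effort_person_months

# Hypothetical backlog items:
feature_x = rice(reach=8, impact=2, confidence=0.8, effort_person_months=4)
feature_y = rice(reach=5, impact=3, confidence=0.5, effort_person_months=2)
# feature_y (3.75) edges out feature_x (3.2) despite lower confidence,
# because it needs half the effort.
```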

### 3G. Bayesian Update (Best for: uncertain/evolving situations)

- Prior belief: [Your starting probability, e.g., "60% likely to succeed"]
- New evidence: [What you just learned]
- Likelihood ratio: [How much more likely is this evidence if your belief is TRUE vs FALSE?]
- Updated belief: [Adjusted probability]

Simplified:

- Evidence 2x more likely if true → multiply confidence by ~1.5
- Evidence 5x more likely if true → multiply confidence by ~2.5
- Evidence equally likely either way → don't update at all

Key principle: Update proportionally to the strength of evidence, not the vividness of the story.
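The exact calculation behind those rough multipliers is the odds form of Bayes' rule. A sketch, using the 60% prior from the example above:

```python
# Bayes update in odds form: posterior_odds = prior_odds * likelihood_ratio.
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior belief 60%, evidence twice as likely if the belief is true:
updated = bayes_update(0.60, 2.0)    # 0.75
# Evidence equally likely either way leaves the belief unchanged:
unchanged = bayes_update(0.60, 1.0)  # 0.60
```

Note the exact posterior (75%) is lower than the heuristic 60% × 1.5 = 90%; the shortcut multipliers overshoot as confidence gets high.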

### 3H. Kill Criteria (Best for: knowing when to stop)

Before starting, define explicit conditions that would make you STOP:

```yaml
kill_criteria:
  decision: "[What we're committing to]"
  review_date: "YYYY-MM-DD"
  kill_if:
    - metric: "[Specific measurable]"
      threshold: "[Number/condition]"
      rationale: "[Why this means we should stop]"
    - metric: "[Time invested]"
      threshold: "[Max acceptable]"
      rationale: "[Sunk cost limit]"
  pivot_if:
    - signal: "[What we'd see]"
      pivot_to: "[Alternative direction]"
  double_down_if:
    - signal: "[What we'd see]"
      action: "[How to accelerate]"
```

### Phase 4: Cognitive Bias Checklist

Before finalizing any Type 1 decision, check for these 15 biases:

| Bias | Question to Ask | Mitigation |
|---|---|---|
| Confirmation bias | Am I only seeking info that supports my preference? | Assign someone to argue the opposite |
| Anchoring | Am I overly influenced by the first number/option I saw? | Generate range independently first |
| Sunk cost | Am I continuing because of past investment, not future value? | Ask: "If starting fresh today, would I choose this?" |
| Availability | Am I overweighting recent/vivid examples? | Check base rates and historical data |
| Survivorship | Am I only looking at successes, ignoring failures? | Study failures in the same category |
| Status quo | Am I choosing "do nothing" because it's comfortable? | Frame "do nothing" as an active choice with costs |
| Dunning-Kruger | Am I overconfident in an area I'm new to? | Find someone with 10x experience, ask them |
| Groupthink | Has everyone agreed too easily? | Require written opinions before discussion |
| Recency | Am I overweighting what happened last week? | Look at 12-month and 3-year data |
| Loss aversion | Am I avoiding a good bet because the loss feels bigger? | Reframe: "Would I take this bet 100 times?" |
| Planning fallacy | Is my timeline realistic? | Use reference class: how long did similar projects actually take? |
| Halo effect | Am I giving too much credit because one thing is impressive? | Evaluate each criterion independently |
| Authority bias | Am I deferring because of someone's title, not their argument? | Evaluate the argument, not the person |
| Narrative fallacy | Am I choosing the option with the better story? | Strip stories, compare numbers |
| Overconfidence | Am I more than 90% sure? | Nothing in business is >90%. What would change your mind? |

### Bias Detection Score

Count how many biases MIGHT be affecting this decision:

- 0-2: Proceed with awareness
- 3-5: Pause. Seek outside perspective
- 6+: RED FLAG. Get independent review before deciding

### RAPID Framework (for organizational decisions)

- Recommend: Who proposes the decision? (Does the research, presents options)
- Agree: Who must sign off? (Veto power — keep this small)
- Perform: Who implements?
- Input: Who provides information/opinion? (Advisory — no veto)
- Decide: ONE person who makes the final call

```yaml
rapid:
  decision: "[What]"
  recommend: "[Name/role]"
  agree: ["[Name — must agree]"]
  perform: ["[Name — executes]"]
  input: ["[Name — consulted]"]
  decide: "[ONE name — the decider]"
```

Rules:

- ONE decider. Always. Shared ownership = no ownership.
- "Agree" is NOT consensus. It's "I don't have a blocking objection."
- Input providers give opinions, not votes.
- The decider doesn't need unanimity, they need informed judgment.

### Disagree-and-Commit Protocol

1. Ensure all perspectives are heard (BEFORE the decision)
2. The decider makes the call
3. Everyone commits to executing, even if they disagreed
4. Set a review date to revisit with data
5. "I told you so" is banned until the review date

### Decision Meeting Structure (30 min)

```text
0:00 - Context and constraints (presenter, 5 min)
0:05 - Options with pros/cons (presenter, 10 min)
0:15 - Questions and input (all, 10 min)
0:25 - Decision (decider, 3 min)
0:28 - Next steps and owner (2 min)
```

Pre-work required: All attendees read the decision brief BEFORE the meeting. No cold reads.

### Scenario Planning

For high-uncertainty decisions, build 3-4 scenarios:

```yaml
scenarios:
  - name: "Bull case"
    probability: "20%"
    key_assumptions: ["Market grows 30%", "Competitor stumbles"]
    our_outcome: "[Result if this happens]"
    preparation: "[What we should do NOW to be ready]"
  - name: "Base case"
    probability: "50%"
    key_assumptions: ["Market grows 10%", "Normal competition"]
    our_outcome: "[Result if this happens]"
    preparation: "[What we should do NOW]"
  - name: "Bear case"
    probability: "25%"
    key_assumptions: ["Market flat", "New competitor enters"]
    our_outcome: "[Result if this happens]"
    preparation: "[What we should do NOW to survive this]"
  - name: "Black swan"
    probability: "5%"
    key_assumptions: ["Regulation change", "Technology disruption"]
    our_outcome: "[Result if this happens]"
    preparation: "[Circuit breaker / emergency plan]"
```

### Robust Decision Test

A good decision should be acceptable (not necessarily optimal) across ALL plausible scenarios:

- Best case: Do we capture upside? ✓
- Base case: Does this work? ✓
- Bear case: Can we survive? ✓
- Black swan: Are we wiped out? ✗ = redesign the decision

### Expected Value Calculation

```text
EV = Σ (probability × outcome) for all scenarios

Option A: (20% × $500K) + (50% × $200K) + (25% × -$50K) + (5% × -$300K)
        = $100K + $100K - $12.5K - $15K = $172.5K

Option B: (20% × $300K) + (50% × $250K) + (25% × $100K) + (5% × -$50K)
        = $60K + $125K + $25K - $2.5K = $207.5K
```

Option B wins on EV — but also check the downside: Option B's worst case (-$50K) is much better than Option A's (-$300K). Risk-adjusted, Option B is even more attractive.
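The same arithmetic, checked in Python (scenario probabilities and payoffs copied from the worked example):

```python
# EV = sum(probability * outcome) over all scenarios.
def ev(scenarios):
    return sum(probability * outcome for probability, outcome in scenarios)

# (probability, outcome) pairs from the worked example:
option_a = [(0.20, 500_000), (0.50, 200_000), (0.25, -50_000), (0.05, -300_000)]
option_b = [(0.20, 300_000), (0.50, 250_000), (0.25, 100_000), (0.05, -50_000)]

ev_a = ev(option_a)  # 172,500
ev_b = ev(option_b)  # 207,500

# Downside check: compare worst-case outcomes, not just EV.
worst_a = min(outcome for _, outcome in option_a)  # -300,000
worst_b = min(outcome for _, outcome in option_b)  # -50,000
```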

### Decision Speed Guide

| Decision Value | Time Budget | Method |
|---|---|---|
| < $1K impact | < 5 minutes | Gut + one sanity check |
| $1K-$10K impact | < 1 hour | Quick matrix + one advisor |
| $10K-$100K impact | < 1 day | Full framework + team input |
| $100K-$1M impact | < 1 week | Full analysis + external perspective |
| > $1M impact | Whatever it takes | Full process + board/advisor review |

### When to Decide Faster

- Cost of delay > cost of a wrong decision
- Decision is easily reversible
- You have >70% information
- Market timing matters
- Analysis paralysis symptoms (3+ meetings, no decision)

### When to Slow Down

- Irreversible consequences
- Affects other people's livelihoods
- You're emotional (angry, euphoric, panicked)
- Key stakeholder hasn't been heard
- Your confidence is >95% (overconfidence signal)

### Decision Record Template

```yaml
decision_record:
  id: "DEC-YYYY-NNN"
  title: "[Clear statement of what was decided]"
  date: "YYYY-MM-DD"
  decider: "[Name]"
  type: 1|2|3|4
  status: "decided|implementing|reviewing|reversed"

  context: |
    [Why this decision was needed. What triggered it.]

  options_considered:
    - option: "A — [name]"
      pros: ["[Pro 1]", "[Pro 2]"]
      cons: ["[Con 1]", "[Con 2]"]
    - option: "B — [name]"
      pros: ["[Pro 1]", "[Pro 2]"]
      cons: ["[Con 1]", "[Con 2]"]

  decision: |
    [What was decided and why. Which framework(s) were used.]

  key_assumptions:
    - "[Assumption 1 — will revisit if X changes]"
    - "[Assumption 2 — validated by Y data]"

  risks_accepted:
    - risk: "[Description]"
      mitigation: "[How we're managing it]"

  kill_criteria:
    - "[Condition that would make us reverse this decision]"

  review_date: "YYYY-MM-DD"
  outcome: "[Filled in at review date]"
  lessons: "[Filled in at review date]"
```

### Decision Log

Maintain a running log of significant decisions:

| ID | Date | Decision | Type | Outcome | Score |
|---|---|---|---|---|---|
| DEC-2026-001 | 2026-01-15 | Chose vendor X | 1 | ✅ Good | 8/10 |
| DEC-2026-002 | 2026-01-22 | Launched feature Y | 2 | ⚠️ Mixed | 5/10 |

Review quarterly: What's your hit rate? Are you systematically wrong about anything?
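For the quarterly review, hit rate is easy to compute from the log. A sketch using the two sample entries above; the 7/10 "good decision" threshold is an assumption, not something the package specifies:

```python
# Hit rate = share of logged decisions scoring at or above a threshold.
log = [
    {"id": "DEC-2026-001", "score": 8},  # Chose vendor X
    {"id": "DEC-2026-002", "score": 5},  # Launched feature Y
]

def hit_rate(log, threshold=7):
    """Fraction of decisions scored >= threshold (threshold is assumed)."""
    return sum(1 for d in log if d["score"] >= threshold) / len(log)

rate = hit_rate(log)  # 0.5
```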

### Phase 9: Decision Policies (Type 3 — Recurring)

Convert recurring decisions into policies:

### Policy Template

```yaml
policy:
  name: "[Name]"
  applies_to: "[Which recurring decision]"
  rule: |
    IF [condition] THEN [action]
    IF [condition] THEN [action]
    ELSE [default action]
  exceptions: "[When to override the policy and decide manually]"
  review_cycle: "quarterly"
  last_reviewed: "YYYY-MM-DD"
  owner: "[Who maintains this policy]"
```

### Examples of Good Policies

- Hiring: "If a candidate scores <7/10 on the technical interview, automatic no. No exceptions."
- Spending: "Any expense under $500 that's in the approved budget — auto-approve, no meeting needed."
- Pricing: "We don't discount more than 15%. If the deal requires more, we walk."
- Meetings: "No meeting without an agenda and a decision to be made. Cancel if no agenda 24h before."
- Technical: "If we can buy for <3x the cost of building, we buy."
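Policies like these reduce to plain IF/THEN code. A sketch of the spending policy (the $500 threshold is from the example; the function name and return values are illustrative):

```python
# Spending policy: auto-approve small budgeted expenses, escalate the rest.
def spending_decision(amount_usd: float, in_approved_budget: bool) -> str:
    if amount_usd < 500 and in_approved_budget:
        return "auto-approve"  # no meeting needed
    return "escalate"          # exception: decide manually
```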

### 100-Point Decision Quality Rubric

| Dimension | Weight | Criteria | Score (0-10) |
|---|---|---|---|
| Problem Definition | 15% | Decision clearly framed, constraints identified, success criteria defined | ___ |
| Information Quality | 15% | Key facts gathered, assumptions identified and tested, base rates checked | ___ |
| Options Generated | 10% | 3+ genuine options considered (not just yes/no), creative alternatives explored | ___ |
| Analysis Rigor | 15% | Appropriate framework applied, second-order effects considered, risks quantified | ___ |
| Bias Awareness | 10% | Cognitive biases checked, outside perspective sought, pre-mortem done | ___ |
| Stakeholder Process | 10% | Right people involved, dissent welcomed, RAPID roles clear | ___ |
| Speed Appropriateness | 10% | Decision speed matched to stakes and reversibility | ___ |
| Documentation | 15% | Decision recorded, assumptions logged, kill criteria set, review date scheduled | ___ |

Scoring:

- 90-100: Exceptional decision process
- 75-89: Strong — minor improvements possible
- 60-74: Adequate — some dimensions need work
- Below 60: Significant process gaps — revisit before committing
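The rubric total is a weighted sum of the 0-10 dimension scores, scaled to 100. A sketch with the rubric's weights and hypothetical self-assessment scores:

```python
# 100-point rubric: sum(weight_fraction * score_0_to_10) * 10.
# Weights match the rubric table; the 0-10 scores are hypothetical.
dimensions = {
    "Problem Definition":    (0.15, 8),
    "Information Quality":   (0.15, 7),
    "Options Generated":     (0.10, 9),
    "Analysis Rigor":        (0.15, 6),
    "Bias Awareness":        (0.10, 8),
    "Stakeholder Process":   (0.10, 7),
    "Speed Appropriateness": (0.10, 9),
    "Documentation":         (0.15, 5),
}

def rubric_score(dimensions):
    return sum(weight * score for weight, score in dimensions.values()) * 10

score = rubric_score(dimensions)  # 72.0 -> "Adequate" band
```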

### Post-Decision Review Questions (at review date)

1. Was the outcome good? (Result quality)
2. Was the PROCESS good? (Decision quality — separate from outcome)
3. What information did we have that we ignored?
4. What information did we NOT have that we should have sought?
5. Which assumptions proved wrong?
6. Would we make the same decision again with what we know now?
7. What will we do differently next time?

Critical insight: Good decisions can have bad outcomes (variance). Bad decisions can have good outcomes (luck). Judge the PROCESS, not just the result. Over time, good process → good outcomes.

### The 10/10/10 Rule

How will you feel about this decision:

- 10 minutes from now?
- 10 months from now?
- 10 years from now?

### The "Hell Yes or No" Test

If it's not a "Hell yes!", it's a no. Applies to: new commitments, meetings, projects, hires.

### The Newspaper Test

Would you be comfortable if this decision appeared on the front page? If not, don't do it.

### The Sleep Test

If you can't sleep because of this decision, you either need more information or you already know the answer.

### One-Way vs Two-Way Door (Bezos)

- One-way door: Take your time. Consult widely. Document thoroughly.
- Two-way door: Decide fast. You can always walk back through.

### Common Decision Mistakes

| Mistake | Symptom | Fix |
|---|---|---|
| Deciding not to decide | "Let's revisit next week" (3x) | Set a deadline. "Decide by Friday or default to Option B." |
| Consensus seeking | Everyone must agree | Use RAPID. ONE decider. |
| Over-analysis | 15th spreadsheet, still deciding | Apply 70% rule. What's the cost of delay? |
| Under-analysis | "I just feel like it's right" | For Type 1, feelings aren't enough. Show the work. |
| Ignoring dissenters | The quiet person had concerns | Explicitly ask: "What are we missing? What could go wrong?" |
| Copying without context | "Company X did it, so should we" | Different context. What are YOUR constraints? |
| Binary framing | "Should we do X or not?" | Always generate a third option. Reframe: "What are all the ways to solve this?" |
| Emotional timing | Big decisions after bad news | Sleep on it. Big decisions never at emotional peaks/valleys. |

### Natural Language Commands

- "Help me decide [X]" → Start with Phase 1 classification, then appropriate framework
- "Compare these options: [A, B, C]" → Weighted decision matrix
- "What am I missing?" → Bias checklist + pre-mortem + inversion
- "Should we kill this?" → Kill criteria framework
- "Prioritize these items" → Eisenhower + RICE
- "We can't agree on this" → RAPID + disagree-and-commit
- "How do I think about [uncertain situation]?" → Scenario planning + expected value
- "Score this decision" → 100-point rubric
- "Make this a policy" → Policy template for recurring decisions
- "Review our past decisions" → Decision log analysis + quarterly review
- "Speed check: how long should this take?" → Speed guide + type classification
- "Document this decision" → Decision record template
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: 1kalin
- Version: 1.0.0
## Source health
- Status: healthy
- Source download looks usable.
- Yavira can redirect you to the upstream package for this source.
- Health scope: source
- Reason: direct_download_ok
- Checked at: 2026-04-30T16:55:25.780Z
- Expires at: 2026-05-07T16:55:25.780Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/afrexai-decision-engine)
- [Send to Agent page](https://openagent3.xyz/skills/afrexai-decision-engine/agent)
- [JSON manifest](https://openagent3.xyz/skills/afrexai-decision-engine/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/afrexai-decision-engine/agent.md)
- [Download page](https://openagent3.xyz/downloads/afrexai-decision-engine)