Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Complete strategic thinking & mental models toolkit. 50+ decision frameworks organized by situation type — business strategy, investing, hiring, pricing, ris...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
The comprehensive decision-making methodology for founders, operators, investors, and leaders. 50+ mental models organized by when to use them, with templates and scoring systems.
When the user says "help me decide" or "analyze this decision":

1. Ask: What's the decision? (one sentence)
2. Ask: What type? (business / investment / hiring / product / personal / technical)
3. Ask: Reversibility? (easy to undo / hard to undo / permanent)
4. Ask: Time pressure? (minutes / days / weeks / no deadline)
5. Select the right framework(s) from the catalog below
6. Walk through it step by step
7. Score using the Decision Quality Rubric (Phase 10)
8. Output a Decision Record (Phase 11)
Score the current decision process (1-5 each):

| Dimension | Score | Signal |
|---|---|---|
| Problem clarity | _ /5 | Can you state the decision in one sentence? |
| Options explored | _ /5 | Have you considered 3+ alternatives, including "do nothing"? |
| Evidence quality | _ /5 | Data-backed or gut feeling? |
| Bias awareness | _ /5 | Have you actively looked for disconfirming evidence? |
| Reversibility mapped | _ /5 | Do you know the cost of being wrong? |
| Stakeholders consulted | _ /5 | Has anyone challenged this? |
| Second-order effects | _ /5 | What happens AFTER this decision plays out? |
| Time-appropriateness | _ /5 | Are you spending the right amount of time on this? |

- ≥32: Strong process — proceed with confidence
- 24-31: Decent — address weak dimensions before committing
- 16-23: Gaps — slow down and fill them
- ≤15: Stop — you're about to wing a consequential decision
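As a minimal sketch, the banding above can be expressed in code. The function name and the list-of-scores input shape are illustrative assumptions, not part of the toolkit:

```python
def rate_decision_process(scores):
    """Sum eight 1-5 dimension scores and map the total to the rubric's bands."""
    assert len(scores) == 8 and all(1 <= s <= 5 for s in scores)
    total = sum(scores)
    if total >= 32:
        band = "strong"   # proceed with confidence
    elif total >= 24:
        band = "decent"   # address weak dimensions first
    elif total >= 16:
        band = "gaps"     # slow down and fill them
    else:
        band = "stop"     # about to wing a consequential decision
    return total, band
```

Scoring every dimension a 4, for example, lands exactly on the "strong" threshold of 32.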
Not all decisions deserve the same process. Classify first.
| | Type 1 (One-Way Door) | Type 2 (Two-Way Door) |
|---|---|---|
| Reversibility | Irreversible or very costly to reverse | Easily reversible |
| Process | Full analysis, multiple perspectives, sleep on it | Decide fast, iterate, don't overthink |
| Who decides | Senior person or group | Individual closest to the information |
| Time budget | Hours to weeks | Minutes to hours |
| Examples | Acquisition, firing someone, pricing model, market entry | Feature priority, tool selection, meeting format, hiring channel |

The #1 mistake: treating Type 2 decisions like Type 1. This creates organizational paralysis. Speed on Type 2 decisions is a competitive advantage.
Before choosing a framework, map consequences:

```yaml
decision: "[What you're deciding]"
type: 1 | 2
reversibility_cost: "$X / Y hours / Z reputation damage"
upside_if_right: "[Best realistic outcome]"
downside_if_wrong: "[Worst realistic outcome]"
time_to_know: "[When will you know if this was right?]"
asymmetry: "positive | negative | symmetric"
# positive  = upside >> downside (bet freely)
# negative  = downside >> upside (be cautious)
# symmetric = roughly equal (use expected value)
```
Before reaching for frameworks, strip the problem to fundamentals.
Don't solve symptoms. Ask "Why?" five times:

1. Why are we losing customers? → They churn after month 3.
2. Why month 3? → That's when the free premium features expire.
3. Why do they leave when features expire? → They haven't built habits around core features.
4. Why haven't they built habits? → Onboarding doesn't guide them to sticky features.
5. Why doesn't onboarding cover this? → It focuses on setup, not value realization.

Root cause: onboarding design, not pricing or product gaps.
Instead of "How do I succeed?", ask "How would I guarantee failure?"

Template:

```
Goal: [What you want to achieve]

How to guarantee failure:
1. [Anti-pattern 1]
2. [Anti-pattern 2]
3. [Anti-pattern 3]
4. [Anti-pattern 4]
5. [Anti-pattern 5]

Therefore, avoid:
1. [Inverted actionable rule]
2. [Inverted actionable rule]
3. [Inverted actionable rule]
```
For life-altering Type 1 decisions: "Project yourself to age 80. Which choice minimizes regret?"

Use when:
- Career changes (leave a job to start a company?)
- Major financial commitments
- Relationship decisions
- The analytical frameworks feel inadequate because values are at stake
**Porter's Five Forces (Industry Attractiveness)**

Score each 1-5 (1 = favorable, 5 = threatening):

| Force | Score | Evidence |
|---|---|---|
| Threat of new entrants | _ /5 | Barriers to entry? Capital requirements? Network effects? |
| Supplier power | _ /5 | Few suppliers? Switching costs? Unique inputs? |
| Buyer power | _ /5 | Few buyers? Price sensitive? Easy to switch? |
| Threat of substitutes | _ /5 | Alternative solutions? Different categories solving the same job? |
| Competitive rivalry | _ /5 | Many competitors? Slow growth? High fixed costs? |
| Industry Score | _ /25 | ≤10 = attractive, 11-17 = moderate, ≥18 = difficult |

**Moat Assessment (Competitive Advantage)**

Score each dimension 0-10:

| Moat Type | Score | Evidence | Durability (years) |
|---|---|---|---|
| Network effects | _ /10 | Each user makes the product more valuable for others? | |
| Switching costs | _ /10 | Pain of leaving? Data lock-in? Learning curve? | |
| Brand | _ /10 | Premium pricing power? Trust? Recognition? | |
| Scale economies | _ /10 | Cost advantages that grow with size? | |
| Proprietary tech/data | _ /10 | Patents? Unique datasets? Trade secrets? | |
| Regulatory | _ /10 | Licenses? Compliance barriers? Government relationships? | |
| Distribution | _ /10 | Exclusive channels? Embedded in workflows? | |
| Counter-positioning | _ /10 | Incumbent can't copy without hurting their core business? | |
| Total Moat | _ /80 | ≥50 = fortress, 30-49 = solid, 15-29 = narrow, <15 = no moat | |

**OODA Loop (Speed Advantage)**

For competitive situations where speed matters:

1. Observe: What's happening? Raw data, signals, changes.
2. Orient: What does it mean? Context, mental models, cultural factors.
3. Decide: What will we do? Select an action from the options.
4. Act: Execute. Then observe again.

Key insight: the winner isn't who has the best strategy — it's who cycles through OODA faster. If you can observe and orient faster than competitors, you'll always be inside their decision loop.
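The Five Forces and Moat thresholds are simple sums with cut-offs, sketched below under assumptions of my own (function names and list inputs are illustrative):

```python
def industry_attractiveness(forces):
    """Five Forces: a list of five scores, each 1-5 (1 = favorable, 5 = threatening)."""
    total = sum(forces)
    label = "attractive" if total <= 10 else "moderate" if total <= 17 else "difficult"
    return total, label

def moat_strength(dimensions):
    """Moat assessment: a list of eight dimension scores, each 0-10."""
    total = sum(dimensions)
    if total >= 50:
        label = "fortress"
    elif total >= 30:
        label = "solid"
    elif total >= 15:
        label = "narrow"
    else:
        label = "no moat"
    return total, label
```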
Wardley Mapping (Strategic Positioning) Map components by: Y-axis: Visibility to user (top = visible, bottom = invisible) X-axis: Evolution stage: Genesis → Custom → Product → Commodity Rules: Build what's in Genesis/Custom (your differentiation) Buy what's in Product/Commodity (don't reinvent wheels) Watch for components about to shift stages (opportunity/threat)
**ICE Scoring (Quick Prioritization)**

| Initiative | Impact (1-10) | Confidence (1-10) | Ease (1-10) | ICE Score |
|---|---|---|---|---|
| Feature A | 8 | 7 | 5 | 280 |
| Feature B | 6 | 9 | 8 | 432 |
| Feature C | 9 | 4 | 3 | 108 |

Score = Impact × Confidence × Ease

Calibration:
- Impact: revenue, retention, or growth effect
- Confidence: how sure are you about Impact? (data-backed = 8+, gut = 3-5)
- Ease: 10 = hours, 7 = days, 4 = weeks, 1 = months

**Jobs To Be Done (JTBD)**

Template:

```
When [situation/trigger],
I want to [motivation/job],
So I can [expected outcome].

Functional job: [What they're literally trying to do]
Emotional job: [How they want to feel]
Social job: [How they want to be perceived]
```

Insight: people don't buy products. They hire them to make progress. Understand the job, and the product/feature decisions become obvious.

**Eisenhower Matrix (Time/Priority)**

| | Urgent | Not Urgent |
|---|---|---|
| Important | DO (crises, deadlines) | SCHEDULE (strategy, relationships, health) |
| Not Important | DELEGATE (interruptions, some emails) | ELIMINATE (busywork, most meetings) |

Key insight: most people spend 80% of their time in Urgent (both quadrants). Winners spend 80% in Important/Not Urgent (Q2) — that's where compounding happens.
**Pre-Mortem (Klein)**

Before committing to a plan: "Imagine it's 6 months from now. This decision was a disaster. What went wrong?"

Template:

```yaml
decision: "[What we're about to do]"
pre_mortem_failures:
  - failure: "[What went wrong]"
    probability: "high | medium | low"
    severity: "catastrophic | major | minor"
    prevention: "[What we'll do to prevent this]"
    detection: "[How we'll know early if this is happening]"
```

Run with 3+ people independently, then combine. The exercise works because it gives permission to voice concerns that "positive thinking" culture suppresses.

**Scenario Planning (Shell Method)**

Don't predict the future. Prepare for multiple futures.

```yaml
scenarios:
  optimistic:
    name: "[Descriptive name]"
    assumptions: ["[Key assumption 1]", "[Key assumption 2]"]
    probability: "X%"
    our_response: "[Strategy if this happens]"
    leading_indicators: ["[Signal 1]", "[Signal 2]"]
  base_case:
    name: "[Descriptive name]"
    assumptions: ["[Key assumption 1]", "[Key assumption 2]"]
    probability: "X%"
    our_response: "[Strategy if this happens]"
    leading_indicators: ["[Signal 1]", "[Signal 2]"]
  pessimistic:
    name: "[Descriptive name]"
    assumptions: ["[Key assumption 1]", "[Key assumption 2]"]
    probability: "X%"
    our_response: "[Strategy if this happens]"
    leading_indicators: ["[Signal 1]", "[Signal 2]"]
  black_swan:
    name: "[Descriptive name]"
    assumptions: ["[Unlikely but catastrophic event]"]
    probability: "<5%"
    our_response: "[Survival plan]"
    hedges: ["[Protection 1]", "[Protection 2]"]
```

Rule: if your plan only works in one scenario, it's not a plan — it's a prayer.
**Antifragility Assessment (Taleb)**

Score your system/business/portfolio:

| Dimension | Fragile (-2 to 0) | Robust (0) | Antifragile (0 to +2) |
|---|---|---|---|
| Revenue concentration | 1 client = 80% of revenue | Diversified, equal | Gets stronger with market chaos |
| Operational dependencies | Single point of failure | Redundant | Failures trigger improvements |
| Financial structure | Leveraged, thin margins | Cash reserves, no debt | Optionality, cash to deploy in downturns |
| Knowledge/IP | Key-person dependent | Documented, distributed | Learning system that compounds |
| Market position | Commodity, price-taker | Differentiated | Benefits from competitor mistakes |

Total: ≥4 = antifragile, 0 = robust, ≤-4 = fragile (fix immediately)
**BATNA Analysis (Fisher/Ury)**

Before any negotiation:

```yaml
my_batna: "[Best Alternative To Negotiated Agreement — what I do if we don't agree]"
my_batna_value: "$X or equivalent"
their_batna: "[Their best alternative]"
their_batna_value: "$Y or equivalent"
zopa: "[Zone Of Possible Agreement: range between our walk-away points]"
my_reservation_price: "[Minimum I'd accept]"
my_aspiration: "[What I actually want]"
their_likely_reservation: "[Best guess at their minimum]"
power_assessment: "I have more power | balanced | they have more power"
# Whoever has the better BATNA has the power
```

**Cialdini's 6 Principles (Influence Audit)**

For any persuasion situation, check which levers apply:

| Principle | Application | Your Move |
|---|---|---|
| Reciprocity | Give first, then ask | [What value can you provide upfront?] |
| Commitment/Consistency | Get small yeses first | [What's the micro-commitment?] |
| Social proof | Others are doing it | [Who else has done this successfully?] |
| Authority | Expert endorsement | [What credentials or evidence establish authority?] |
| Liking | Build rapport first | [What genuine connection exists?] |
| Scarcity | Limited availability | [What's genuinely scarce — time, spots, pricing?] |
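The ZOPA check in the BATNA template is a simple interval overlap. A sketch assuming a seller's perspective (the function name, argument names, and example prices are mine):

```python
def zopa(my_reservation, their_reservation):
    """Zone Of Possible Agreement when I am selling: my_reservation is the
    minimum I'd accept, their_reservation the most the buyer would pay.
    Returns the (low, high) overlap, or None when walk-away points don't cross."""
    if my_reservation <= their_reservation:
        return (my_reservation, their_reservation)
    return None  # no deal zone: fall back to your BATNA
```

If I won't go below 80 and they won't pay above 100, the deal closes somewhere in (80, 100); if my floor is 120, there is no zone and the BATNA decides.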
**Build vs Buy Decision Matrix**

| Criterion | Weight | Build | Buy |
|---|---|---|---|
| Core differentiator? | 5 | If yes: +5 | If no: +5 |
| Time to market | 4 | Score 1-5 | Score 1-5 |
| Long-term cost (3yr) | 4 | Score 1-5 | Score 1-5 |
| Customization needed | 3 | Score 1-5 | Score 1-5 |
| Team capability | 3 | Score 1-5 | Score 1-5 |
| Maintenance burden | 3 | Score 1-5 | Score 1-5 |
| Vendor risk | 2 | N/A (0) | Score 1-5 |
| Integration complexity | 2 | Score 1-5 | Score 1-5 |

Shortcut: if it's your core differentiator → build. If it's commodity → buy. Everything else → this matrix.

**Reversibility-First Architecture**

Design decisions by reversibility:

| Reversibility | Examples | Approach |
|---|---|---|
| Easy (hours) | Feature flags, config, UI copy | Just do it. Iterate. |
| Medium (days-weeks) | API design, database indexes, tool choices | Light analysis, time-box to 1 day |
| Hard (months) | Database engine, programming language, cloud provider | Full evaluation, prototype, team input |
| Permanent | Public API contracts, data deletion, legal agreements | Maximum rigor, external review, sleep on it |
Biases are the #1 threat to decision quality. Active defense is required.

| Bias | What It Does | Defense |
|---|---|---|
| Confirmation bias | Seek info that confirms what you already believe | Assign someone to argue the opposite. Search for "why [your thesis] is wrong" |
| Anchoring | First number you hear dominates your estimate | Generate your own estimate BEFORE looking at anyone else's |
| Sunk cost fallacy | Continue because you've already invested | Ask: "If I were starting fresh today, would I begin this?" |
| Survivorship bias | Study winners, ignore the dead | Ask: "How many tried this and failed? What did they have in common?" |
| Dunning-Kruger | Overconfidence in areas of low competence | Check: Am I inside my circle of competence? |
| Recency bias | Overweight recent events | Look at 5-10 year base rates, not last quarter |
| Status quo bias | Prefer the current state even when suboptimal | Evaluate "do nothing" as an active choice with its own costs |
| Groupthink | Agree with the room to avoid conflict | Write opinions independently BEFORE discussing. Use anonymous voting. |
| Availability heuristic | Judge probability by how easily examples come to mind | Check actual data. Plane crashes feel common because they're memorable. |
| Loss aversion | Feel losses 2x more than equivalent gains | Reframe: "What do I gain by NOT doing this?" |
| Narrative fallacy | Construct stories to explain random events | Ask: "Is this a pattern or am I connecting random dots?" |
| Planning fallacy | Underestimate time/cost for tasks | Use reference class forecasting: how long did SIMILAR projects take others? |
- Have I actively sought disconfirming evidence?
- Am I anchored to someone else's number/frame?
- Am I continuing because of sunk costs? Would I make this same choice starting from zero?
- Have I considered the base rate, not just my situation?
- Has someone challenged this decision?
Before acting on any estimate:

| Your Confidence | What It Should Mean | Calibration Test |
|---|---|---|
| 50% | Coin flip — could go either way | Would you bet your own money at even odds? |
| 70% | More likely than not, but a real chance of being wrong | Would you bet 2:1? |
| 90% | Very confident, would be surprised if wrong | Would you bet 9:1? |
| 95% | Extremely confident | Would you bet 19:1? |
| 99% | Near certain | Have you been wrong at "99% confidence" before? (You have.) |

Rule: most people are overconfident. If you think you're 90% sure, you're probably 70% sure. Adjust down.
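The betting odds in the calibration table follow directly from the stated probability: odds against are p / (1 - p). A minimal sketch (the `fair_odds` name is my own):

```python
def fair_odds(confidence):
    """Odds (against) implied by a stated probability: p / (1 - p).
    At 90% confidence you should be comfortable laying 9:1."""
    if not 0 < confidence < 1:
        raise ValueError("confidence must be strictly between 0 and 1")
    return confidence / (1 - confidence)
```

This is why 95% confidence maps to a 19:1 bet: 0.95 / 0.05 = 19. If you would not take that bet with your own money, your real confidence is lower than you claim.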
| Situation | Optimal Decision Time | Why |
|---|---|---|
| Information depreciates quickly | Immediately (minutes) | Waiting destroys the option |
| Easy to reverse | Quickly (hours) | Cost of being wrong < cost of delay |
| Moderate stakes, some data | 70% information rule | At 70% confidence, decide. Waiting for 95% means you're too late. |
| High stakes, irreversible | Take available time (days-weeks) | Use it all. Sleep on it. Get perspectives. |
| Emotional decision | Wait a minimum of 24 hours | Emotions are data, not directives. Let them settle. |
For team/partner decisions where people disagree:

1. Independent write-up: each person writes their recommendation and reasoning (5 min, no discussion)
2. Share simultaneously: everyone reveals at once (prevents anchoring)
3. Steel-man the opposition: each person must articulate the best version of the opposing view
4. Identify cruxes: what's the ONE factual question that, if resolved, would change your mind?
5. Resolve or decide: if the crux is resolvable → get the data. If not → whoever has the best BATNA decides, or the person closest to the information decides.
| Role | Definition | Rule |
|---|---|---|
| R — Responsible | Does the analysis, prepares the recommendation | Max 2 people |
| A — Accountable | Makes the final call | Exactly 1 person |
| C — Consulted | Provides input before the decision | Keep small (3-5) |
| I — Informed | Told after the decision is made | Everyone affected |

Common failure: no clear A. If two people think they're the decider, no decision gets made.
Where to intervene in a system, ranked by effectiveness:

1. Paradigms (most powerful) — change the mindset/goals of the system
2. Goals — what the system is optimizing for
3. Rules — incentives, constraints, punishments
4. Information flows — who knows what, when
5. Feedback loops — speed and accuracy of response
6. Structure — how components connect
7. Parameters (least powerful) — numbers, budgets, quotas

Insight: most people intervene at #7 (adjust the budget). The highest-leverage interventions are at #1-3 (change what we're optimizing for).
Not all decisions need the same energy:

```yaml
high_energy_decisions:    # Use frameworks, sleep on it
  - Career changes
  - Major financial commitments (>10% of net worth)
  - Hiring/firing
  - Market entry/exit
  - Relationship commitments
medium_energy_decisions:  # 30-min analysis, then decide
  - Quarterly priorities
  - Tool/vendor selection
  - Pricing adjustments
  - Content strategy
low_energy_decisions:     # Decide in <5 min or automate
  - What to eat, wear, read
  - Meeting attendance
  - Social media responses
  - Routine purchases
rule: "Match decision energy to decision stakes. Most people overthink low-energy decisions and underthink high-energy ones."
```
Create personal defaults so you don't waste energy:

```yaml
defaults:
  new_meeting_request: "Default NO unless clearly advances top 3 priorities"
  price_negotiation: "Never discount more than 15% — offer value instead"
  new_project: "Default NO unless it replaces something on current list"
  email_response: "Batch 2x/day. Respond in ≤3 sentences or schedule a call"
  investment: "Default index fund. Active only with genuine edge + margin of safety"
  delegation: "If someone can do it 80% as well, delegate"
  saying_yes: "If it's not a HELL YES, it's a no"
```
Quick reference — which framework for which situation:

| Situation | Primary Framework | Supporting Model |
|---|---|---|
| Should we enter this market? | Porter's Five Forces + Moat Assessment | Scenario Planning |
| Should I take this job/opportunity? | Regret Minimization + Circle of Competence | Asymmetric Risk |
| Which feature to build next? | ICE Scoring + JTBD | 2nd Order Thinking |
| Should we invest/bet on X? | Expected Value + Margin of Safety | Pre-Mortem |
| How to price our product? | See afrexai-pricing-strategy | Competitive Positioning |
| Hiring decision? | See afrexai-interview-architect | Circle of Competence |
| How to negotiate this deal? | BATNA + Cialdini | See afrexai-negotiation-mastery |
| Build or buy this component? | Build vs Buy Matrix | Reversibility Assessment |
| Team disagrees on direction | Structured Disagreement Protocol | Pre-Mortem |
| I'm overwhelmed with options | Eisenhower Matrix + Default Rules | Energy Audit |
| Business feels fragile | Antifragility Assessment | Scenario Planning |
| Competitor making moves | OODA Loop + see afrexai-competitive-intel | Wardley Mapping |
| Something failed, now what? | 5 Whys + Inversion | Sunk Cost check |
| Big life decision | Regret Minimization + Second-Order | Sleep on it (24h rule) |
Score any decision AFTER making it (or retrospectively):

| Dimension | Weight | Score (0-10) | Weighted |
|---|---|---|---|
| Problem definition clarity | 15% | _ | _ |
| Options explored (≥3, incl. "do nothing") | 15% | _ | _ |
| Evidence quality (data vs. gut) | 15% | _ | _ |
| Bias mitigation (actively countered?) | 15% | _ | _ |
| Stakeholder input (right people consulted?) | 10% | _ | _ |
| Second-order effects considered | 10% | _ | _ |
| Reversibility & downside mapped | 10% | _ | _ |
| Time-appropriate process | 10% | _ | _ |
| Total | 100% | | _ /100 |

- ≥80: Excellent process — outcome is in fortune's hands, not yours
- 60-79: Good — minor gaps but fundamentally sound
- 40-59: Mediocre — important dimensions skipped
- ≤39: Poor — outcome is a coin flip regardless of luck

Critical insight: judge decisions by PROCESS quality, not outcomes. A good process can produce bad outcomes (variance). A bad process that produces good outcomes is dangerous — it teaches bad habits.
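The weighted total in the rubric can be computed directly. A minimal sketch of the arithmetic; the dictionary keys are shorthand names I chose for the rubric's dimensions:

```python
# Rubric weights (sum to 1.0); keys are shorthand for the rubric dimensions
WEIGHTS = {
    "problem_definition": 0.15,
    "options_explored": 0.15,
    "evidence_quality": 0.15,
    "bias_mitigation": 0.15,
    "stakeholder_input": 0.10,
    "second_order_effects": 0.10,
    "reversibility_mapped": 0.10,
    "time_appropriate": 0.10,
}

def quality_score(scores):
    """Weighted 0-100 total from per-dimension 0-10 scores, plus the rubric band."""
    total = round(sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS) * 10)
    if total >= 80:
        band = "excellent"
    elif total >= 60:
        band = "good"
    elif total >= 40:
        band = "mediocre"
    else:
        band = "poor"
    return total, band
```

Scoring 10 on every dimension gives exactly 100; scoring 5 across the board lands at 50, squarely in the "mediocre" band.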
Document every significant decision:

```yaml
decision_record:
  id: "DR-[YYYY-MM-DD]-[number]"
  date: "YYYY-MM-DD"
  decision: "[One sentence — what we decided]"
  type: "1 | 2"
  context: "[Why this decision was needed now]"
  options_considered:
    - option: "[Option A]"
      pros: ["...", "..."]
      cons: ["...", "..."]
    - option: "[Option B]"
      pros: ["...", "..."]
      cons: ["...", "..."]
    - option: "Do nothing"
      pros: ["...", "..."]
      cons: ["...", "..."]
  decision_rationale: "[Why we chose this option]"
  frameworks_used: ["[Framework 1]", "[Framework 2]"]
  key_assumptions: ["[Assumption 1]", "[Assumption 2]"]
  risks_accepted: ["[Risk 1]", "[Risk 2]"]
  success_criteria: "[How we'll know this was right]"
  review_date: "YYYY-MM-DD (when to evaluate)"
  quality_score: "X/100 (Phase 10 rubric)"
  decided_by: "[Name]"
  consulted: ["[Name 1]", "[Name 2]"]
  # Fill in at review_date:
  outcome: "[What actually happened]"
  lessons: "[What we learned]"
  would_decide_differently: "yes | no"
  why: "[If yes, what would we change about the PROCESS?]"
```
Combine extreme safety with extreme risk. Avoid the middle.

- Portfolio: 85-90% ultra-safe (treasuries, cash, index) + 10-15% high-risk/high-reward (startups, crypto, moonshots)
- Time: 80% predictable deep work + 20% wild exploration/experimentation
- Products: cash cow product (boring, reliable) + speculative bets (innovative, might fail)
- Career: stable income source + asymmetric side projects

Why no middle: the "medium risk" zone gives you medium returns with hidden tail risk. Better to KNOW you're safe on one side and gambling on the other.
The longer something has survived, the longer it will likely survive. Applications:

- Books: a 100-year-old book is more likely to be relevant in 10 years than a 1-year-old book
- Technologies: SQL (50 years) will outlast this year's hot framework
- Business models: the subscription model (centuries old as a concept) > novel monetization
- Advice: wisdom from 2,000 years ago (Stoics, Sun Tzu) > last week's Twitter thread
Often the best decision is what to REMOVE, not what to add:

- Remove a feature (focus)
- Remove a meeting (time)
- Remove a client (sanity, team morale)
- Remove a goal (clarity)
- Remove a bad habit (energy)
- Remove complexity (reliability)

Template: "What's the ONE thing I could eliminate that would improve everything else?"
1. **Classify before analyzing.** Type 1 or Type 2? Match process to stakes.
2. **"Do nothing" is always an option.** Evaluate it explicitly.
3. **Seek disconfirming evidence.** The moment you like an idea, hunt for why it's wrong.
4. **Separate process from outcome.** Good process, bad outcome = fine. Bad process, good outcome = lucky.
5. **Time-box decisions.** Set a deadline. Perfectionism is a form of procrastination.
6. **Write it down.** Unwritten decisions can't be reviewed, learned from, or challenged.
7. **One decider.** Every decision needs exactly one person who makes the final call.
8. **Sleep on Type 1 decisions.** Your brain processes during sleep. Use it.
9. **Review decisions.** Quarterly, look at your decision records. What patterns emerge?
10. **Compound decision quality.** Each good decision process makes the next one better. This is the real edge.
| # | Mistake | Fix |
|---|---|---|
| 1 | Deciding too slowly on Type 2 decisions | Set a timer. If reversible, decide now. |
| 2 | Never writing down assumptions | Every decision has assumptions. Write them. Test them. |
| 3 | Asking for consensus instead of input | Consensus = lowest common denominator. Get input, then one person decides. |
| 4 | Optimizing for one variable | Life is multi-variable. Use weighted scoring. |
| 5 | Ignoring opportunity cost | "This is good" isn't enough. "This is better than the alternatives" is the bar. |
| 6 | Deciding when emotional | 24-hour rule for anything you'd regret. |
| 7 | Copying without context | "Amazon does X" means nothing if you're not Amazon. Understand WHY they do X. |
| 8 | Analysis paralysis on small decisions | Automate (defaults) or delegate anything under $500/2 hours. |
| 9 | Never reviewing past decisions | Same mistakes on repeat. Quarterly decision reviews = compounding improvement. |
| 10 | Conflating confidence with competence | Loud ≠ right. Data ≠ understanding. Check circle of competence. |
- "Help me decide [X]" → full decision walkthrough (Quick Start)
- "Score this decision" → Decision Quality Rubric (Phase 10)
- "Pre-mortem [plan]" → pre-mortem exercise (Phase 4)
- "Is this inside my circle?" → Circle of Competence check
- "Bias check" → Daily Bias Checklist
- "Expected value of [bet]" → EV calculation
- "Map the second-order effects" → second-order thinking template
- "BATNA analysis for [negotiation]" → full BATNA template
- "Rate this market" → Porter's Five Forces scoring
- "How strong is the moat?" → Moat Assessment
- "Which framework should I use?" → Phase 9 situation lookup
- "Write a decision record" → DR template (Phase 11)