Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Structured multi-criteria decision analysis for ranking options with weights, constraints, confidence, tradeoff reasoning, sensitivity analysis, and explaina...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Turn a messy tradeoff problem into a structured, auditable multi-criteria decision and return a ranked recommendation with confidence and explanation.
Use this skill when the user needs structured decision support rather than open-ended brainstorming. Typical triggers include:
- multi-criteria decision analysis
- weighted scoring or option ranking
- vendor selection or procurement
- route planning with explicit tradeoffs
- hiring shortlist ranking
- tool or platform comparison
- policy-driven or auditable agent decisions
This skill supports exactly two input modes.
The user already has a decision request with:
- options
- criteria
- optional constraints
- optional policy_name
- optional evidence, confidence, or context

Use scripts/validate_request.py first if request quality is uncertain, then scripts/run_adi.py to execute it.
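Under those field names, a structured request might look like the sketch below. The exact shape is defined by references/request_schema.md; every field name, value, and the weight-sum check here is illustrative, not the authoritative schema.

```python
# Hypothetical request skeleton; consult references/request_schema.md
# for the authoritative field names and shapes.
request = {
    "options": ["vendor_a", "vendor_b", "vendor_c"],
    "criteria": [
        {"name": "cost", "weight": 0.4, "direction": "minimize"},
        {"name": "reliability", "weight": 0.6, "direction": "maximize"},
    ],
    # Optional fields:
    "constraints": [{"criterion": "cost", "max": 100_000}],
    "policy_name": "balanced",  # balanced | risk_averse | exploratory
}

# Cheap local sanity checks before handing the request to
# scripts/validate_request.py (which remains the real gatekeeper):
assert request["options"] and request["criteria"]
assert abs(sum(c["weight"] for c in request["criteria"]) - 1.0) < 1e-9
```

If either assertion fails, treat the request as incomplete and fall back to the request-completion flow rather than running ADI.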
The user provides a natural-language tradeoff problem. First use scripts/normalize_problem.py to produce a request skeleton. Do not pretend the request is complete if important fields are missing. If the skeleton is not ready, ask for the missing inputs instead of inventing scores or constraints.
If ADI runs successfully, the final answer must contain:
- best_option
- a short rationale for why it won
- top-ranked alternatives
- confidence summary
- constraint impact summary
- sensitivity or stability summary when available
- explicit assumptions

If the request is not complete enough to run, return a request-completion prompt rather than a fabricated ranking.
1. Determine whether the user input is structured or freeform.
2. For freeform input, normalize it into a request skeleton using scripts/normalize_problem.py.
3. Validate candidate requests with scripts/validate_request.py.
4. Run complete requests with scripts/run_adi.py.
5. Present the ADI result in clear decision-support language: recommendation first, strongest tradeoff second, caveats and sensitivity after that.
- Never rank options without explicit criteria.
- Never silently invent hard constraints.
- If criterion direction is ambiguous, stop and clarify.
- Normalize vague goals into named criteria before scoring.
- Prefer a small, explicit criteria set over many overlapping criteria.
- Keep the policy choice visible: balanced, risk_averse, or exploratory.
- Show the top recommendation first.
- Explain why it won.
- Mention the strongest tradeoff.
- Call out eliminated or constraint-violating options.
- Include confidence caveats when evidence is weak.
- Use a compact comparison table or structured bullet list when comparing several options.
- No hidden math. No fake scores. No fabricated evidence.
- Do not claim ADI ran if the runtime dependency is missing.
- Do not request API keys.
- Do not require network access for the core workflow.
- Do not tell the user to trust the ranking if the request is under-specified.
- python3
- either an importable adi-decision package or the adi CLI on PATH

If the ADI runtime is unavailable, stop with a clear error and explain that the dependency must be installed locally.
- Request schema: references/request_schema.md
- Result interpretation: references/result_interpretation.md
- Policy guide: references/policy_guide.md
- Use cases: references/use_cases.md
- examples/vendor_selection.json
- examples/route_planning.json
- examples/hiring_shortlist.json
- examples/research_methods.json
- examples/tool_selection.json