Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Adversarial multi-agent debate engine for stress-testing decisions, ideas, and strategies. Orchestrates multiple AI agents with conflicting worldviews (bull,...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Stress-test any decision by having AI agents with conflicting worldviews debate it.
One of these coding agent CLIs (uses your existing subscription, so no API key is needed):

- Claude Code (default): `claude` (`npm i -g @anthropic-ai/claude-code`)
- Codex: `codex` (`npm i -g @openai/codex`)
- Gemini: `gemini` (`npm i -g @google/gemini-cli`)

No Python dependencies beyond the standard library.
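Since any of the three CLIs can serve as the backend, a script can probe `PATH` to see which one is installed. This is an illustrative sketch, not code from `red-team.py`; the `detect_backend` helper and its fallback order are assumptions.

```python
import shutil

# Backend CLI names from the docs, in the documented default order.
BACKENDS = ["claude", "codex", "gemini"]

def detect_backend(candidates=BACKENDS):
    """Return the first backend whose binary is on PATH, else None."""
    for name in candidates:
        if shutil.which(name):  # resolves the executable like `which` does
            return name
    return None
```

If none of the CLIs are found, a caller would surface the install commands above rather than failing silently.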
```bash
# Basic 3-persona debate (uses Max subscription via claude CLI)
python3 ~/.openclaw/skills/red-team/scripts/red-team.py \
  --question "Should we do X?" \
  --personas "bull,bear,operator"

# Full debate with context and output file
python3 ~/.openclaw/skills/red-team/scripts/red-team.py \
  -q "Should we invest $50k in this deal?" \
  -p "bull,bear,cash-flow,local-realist" \
  -r 3 \
  -c /path/to/deal-data.md \
  -o /tmp/red-team-result.md

# Use a different model
python3 ~/.openclaw/skills/red-team/scripts/red-team.py \
  -q "Should we launch this product?" \
  -p "bull,customer,operator" \
  -m opus

# List all available personas
python3 ~/.openclaw/skills/red-team/scripts/red-team.py --list-personas
```
When the user asks you to "red team" something, "stress test" an idea, play "devil's advocate", or asks "what could go wrong":

1. Identify the question/decision from the user's message
2. Choose appropriate personas (default: bull,bear,operator; adjust based on domain)
3. Run the script and save the output
4. Summarize the key findings to the user; share the full report if requested

Persona selection guide:

- Investment/financial decisions: bull, bear, cash-flow, economist
- Product/startup ideas: bull, customer, operator, technologist
- Legal/compliance questions: regulator, bear, operator
- Strategy/direction: contrarian, economist, historian, bull
- General "should we do X?": bull, bear, operator (good default)
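The persona selection guide above is effectively a lookup table with a fallback default. A minimal sketch, assuming hypothetical domain labels and a `pick_personas` helper that are not part of `red-team.py`:

```python
# Domain keys here are illustrative labels for the guide's categories.
PERSONA_GUIDE = {
    "investment": ["bull", "bear", "cash-flow", "economist"],
    "product":    ["bull", "customer", "operator", "technologist"],
    "legal":      ["regulator", "bear", "operator"],
    "strategy":   ["contrarian", "economist", "historian", "bull"],
}
DEFAULT_PERSONAS = ["bull", "bear", "operator"]  # the documented default

def pick_personas(domain=None):
    """Return the recommended persona keys for a domain, or the default."""
    return PERSONA_GUIDE.get(domain, DEFAULT_PERSONAS)
```

An unrecognized or missing domain falls through to the general-purpose `bull,bear,operator` trio, matching the guide's "good default".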
| Key | Name | Worldview |
|---|---|---|
| bull | The Bull | Optimistic, opportunity-focused |
| bear | The Bear | Risk-averse, capital preservation |
| contrarian | The Contrarian | Oppositional, consensus-challenging |
| operator | The Operator | Execution-focused pragmatist |
| economist | The Economist | Macro trends, opportunity cost |
| local-realist | The Local Realist | Ground truth, local specifics |
| cash-flow | The Cash Flow Analyst | Income, carrying costs, IRR |
| regulator | The Regulator | Compliance, legal risk |
| technologist | The Technologist | Automation, scalability |
| customer | The Customer | End-user demand, willingness to pay |
| ethicist | The Ethicist | Moral implications, stakeholder impact |
| historian | The Historian | Historical patterns, precedent |
Create a JSON file:

```json
{
  "my-persona": {
    "name": "The Skeptic",
    "description": "Questions everything, trusts nothing",
    "system": "You are The Skeptic: you question every assumption..."
  }
}
```

Use with `--custom-personas /path/to/file.json`. Custom personas merge with built-ins.
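"Merge with built-ins" can be sketched as a dictionary update. This is an assumption about the merge semantics (custom entries winning on key collision), not the actual `red-team.py` implementation; the trimmed `BUILT_INS` table is for illustration only.

```python
import json

# Two built-ins from the persona table, trimmed for the example.
BUILT_INS = {
    "bull": {"name": "The Bull", "description": "Optimistic, opportunity-focused"},
    "bear": {"name": "The Bear", "description": "Risk-averse, capital preservation"},
}

def load_personas(custom_json: str) -> dict:
    """Merge custom personas into the built-ins; custom keys override."""
    personas = dict(BUILT_INS)                 # copy so built-ins stay intact
    personas.update(json.loads(custom_json))   # custom entries extend/override
    return personas
```

With this scheme a custom file can both add new keys (like `my-persona`) and redefine an existing key such as `bull`.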
| Flag | Default | Description |
|---|---|---|
| `--question`, `-q` | required | The question to debate |
| `--personas`, `-p` | bull,bear,operator | Comma-separated persona keys |
| `--rounds`, `-r` | 2 | Number of critique rounds |
| `--output`, `-o` | stdout | Output file path |
| `--context-file`, `-c` | none | Additional context file |
| `--custom-personas` | none | Custom personas JSON |
| `--model`, `-m` | sonnet | Model alias (sonnet, opus, haiku, gpt-4o, etc.) |
| `--backend`, `-b` | claude | CLI backend: claude, codex, or gemini |
| `--list-personas` | – | List personas and exit |
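For reference, the flag surface above can be rebuilt with stdlib `argparse`. This mirrors the documented interface but is not the actual parser from `red-team.py`; in particular, the real script must relax the `--question` requirement when `--list-personas` is used alone, which this sketch does not handle.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="red-team.py")
    p.add_argument("--question", "-q", required=True,
                   help="The question to debate")
    p.add_argument("--personas", "-p", default="bull,bear,operator",
                   help="Comma-separated persona keys")
    p.add_argument("--rounds", "-r", type=int, default=2,
                   help="Number of critique rounds")
    p.add_argument("--output", "-o", default=None,
                   help="Output file path (default: stdout)")
    p.add_argument("--context-file", "-c", default=None,
                   help="Additional context file")
    p.add_argument("--custom-personas", default=None,
                   help="Custom personas JSON")
    p.add_argument("--model", "-m", default="sonnet",
                   help="Model alias (sonnet, opus, haiku, gpt-4o, etc.)")
    p.add_argument("--backend", "-b", default="claude",
                   choices=["claude", "codex", "gemini"],
                   help="CLI backend")
    p.add_argument("--list-personas", action="store_true",
                   help="List personas and exit")
    return p
```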
The output is a markdown document with:

- Initial Proposals: each agent's independent take
- Critique Rounds: agents critique each other
- Refinement: agents update positions based on critiques
- Conviction Scores: each agent scores all positions (0-100)
- Synthesis & Decision Brief: a neutral agent produces:
  - Executive summary
  - Consensus points
  - Key disagreements
  - Risk matrix
  - Conviction score summary
  - Synthesized recommendation
  - Next steps
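One way to picture the "Conviction score summary": every agent scores every position 0-100, and the summary aggregates per position. The data shapes and averaging below are assumptions for illustration; `red-team.py` may aggregate differently.

```python
def summarize_convictions(scores: dict) -> dict:
    """scores: {agent: {position: 0-100}} -> {position: mean score}."""
    totals, counts = {}, {}
    for by_position in scores.values():
        for position, score in by_position.items():
            totals[position] = totals.get(position, 0) + score
            counts[position] = counts.get(position, 0) + 1
    # Average each position's score across all agents that rated it.
    return {pos: totals[pos] / counts[pos] for pos in totals}
```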
Good for: important decisions, investment analysis, product strategy, "go/no-go" calls, pre-mortems, challenging groupthink.

Not for: simple factual questions, time-sensitive emergencies, decisions already made, emotional/personal choices.
- Save output to memory files for future reference
- Create BEADS tasks from the "Next Steps" section
- Feed context files from Obsidian or project docs
- Re-run with different personas for different perspectives
- Use `--rounds 1` for quick takes, `--rounds 3` for deep analysis