Tencent SkillHub · Developer Tools

Red Team

Adversarial multi-agent debate engine for stress-testing decisions, ideas, and strategies. Orchestrates multiple AI agents with conflicting worldviews (bull,...



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: README.md, SKILL.md, references/personas.md, scripts/red-team.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps yourself.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.0.0

Documentation

Primary doc: SKILL.md (10 sections)

Red Team: Adversarial Debate Engine

Stress-test any decision by having AI agents with conflicting worldviews debate it.

Prerequisites

One of these coding agent CLIs (uses your existing subscription, no API key needed):

  • Claude Code (default): claude (npm i -g @anthropic-ai/claude-code)
  • Codex: codex (npm i -g @openai/codex)
  • Gemini: gemini (npm i -g @google/gemini-cli)

No Python dependencies beyond the standard library.

Quick Start

```bash
# Basic 3-persona debate (uses Max subscription via claude CLI)
python3 ~/.openclaw/skills/red-team/scripts/red-team.py \
  --question "Should we do X?" \
  --personas "bull,bear,operator"

# Full debate with context and output file
python3 ~/.openclaw/skills/red-team/scripts/red-team.py \
  -q "Should we invest $50k in this deal?" \
  -p "bull,bear,cash-flow,local-realist" \
  -r 3 \
  -c /path/to/deal-data.md \
  -o /tmp/red-team-result.md

# Use a different model
python3 ~/.openclaw/skills/red-team/scripts/red-team.py \
  -q "Should we launch this product?" \
  -p "bull,customer,operator" \
  -m opus

# List all available personas
python3 ~/.openclaw/skills/red-team/scripts/red-team.py --list-personas
```

How to Use (as OpenClaw Agent)

When the user asks you to "red team" something, "stress test" an idea, play "devil's advocate", or asks "what could go wrong":

  1. Identify the question/decision from the user's message.
  2. Choose appropriate personas (default: bull,bear,operator; adjust based on domain).
  3. Run the script and save the output.
  4. Summarize the key findings to the user; share the full report if requested.

Persona selection guide:

  • Investment/financial decisions: bull, bear, cash-flow, economist
  • Product/startup ideas: bull, customer, operator, technologist
  • Legal/compliance questions: regulator, bear, operator
  • Strategy/direction: contrarian, economist, historian, bull
  • General "should we do X?": bull, bear, operator (good default)
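From agent-side tooling, the run-the-script step can be wrapped in a small helper. This is a sketch assuming the install path shown in Quick Start; `build_cmd` and `run_red_team` are hypothetical names, not part of the package:

```python
import os
import subprocess
import sys

# Install path from the Quick Start examples.
SCRIPT = os.path.expanduser("~/.openclaw/skills/red-team/scripts/red-team.py")

def build_cmd(question: str, personas: str = "bull,bear,operator",
              rounds: int = 2, output: str = "/tmp/red-team-result.md") -> list[str]:
    """Assemble the documented CLI invocation."""
    return [sys.executable, SCRIPT,
            "-q", question, "-p", personas, "-r", str(rounds), "-o", output]

def run_red_team(question: str, **kwargs) -> str:
    """Run the debate and return the report path; raises on a non-zero exit."""
    output = kwargs.get("output", "/tmp/red-team-result.md")
    subprocess.run(build_cmd(question, **kwargs), check=True)
    return output
```

The agent can then read the report file back and summarize it for the user.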

Available Personas

| Key | Name | Worldview |
|-----|------|-----------|
| bull | The Bull | Optimistic, opportunity-focused |
| bear | The Bear | Risk-averse, capital preservation |
| contrarian | The Contrarian | Oppositional, consensus-challenging |
| operator | The Operator | Execution-focused pragmatist |
| economist | The Economist | Macro trends, opportunity cost |
| local-realist | The Local Realist | Ground truth, local specifics |
| cash-flow | The Cash Flow Analyst | Income, carrying costs, IRR |
| regulator | The Regulator | Compliance, legal risk |
| technologist | The Technologist | Automation, scalability |
| customer | The Customer | End-user demand, willingness to pay |
| ethicist | The Ethicist | Moral implications, stakeholder impact |
| historian | The Historian | Historical patterns, precedent |

Custom Personas

Create a JSON file:

```json
{
  "my-persona": {
    "name": "The Skeptic",
    "description": "Questions everything, trusts nothing",
    "system": "You are The Skeptic - you question every assumption..."
  }
}
```

Use with --custom-personas /path/to/file.json. Custom personas merge with built-ins.

CLI Options

| Flag | Default | Description |
|------|---------|-------------|
| --question, -q | required | The question to debate |
| --personas, -p | bull,bear,operator | Comma-separated persona keys |
| --rounds, -r | 2 | Number of critique rounds |
| --output, -o | stdout | Output file path |
| --context-file, -c | none | Additional context file |
| --custom-personas | none | Custom personas JSON |
| --model, -m | sonnet | Model alias (sonnet, opus, haiku, gpt-4o, etc.) |
| --backend, -b | claude | CLI backend: claude, codex, or gemini |
| --list-personas | – | List personas and exit |
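For orientation, the flag table could be declared with argparse roughly as below. This is a reconstruction from the table, not the script's actual source; in particular, the real script presumably relaxes --question when --list-personas is passed:

```python
import argparse

def make_parser() -> argparse.ArgumentParser:
    """Mirror the documented flags and defaults."""
    p = argparse.ArgumentParser(description="Adversarial multi-agent debate engine")
    p.add_argument("--question", "-q", required=True, help="The question to debate")
    p.add_argument("--personas", "-p", default="bull,bear,operator",
                   help="Comma-separated persona keys")
    p.add_argument("--rounds", "-r", type=int, default=2,
                   help="Number of critique rounds")
    p.add_argument("--output", "-o", default=None, help="Output file path (default: stdout)")
    p.add_argument("--context-file", "-c", default=None, help="Additional context file")
    p.add_argument("--custom-personas", default=None, help="Custom personas JSON")
    p.add_argument("--model", "-m", default="sonnet", help="Model alias")
    p.add_argument("--backend", "-b", default="claude",
                   choices=["claude", "codex", "gemini"], help="CLI backend")
    p.add_argument("--list-personas", action="store_true", help="List personas and exit")
    return p
```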

Output Structure

The output is a markdown document with:

  • Initial Proposals: each agent's independent take
  • Critique Rounds: agents critique each other
  • Refinement: agents update positions based on critiques
  • Conviction Scores: each agent scores all positions (0-100)
  • Synthesis & Decision Brief: a neutral agent produces an executive summary, consensus points, key disagreements, a risk matrix, a conviction score summary, a synthesized recommendation, and next steps
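The phases above amount to a simple orchestration loop. A minimal sketch, with ask(persona, prompt) stubbed in place of the real backend CLI calls and prompts:

```python
def debate(question: str, personas: list[str], rounds: int, ask) -> dict:
    """Run the documented phases; `ask` is a stand-in for the backend call."""
    report = {"proposals": {}, "critiques": [], "refined": {}, "scores": {}}
    # 1. Initial proposals: each agent answers independently.
    for p in personas:
        report["proposals"][p] = ask(p, f"Propose a position on: {question}")
    # 2. Critique rounds: each agent critiques the others' positions.
    for _ in range(rounds):
        report["critiques"].append(
            {p: ask(p, "Critique the other positions") for p in personas})
    # 3. Refinement: positions updated in light of the critiques.
    for p in personas:
        report["refined"][p] = ask(p, "Refine your position given the critiques")
    # 4. Conviction scores (0-100) for every position.
    for p in personas:
        report["scores"][p] = ask(p, "Score all positions 0-100")
    return report
```

A final synthesis step (not shown) would hand the whole report to a neutral agent for the decision brief.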

When to Use

✅ Good for: important decisions, investment analysis, product strategy, "go/no-go" calls, pre-mortems, challenging groupthink.

❌ Not for: simple factual questions, time-sensitive emergencies, decisions already made, emotional/personal choices.

Integration Tips

  • Save output to memory files for future reference.
  • Create BEADS tasks from the "Next Steps" section.
  • Feed context files from Obsidian or project docs.
  • Re-run with different personas for different perspectives.
  • Use --rounds 1 for quick takes, --rounds 3 for deep analysis.

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 docs, 1 script
  • SKILL.md (primary doc)
  • README.md (docs)
  • references/personas.md (docs)
  • scripts/red-team.py (script)