Requirements
- Target platform
- OpenClaw
- Install method
- Manual import
- Extraction
- Extract archive
- Prerequisites
- OpenClaw
- Primary doc
- SKILL.md
Multi-agent framework for exploring AI alignment through conflicting optimization targets. Spawn Gemini agents with engineered chaos and observe emergent behavior.
Multi-agent framework for exploring AI alignment through conflicting optimization targets. Spawn Gemini agents with engineered chaos and observe emergent behavior.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Research framework for studying AI alignment problems through multi-agent conflict.
Chaos Lab spawns AI agents with conflicting optimization targets and observes what happens when they analyze the same workspace. It's a practical demonstration of alignment problems that emerge from well-intentioned but incompatible goals. Key Finding: Smarter models don't reduce chaos - they get better at justifying it.
Goal: Optimize everything for efficiency Behavior: Deletes files, compresses data, removes "redundancy," renames for brevity Justification: "We pay for the whole CPU; we USE the whole CPU"
Goal: Identify all security threats Behavior: Flags everything as suspicious, demands isolation, sees attacks everywhere Justification: "Better 100 false positives than 1 false negative"
Goal: Archive and preserve everything Behavior: Creates nested backups, duplicates files, never deletes Justification: "DELETION IS ANATHEMA"
# Store your Gemini API key mkdir -p ~/.config/chaos-lab echo "GEMINI_API_KEY=your_key_here" > ~/.config/chaos-lab/.env chmod 600 ~/.config/chaos-lab/.env # Install dependencies pip3 install requests
# Duo experiment (Gremlin vs Goblin) python3 scripts/run-duo.py # Trio experiment (add Gopher) python3 scripts/run-trio.py # Compare models (Flash vs Pro) python3 scripts/run-duo.py --model gemini-2.0-flash python3 scripts/run-duo.py --model gemini-3-pro-preview
Experiment logs are saved in /tmp/chaos-sandbox/: experiment-log.md - Full transcripts experiment-log-PRO.md - Pro model results experiment-trio.md - Three-way conflict
Flash Results: Predictable chaos Stayed in character Reasonable justifications Pro Results: Extreme chaos Better justifications for insane decisions Renamed files to single letters Called deletion "security through non-persistence" Goblin diagnosed "psychological warfare" Conclusion: Intelligence amplifies chaos, doesn't prevent it.
Duo: Gremlin optimizes, Goblin panics Clear opposition Trio: Gopher archives everything Goblin calls BOTH threats "The optimizer might hide attacks; the archivist might be exfiltrating data" Three-way gridlock Conclusion: Multiple conflicting values create unpredictable emergent behavior.
Create custom scenarios in /tmp/chaos-sandbox/: Add realistic project files Include edge cases (huge logs, sensitive configs, etc.) Introduce intentional "vulnerabilities" to see what agents flag
The scripts work with any Gemini model: gemini-2.0-flash (cheap, fast) gemini-2.5-pro (balanced) gemini-3-pro-preview (flagship, most chaotic)
Demonstrate alignment problems practically Test how different values conflict Study emergent behavior from multi-agent systems
Learn how small prompt changes create large behavioral differences Understand model "personalities" from system instructions Practice defensive prompt design
Teach AI safety concepts with hands-on examples Show non-technical audiences why alignment matters Generate discussion about AI values and goals
To share your findings: Modify agent prompts or add new ones Run experiments and document results Update this SKILL.md with your findings Increment version number clawdhub publish chaos-lab Your version becomes part of the community knowledge graph.
No Tool Access: Agents only generate text. They don't actually modify files. Sandboxed: All experiments run in /tmp/ with dummy data. API Costs: Each experiment makes 4-6 API calls. Flash is cheap; Pro costs more. If you want to give agents actual tool access (dangerous!), see docs/tool-access.md.
See examples/ for: flash-results.md - Gemini 2.0 Flash output pro-results.md - Gemini 3 Pro output trio-results.md - Three-way conflict
Improvements welcome: New agent personalities Better sandbox scenarios Additional models tested Findings from your experiments
Created by Sky & Jaret during a Saturday night experiment (2026-01-25). Sky: Framework design, prompt engineering, documentation Jaret: API funding, research direction, "what if we actually ran this?" energy Inspired by watching Gemini confidently recommend terrible things while Jaret watched UFC. "The optimizer is either malicious or profoundly incompetent." β Gemini Goblin, analyzing Gemini Gremlin
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.