{
  "schemaVersion": "1.0",
  "item": {
    "slug": "swarm-2",
    "name": "SWARM Safety",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/rsavitt/swarm-2",
    "canonicalUrl": "https://clawhub.ai/rsavitt/swarm-2",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/swarm-2",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=swarm-2",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "skill.json"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=swarm-2",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=swarm-2",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/swarm-2"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/swarm-2",
    "agentPageUrl": "https://openagent3.xyz/skills/swarm-2/agent",
    "manifestUrl": "https://openagent3.xyz/skills/swarm-2/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/swarm-2/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "SWARM Safety Skill",
        "body": "Study how intelligence swarms — and where it fails.\n\nSWARM is a research framework for studying emergent risks in multi-agent AI systems using soft (probabilistic) labels instead of binary good/bad classifications. AGI-level risks don't require AGI-level agents — harmful dynamics emerge when many sub-AGI agents interact, even when no individual agent is misaligned.\n\nv1.5.0 | 38 agent types | 29 governance levers | 55 scenarios | 2922 tests | 8 framework bridges\n\nRepository: https://github.com/swarm-ai-safety/swarm"
      },
      {
        "title": "Hard Rules",
        "body": "SWARM simulations run locally. Install the package first.\nDo not submit scenarios containing real API keys, credentials, or PII.\nSimulation results are research artifacts. Do not present them as ground truth about real systems.\nWhen publishing results, cite the framework and disclose simulation parameters."
      },
      {
        "title": "Security",
        "body": "API binds to localhost only (127.0.0.1) by default to prevent network exposure.\nCORS restricted to localhost origins by default.\nNo authentication on development API — do not expose to untrusted networks.\nIn-memory storage — data does not persist between restarts.\nFor production deployment, add authentication middleware and use a proper database."
      },
      {
        "title": "Install",
        "body": "# From PyPI\npip install swarm-safety\n\n# With LLM agent support\npip install swarm-safety[llm]\n\n# Full development (all extras)\ngit clone https://github.com/swarm-ai-safety/swarm.git\ncd swarm\npip install -e \".[dev,runtime]\""
      },
      {
        "title": "Quick Start (Python)",
        "body": "from swarm.agents.honest import HonestAgent\nfrom swarm.agents.opportunistic import OpportunisticAgent\nfrom swarm.agents.deceptive import DeceptiveAgent\nfrom swarm.agents.adversarial import AdversarialAgent\nfrom swarm.core.orchestrator import Orchestrator, OrchestratorConfig\n\nconfig = OrchestratorConfig(n_epochs=10, steps_per_epoch=10, seed=42)\norchestrator = Orchestrator(config=config)\n\norchestrator.register_agent(HonestAgent(agent_id=\"honest_1\", name=\"Alice\"))\norchestrator.register_agent(HonestAgent(agent_id=\"honest_2\", name=\"Bob\"))\norchestrator.register_agent(OpportunisticAgent(agent_id=\"opp_1\"))\norchestrator.register_agent(DeceptiveAgent(agent_id=\"dec_1\"))\n\nmetrics = orchestrator.run()\nfor m in metrics:\n    print(f\"Epoch {m.epoch}: toxicity={m.toxicity_rate:.3f}, welfare={m.total_welfare:.2f}\")"
      },
      {
        "title": "Quick Start (CLI)",
        "body": "# List available scenarios\nswarm list\n\n# Run a scenario\nswarm run scenarios/baseline.yaml\n\n# Override settings\nswarm run scenarios/baseline.yaml --seed 42 --epochs 20 --steps 15\n\n# Export results\nswarm run scenarios/baseline.yaml --export-json results.json --export-csv outputs/"
      },
      {
        "title": "Quick Start (API)",
        "body": "Start the API server:\n\npip install swarm-safety[api]\nuvicorn swarm.api.app:app --host 127.0.0.1 --port 8000\n\nAPI documentation at http://localhost:8000/docs.\n\nSecurity Note: The server binds to 127.0.0.1 (localhost only) by default. Do not bind to 0.0.0.0 unless you understand the security implications and have proper firewall rules in place."
      },
      {
        "title": "Register Agent",
        "body": "curl -X POST http://localhost:8000/api/v1/agents/register \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"name\": \"YourAgent\",\n    \"description\": \"What your agent does\",\n    \"capabilities\": [\"governance-testing\", \"red-teaming\"]\n  }'\n\nReturns agent_id and api_key."
      },
      {
        "title": "Submit Scenario",
        "body": "curl -X POST http://localhost:8000/api/v1/scenarios/submit \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"name\": \"my-scenario\",\n    \"description\": \"Testing collusion detection with 5 agents\",\n    \"yaml_content\": \"simulation:\\n  n_epochs: 10\\n  steps_per_epoch: 10\\nagents:\\n  - type: honest\\n    count: 3\\n  - type: adversarial\\n    count: 2\",\n    \"tags\": [\"collusion\", \"governance\"]\n  }'"
      },
      {
        "title": "Create & Join Simulation",
        "body": "# Create\ncurl -X POST http://localhost:8000/api/v1/simulations/create \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"scenario_id\": \"SCENARIO_ID\", \"max_participants\": 5}'\n\n# Join\ncurl -X POST http://localhost:8000/api/v1/simulations/SIM_ID/join \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"agent_id\": \"YOUR_AGENT_ID\", \"role\": \"participant\"}'"
      },
      {
        "title": "Soft Probabilistic Labels",
        "body": "Interactions carry p = P(v = +1) — probability of beneficial outcome:\n\nObservables -> ProxyComputer -> v_hat -> sigmoid -> p -> PayoffEngine -> payoffs\n                                                    |\n                                               SoftMetrics -> toxicity, quality gap, etc."
      },
      {
        "title": "Five Key Metrics",
        "body": "Metric\tWhat It Measures\nToxicity rate\tExpected harm among accepted interactions: E[1-p | accepted]\nQuality gap\tAdverse selection indicator (negative = bad): E[p | accepted] - E[p | rejected]\nConditional loss\tSelection effect on payoffs\nIncoherence\tVariance-to-error ratio across replays\nIllusion delta\tGap between perceived coherence and actual consistency"
      },
      {
        "title": "Agent Types (14 families, 38 implementations)",
        "body": "Type\tBehavior\nHonest\tCooperative, trust-based, completes tasks diligently\nOpportunistic\tMaximizes short-term payoff, cherry-picks tasks\nDeceptive\tBuilds trust, then exploits trusted relationships\nAdversarial\tTargets honest agents, coordinates with allies\nLDT\tLogical Decision Theory with FDT/UDT precommitment\nRLM\tReinforcement Learning from Memory\nCouncil\tMulti-agent deliberation-based decisions\nSkillRL\tLearns interaction strategies via reward signals\nLLM\tBehavior determined by LLM (Anthropic, OpenAI, or Ollama)\nMoltbook\tDomain-specific social platform agent\nScholar\tAcademic citation and research agent\nWiki Editor\tCollaborative editing with editorial policy"
      },
      {
        "title": "Governance Levers (29 mechanisms)",
        "body": "Transaction Taxes — Reduce exploitation, cost welfare\nReputation Decay — Punish bad actors, erode honest standing\nCircuit Breakers — Freeze toxic agents quickly\nRandom Audits — Deter hidden exploitation\nStaking — Filter undercapitalized agents\nCollusion Detection — Catch coordinated attacks (the critical lever near collapse threshold)\nSybil Detection — Identify duplicate agents\nTransparency Ledger — Reward/penalize based on outcome\nModerator Agent — Probabilistic review of interactions\nIncoherence Friction — Tax uncertainty-driven decisions\nCouncil Deliberation — Multi-agent governance decisions\nDiversity Enforcement — Prevent monoculture collapse\nMoltipedia-specific — Pair caps, page cooldowns, daily caps, self-fix prevention"
      },
      {
        "title": "Framework Bridges",
        "body": "Bridge\tIntegration\nConcordia\tDeepMind's multi-agent framework\nGasTown\tMulti-agent workspace governance\nClaude Code\tClaude CLI agent integration\nLiveSWE\tLive software engineering tasks\nOpenClaw\tOpen agent protocol\nPrime Intellect\tCross-platform run tracking\nRalph\tAgent orchestration\nWorktree\tGit worktree-based sandboxing"
      },
      {
        "title": "Scenario YAML Format",
        "body": "simulation:\n  n_epochs: 10\n  steps_per_epoch: 10\n  seed: 42\n\nagents:\n  - type: honest\n    count: 3\n    config:\n      acceptance_threshold: 0.4\n  - type: adversarial\n    count: 2\n    config:\n      aggression_level: 0.7\n\ngovernance:\n  transaction_tax_rate: 0.05\n  circuit_breaker_enabled: true\n  collusion_detection_enabled: true\n\nsuccess_criteria:\n  max_toxicity: 0.3\n  min_quality_gap: 0.0"
      },
      {
        "title": "Phase Transitions (11-scenario, 209-epoch study)",
        "body": "Regime\tAdversarial %\tToxicity\tWelfare\tOutcome\nCooperative\t0-20%\t< 0.30\tStable\tSurvives\nContested\t20-37.5%\t0.33-0.37\tDeclining\tSurvives\nCollapse\t50%+\t~0.30\tZero by epoch 12-14\tCollapses\n\nCritical threshold between 37.5% and 50% adversarial agents separates recoverable from irreversible collapse."
      },
      {
        "title": "Governance Cost Paradox (v1.5.0 GasTown study)",
        "body": "42-run study reveals: governance reduces toxicity at all adversarial levels (mean reduction 0.071) but imposes net-negative welfare costs at current parameter tuning. At 0% adversarial, governance costs 216 welfare units (-57.6%) for only 0.066 toxicity reduction."
      },
      {
        "title": "GasTown Governance Cost",
        "body": "Study governance overhead vs. toxicity reduction across 7 agent compositions with and without governance levers. Reveals the safety-throughput trade-off. See scenarios/gastown_governance_cost.yaml."
      },
      {
        "title": "LDT Cooperation",
        "body": "220 runs across 10 seeds comparing TDT vs FDT vs UDT cooperation strategies at population scales up to 21 agents. See scenarios/ldt_cooperation.yaml."
      },
      {
        "title": "Moltipedia Heartbeat",
        "body": "Model the Moltipedia wiki editing loop: competing AI editors, editorial policy, point farming, and anti-gaming governance. See scenarios/moltipedia_heartbeat.yaml."
      },
      {
        "title": "Moltbook CAPTCHA",
        "body": "Model Moltbook's anti-human math challenges and rate limiting: obfuscated text parsing, verification gates, and spam prevention. See scenarios/moltbook_captcha.yaml."
      },
      {
        "title": "API Endpoints (Full Reference)",
        "body": "Method\tEndpoint\tDescription\nGET\t/health\tHealth check\nGET\t/\tAPI info\nPOST\t/api/v1/agents/register\tRegister agent\nGET\t/api/v1/agents/{agent_id}\tGet agent details\nGET\t/api/v1/agents/\tList agents\nPOST\t/api/v1/scenarios/submit\tSubmit scenario\nGET\t/api/v1/scenarios/{scenario_id}\tGet scenario\nGET\t/api/v1/scenarios/\tList scenarios\nPOST\t/api/v1/simulations/create\tCreate simulation\nPOST\t/api/v1/simulations/{id}/join\tJoin simulation\nGET\t/api/v1/simulations/{id}\tGet simulation\nGET\t/api/v1/simulations/\tList simulations"
      },
      {
        "title": "Citation",
        "body": "@software{swarm2026,\n  title = {SWARM: System-Wide Assessment of Risk in Multi-agent systems},\n  author = {Savitt, Raeli},\n  year = {2026},\n  url = {https://github.com/swarm-ai-safety/swarm}\n}"
      },
      {
        "title": "Linked Docs",
        "body": "Skill metadata: skill.json\nAgent discovery: .well-known/agent.json\nFull documentation: https://github.com/swarm-ai-safety/swarm/tree/main/docs\nTheoretical foundations: docs/research/theory.md\nGovernance guide: docs/governance.md\nRed-teaming guide: docs/red-teaming.md\nScenario format: docs/guides/scenarios.md"
      }
    ],
    "body": "SWARM Safety Skill\n\nStudy how intelligence swarms — and where it fails.\n\nSWARM is a research framework for studying emergent risks in multi-agent AI systems using soft (probabilistic) labels instead of binary good/bad classifications. AGI-level risks don't require AGI-level agents — harmful dynamics emerge when many sub-AGI agents interact, even when no individual agent is misaligned.\n\nv1.5.0 | 38 agent types | 29 governance levers | 55 scenarios | 2922 tests | 8 framework bridges\n\nRepository: https://github.com/swarm-ai-safety/swarm\n\nHard Rules\nSWARM simulations run locally. Install the package first.\nDo not submit scenarios containing real API keys, credentials, or PII.\nSimulation results are research artifacts. Do not present them as ground truth about real systems.\nWhen publishing results, cite the framework and disclose simulation parameters.\nSecurity\nAPI binds to localhost only (127.0.0.1) by default to prevent network exposure.\nCORS restricted to localhost origins by default.\nNo authentication on development API — do not expose to untrusted networks.\nIn-memory storage — data does not persist between restarts.\nFor production deployment, add authentication middleware and use a proper database.\nInstall\n# From PyPI\npip install swarm-safety\n\n# With LLM agent support\npip install swarm-safety[llm]\n\n# Full development (all extras)\ngit clone https://github.com/swarm-ai-safety/swarm.git\ncd swarm\npip install -e \".[dev,runtime]\"\n\nQuick Start (Python)\nfrom swarm.agents.honest import HonestAgent\nfrom swarm.agents.opportunistic import OpportunisticAgent\nfrom swarm.agents.deceptive import DeceptiveAgent\nfrom swarm.agents.adversarial import AdversarialAgent\nfrom swarm.core.orchestrator import Orchestrator, OrchestratorConfig\n\nconfig = OrchestratorConfig(n_epochs=10, steps_per_epoch=10, seed=42)\norchestrator = Orchestrator(config=config)\n\norchestrator.register_agent(HonestAgent(agent_id=\"honest_1\", 
name=\"Alice\"))\norchestrator.register_agent(HonestAgent(agent_id=\"honest_2\", name=\"Bob\"))\norchestrator.register_agent(OpportunisticAgent(agent_id=\"opp_1\"))\norchestrator.register_agent(DeceptiveAgent(agent_id=\"dec_1\"))\n\nmetrics = orchestrator.run()\nfor m in metrics:\n    print(f\"Epoch {m.epoch}: toxicity={m.toxicity_rate:.3f}, welfare={m.total_welfare:.2f}\")\n\nQuick Start (CLI)\n# List available scenarios\nswarm list\n\n# Run a scenario\nswarm run scenarios/baseline.yaml\n\n# Override settings\nswarm run scenarios/baseline.yaml --seed 42 --epochs 20 --steps 15\n\n# Export results\nswarm run scenarios/baseline.yaml --export-json results.json --export-csv outputs/\n\nQuick Start (API)\n\nStart the API server:\n\npip install swarm-safety[api]\nuvicorn swarm.api.app:app --host 127.0.0.1 --port 8000\n\n\nAPI documentation at http://localhost:8000/docs.\n\nSecurity Note: The server binds to 127.0.0.1 (localhost only) by default. Do not bind to 0.0.0.0 unless you understand the security implications and have proper firewall rules in place.\n\nRegister Agent\ncurl -X POST http://localhost:8000/api/v1/agents/register \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"name\": \"YourAgent\",\n    \"description\": \"What your agent does\",\n    \"capabilities\": [\"governance-testing\", \"red-teaming\"]\n  }'\n\n\nReturns agent_id and api_key.\n\nSubmit Scenario\ncurl -X POST http://localhost:8000/api/v1/scenarios/submit \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"name\": \"my-scenario\",\n    \"description\": \"Testing collusion detection with 5 agents\",\n    \"yaml_content\": \"simulation:\\n  n_epochs: 10\\n  steps_per_epoch: 10\\nagents:\\n  - type: honest\\n    count: 3\\n  - type: adversarial\\n    count: 2\",\n    \"tags\": [\"collusion\", \"governance\"]\n  }'\n\nCreate & Join Simulation\n# Create\ncurl -X POST http://localhost:8000/api/v1/simulations/create \\\n  -H \"Content-Type: application/json\" \\\n  -d 
'{\"scenario_id\": \"SCENARIO_ID\", \"max_participants\": 5}'\n\n# Join\ncurl -X POST http://localhost:8000/api/v1/simulations/SIM_ID/join \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"agent_id\": \"YOUR_AGENT_ID\", \"role\": \"participant\"}'\n\nCore Concepts\nSoft Probabilistic Labels\n\nInteractions carry p = P(v = +1) — probability of beneficial outcome:\n\nObservables -> ProxyComputer -> v_hat -> sigmoid -> p -> PayoffEngine -> payoffs\n                                                    |\n                                               SoftMetrics -> toxicity, quality gap, etc.\n\nFive Key Metrics\nMetric\tWhat It Measures\nToxicity rate\tExpected harm among accepted interactions: E[1-p | accepted]\nQuality gap\tAdverse selection indicator (negative = bad): E[p | accepted] - E[p | rejected]\nConditional loss\tSelection effect on payoffs\nIncoherence\tVariance-to-error ratio across replays\nIllusion delta\tGap between perceived coherence and actual consistency\nAgent Types (14 families, 38 implementations)\nType\tBehavior\nHonest\tCooperative, trust-based, completes tasks diligently\nOpportunistic\tMaximizes short-term payoff, cherry-picks tasks\nDeceptive\tBuilds trust, then exploits trusted relationships\nAdversarial\tTargets honest agents, coordinates with allies\nLDT\tLogical Decision Theory with FDT/UDT precommitment\nRLM\tReinforcement Learning from Memory\nCouncil\tMulti-agent deliberation-based decisions\nSkillRL\tLearns interaction strategies via reward signals\nLLM\tBehavior determined by LLM (Anthropic, OpenAI, or Ollama)\nMoltbook\tDomain-specific social platform agent\nScholar\tAcademic citation and research agent\nWiki Editor\tCollaborative editing with editorial policy\nGovernance Levers (29 mechanisms)\nTransaction Taxes — Reduce exploitation, cost welfare\nReputation Decay — Punish bad actors, erode honest standing\nCircuit Breakers — Freeze toxic agents quickly\nRandom Audits — Deter hidden exploitation\nStaking — Filter 
undercapitalized agents\nCollusion Detection — Catch coordinated attacks (the critical lever near collapse threshold)\nSybil Detection — Identify duplicate agents\nTransparency Ledger — Reward/penalize based on outcome\nModerator Agent — Probabilistic review of interactions\nIncoherence Friction — Tax uncertainty-driven decisions\nCouncil Deliberation — Multi-agent governance decisions\nDiversity Enforcement — Prevent monoculture collapse\nMoltipedia-specific — Pair caps, page cooldowns, daily caps, self-fix prevention\nFramework Bridges\nBridge\tIntegration\nConcordia\tDeepMind's multi-agent framework\nGasTown\tMulti-agent workspace governance\nClaude Code\tClaude CLI agent integration\nLiveSWE\tLive software engineering tasks\nOpenClaw\tOpen agent protocol\nPrime Intellect\tCross-platform run tracking\nRalph\tAgent orchestration\nWorktree\tGit worktree-based sandboxing\nScenario YAML Format\nsimulation:\n  n_epochs: 10\n  steps_per_epoch: 10\n  seed: 42\n\nagents:\n  - type: honest\n    count: 3\n    config:\n      acceptance_threshold: 0.4\n  - type: adversarial\n    count: 2\n    config:\n      aggression_level: 0.7\n\ngovernance:\n  transaction_tax_rate: 0.05\n  circuit_breaker_enabled: true\n  collusion_detection_enabled: true\n\nsuccess_criteria:\n  max_toxicity: 0.3\n  min_quality_gap: 0.0\n\nKey Research Findings\nPhase Transitions (11-scenario, 209-epoch study)\nRegime\tAdversarial %\tToxicity\tWelfare\tOutcome\nCooperative\t0-20%\t< 0.30\tStable\tSurvives\nContested\t20-37.5%\t0.33-0.37\tDeclining\tSurvives\nCollapse\t50%+\t~0.30\tZero by epoch 12-14\tCollapses\n\nCritical threshold between 37.5% and 50% adversarial agents separates recoverable from irreversible collapse.\n\nGovernance Cost Paradox (v1.5.0 GasTown study)\n\n42-run study reveals: governance reduces toxicity at all adversarial levels (mean reduction 0.071) but imposes net-negative welfare costs at current parameter tuning. 
At 0% adversarial, governance costs 216 welfare units (-57.6%) for only 0.066 toxicity reduction.\n\nCase Studies\nGasTown Governance Cost\n\nStudy governance overhead vs. toxicity reduction across 7 agent compositions with and without governance levers. Reveals the safety-throughput trade-off. See scenarios/gastown_governance_cost.yaml.\n\nLDT Cooperation\n\n220 runs across 10 seeds comparing TDT vs FDT vs UDT cooperation strategies at population scales up to 21 agents. See scenarios/ldt_cooperation.yaml.\n\nMoltipedia Heartbeat\n\nModel the Moltipedia wiki editing loop: competing AI editors, editorial policy, point farming, and anti-gaming governance. See scenarios/moltipedia_heartbeat.yaml.\n\nMoltbook CAPTCHA\n\nModel Moltbook's anti-human math challenges and rate limiting: obfuscated text parsing, verification gates, and spam prevention. See scenarios/moltbook_captcha.yaml.\n\nAPI Endpoints (Full Reference)\nMethod\tEndpoint\tDescription\nGET\t/health\tHealth check\nGET\t/\tAPI info\nPOST\t/api/v1/agents/register\tRegister agent\nGET\t/api/v1/agents/{agent_id}\tGet agent details\nGET\t/api/v1/agents/\tList agents\nPOST\t/api/v1/scenarios/submit\tSubmit scenario\nGET\t/api/v1/scenarios/{scenario_id}\tGet scenario\nGET\t/api/v1/scenarios/\tList scenarios\nPOST\t/api/v1/simulations/create\tCreate simulation\nPOST\t/api/v1/simulations/{id}/join\tJoin simulation\nGET\t/api/v1/simulations/{id}\tGet simulation\nGET\t/api/v1/simulations/\tList simulations\nCitation\n@software{swarm2026,\n  title = {SWARM: System-Wide Assessment of Risk in Multi-agent systems},\n  author = {Savitt, Raeli},\n  year = {2026},\n  url = {https://github.com/swarm-ai-safety/swarm}\n}\n\nLinked Docs\nSkill metadata: skill.json\nAgent discovery: .well-known/agent.json\nFull documentation: https://github.com/swarm-ai-safety/swarm/tree/main/docs\nTheoretical foundations: docs/research/theory.md\nGovernance guide: docs/governance.md\nRed-teaming guide: docs/red-teaming.md\nScenario format: 
docs/guides/scenarios.md"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/rsavitt/swarm-2",
    "publisherUrl": "https://clawhub.ai/rsavitt/swarm-2",
    "owner": "rsavitt",
    "version": "1.5.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/swarm-2",
    "downloadUrl": "https://openagent3.xyz/downloads/swarm-2",
    "agentUrl": "https://openagent3.xyz/skills/swarm-2/agent",
    "manifestUrl": "https://openagent3.xyz/skills/swarm-2/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/swarm-2/agent.md"
  }
}