{
  "schemaVersion": "1.0",
  "item": {
    "slug": "dispatching-parallel-agents",
    "name": "Dispatching Parallel Agents",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/zlc000190/dispatching-parallel-agents",
    "canonicalUrl": "https://clawhub.ai/zlc000190/dispatching-parallel-agents",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/dispatching-parallel-agents",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=dispatching-parallel-agents",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/dispatching-parallel-agents"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/dispatching-parallel-agents",
    "agentPageUrl": "https://openagent3.xyz/skills/dispatching-parallel-agents/agent",
    "manifestUrl": "https://openagent3.xyz/skills/dispatching-parallel-agents/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/dispatching-parallel-agents/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Overview",
        "body": "When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.\n\nCore principle: Dispatch one agent per independent problem domain. Let them work concurrently."
      },
      {
        "title": "When to Use",
        "body": "digraph when_to_use {\n    \"Multiple failures?\" [shape=diamond];\n    \"Are they independent?\" [shape=diamond];\n    \"Single agent investigates all\" [shape=box];\n    \"One agent per problem domain\" [shape=box];\n    \"Can they work in parallel?\" [shape=diamond];\n    \"Sequential agents\" [shape=box];\n    \"Parallel dispatch\" [shape=box];\n\n    \"Multiple failures?\" -> \"Are they independent?\" [label=\"yes\"];\n    \"Are they independent?\" -> \"Single agent investigates all\" [label=\"no - related\"];\n    \"Are they independent?\" -> \"Can they work in parallel?\" [label=\"yes\"];\n    \"Can they work in parallel?\" -> \"Parallel dispatch\" [label=\"yes\"];\n    \"Can they work in parallel?\" -> \"Sequential agents\" [label=\"no - shared state\"];\n}\n\nUse when:\n\n3+ test files failing with different root causes\nMultiple subsystems broken independently\nEach problem can be understood without context from others\nNo shared state between investigations\n\nDon't use when:\n\nFailures are related (fix one might fix others)\nNeed to understand full system state\nAgents would interfere with each other"
      },
      {
        "title": "1. Identify Independent Domains",
        "body": "Group failures by what's broken:\n\nFile A tests: Tool approval flow\nFile B tests: Batch completion behavior\nFile C tests: Abort functionality\n\nEach domain is independent - fixing tool approval doesn't affect abort tests."
      },
      {
        "title": "2. Create Focused Agent Tasks",
        "body": "Each agent gets:\n\nSpecific scope: One test file or subsystem\nClear goal: Make these tests pass\nConstraints: Don't change other code\nExpected output: Summary of what you found and fixed"
      },
      {
        "title": "3. Dispatch in Parallel",
        "body": "// In Claude Code / AI environment\nTask(\"Fix agent-tool-abort.test.ts failures\")\nTask(\"Fix batch-completion-behavior.test.ts failures\")\nTask(\"Fix tool-approval-race-conditions.test.ts failures\")\n// All three run concurrently"
      },
      {
        "title": "4. Review and Integrate",
        "body": "When agents return:\n\nRead each summary\nVerify fixes don't conflict\nRun full test suite\nIntegrate all changes"
      },
      {
        "title": "Agent Prompt Structure",
        "body": "Good agent prompts are:\n\nFocused - One clear problem domain\nSelf-contained - All context needed to understand the problem\nSpecific about output - What should the agent return?\n\nFix the 3 failing tests in src/agents/agent-tool-abort.test.ts:\n\n1. \"should abort tool with partial output capture\" - expects 'interrupted at' in message\n2. \"should handle mixed completed and aborted tools\" - fast tool aborted instead of completed\n3. \"should properly track pendingToolCount\" - expects 3 results but gets 0\n\nThese are timing/race condition issues. Your task:\n\n1. Read the test file and understand what each test verifies\n2. Identify root cause - timing issues or actual bugs?\n3. Fix by:\n   - Replacing arbitrary timeouts with event-based waiting\n   - Fixing bugs in abort implementation if found\n   - Adjusting test expectations if testing changed behavior\n\nDo NOT just increase timeouts - find the real issue.\n\nReturn: Summary of what you found and what you fixed."
      },
      {
        "title": "Common Mistakes",
        "body": "❌ Too broad: \"Fix all the tests\" - agent gets lost\n✅ Specific: \"Fix agent-tool-abort.test.ts\" - focused scope\n\n❌ No context: \"Fix the race condition\" - agent doesn't know where\n✅ Context: Paste the error messages and test names\n\n❌ No constraints: Agent might refactor everything\n✅ Constraints: \"Do NOT change production code\" or \"Fix tests only\"\n\n❌ Vague output: \"Fix it\" - you don't know what changed\n✅ Specific: \"Return summary of root cause and changes\""
      },
      {
        "title": "When NOT to Use",
        "body": "Related failures: Fixing one might fix others - investigate together first\nNeed full context: Understanding requires seeing entire system\nExploratory debugging: You don't know what's broken yet\nShared state: Agents would interfere (editing same files, using same resources)"
      },
      {
        "title": "Real Example from Session",
        "body": "Scenario: 6 test failures across 3 files after major refactoring\n\nFailures:\n\nagent-tool-abort.test.ts: 3 failures (timing issues)\nbatch-completion-behavior.test.ts: 2 failures (tools not executing)\ntool-approval-race-conditions.test.ts: 1 failure (execution count = 0)\n\nDecision: Independent domains - abort logic separate from batch completion separate from race conditions\n\nDispatch:\n\nAgent 1 → Fix agent-tool-abort.test.ts\nAgent 2 → Fix batch-completion-behavior.test.ts\nAgent 3 → Fix tool-approval-race-conditions.test.ts\n\nResults:\n\nAgent 1: Replaced timeouts with event-based waiting\nAgent 2: Fixed event structure bug (threadId in wrong place)\nAgent 3: Added wait for async tool execution to complete\n\nIntegration: All fixes independent, no conflicts, full suite green\n\nTime saved: 3 problems solved in parallel vs sequentially"
      },
      {
        "title": "Key Benefits",
        "body": "Parallelization - Multiple investigations happen simultaneously\nFocus - Each agent has narrow scope, less context to track\nIndependence - Agents don't interfere with each other\nSpeed - 3 problems solved in the time of one"
      },
      {
        "title": "Verification",
        "body": "After agents return:\n\nReview each summary - Understand what changed\nCheck for conflicts - Did agents edit same code?\nRun full suite - Verify all fixes work together\nSpot check - Agents can make systematic errors"
      },
      {
        "title": "Real-World Impact",
        "body": "From debugging session (2025-10-03):\n\n6 failures across 3 files\n3 agents dispatched in parallel\nAll investigations completed concurrently\nAll fixes integrated successfully\nZero conflicts between agent changes"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/zlc000190/dispatching-parallel-agents",
    "publisherUrl": "https://clawhub.ai/zlc000190/dispatching-parallel-agents",
    "owner": "zlc000190",
    "version": "0.1.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/dispatching-parallel-agents",
    "downloadUrl": "https://openagent3.xyz/downloads/dispatching-parallel-agents",
    "agentUrl": "https://openagent3.xyz/skills/dispatching-parallel-agents/agent",
    "manifestUrl": "https://openagent3.xyz/skills/dispatching-parallel-agents/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/dispatching-parallel-agents/agent.md"
  }
}