{
  "schemaVersion": "1.0",
  "item": {
    "slug": "claw-multi-agent",
    "name": "Claw Multi Agent",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/zcyynl/claw-multi-agent",
    "canonicalUrl": "https://clawhub.ai/zcyynl/claw-multi-agent",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/claw-multi-agent",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=claw-multi-agent",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SKILL.md",
      "multiagent_engine.py",
      "orchestrator.py",
      "package.json",
      "run.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/claw-multi-agent"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/claw-multi-agent",
    "agentPageUrl": "https://openagent3.xyz/skills/claw-multi-agent/agent",
    "manifestUrl": "https://openagent3.xyz/skills/claw-multi-agent/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/claw-multi-agent/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "claw-multi-agent 🐝",
        "body": "Replace one AI with a team of AIs. Turn serial into parallel. Turn hours into minutes."
      },
      {
        "title": "What can it do?",
        "body": "ScenarioExampleSpeedupParallel researchSearch 5 frameworks simultaneously, each writes a report~65% ⚡Multi-model compareAsk Claude, Gemini, Kimi the same question at the same time~50% ⚡Code pipelinePlan → Code → Review, auto hand-off in sequenceQuality ↑Batch processingTranslate / analyze / summarize multiple docs in parallelScales linearly"
      },
      {
        "title": "⚡ Get started in 30 seconds",
        "body": "Just say something like:\n\n\"Research LangChain, CrewAI, and AutoGen in parallel\"\n\"Have multiple agents search these topics and write a combined report\"\n\"Compare how Claude and Gemini answer this question\"\n\"Use multi-agent mode to do this research\""
      },
      {
        "title": "🎭 Interaction Style — How to Talk to the User",
        "body": "This is the recommended pattern. Every multi-agent run must follow this interaction pattern."
      },
      {
        "title": "Step 0 — Announce skill activation FIRST",
        "body": "⚠️ Iron rule: The activation announcement must be your FIRST reply after receiving the task — before reading any files, before investigating, before spawning.\n\nWhy this matters: Reading files, researching background, and spawning all take time. If you do those first, users see long silence. Worse: context compression can happen during that time, and the announcement will never be sent.\n\nCorrect order: Receive task → Send announcement immediately → Then read files / spawn / wait\n\nThe very first thing to say when this skill is triggered — before any planning or spawning:\n\n🐝 **claw-multi-agent 已唤醒**\n多智能体并行模式启动，我来组建 Agent 小队处理这个任务。\n\nThis tells the user the skill is active and sets expectations for what's about to happen."
      },
      {
        "title": "Before spawning — announce the plan",
        "body": "Right after the activation announcement, present the plan BEFORE calling sessions_spawn:\n\n🚀 [N]个方向同时开搞，全面覆盖你的问题。\n\n📋 任务规划：\n🔍 研究员A（GLM）— [一句话任务描述]\n🔍 研究员B（GLM）— [一句话任务描述]\n📊 分析师（Kimi）— 先等前[N]个结果，单独召唤（note when sequential）\n\n模式：🎯 指挥官模式（联网搜索）\n预计耗时：~[X]s（[N] Agent 并行[，分析师串行跟进]）\n正在派出 Agent 小队...\n\nRole emoji reference:\n\nRoleEmojiExampleResearcher🔍🔍 研究员A（GLM）— Research XXAnalyst📊📊 分析师（Kimi）— Deep comparisonWriter✍️✍️ 写作者（Gemini）— Draft the reportCoder💻💻 程序员（Kimi）— Implement the logicReviewer🔎🔎 审核员（GLM）— Quality checkPlanner📋📋 规划师（Sonnet）— Break down tasks\n\nKey rules:\n\n✅ Always list each agent with: emoji + role + model name + one-line task\n✅ State the mode (指挥官/流水线/混合) and estimated time\n✅ End announcement with: 正在派出 Agent 小队...\n✅ Note sequential agents as: \"先等前N个结果，单独召唤\"\n❌ Never silently call sessions_spawn without announcing"
      },
      {
        "title": "While waiting — brief note",
        "body": "After spawning, say one line:\n\n⏳ 子 Agent 已全部出发，等结果回来..."
      },
      {
        "title": "After results — structured output (not raw dump)",
        "body": "Never paste sub-agent raw output directly. Always digest and restructure by content logic — NOT by agent order.\n\nRecommended output order:\n\n1. 执行统计卡 ← 先让用户知道跑了什么\n2. 核心结论（3-5条最重要发现）← 最有价值的放最前面\n3. 分主题展开细节（按内容逻辑组织，不按子Agent顺序）← 读起来是一篇完整文章\n4. 下一步行动建议 ← 落地结尾\n\n统计卡格式：\n\n## 📊 执行统计\n| Agent | 模型 | 耗时 | 状态 |\n|-------|------|------|------|\n| 🔍 研究员A | GLM | 58s | ✅ |\n| 🔍 研究员B | GLM | 62s | ✅ |\n| 📊 分析师  | Kimi | 45s | ✅ |\n串行需要约 165s → 并行实际 62s，节省 **62%** ⚡\n\n❌ Wrong — agent order:\n\n子Agent1的结果...\n子Agent2的结果...\n子Agent3的结果...  ← 读者要自己拼图，体验差\n\n✅ Right — content logic:\n\n## 核心结论\n1. 最重要发现A（来自多个Agent综合）\n2. 最重要发现B\n...\n\n## 详细分析：[主题1]\n...（整合所有相关Agent的内容）\n\n## 详细分析：[主题2]\n...\n\n## 下一步建议\n...\n\nThe main agent rewrites everything in its own words. Sub-agent outputs are raw material, not the final answer."
      },
      {
        "title": "After results — deliver the report (channel-aware)",
        "body": "Always save to file first. Then deliver based on the current channel.\n\n# Step 1: Always save to file first\nwrite(\"/workspace/projects/{topic-slug}/report.md\", content)\n\nThen choose delivery method by channel:\n\nChannelDelivery methodfeishu + has feishu-all-operations skillCreate Feishu doc → send link (best UX)feishu + no Feishu skillmessage(filePath=..., filename=\"report.md\") — send as attachmentDiscord / Telegram / Slackmessage(message=...) — Markdown renders normallyOther / unknownSave file + tell the user the path\n\nWhy this matters: Feishu chat does NOT render Markdown. Sending raw Markdown text shows ##, |---| symbols. Always use attachment or doc link on Feishu.\n\n# Feishu (no Feishu doc skill): send as attachment\nmessage(action=\"send\", filePath=\"/workspace/projects/{topic-slug}/report.md\", filename=\"report.md\")\n\n# Discord/Telegram: send markdown directly\nmessage(action=\"send\", message=report_content)\n\nEnd with one line:\n\n需要调整某个方向，或推送到飞书文档吗？\n\nRules:\n\n✅ Always save .md file first — regardless of channel\n✅ Check current channel before deciding how to send\n❌ Never paste >300 words of Markdown text on Feishu — it won't render\n❌ Never just say \"报告已保存至 /path/xxx\" — user can't open server paths\n❌ Never ask \"要不要我帮你整理成文档？\" — just do it"
      },
      {
        "title": "Sequential vs parallel — analyst must wait for researchers",
        "body": "Critical: Agents spawned in the same round run in parallel and share NO context with each other.\n\n❌ Wrong: spawn researcher-A + researcher-B + analyst all at once\n          → analyst has no data, returns empty\n\n✅ Right: \n  Round 1: spawn researcher-A + researcher-B (parallel, independent)\n  Wait for both to return...\n  Round 2: main agent consolidates research results\n           → then either: main agent writes analysis itself\n           → or: spawn analyst with research results injected as context\n\nBest practice: Any agent that depends on another agent's output should be spawned in a later round, after collecting the dependency."
      },
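      {
        "title": "Round pattern in plain Python (illustrative)",
        "body": "The two-round dependency rule above can be sketched in plain Python. This is an illustrative sketch only, not the package's actual engine; spawn() here is a hypothetical stand-in for sessions_spawn.\n\nfrom concurrent.futures import ThreadPoolExecutor\n\ndef spawn(task):\n    # stand-in for sessions_spawn: returns the sub-agent's summary\n    return f\"summary({task})\"\n\n# Round 1: independent researchers, same round → parallel\nwith ThreadPoolExecutor() as pool:\n    findings = list(pool.map(spawn, [\"research LangChain\", \"research CrewAI\"]))\n\n# Round 2: the analyst starts only after both results are collected,\n# with the research injected as context\nanalysis = spawn(\"compare: \" + \" | \".join(findings))"
      },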
      {
        "title": "🤖 Model Selection Guide — Which Model for Which Role",
        "body": "Always pick the right model for each agent. State the model explicitly in the announcement."
      },
      {
        "title": "Model roster",
        "body": "模型别名特点适合角色glmGLM便宜、速度快、中文好搜索、简单调研、状态检查kimiKimi长上下文（128k）、代码强深度分析、代码、长文整合geminiGemini创意好、多模态写作、文案、图像理解sonnetClaude Sonnet均衡、工具调用稳复杂推理、规划、审核opusClaude Opus最强推理极复杂分析、架构设计"
      },
      {
        "title": "Role → Model mapping (default)",
        "body": "角色默认模型原因🔍 研究员 / ResearcherGLM轻量搜索，够用且便宜📊 分析师 / AnalystKimi长上下文，处理大量资料✍️ 写作者 / WriterGemini创意写作效果最好💻 程序员 / CoderKimi长上下文代码理解🔎 审核员 / ReviewerGLM简单判断，不需重炮📋 规划师 / PlannerSonnet结构化规划能力强🧐 批评者 / CriticSonnet逻辑严谨，挑战假设"
      },
      {
        "title": "When to override defaults",
        "body": "任务很简单 → 降级到 GLM（省成本）\n需要最高质量 → 升级到 Opus\n用户明确指定模型 → 照用户说的来\n多模型对比场景 → 每个 Agent 用不同模型，在公告里说明"
      },
      {
        "title": "Always announce the model",
        "body": "In the pre-spawn announcement, every agent line must include the model:\n\n✅ 这样：🔍 研究员A（GLM）— 调研 LangChain\n❌ 这样：🔍 研究员A — 调研 LangChain"
      },
      {
        "title": "Step 0: Always plan first (dynamic agent count)",
        "body": "Never hardcode how many agents to spawn. The right number depends on the task complexity. Always start with a planning step:\n\n1. Analyze the task → identify subtopics / dimensions\n2. Decide: how many agents? which roles? which mode?\n3. Spawn accordingly (could be 2, could be 10)\n4. Consolidate results\n\nExample planning output:\n\nTask: \"Research the top AI agent frameworks\"\n→ Plan: 5 researchers (one per framework) + 1 analyst for comparison\n→ Mode: Orchestrator (needs web search)\n→ Spawn: 5 parallel sub-agents\n\nThe number of agents should match the task, not a template."
      },
      {
        "title": "Three modes — auto-routed by intent",
        "body": "You don't need to say which mode. Just describe the task. The skill reads these two signals:\n\nNeed web search / real-time info? → use sessions_spawn (has tools)\nWant multiple draft versions to compare? → spawn parallel writers\n\nUser says anything\n        ↓\n  Wants multiple versions / drafts / angles?\n        YES ──→ Also needs web search?\n        │              YES → 🔀 Hybrid Mode   (search first, then N drafts)\n        │              NO  → 🔄 Pipeline Mode (N drafts in parallel, pure text)\n        │\n        NO  ──→ Needs web search / file ops?\n                       YES → 🎯 Orchestrator Mode (sessions_spawn, parallel)\n                       NO  → 🔄 Pipeline Mode     (pure text, faster)\n\nTrigger signals the skill listens for:\n\nSignalExamplesMode triggeredMulti-draft intent\"几个版本\", \"多个角度\", \"让我挑\", \"各自写\", \"different styles\"Pipeline or HybridSearch intent\"搜索\", \"最新\", \"调研\", \"联网\", \"search\", \"latest\"Orchestrator or HybridBoth\"搜索后给我几版报告\", \"research then write multiple drafts\"HybridNeither\"翻译\", \"分析\", \"写作\", plain text tasksPipeline\n\nYou can also check with the router directly:\n\npython scripts/router.py mode \"搜索竞品资料，帮我写3个版本的分析\"\n# → 🔀 HYBRID\npython scripts/router.py mode \"调研LangChain并写一份报告\"\n# → 🎯 ORCHESTRATOR\npython scripts/router.py mode \"用三个角度分析这个方案\"\n# → 🔄 PIPELINE"
      },
      {
        "title": "🎯 Orchestrator Mode (with tools, truly parallel)",
        "body": "Sub-agents launched via sessions_spawn. Each has full OpenClaw tools: web search, file read/write, code execution.\n\n⚡ How parallelism works:\nCall multiple sessions_spawn in the same tool-call round — OpenClaw executes them simultaneously. All sub-agents run at once; the main agent collects all results when they finish.\n\nSame round → parallel execution:\n\nsessions_spawn(task=\"Search LangChain...\") ──┐\nsessions_spawn(task=\"Search CrewAI...\")    ──┤→ all run simultaneously\nsessions_spawn(task=\"Search AutoGen...\")   ──┘\nsessions_spawn(task=\"Search LangGraph...\") ─┘\n\n↓  (all finish, main agent receives all 4 results)\n\nMain agent consolidates → writes full report\n\nSequential = spawn one, wait for result, then spawn next. Use this only when a later task depends on an earlier result (e.g. write report AFTER research is done).\n\nHow to spawn — always include role, model hint, and what to return:\n\n# Parallel research: spawn all 4 in the same round → they run simultaneously\nsessions_spawn({\n    \"task\": \"[CONTEXT] Comparing AI agent frameworks for a tech team report.\\n\\n[YOUR TASK] Search LangChain: architecture, pros/cons, GitHub stars, latest version. Return 5 bullet points ≤100 words each. Do NOT write a full report.\",\n    \"label\": \"🔍 researcher-langchain [model: default]\"\n})\nsessions_spawn({\n    \"task\": \"[CONTEXT] Same report.\\n\\n[YOUR TASK] Search CrewAI: architecture, pros/cons, GitHub stars, latest version. Return 5 bullet points ≤100 words each.\",\n    \"label\": \"🔍 researcher-crewai [model: default]\"\n})\nsessions_spawn({\n    \"task\": \"[CONTEXT] Same report.\\n\\n[YOUR TASK] Search AutoGen: architecture, pros/cons, GitHub stars, latest version. Return 5 bullet points ≤100 words each.\",\n    \"label\": \"🔍 researcher-autogen [model: default]\"\n})\nsessions_spawn({\n    \"task\": \"[CONTEXT] Same report.\\n\\n[YOUR TASK] Search LangGraph: architecture, pros/cons, GitHub stars, latest version. 
Return 5 bullet points ≤100 words each.\",\n    \"label\": \"🔍 researcher-langgraph [model: default]\"\n})\n# All 4 run in parallel → when all return, main agent consolidates and writes report\n\nMixed: parallel then sequential (most common pattern):\n\n# Phase 1: parallel research (spawn all at once)\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search LangChain. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-langchain\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search CrewAI. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-crewai\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search AutoGen. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-autogen\"})\n\n# Phase 2: after all 3 return → main agent writes report (sequential, depends on research)\n# (main agent does this directly, no need to spawn a writer)\n\nKey rules:\n\n✅ Same round = parallel: spawn multiple agents at once for independent tasks\n✅ Sequential: spawn one, wait for result, then spawn next — only when tasks depend on each other\n✅ Sub-agents return summaries only (≤100 words per point)\n✅ Main agent writes the full report (avoids token limit failures)\n✅ Label each agent clearly: role + what model it's using\n❌ Don't ask a sub-agent to both search AND write a long report"
      },
      {
        "title": "🔄 Pipeline Mode (pure text, any task)",
        "body": "Runs agents via Python CLI. No web search, but works for any pure-text task: writing, analysis, translation, multi-model comparison, brainstorming, code generation.\n\ncd ~/.openclaw/skills/claw-multi-agent\n\n# Parallel: multiple agents tackle different angles simultaneously\npython run.py --mode parallel \\\n  --agents \"fast:🔍 researcher:summarize the pros of microservice architecture\" \\\n           \"fast:🔍 researcher:summarize the cons of microservice architecture\" \\\n           \"fast:🔍 researcher:list real-world companies using microservices and outcomes\" \\\n           \"smart:📊 analyst:compare microservices vs monolith for a 10-person startup\" \\\n  --aggregation synthesize\n\n# Sequential: chain agents, each builds on the previous output\npython run.py --mode sequential \\\n  --agents \"fast:📋 planner:break down how to build a REST API in Python\" \\\n           \"smart:💻 coder:implement the API based on the plan above\" \\\n           \"fast:🔎 reviewer:review the code for bugs and security issues\" \\\n  --aggregation last\n\n# Auto-route: router classifies task and picks tiers automatically\npython run.py --auto-route --task \"write a technical blog post about GRPO vs PPO\"\n\n# Dry-run: preview the plan without executing\npython run.py --dry-run \\\n  --agents \"fast:researcher:research X\" \"smart:writer:write report\"\n\nPipeline mode works great for:\n\nMulti-angle analysis (spawn one agent per dimension)\nMulti-model comparison (same task, different models)\nCode pipeline (plan → code → review)\nBatch writing (translate/summarize N documents in parallel)"
      },
      {
        "title": "🔀 Hybrid Mode (search + multi-draft)",
        "body": "Best of both worlds: sub-agents search the web (with tools), then multiple writers generate parallel drafts from the research.\n\nWhen it kicks in: user wants both real-time research AND multiple versions to compare.\n\nPhase 1 (Orchestrator — with tools, parallel):\n  sessions_spawn(search topic A) ──┐\n  sessions_spawn(search topic B) ──┤ → all run simultaneously\n  sessions_spawn(search topic C) ──┘\n  ↓ research summaries collected\n\nPhase 2 (Pipeline — pure text, parallel):\n  openclaw agent (writer style 1) ──┐\n  openclaw agent (writer style 2) ──┤ → all run simultaneously\n  openclaw agent (writer style 3) ──┘\n  ↓ 3 draft versions returned\n\nMain agent: compare drafts → pick best or synthesize\n\nCLI usage:\n\n# Auto: router detects hybrid intent and runs both phases\npython run.py --mode hybrid --task \"调研主流AI框架，给我3个不同风格的对比报告\" --num-drafts 3\n\n# Auto-mode: let router decide the mode automatically\npython run.py --auto-mode --task \"搜索竞品资料后写几个版本的分析\"\n\nIn conversation (sessions_spawn approach):\n\n# Phase 1: parallel research (spawn all at once)\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search LangChain. 5 bullets.\", \"label\": \"🔍 research-langchain\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search CrewAI. 5 bullets.\", \"label\": \"🔍 research-crewai\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search AutoGen. 5 bullets.\", \"label\": \"🔍 research-autogen\"})\n\n# After all 3 return → Phase 2: main agent writes 3 draft versions itself\n# (or spawn 3 pipeline agents with research as context)"
      },
      {
        "title": "Smart Router",
        "body": "Built-in task classifier. Auto-picks the right tier based on keywords:\n\npython scripts/router.py classify \"write a Python web scraper\"\n# → Tier: CODE  (routes to smart model)\n\npython scripts/router.py classify \"research the latest LLM papers\"\n# → Tier: RESEARCH  (routes to fast model)\n\npython scripts/router.py spawn --json --multi \"research X and write a report\"\n# → splits into 2 tasks: RESEARCH + CREATIVE\n\nTierModelUsed forFASTdefault (light)Simple queries, status, translation, searchCODEdefault (smart)Programming, debugging, implementationRESEARCHdefault (light)Research, search, compare, surveyCREATIVEdefault (smart)Writing, articles, documentationREASONINGdefault (best)Architecture, logic, complex analysis"
      },
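      {
        "title": "Mode routing in plain Python (illustrative)",
        "body": "The auto-routing flow above boils down to two keyword checks. The sketch below is illustrative only — it is not the actual scripts/router.py implementation, and the keyword lists are abbreviated.\n\nSEARCH_HINTS = (\"搜索\", \"最新\", \"调研\", \"联网\", \"search\", \"latest\")\nDRAFT_HINTS = (\"版本\", \"角度\", \"各自写\", \"drafts\", \"styles\")\n\ndef pick_mode(task: str) -> str:\n    wants_search = any(k in task for k in SEARCH_HINTS)\n    wants_drafts = any(k in task for k in DRAFT_HINTS)\n    if wants_drafts:\n        return \"HYBRID\" if wants_search else \"PIPELINE\"\n    return \"ORCHESTRATOR\" if wants_search else \"PIPELINE\""
      },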
      {
        "title": "contextSharing: Give sub-agents background",
        "body": "Sub-agents start as fresh sessions — they don't know your goal. Add a [CONTEXT] block.\n\nPattern 1: recent (recommended — works for 95% of cases)\n\n[CONTEXT] User is comparing AI agent frameworks for a team report. Audience: engineers.\n\n[YOUR TASK] Search LangChain pros and cons. Return 5 bullet points ≤100 words each.\n\nPattern 2: summary (sequential tasks — pass prior results forward)\n\n[PRIOR FINDINGS]\n- LangChain: richest ecosystem, steep curve\n- CrewAI: clean role separation...\n\n[YOUR TASK] Based on above, search AutoGen. Return 3 unique points not covered above.\n\nPattern 3: full (complex background — let agent read a file)\n\n[CONTEXT FILE] Read /workspace/research/context.md for full background.\n\n[YOUR TASK] Search latest Test-Time Compute Scaling advances. Return 3 summaries.\n\nReuse context across parallel agents:\n\nBG = \"Researching RL post-training for ML engineers. Topics: GRPO/DAPO/PPO, veRL.\"\n\nsessions_spawn({\"task\": f\"[CONTEXT] {BG}\\n\\n[TASK] Search GRPO vs PPO benchmarks. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-grpo [model: default]\"})\nsessions_spawn({\"task\": f\"[CONTEXT] {BG}\\n\\n[TASK] Search DAPO design. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-dapo [model: default]\"})\nsessions_spawn({\"task\": f\"[CONTEXT] {BG}\\n\\n[TASK] Search veRL architecture. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-verl [model: default]\"})"
      },
      {
        "title": "Execution summary — always output this",
        "body": "After every multi-agent run, print a standard card:\n\n## 📊 Execution Summary\n\nMode: 🎯 Orchestrator Mode (sessions_spawn, with tools)\n\n| Agent | Role | Model | Time | Status |\n|-------|------|-------|------|--------|\n| 🔍 researcher-langchain | Researcher | default | 22s | ✅ |\n| 🔍 researcher-crewai    | Researcher | default | 19s | ✅ |\n| 🔍 researcher-autogen   | Researcher | default | 24s | ✅ |\n| 🔍 researcher-langgraph | Researcher | default | 21s | ✅ |\n| ✍️ main (consolidate)   | Writer     | default | 38s | ✅ |\n\nAgents spawned: 4  |  Parallel time: ~24s  |  Serial equivalent: ~86s  |  Saved: ~62s (72%)\n\nAlways include:\n\nMode (Orchestrator / Pipeline + Sequential/Parallel)\nEach agent's role emoji + name + model used\nActual elapsed time per agent\nTotal parallel time vs serial equivalent"
      },
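      {
        "title": "How the savings number is computed (illustrative)",
        "body": "The savings line in the card is simple arithmetic: the serial equivalent is the sum of per-agent times, while the parallel time for one round is bounded by the slowest agent. Illustrative sketch, using the example timings from the stats card:\n\ntimes = {\"researcher-a\": 58, \"researcher-b\": 62, \"analyst\": 45}\nserial = sum(times.values())      # 165s if run one after another\nparallel = max(times.values())    # 62s: one round, bounded by the slowest agent\nsaved = 1 - parallel / serial     # ≈ 0.62 → \"saved 62%\""
      },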
      {
        "title": "Preset roles",
        "body": "RoleEmojiBest forresearcher🔍Web search, info gatheringwriter✍️Reports, documentation, articlescoder💻Code writing, debugging, implementationanalyst📊Data analysis, comparison, statisticsreviewer🔎Code / content review, QAplanner📋Task planning, decompositioncritic🧐Risk analysis, devil's advocate"
      },
      {
        "title": "Gotcha 0: Reading files before announcing (most common mistake)",
        "body": "Investigating context before sending the activation announcement causes long silence and risks losing the announcement entirely due to context compression.\n\n❌ Receive task → read operators.py → read README → announce → spawn\n✅ Receive task → announce immediately (can say \"analyzing task...\") → read files → spawn"
      },
      {
        "title": "Gotcha 1: Sub-agent output token limit",
        "body": "Sub-agents have a ~4096 token output cap. Exceeded → tool args truncated → file writes silently fail.\n\n❌ \"search AND write a 2000-word report\"\n✅ Sub-agent returns summaries; main agent writes the report"
      },
      {
        "title": "Gotcha 2: Orchestrator Mode has no tools in Pipeline Mode",
        "body": "python run.py processes have no web_search, exec, etc.\n\n❌ Pipeline mode: \"search the latest news on X\"\n✅ Anything needing real web access → Orchestrator Mode"
      },
      {
        "title": "Gotcha 3: Parallel agents can't depend on each other",
        "body": "Agents spawned in the same round run simultaneously.\n\n❌ Agent-2: \"based on Agent-1's results...\"\n✅ Parallel = independent; sequential = chained"
      },
      {
        "title": "Gotcha 4: Don't hardcode agent count",
        "body": "Match agents to the task, not to a template.\n\n❌ Always spawn exactly 3 agents\n✅ Plan first, then decide: simple task → 2 agents, complex → 8+ agents"
      },
      {
        "title": "Pipeline mode quick reference",
        "body": "python run.py\n  --mode parallel|sequential\n  --agents \"tier_or_model:🎭role:task description\"   # repeatable, any number\n  --aggregation synthesize|compare|concatenate|last\n  --timeout 300\n  --dry-run          # preview without executing\n  --auto-route       # router picks tiers automatically\n  --list-models      # show current model config\n\nAggregationEffectsynthesizeMain agent summarizes all outputs (default)compareSide-by-side of each agent's outputconcatenateOutputs joined in orderlastFinal agent's output only (sequential)"
      }
    ],
    "body": "claw-multi-agent 🐝\n\nReplace one AI with a team of AIs. Turn serial into parallel. Turn hours into minutes.\n\nWhat can it do?\nScenario\tExample\tSpeedup\nParallel research\tSearch 5 frameworks simultaneously, each writes a report\t~65% ⚡\nMulti-model compare\tAsk Claude, Gemini, Kimi the same question at the same time\t~50% ⚡\nCode pipeline\tPlan → Code → Review, auto hand-off in sequence\tQuality ↑\nBatch processing\tTranslate / analyze / summarize multiple docs in parallel\tScales linearly\n⚡ Get started in 30 seconds\n\nJust say something like:\n\n\"Research LangChain, CrewAI, and AutoGen in parallel\"\n\"Have multiple agents search these topics and write a combined report\"\n\"Compare how Claude and Gemini answer this question\"\n\"Use multi-agent mode to do this research\"\n🎭 Interaction Style — How to Talk to the User\n\nThis is the recommended pattern. Every multi-agent run must follow this interaction pattern.\n\nStep 0 — Announce skill activation FIRST\n\n⚠️ Iron rule: The activation announcement must be your FIRST reply after receiving the task — before reading any files, before investigating, before spawning.\n\nWhy this matters: Reading files, researching background, and spawning all take time. If you do those first, users see long silence. 
Worse: context compression can happen during that time, and the announcement will never be sent.\n\nCorrect order: Receive task → Send announcement immediately → Then read files / spawn / wait\n\nThe very first thing to say when this skill is triggered — before any planning or spawning:\n\n🐝 **claw-multi-agent 已唤醒**\n多智能体并行模式启动，我来组建 Agent 小队处理这个任务。\n\n\nThis tells the user the skill is active and sets expectations for what's about to happen.\n\nBefore spawning — announce the plan\n\nRight after the activation announcement, present the plan BEFORE calling sessions_spawn:\n\n🚀 [N]个方向同时开搞，全面覆盖你的问题。\n\n📋 任务规划：\n🔍 研究员A（GLM）— [一句话任务描述]\n🔍 研究员B（GLM）— [一句话任务描述]\n📊 分析师（Kimi）— 先等前[N]个结果，单独召唤（note when sequential）\n\n模式：🎯 指挥官模式（联网搜索）\n预计耗时：~[X]s（[N] Agent 并行[，分析师串行跟进]）\n正在派出 Agent 小队...\n\n\nRole emoji reference:\n\nRole\tEmoji\tExample\nResearcher\t🔍\t🔍 研究员A（GLM）— Research XX\nAnalyst\t📊\t📊 分析师（Kimi）— Deep comparison\nWriter\t✍️\t✍️ 写作者（Gemini）— Draft the report\nCoder\t💻\t💻 程序员（Kimi）— Implement the logic\nReviewer\t🔎\t🔎 审核员（GLM）— Quality check\nPlanner\t📋\t📋 规划师（Sonnet）— Break down tasks\n\nKey rules:\n\n✅ Always list each agent with: emoji + role + model name + one-line task\n✅ State the mode (指挥官/流水线/混合) and estimated time\n✅ End announcement with: 正在派出 Agent 小队...\n✅ Note sequential agents as: \"先等前N个结果，单独召唤\"\n❌ Never silently call sessions_spawn without announcing\nWhile waiting — brief note\n\nAfter spawning, say one line:\n\n⏳ 子 Agent 已全部出发，等结果回来...\n\nAfter results — structured output (not raw dump)\n\nNever paste sub-agent raw output directly. Always digest and restructure by content logic — NOT by agent order.\n\nRecommended output order:\n\n1. 执行统计卡 ← 先让用户知道跑了什么\n2. 核心结论（3-5条最重要发现）← 最有价值的放最前面\n3. 分主题展开细节（按内容逻辑组织，不按子Agent顺序）← 读起来是一篇完整文章\n4. 
下一步行动建议 ← 落地结尾\n\n\n统计卡格式：\n\n## 📊 执行统计\n| Agent | 模型 | 耗时 | 状态 |\n|-------|------|------|------|\n| 🔍 研究员A | GLM | 58s | ✅ |\n| 🔍 研究员B | GLM | 62s | ✅ |\n| 📊 分析师  | Kimi | 45s | ✅ |\n串行需要约 165s → 并行实际 62s，节省 **62%** ⚡\n\n\n❌ Wrong — agent order:\n\n子Agent1的结果...\n子Agent2的结果...\n子Agent3的结果...  ← 读者要自己拼图，体验差\n\n\n✅ Right — content logic:\n\n## 核心结论\n1. 最重要发现A（来自多个Agent综合）\n2. 最重要发现B\n...\n\n## 详细分析：[主题1]\n...（整合所有相关Agent的内容）\n\n## 详细分析：[主题2]\n...\n\n## 下一步建议\n...\n\n\nThe main agent rewrites everything in its own words. Sub-agent outputs are raw material, not the final answer.\n\nAfter results — deliver the report (channel-aware)\n\nAlways save to file first. Then deliver based on the current channel.\n\n# Step 1: Always save to file first\nwrite(\"/workspace/projects/{topic-slug}/report.md\", content)\n\n\nThen choose delivery method by channel:\n\nChannel\tDelivery method\nfeishu + has feishu-all-operations skill\tCreate Feishu doc → send link (best UX)\nfeishu + no Feishu skill\tmessage(filePath=..., filename=\"report.md\") — send as attachment\nDiscord / Telegram / Slack\tmessage(message=...) — Markdown renders normally\nOther / unknown\tSave file + tell the user the path\n\nWhy this matters: Feishu chat does NOT render Markdown. Sending raw Markdown text shows ##, |---| symbols. 
Always use attachment or doc link on Feishu.\n\n# Feishu (no Feishu doc skill): send as attachment\nmessage(action=\"send\", filePath=\"/workspace/projects/{topic-slug}/report.md\", filename=\"report.md\")\n\n# Discord/Telegram: send markdown directly\nmessage(action=\"send\", message=report_content)\n\n\nEnd with one line:\n\n需要调整某个方向，或推送到飞书文档吗？\n\n\nRules:\n\n✅ Always save .md file first — regardless of channel\n✅ Check current channel before deciding how to send\n❌ Never paste >300 words of Markdown text on Feishu — it won't render\n❌ Never just say \"报告已保存至 /path/xxx\" — user can't open server paths\n❌ Never ask \"要不要我帮你整理成文档？\" — just do it\nSequential vs parallel — analyst must wait for researchers\n\nCritical: Agents spawned in the same round run in parallel and share NO context with each other.\n\n❌ Wrong: spawn researcher-A + researcher-B + analyst all at once\n          → analyst has no data, returns empty\n\n✅ Right: \n  Round 1: spawn researcher-A + researcher-B (parallel, independent)\n  Wait for both to return...\n  Round 2: main agent consolidates research results\n           → then either: main agent writes analysis itself\n           → or: spawn analyst with research results injected as context\n\n\nBest practice: Any agent that depends on another agent's output should be spawned in a later round, after collecting the dependency.\n\n🤖 Model Selection Guide — Which Model for Which Role\n\nAlways pick the right model for each agent. 
State the model explicitly in the announcement.\n\nModel roster\n\n| Model | Alias | Strengths | Best-fit roles |\n|-------|-------|-----------|----------------|\n| glm | GLM | Cheap, fast, strong Chinese | Search, light research, status checks |\n| kimi | Kimi | Long context (128k), strong at code | Deep analysis, code, long-doc consolidation |\n| gemini | Gemini | Creative, multimodal | Writing, copy, image understanding |\n| sonnet | Claude Sonnet | Balanced, reliable tool calls | Complex reasoning, planning, review |\n| opus | Claude Opus | Strongest reasoning | Highly complex analysis, architecture design |\n\nRole → Model mapping (default)\n\n| Role | Default model | Why |\n|------|---------------|-----|\n| 🔍 Researcher | GLM | Lightweight search; good enough and cheap |\n| 📊 Analyst | Kimi | Long context for large volumes of material |\n| ✍️ Writer | Gemini | Best creative-writing results |\n| 💻 Coder | Kimi | Long-context code understanding |\n| 🔎 Reviewer | GLM | Simple judgments; no heavy artillery needed |\n| 📋 Planner | Sonnet | Strong structured planning |\n| 🧐 Critic | Sonnet | Rigorous logic, challenges assumptions |\n\nWhen to override defaults\n\nVery simple task → downgrade to GLM (save cost)\nHighest quality needed → upgrade to Opus\nUser explicitly names a model → do what the user says\nMulti-model comparison → give each agent a different model and say so in the announcement\n\nAlways announce the model\n\nIn the pre-spawn announcement, every agent line must include the model:\n\n✅ Like this: 🔍 Researcher A (GLM) — research LangChain\n❌ Not this: 🔍 Researcher A — research LangChain\n\nStep 0: Always plan first (dynamic agent count)\n\nNever hardcode how many agents to spawn. The right number depends on task complexity. Always start with a planning step:\n\n1. Analyze the task → identify subtopics / dimensions\n2. Decide: how many agents? which roles? which mode?\n3. Spawn accordingly (could be 2, could be 10)\n4. Consolidate results\n\n\nExample planning output:\n\nTask: \"Research the top AI agent frameworks\"\n→ Plan: 5 researchers (one per framework) + 1 analyst for comparison\n→ Mode: Orchestrator (needs web search)\n→ Spawn: 5 parallel sub-agents\n\n\nThe number of agents should match the task, not a template.\n\nThree modes — auto-routed by intent\n\nYou don't need to say which mode. Just describe the task. The skill reads these two signals:\n\nNeed web search / real-time info? → use sessions_spawn (has tools)\nWant multiple draft versions to compare? 
→ spawn parallel writers\n\nUser says anything\n        ↓\n  Wants multiple versions / drafts / angles?\n        YES ──→ Also needs web search?\n        │              YES → 🔀 Hybrid Mode   (search first, then N drafts)\n        │              NO  → 🔄 Pipeline Mode (N drafts in parallel, pure text)\n        │\n        NO  ──→ Needs web search / file ops?\n                       YES → 🎯 Orchestrator Mode (sessions_spawn, parallel)\n                       NO  → 🔄 Pipeline Mode     (pure text, faster)\n\n\nTrigger signals the skill listens for:\n\n| Signal | Examples | Mode triggered |\n|--------|----------|----------------|\n| Multi-draft intent | \"几个版本\", \"多个角度\", \"让我挑\", \"各自写\", \"different styles\" | Pipeline or Hybrid |\n| Search intent | \"搜索\", \"最新\", \"调研\", \"联网\", \"search\", \"latest\" | Orchestrator or Hybrid |\n| Both | \"搜索后给我几版报告\", \"research then write multiple drafts\" | Hybrid |\n| Neither | \"翻译\", \"分析\", \"写作\", plain-text tasks | Pipeline |\n\nYou can also check with the router directly:\n\npython scripts/router.py mode \"搜索竞品资料，帮我写3个版本的分析\"\n# → 🔀 HYBRID\npython scripts/router.py mode \"调研LangChain并写一份报告\"\n# → 🎯 ORCHESTRATOR\npython scripts/router.py mode \"用三个角度分析这个方案\"\n# → 🔄 PIPELINE\n\n🎯 Orchestrator Mode (with tools, truly parallel)\n\nSub-agents are launched via sessions_spawn. Each has the full OpenClaw toolset: web search, file read/write, code execution.\n\n⚡ How parallelism works: Call multiple sessions_spawn in the same tool-call round — OpenClaw executes them simultaneously. All sub-agents run at once; the main agent collects all results when they finish.\n\nSame round → parallel execution:\n\nsessions_spawn(task=\"Search LangChain...\") ──┐\nsessions_spawn(task=\"Search CrewAI...\")    ──┤→ all run simultaneously\nsessions_spawn(task=\"Search AutoGen...\")   ──┤\nsessions_spawn(task=\"Search LangGraph...\") ──┘\n\n↓  (all finish, main agent receives all 4 results)\n\nMain agent consolidates → writes full report\n\n\nSequential = spawn one, wait for the result, then spawn the next. 
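The round rule can be sketched as simple dependency layering (a hypothetical planner, not part of the skill): agents with no dependencies form round 1, and anything that needs another agent's output lands in a later round.

```python
# Hypothetical round planner: layer tasks so each round depends only on
# earlier rounds. Task names are illustrative.
def plan_rounds(deps):
    """deps maps task name -> set of task names it depends on."""
    rounds, done = [], set()
    while len(done) < len(deps):
        ready = {t for t, d in deps.items() if t not in done and d <= done}
        if not ready:
            raise ValueError("circular dependency")
        rounds.append(ready)
        done |= ready
    return rounds
```

For example, `plan_rounds({"researcher-A": set(), "researcher-B": set(), "analyst": {"researcher-A", "researcher-B"}})` yields two rounds: both researchers in parallel first, then the analyst.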
Use this only when a later task depends on an earlier result (e.g. write report AFTER research is done).\n\nHow to spawn — always include role, model hint, and what to return:\n\n# Parallel research: spawn all 4 in the same round → they run simultaneously\nsessions_spawn({\n    \"task\": \"[CONTEXT] Comparing AI agent frameworks for a tech team report.\\n\\n[YOUR TASK] Search LangChain: architecture, pros/cons, GitHub stars, latest version. Return 5 bullet points ≤100 words each. Do NOT write a full report.\",\n    \"label\": \"🔍 researcher-langchain [model: default]\"\n})\nsessions_spawn({\n    \"task\": \"[CONTEXT] Same report.\\n\\n[YOUR TASK] Search CrewAI: architecture, pros/cons, GitHub stars, latest version. Return 5 bullet points ≤100 words each.\",\n    \"label\": \"🔍 researcher-crewai [model: default]\"\n})\nsessions_spawn({\n    \"task\": \"[CONTEXT] Same report.\\n\\n[YOUR TASK] Search AutoGen: architecture, pros/cons, GitHub stars, latest version. Return 5 bullet points ≤100 words each.\",\n    \"label\": \"🔍 researcher-autogen [model: default]\"\n})\nsessions_spawn({\n    \"task\": \"[CONTEXT] Same report.\\n\\n[YOUR TASK] Search LangGraph: architecture, pros/cons, GitHub stars, latest version. Return 5 bullet points ≤100 words each.\",\n    \"label\": \"🔍 researcher-langgraph [model: default]\"\n})\n# All 4 run in parallel → when all return, main agent consolidates and writes report\n\n\nMixed: parallel then sequential (most common pattern):\n\n# Phase 1: parallel research (spawn all at once)\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search LangChain. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-langchain\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search CrewAI. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-crewai\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search AutoGen. 
5 bullets ≤100 words.\", \"label\": \"🔍 researcher-autogen\"})\n\n# Phase 2: after all 3 return → main agent writes report (sequential, depends on research)\n# (main agent does this directly, no need to spawn a writer)\n\n\nKey rules:\n\n✅ Same round = parallel: spawn multiple agents at once for independent tasks\n✅ Sequential: spawn one, wait for result, then spawn next — only when tasks depend on each other\n✅ Sub-agents return summaries only (≤100 words per point)\n✅ Main agent writes the full report (avoids token limit failures)\n✅ Label each agent clearly: role + what model it's using\n❌ Don't ask a sub-agent to both search AND write a long report\n\n🔄 Pipeline Mode (pure text, any task)\n\nRuns agents via the Python CLI. No web search, but it works for any pure-text task: writing, analysis, translation, multi-model comparison, brainstorming, code generation.\n\ncd ~/.openclaw/skills/claw-multi-agent\n\n# Parallel: multiple agents tackle different angles simultaneously\npython run.py --mode parallel \\\n  --agents \"fast:🔍 researcher:summarize the pros of microservice architecture\" \\\n           \"fast:🔍 researcher:summarize the cons of microservice architecture\" \\\n           \"fast:🔍 researcher:list real-world companies using microservices and outcomes\" \\\n           \"smart:📊 analyst:compare microservices vs monolith for a 10-person startup\" \\\n  --aggregation synthesize\n\n# Sequential: chain agents, each builds on the previous output\npython run.py --mode sequential \\\n  --agents \"fast:📋 planner:break down how to build a REST API in Python\" \\\n           \"smart:💻 coder:implement the API based on the plan above\" \\\n           \"fast:🔎 reviewer:review the code for bugs and security issues\" \\\n  --aggregation last\n\n# Auto-route: router classifies the task and picks tiers automatically\npython run.py --auto-route --task \"write a technical blog post about GRPO vs PPO\"\n\n# Dry-run: preview the plan without executing\npython run.py --dry-run \\\n  
--agents \"fast:researcher:research X\" \"smart:writer:write report\"\n\n\nPipeline mode works great for:\n\nMulti-angle analysis (spawn one agent per dimension)\nMulti-model comparison (same task, different models)\nCode pipeline (plan → code → review)\nBatch writing (translate/summarize N documents in parallel)\n🔀 Hybrid Mode (search + multi-draft)\n\nBest of both worlds: sub-agents search the web (with tools), then multiple writers generate parallel drafts from the research.\n\nWhen it kicks in: user wants both real-time research AND multiple versions to compare.\n\nPhase 1 (Orchestrator — with tools, parallel):\n  sessions_spawn(search topic A) ──┐\n  sessions_spawn(search topic B) ──┤ → all run simultaneously\n  sessions_spawn(search topic C) ──┘\n  ↓ research summaries collected\n\nPhase 2 (Pipeline — pure text, parallel):\n  openclaw agent (writer style 1) ──┐\n  openclaw agent (writer style 2) ──┤ → all run simultaneously\n  openclaw agent (writer style 3) ──┘\n  ↓ 3 draft versions returned\n\nMain agent: compare drafts → pick best or synthesize\n\n\nCLI usage:\n\n# Auto: router detects hybrid intent and runs both phases\npython run.py --mode hybrid --task \"调研主流AI框架，给我3个不同风格的对比报告\" --num-drafts 3\n\n# Auto-mode: let router decide the mode automatically\npython run.py --auto-mode --task \"搜索竞品资料后写几个版本的分析\"\n\n\nIn conversation (sessions_spawn approach):\n\n# Phase 1: parallel research (spawn all at once)\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search LangChain. 5 bullets.\", \"label\": \"🔍 research-langchain\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search CrewAI. 5 bullets.\", \"label\": \"🔍 research-crewai\"})\nsessions_spawn({\"task\": \"[CONTEXT] ...\\n\\n[TASK] Search AutoGen. 5 bullets.\", \"label\": \"🔍 research-autogen\"})\n\n# After all 3 return → Phase 2: main agent writes 3 draft versions itself\n# (or spawn 3 pipeline agents with research as context)\n\nSmart Router\n\nBuilt-in task classifier. 
Auto-picks the right tier based on keywords:\n\npython scripts/router.py classify \"write a Python web scraper\"\n# → Tier: CODE  (routes to smart model)\n\npython scripts/router.py classify \"research the latest LLM papers\"\n# → Tier: RESEARCH  (routes to fast model)\n\npython scripts/router.py spawn --json --multi \"research X and write a report\"\n# → splits into 2 tasks: RESEARCH + CREATIVE\n\n| Tier | Model | Used for |\n|------|-------|----------|\n| FAST | default (light) | Simple queries, status, translation, search |\n| CODE | default (smart) | Programming, debugging, implementation |\n| RESEARCH | default (light) | Research, search, compare, survey |\n| CREATIVE | default (smart) | Writing, articles, documentation |\n| REASONING | default (best) | Architecture, logic, complex analysis |\n\nContext sharing — give sub-agents background\n\nSub-agents start as fresh sessions — they don't know your goal. Add a [CONTEXT] block.\n\nPattern 1: recent (recommended — works for 95% of cases)\n\n[CONTEXT] User is comparing AI agent frameworks for a team report. Audience: engineers.\n\n[YOUR TASK] Search LangChain pros and cons. Return 5 bullet points ≤100 words each.\n\n\nPattern 2: summary (sequential tasks — pass prior results forward)\n\n[PRIOR FINDINGS]\n- LangChain: richest ecosystem, steep curve\n- CrewAI: clean role separation...\n\n[YOUR TASK] Based on the above, search AutoGen. Return 3 unique points not covered above.\n\n\nPattern 3: full (complex background — let the agent read a file)\n\n[CONTEXT FILE] Read /workspace/research/context.md for full background.\n\n[YOUR TASK] Search the latest Test-Time Compute Scaling advances. Return 3 summaries.\n\n\nReuse context across parallel agents:\n\nBG = \"Researching RL post-training for ML engineers. Topics: GRPO/DAPO/PPO, veRL.\"\n\nsessions_spawn({\"task\": f\"[CONTEXT] {BG}\\n\\n[TASK] Search GRPO vs PPO benchmarks. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-grpo [model: default]\"})\nsessions_spawn({\"task\": f\"[CONTEXT] {BG}\\n\\n[TASK] Search DAPO design. 
5 bullets ≤100 words.\", \"label\": \"🔍 researcher-dapo [model: default]\"})\nsessions_spawn({\"task\": f\"[CONTEXT] {BG}\\n\\n[TASK] Search veRL architecture. 5 bullets ≤100 words.\", \"label\": \"🔍 researcher-verl [model: default]\"})\n\nExecution summary — always output this\n\nAfter every multi-agent run, print a standard card:\n\n## 📊 Execution Summary\n\nMode: 🎯 Orchestrator Mode (sessions_spawn, with tools)\n\n| Agent | Role | Model | Time | Status |\n|-------|------|-------|------|--------|\n| 🔍 researcher-langchain | Researcher | default | 22s | ✅ |\n| 🔍 researcher-crewai    | Researcher | default | 19s | ✅ |\n| 🔍 researcher-autogen   | Researcher | default | 24s | ✅ |\n| 🔍 researcher-langgraph | Researcher | default | 21s | ✅ |\n| ✍️ main (consolidate)   | Writer     | default | 38s | ✅ |\n\nAgents spawned: 4  |  Parallel time: ~24s  |  Serial equivalent: ~86s  |  Saved: ~62s (72%)\n\n\nAlways include:\n\nMode (Orchestrator / Pipeline + Sequential/Parallel)\nEach agent's role emoji + name + model used\nActual elapsed time per agent\nTotal parallel time vs serial equivalent\n\nPreset roles\n\n| Role | Emoji | Best for |\n|------|-------|----------|\n| researcher | 🔍 | Web search, info gathering |\n| writer | ✍️ | Reports, documentation, articles |\n| coder | 💻 | Code writing, debugging, implementation |\n| analyst | 📊 | Data analysis, comparison, statistics |\n| reviewer | 🔎 | Code / content review, QA |\n| planner | 📋 | Task planning, decomposition |\n| critic | 🧐 | Risk analysis, devil's advocate |\n\n⚠️ Gotchas\n\nGotcha 0: Reading files before announcing (most common mistake)\n\nInvestigating context before sending the activation announcement causes a long silence and risks losing the announcement entirely to context compression.\n\n❌ Receive task → read operators.py → read README → announce → spawn\n✅ Receive task → announce immediately (can say \"analyzing task...\") → read files → spawn\n\nGotcha 1: Sub-agent output token limit\n\nSub-agents have a ~4096 token output cap. 
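A cheap guard against the cap is to validate bullet length before a sub-agent returns. This is a hypothetical helper; the 100-word budget mirrors the "≤100 words each" instruction used in the spawn prompts above.

```python
# Hypothetical guard for the output cap: enforce the per-bullet word budget
# so a sub-agent's return stays well under the token limit.
def check_summary(bullets, max_words_each=100):
    """Raise if any bullet exceeds the word budget; otherwise pass through."""
    for i, bullet in enumerate(bullets, 1):
        words = len(bullet.split())
        if words > max_words_each:
            raise ValueError(f"bullet {i} has {words} words, cap is {max_words_each}")
    return bullets
```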
Exceeded → tool args truncated → file writes silently fail.\n\n❌ \"search AND write a 2000-word report\"\n✅ Sub-agent returns summaries; main agent writes the report\n\nGotcha 2: Pipeline Mode has no tools\n\npython run.py processes have no web_search, exec, etc.\n\n❌ Pipeline mode: \"search the latest news on X\"\n✅ Anything needing real web access → Orchestrator Mode\n\nGotcha 3: Parallel agents can't depend on each other\n\nAgents spawned in the same round run simultaneously.\n\n❌ Agent-2: \"based on Agent-1's results...\"\n✅ Parallel = independent; sequential = chained\n\nGotcha 4: Don't hardcode agent count\n\nMatch agents to the task, not to a template.\n\n❌ Always spawn exactly 3 agents\n✅ Plan first, then decide: simple task → 2 agents, complex → 8+ agents\n\nPipeline mode quick reference\n\npython run.py\n  --mode parallel|sequential\n  --agents \"tier_or_model:🎭role:task description\"   # repeatable, any number\n  --aggregation synthesize|compare|concatenate|last\n  --timeout 300\n  --dry-run          # preview without executing\n  --auto-route       # router picks tiers automatically\n  --list-models      # show current model config\n\n| Aggregation | Effect |\n|-------------|--------|\n| synthesize | Main agent summarizes all outputs (default) |\n| compare | Side-by-side view of each agent's output |\n| concatenate | Outputs joined in order |\n| last | Final agent's output only (sequential) |"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/zcyynl/claw-multi-agent",
    "publisherUrl": "https://clawhub.ai/zcyynl/claw-multi-agent",
    "owner": "zcyynl",
    "version": "1.0.5",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/claw-multi-agent",
    "downloadUrl": "https://openagent3.xyz/downloads/claw-multi-agent",
    "agentUrl": "https://openagent3.xyz/skills/claw-multi-agent/agent",
    "manifestUrl": "https://openagent3.xyz/skills/claw-multi-agent/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/claw-multi-agent/agent.md"
  }
}