{
  "schemaVersion": "1.0",
  "item": {
    "slug": "roundtable-adaptive",
    "name": "Roundtable Adaptive",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/JimmyClanker/roundtable-adaptive",
    "canonicalUrl": "https://clawhub.ai/JimmyClanker/roundtable-adaptive",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/roundtable-adaptive",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=roundtable-adaptive",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SKILL.md",
      "examples/debate-ai-developer-2026-02-23.md",
      "examples/priorityA-checklist.md",
      "panels.json",
      "prompts/final-synthesis.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/roundtable-adaptive"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/roundtable-adaptive",
    "agentPageUrl": "https://openagent3.xyz/skills/roundtable-adaptive/agent",
    "manifestUrl": "https://openagent3.xyz/skills/roundtable-adaptive/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/roundtable-adaptive/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Roundtable v2 — Adaptive Multi-Model Orchestrator",
        "body": "Trigger: roundtable [--mode] [prompt] from any channel your agent monitors.\nOutput: Posted to your configured output channel (set ROUNDTABLE_OUTPUT_CHANNEL in your OpenClaw config, or results are posted back to the triggering channel).\nPanel agents: Persistent sessions (mode=\"session\", thread=true) — stay alive in the Discord thread for follow-up questions. Meta-panel analysts and synthesis agent are one-shot (mode=\"run\").\n\nThe orchestrator = COORDINATOR ONLY. Uses your default model unless overridden in panels.json. Never argues a position, never joins the panel.\n\nCore principle: the Meta-Panel (4 premium models) designs the optimal WORKFLOW for the task — parallel debate, sequential pipeline, or hybrid — then the right agents execute it."
      },
      {
        "title": "Configuration",
        "body": "Before using, set your output channel in panels.json (or the triggering channel is used):\n\n{\n  \"output\": {\n    \"channel\": \"discord\",\n    \"target\": \"YOUR_CHANNEL_ID_HERE\"\n  }\n}\n\nIf using Discord threads (optional — creates one thread per roundtable for clean organization):\n\n{\n  \"output\": {\n    \"channel\": \"discord\",\n    \"target\": \"YOUR_CHANNEL_ID_HERE\",\n    \"useThreads\": true\n  }\n}\n\nWithout this config, results are posted directly to the channel where the command was issued."
      },
      {
        "title": "Cost transparency",
        "body": "ComponentCost per full runClaude Opus (OAuth)FreeGPT-5.3 Codex (OAuth)FreeGemini 3.1 Pro (Blockrun)~$0.05Grok 4 (Blockrun)~$0.08Total (full panel)~$0.13–$0.50Degraded mode (Claude only)Free\n\n--quick flag halves cost (1 round only)."
      },
      {
        "title": "Setup",
        "body": "Minimum (degraded mode — free):\n\nConfigure anthropic provider in openclaw.json (OAuth or API key)\nOptionally add openai-codex for GPT-5.3 Codex\nDone — Grok/Gemini slots fall back to Claude Sonnet\n\nFull panel (adds Grok 4 + Gemini 3.1 Pro via Blockrun):\n\nInstall Blockrun: openclaw plugins install @blockrun/clawrouter then openclaw gateway restart\nFund the Blockrun wallet with USDC on Base (~$5-10). Address shown during install.\nFull panel costs ~$0.13–$0.50/run; Claude and GPT slots remain free via OAuth.\n\nResults are saved to {workspace}/memory/roundtables/YYYY-MM-DD-slug.json (created automatically)."
      },
      {
        "title": "Optional: auto-trigger a dedicated channel",
        "body": "You can configure a Discord channel as a roundtable-only channel in your AGENTS.md:\n\nAny message in channel [YOUR_CHANNEL_ID] → treat as a roundtable topic automatically.\nNo prefix needed. Message → auto-detect mode → create thread → spawn orchestrator.\n\nThis is entirely optional — the explicit roundtable command works from any channel."
      },
      {
        "title": "Explicit trigger (any channel)",
        "body": "roundtable [prompt] — auto-detect mode, full flow\nroundtable --debate [prompt] — force parallel debate mode\nroundtable --build [prompt] — force build/coding mode\nroundtable --redteam [prompt] — force adversarial mode\nroundtable --vote [prompt] — force decision mode\nroundtable --quick [prompt] — skip meta-panel, use default panel for mode, 1 round only\nroundtable --panel model1,model2,model3 [prompt] — manual panel override, skip meta-panel\nroundtable --validate [prompt] — add Round 3 agent validation of synthesis\nroundtable --no-search [prompt] — skip web search (use only for purely theoretical/abstract topics)"
      },
      {
        "title": "Step -1: Create a Thread (FIRST ACTION)",
        "body": "Before anything else, create a thread in your configured channel and save the thread ID."
      },
      {
        "title": "-1a) Dedup check (REQUIRED)",
        "body": "Avoid double-spawn if the same topic is triggered twice.\n\nNormalize topic string:\n\nlowercase\ntrim\ncollapse multiple spaces\nremove trailing punctuation\n\n\nList recent threads in the target channel:\n\nmessage(action='thread-list', channel='discord', channelId='[CHANNEL_ID]', limit=25)\n\nIf an existing active thread title matches normalized topic (+ same mode tag like [[DEBATE]]) created in last 24h:\n\nreuse that thread (THREAD_ID = existing_thread_id)\npost: ♻️ Duplicate topic detected — reusing existing thread.\ndo NOT spawn a new orchestrator/panel\n\n\nIf no match: create a new thread."
      },
      {
        "title": "-1b) Create thread (if no dedup hit)",
        "body": "message(\n  action = 'thread-create',\n  channel = '[your configured channel]',\n  channelId = '[CHANNEL_ID from user config]',\n  threadName = '🎯 [topic — max 8 words] [[MODE]]',\n  message = '**Panel:** [model list]\\n**Mode:** [mode] | **Rounds:** [N]\\n⏳ Analysis in progress...'\n)\n\nSave the returned thread ID as THREAD_ID.\n\nAll subsequent message() calls use target = THREAD_ID, NOT the channel ID.\n\nIf thread creation fails or channel is not configured: fall back to posting directly in the active channel."
      },
      {
        "title": "Step 0: Web Search Grounding (always first)",
        "body": "Run a web search on the topic before anything else — meta-panel and all agents will have current context.\n\nweb_search(query = prompt, count = 5)\n\nTimeout policy: If web_search returns no result or errors within ~10s, do NOT block — continue immediately with CURRENT_CONTEXT = \"No real-time data available (search failed or timed out).\". The roundtable proceeds on model knowledge only.\n\nCaching: If re-running the same topic within the same session, reuse the prior CURRENT_CONTEXT block — do not re-search.\n\nSummarize results into a CURRENT_CONTEXT block (max 250 words):\n\nKey facts, recent developments, relevant data points\nDate of search\nIf no useful results found: note \"No relevant real-time data found\" and continue\n\nThis block is injected into:\n\nThe meta-panel prompt (so they design the workflow with current context)\nEvery Round 1 agent prompt (so all panelists argue from the same updated baseline)"
      },
      {
        "title": "Step 0b: Meta-Panel — Workflow Design",
        "body": "Skip if: --panel flag used, OR --quick flag used."
      },
      {
        "title": "Spawn 4 premium meta-analysts in parallel",
        "body": "Read panels.json → meta.models. For each:\n\nsessions_spawn(\n  task = filled prompts/meta-panel.md,\n  model = model_id,\n  mode = \"run\",\n  label = \"rt-meta-[A/B/C/D]\",\n  runTimeoutSeconds = 90\n)"
      },
      {
        "title": "0b. Synthesize workflow from 4 recommendations",
        "body": "After collecting all meta responses, the orchestrator synthesizes the final workflow:\n\nWorkflow type: majority vote among 4 recommendations\n\nTie → prefer hybrid (more flexible)\n\n\n\nStage composition: tally model recommendations per stage\n\nFor each stage position, pick the most-recommended model\nIf a model is not in agents.defaults.models allowlist → skip, use next\nIf a model is your orchestrator's model → skip (reserved for the orchestrator, never a panelist)\n\n\n\nRounds: median of recommendations (round up if tie) — hard cap at 3 max, always\n\n\nSynthesis model: most-recommended premium model not on the main panel\n\n\nLog the decision (include in output header):\n\n\"Meta-panel designed workflow: [type]. Stages: [N]. Panel: [models]. Synthesis: [model].\""
      },
      {
        "title": "0c. Workflow types explained",
        "body": "parallel_debate — classic roundtable\n\nAll agents in Stage 1 work independently, same prompt\nRound 2: cross-critique\nBest for: debates, opinions, risk analysis, decision-making\n\nsequential — output chains between stages\n\nStage 1 agents produce outputs (drafts, code, research)\nStage 2 agents receive Stage 1 outputs and review/validate/improve\nBest for: coding (write → review), research (collect → synthesize), creative (draft → refine)\nRound 2 within Stage 1 still possible; Stage 2 is a separate pass\n\nhybrid — parallel within stages, sequential between\n\nStage 1: N agents work in parallel on different aspects\nStage 2: 1-2 premium agents receive ALL Stage 1 outputs and produce integrated output\nBest for: complex analysis (parallel research → premium synthesis)"
      },
      {
        "title": "0d. Panel degradation rule",
        "body": "If any agent fails and fallback is SAME MODEL FAMILY → log:\n⚠️ PANEL DEGRADED — [role] substituted [original] with [fallback] (same family: [family])\n\nAlways surface this in META section of final output with actionable guidance:\n\nIf degraded due to missing blockrun → \"Action: Start Blockrun at localhost:8402 for full panel, or use --panel budget for stable 2-model run\"\nIf degraded due to model not in allowlist → \"Action: Add [model] to agents.defaults.models in openclaw.json\"\nIf degraded due to API error → \"Action: Check provider API key / quota, then retry\""
      },
      {
        "title": "Step 1: Detect Mode (if no flag given)",
        "body": "ModeKeywordsdebatepros/cons, tradeoff, should we, ethics, compare, opinion, betterbuildimplement, code, architecture, build, design, develop, createredteamattack, vulnerability, failure, risk, break, threat, exploitvotechoose, decide, which one, best option, select, recommend betweendefaultanything else"
      },
      {
        "title": "parallel_debate (standard)",
        "body": "Round 1: Spawn all panel agents in parallel as persistent thread-bound sessions.\n\nsessions_spawn(\n  task = filled prompts/round1.md,\n  model = model_id,\n  mode = \"session\",        ← persistent — stays alive in the thread\n  label = \"rt-[role]\",\n  thread = true            ← bound to the thread from Step -1\n)\n\nSave session keys: { \"attacker\": sessionKey, \"defender\": sessionKey, ... }\nEach agent writes their full response + SELF-DIGEST (last section)\nCollect all self-digests\n⚠️ Agents stay alive — users can address them directly for follow-up questions\n\nRound 2 (if rounds ≥ 2): Send cross-critique prompt to each existing session via sessions_send.\n\nDo NOT re-spawn — reuse session keys from Round 1\n[SELF_DIGEST] = this agent's own digest from Round 1\n[PEER_DIGESTS] = other agents' digests (labeled with role)\nExtract AGREEMENT SCORES from each response\n\nRound 3 (if --validate): See Step 4."
      },
      {
        "title": "sequential",
        "body": "Stage 1: Spawn agents in parallel as persistent sessions (mode=\"session\", thread=true).\n\nUse standard prompts/round1.md.\nRound 2 cross-critique via sessions_send to existing sessions (no re-spawn).\nCollect full Stage 1 outputs for Stage 2.\n\nStage 2: Spawn new persistent sessions (mode=\"session\", thread=true).\n\nBuild prompt: prompts/round1.md base + prepend Stage 1 outputs as context\nLabel: \"STAGE 1 OUTPUT from [Role]: [full output]\"\nStage 2 agents review/validate/improve Stage 1 work and write SELF-DIGESTs"
      },
      {
        "title": "hybrid",
        "body": "Stage 1: Parallel persistent sessions (mode=\"session\", thread=true), each with a different sub-task.\n\nCustomize Round 1 prompt to specify each agent's sub-task:\n\n\"Your specific task for this stage: [task from workflow design]\"\n\n\nAgents write SELF-DIGESTs\n\nStage 2: 1-2 new persistent sessions (mode=\"session\", thread=true) with all Stage 1 outputs embedded.\n\nBuild prompt: prompts/round1.md base + \"You are integrating and synthesizing the work of multiple agents. Their outputs: [all Stage 1 outputs]\"\nStage 2 produces the integrated output"
      },
      {
        "title": "Step 3: Consensus Scoring",
        "body": "After Round 2 (parallel_debate) or Stage 2 (sequential/hybrid):\n\nExtract AGREEMENT SCORES from each agent's Round 2 response.\nBuild score matrix: { agent_role: { peer_role: score_1_to_5 } }\nConsensus % = (sum of all scores / (n_scores × 5)) × 100\nIf no Round 2 scores (quick mode / sequential): omit consensus %, mark as \"N/A\"\n\nNote on Round 3: Round 3 validation uses ACCURATE/PARTIALLY/INACCURATE — this is a separate metric from consensus %. Round 3 checks synthesis fidelity, not inter-agent agreement. Do NOT mix these two metrics. Consensus % comes only from Round 2 scores; Round 3 result appears separately in the META block as Validated: yes/no/partial."
      },
      {
        "title": "Step 4: Round 3 — Validation (--validate flag only)",
        "body": "When to recommend --validate to the user:\n\nConsensus % < 40% (high disagreement — synthesis risks distortion)\nRedteam mode (adversarial stakes — synthesis must be bulletproof)\nBuild mode with 3+ Stage 2 models (complex integration, easy to misrepresent)\nUser explicitly mentions \"high-stakes\", \"final decision\", or \"publishing this\"\n\nWhen NOT to use it: Quick mode, debate on subjective topics, or when time matters more than precision.\n\nDraft synthesis first (Step 5 below), but do NOT post.\n\nSpawn validation agents:\n\nsessions_spawn(\n  task = filled prompts/round3-validation.md,\n  model = original agent model,\n  label = \"rt-r3-validate-[role]\",\n  runTimeoutSeconds = 60\n)\n\nTally:\n\n2+ INACCURATE → rewrite synthesis incorporating corrections\n1 INACCURATE → note in META: ⚠️ [Role] flagged misrepresentation: [correction summary]\nAll ACCURATE/PARTIAL → mark Validated: yes or Validated: partial in META"
      },
      {
        "title": "Step 5: Synthesis — Spawned Neutral Model",
        "body": "Never write synthesis yourself.\n\nsessions_spawn(\n  task = filled prompts/final-synthesis.md,\n  model = [synthesis model from meta-panel recommendation, or anthropic/claude-opus-4-6 as default],\n  label = \"rt-synthesis\",\n  mode = \"run\",\n  runTimeoutSeconds = 180\n)\n\nFill prompts/final-synthesis.md placeholders:\n\n[ROUND1_SUMMARIES] → all self-digests: \"[ROLE] ([model]): [digest]\"\n[ROUND2_SUMMARIES] → critiques: \"[ROLE] criticized [peer]'s [claim] because [reason]\"\n[CONSENSUS_SCORES] → full score matrix + calculated %\n[DISCORD_THREAD_ID] → the THREAD_ID from Step -1 (synthesis agent posts here)\n\nPost to Discord using THREAD_ID from Step -1 (not the channel ID). All round outputs and the final synthesis go into the same thread."
      },
      {
        "title": "Step 6: Persist Results",
        "body": "Save to {workspace}/memory/roundtables/YYYY-MM-DD-[topic-slug].json:\n\n{\n  \"date\": \"YYYY-MM-DD\",\n  \"topic\": \"[prompt]\",\n  \"mode\": \"[mode]\",\n  \"workflow_type\": \"parallel_debate|sequential|hybrid\",\n  \"stages\": [{ \"model\": \"...\", \"role\": \"...\", \"task\": \"...\" }],\n  \"meta_panel_recommendation\": \"[summary of meta votes]\",\n  \"panel_degraded\": false,\n  \"panel_degradation_notes\": \"\",\n  \"consensus_pct\": \"XX% or N/A\",\n  \"synthesis_model\": \"[model]\",\n  \"validated\": \"yes|no|partial\",\n  \"elapsed_time_sec\": 0,\n  \"synthesis\": \"[final synthesis text]\"\n}\n\nAlso append one JSONL line to {workspace}/memory/roundtables/scorecard.jsonl with:\nts, topic, mode, workflow_type, elapsed_time_sec, consensus_pct, validated, panel_degraded."
      },
      {
        "title": "Edge Cases",
        "body": "SituationActionWeb search failsContinue with note \"No real-time context available\" in all prompts--no-search flagSkip Step 0 web search entirelyMeta-panel all failUse default panel for detected mode, log warning--quickSkip meta-panel + round 2. Always uses parallel_debate workflow. Spawns default panel for detected mode (3 models). Synthesizes after round 1 only.--panel overrideSkip meta-panel, use specified models, default to parallel_debateFallback = same familyContinue + log PANEL DEGRADED warning in METABoth model and fallback failSkip agent, note in META — do not wait, do not blockNo blockrun configuredWarn user: \"Blockrun not available. Using budget panel. Full panel requires Blockrun at localhost:8402.\" Auto-switch to budget profile from panels.json.Agent timeout (any round)FAIL-CONTINUE: treat as absent, mark [TIMEOUT] in META, proceed with surviving agentsAgent fails mid-Round 2Use its Round 1 digest as final position, omit its scores from consensus calculationSynthesis agent failsOrchestrator writes synthesis, note: \"Synthesis by orchestrator (bias risk — no neutral model available)\"Stage 2 agent failsNote in META, synthesize with Stage 1 only0 agents respondReport failure, suggest retry1 agent respondsSkip Round 2 (no peers), synthesize from Round 1 only, mark consensus \"N/A\"--context-from SLUGLoad {workspace}/memory/roundtables/[slug].json, extract synthesis field, prepend to CURRENT_CONTEXT as \"PRIOR ROUNDTABLE CONTEXT: [synthesis]\". If file not found: warn and continue without prior context."
      },
      {
        "title": "Placeholder Contract",
        "body": "When filling prompt templates, apply this rule for every [PLACEHOLDER]:\n\nPlaceholderIf missing/failedAction[CURRENT_CONTEXT]Web search failedInsert: \"No real-time context available.\"[SELF_DIGEST]Agent timed out R1Skip agent entirely from R2[PEER_DIGESTS]All peers failedSkip R2, go to synthesis directly[ROUND1_SUMMARIES]No R1 outputsAbort with error: \"0 agents responded\"[ROUND2_SUMMARIES]Quick mode / no R2Insert: \"No cross-critique (quick mode or single round)\"[CONSENSUS_SCORES]No scores extractedInsert: \"N/A — scores not available\"[SYNTHESIS_DRAFT]Synthesis failedSkip R3, note in META\n\nNever leave a [PLACEHOLDER] unfilled in a prompt. Unfilled placeholders confuse models and produce garbage output."
      },
      {
        "title": "Score Parsing (Round 2)",
        "body": "Agents write scores in free text. Extract scores with this heuristic:\n\nLook for the SCORES: block\nMatch pattern: - [Role]: X/5 — extract integer X (1–5)\nIf no clean integer found, scan for digit 1–5 nearest to the role name\nIf still ambiguous → assign 3 (neutral) and note [SCORE INFERRED] in META\nDo NOT crash the workflow on a malformed score block."
      },
      {
        "title": "Quick Reference: Default Panels (fallback if meta-panel fails)",
        "body": "debate:  [opus-4.6, gpt-5.3-codex, gemini-3.1-pro, grok-4] → Advocate / Devil's Advocate / Analyst / Contrarian\nbuild:   [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex] → Architect / Reviewer / Engineer / Implementer\nredteam: [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex] → Defender / Analyst / Attacker / Red Teamer\nvote:    [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex]  → 4-way vote panel\n(all via blockrun/ prefix — see panels.json for exact model IDs and fallbacks)"
      }
    ],
    "body": "Roundtable v2 — Adaptive Multi-Model Orchestrator\n\nTrigger: roundtable [--mode] [prompt] from any channel your agent monitors. Output: Posted to your configured output channel (set ROUNDTABLE_OUTPUT_CHANNEL in your OpenClaw config, or results are posted back to the triggering channel). Panel agents: Persistent sessions (mode=\"session\", thread=true) — stay alive in the Discord thread for follow-up questions. Meta-panel analysts and synthesis agent are one-shot (mode=\"run\").\n\nThe orchestrator = COORDINATOR ONLY. Uses your default model unless overridden in panels.json. Never argues a position, never joins the panel.\n\nCore principle: the Meta-Panel (4 premium models) designs the optimal WORKFLOW for the task — parallel debate, sequential pipeline, or hybrid — then the right agents execute it.\n\nConfiguration\n\nBefore using, set your output channel in panels.json (or the triggering channel is used):\n\n{\n  \"output\": {\n    \"channel\": \"discord\",\n    \"target\": \"YOUR_CHANNEL_ID_HERE\"\n  }\n}\n\n\nIf using Discord threads (optional — creates one thread per roundtable for clean organization):\n\n{\n  \"output\": {\n    \"channel\": \"discord\",\n    \"target\": \"YOUR_CHANNEL_ID_HERE\",\n    \"useThreads\": true\n  }\n}\n\n\nWithout this config, results are posted directly to the channel where the command was issued.\n\nCost transparency\nComponent\tCost per full run\nClaude Opus (OAuth)\tFree\nGPT-5.3 Codex (OAuth)\tFree\nGemini 3.1 Pro (Blockrun)\t~$0.05\nGrok 4 (Blockrun)\t~$0.08\nTotal (full panel)\t~$0.13–$0.50\nDegraded mode (Claude only)\tFree\n\n--quick flag halves cost (1 round only).\n\nSetup\n\nMinimum (degraded mode — free):\n\nConfigure anthropic provider in openclaw.json (OAuth or API key)\nOptionally add openai-codex for GPT-5.3 Codex\nDone — Grok/Gemini slots fall back to Claude Sonnet\n\nFull panel (adds Grok 4 + Gemini 3.1 Pro via Blockrun):\n\nInstall Blockrun: openclaw plugins install @blockrun/clawrouter then 
openclaw gateway restart\nFund the Blockrun wallet with USDC on Base (~$5-10). Address shown during install.\nFull panel costs ~$0.13–$0.50/run; Claude and GPT slots remain free via OAuth.\n\nResults are saved to {workspace}/memory/roundtables/YYYY-MM-DD-slug.json (created automatically).\n\nTrigger Patterns\nOptional: auto-trigger a dedicated channel\n\nYou can configure a Discord channel as a roundtable-only channel in your AGENTS.md:\n\nAny message in channel [YOUR_CHANNEL_ID] → treat as a roundtable topic automatically.\nNo prefix needed. Message → auto-detect mode → create thread → spawn orchestrator.\n\n\nThis is entirely optional — the explicit roundtable command works from any channel.\n\nExplicit trigger (any channel)\nExplicit trigger (any channel)\nroundtable [prompt] — auto-detect mode, full flow\nroundtable --debate [prompt] — force parallel debate mode\nroundtable --build [prompt] — force build/coding mode\nroundtable --redteam [prompt] — force adversarial mode\nroundtable --vote [prompt] — force decision mode\nroundtable --quick [prompt] — skip meta-panel, use default panel for mode, 1 round only\nroundtable --panel model1,model2,model3 [prompt] — manual panel override, skip meta-panel\nroundtable --validate [prompt] — add Round 3 agent validation of synthesis\nroundtable --no-search [prompt] — skip web search (use only for purely theoretical/abstract topics)\nStep -1: Create a Thread (FIRST ACTION)\n\nBefore anything else, create a thread in your configured channel and save the thread ID.\n\n-1a) Dedup check (REQUIRED)\n\nAvoid double-spawn if the same topic is triggered twice.\n\nNormalize topic string:\nlowercase\ntrim\ncollapse multiple spaces\nremove trailing punctuation\nList recent threads in the target channel:\nmessage(action='thread-list', channel='discord', channelId='[CHANNEL_ID]', limit=25)\n\nIf an existing active thread title matches normalized topic (+ same mode tag like [[DEBATE]]) created in last 24h:\nreuse that thread (THREAD_ID = 
existing_thread_id)\npost: ♻️ Duplicate topic detected — reusing existing thread.\ndo NOT spawn a new orchestrator/panel\nIf no match: create a new thread.\n-1b) Create thread (if no dedup hit)\nmessage(\n  action = 'thread-create',\n  channel = '[your configured channel]',\n  channelId = '[CHANNEL_ID from user config]',\n  threadName = '🎯 [topic — max 8 words] [[MODE]]',\n  message = '**Panel:** [model list]\\n**Mode:** [mode] | **Rounds:** [N]\\n⏳ Analysis in progress...'\n)\n\n\nSave the returned thread ID as THREAD_ID.\n\nAll subsequent message() calls use target = THREAD_ID, NOT the channel ID.\n\nIf thread creation fails or channel is not configured: fall back to posting directly in the active channel.\n\nStep 0: Web Search Grounding (always first)\n\nRun a web search on the topic before anything else — meta-panel and all agents will have current context.\n\nweb_search(query = prompt, count = 5)\n\n\nTimeout policy: If web_search returns no result or errors within ~10s, do NOT block — continue immediately with CURRENT_CONTEXT = \"No real-time data available (search failed or timed out).\". The roundtable proceeds on model knowledge only.\n\nCaching: If re-running the same topic within the same session, reuse the prior CURRENT_CONTEXT block — do not re-search.\n\nSummarize results into a CURRENT_CONTEXT block (max 250 words):\n\nKey facts, recent developments, relevant data points\nDate of search\nIf no useful results found: note \"No relevant real-time data found\" and continue\n\nThis block is injected into:\n\nThe meta-panel prompt (so they design the workflow with current context)\nEvery Round 1 agent prompt (so all panelists argue from the same updated baseline)\nStep 0b: Meta-Panel — Workflow Design\n\nSkip if: --panel flag used, OR --quick flag used.\n\nSpawn 4 premium meta-analysts in parallel\n\nRead panels.json → meta.models. 
For each:\n\nsessions_spawn(\n  task = filled prompts/meta-panel.md,\n  model = model_id,\n  mode = \"run\",\n  label = \"rt-meta-[A/B/C/D]\",\n  runTimeoutSeconds = 90\n)\n\n0b. Synthesize workflow from 4 recommendations\n\nAfter collecting all meta responses, the orchestrator synthesizes the final workflow:\n\nWorkflow type: majority vote among 4 recommendations\n\nTie → prefer hybrid (more flexible)\n\nStage composition: tally model recommendations per stage\n\nFor each stage position, pick the most-recommended model\nIf a model is not in agents.defaults.models allowlist → skip, use next\nIf a model is your orchestrator's model → skip (reserved for the orchestrator, never a panelist)\n\nRounds: median of recommendations (round up if tie) — hard cap at 3 max, always\n\nSynthesis model: most-recommended premium model not on the main panel\n\nLog the decision (include in output header):\n\n\"Meta-panel designed workflow: [type]. Stages: [N]. Panel: [models]. Synthesis: [model].\"\n\n0c. Workflow types explained\n\nparallel_debate — classic roundtable\n\nAll agents in Stage 1 work independently, same prompt\nRound 2: cross-critique\nBest for: debates, opinions, risk analysis, decision-making\n\nsequential — output chains between stages\n\nStage 1 agents produce outputs (drafts, code, research)\nStage 2 agents receive Stage 1 outputs and review/validate/improve\nBest for: coding (write → review), research (collect → synthesize), creative (draft → refine)\nRound 2 within Stage 1 still possible; Stage 2 is a separate pass\n\nhybrid — parallel within stages, sequential between\n\nStage 1: N agents work in parallel on different aspects\nStage 2: 1-2 premium agents receive ALL Stage 1 outputs and produce integrated output\nBest for: complex analysis (parallel research → premium synthesis)\n0d. 
Panel degradation rule\n\nIf any agent fails and fallback is SAME MODEL FAMILY → log: ⚠️ PANEL DEGRADED — [role] substituted [original] with [fallback] (same family: [family])\n\nAlways surface this in META section of final output with actionable guidance:\n\nIf degraded due to missing blockrun → \"Action: Start Blockrun at localhost:8402 for full panel, or use --panel budget for stable 2-model run\"\nIf degraded due to model not in allowlist → \"Action: Add [model] to agents.defaults.models in openclaw.json\"\nIf degraded due to API error → \"Action: Check provider API key / quota, then retry\"\nStep 1: Detect Mode (if no flag given)\nMode\tKeywords\ndebate\tpros/cons, tradeoff, should we, ethics, compare, opinion, better\nbuild\timplement, code, architecture, build, design, develop, create\nredteam\tattack, vulnerability, failure, risk, break, threat, exploit\nvote\tchoose, decide, which one, best option, select, recommend between\ndefault\tanything else\nStep 2: Execute Workflow\nparallel_debate (standard)\n\nRound 1: Spawn all panel agents in parallel as persistent thread-bound sessions.\n\nsessions_spawn(\n  task = filled prompts/round1.md,\n  model = model_id,\n  mode = \"session\",        ← persistent — stays alive in the thread\n  label = \"rt-[role]\",\n  thread = true            ← bound to the thread from Step -1\n)\n\nSave session keys: { \"attacker\": sessionKey, \"defender\": sessionKey, ... 
}\nEach agent writes their full response + SELF-DIGEST (last section)\nCollect all self-digests\n⚠️ Agents stay alive — users can address them directly for follow-up questions\n\nRound 2 (if rounds ≥ 2): Send cross-critique prompt to each existing session via sessions_send.\n\nDo NOT re-spawn — reuse session keys from Round 1\n[SELF_DIGEST] = this agent's own digest from Round 1\n[PEER_DIGESTS] = other agents' digests (labeled with role)\nExtract AGREEMENT SCORES from each response\n\nRound 3 (if --validate): See Step 4.\n\nsequential\n\nStage 1: Spawn agents in parallel as persistent sessions (mode=\"session\", thread=true).\n\nUse standard prompts/round1.md.\nRound 2 cross-critique via sessions_send to existing sessions (no re-spawn).\nCollect full Stage 1 outputs for Stage 2.\n\nStage 2: Spawn new persistent sessions (mode=\"session\", thread=true).\n\nBuild prompt: prompts/round1.md base + prepend Stage 1 outputs as context\nLabel: \"STAGE 1 OUTPUT from [Role]: [full output]\"\nStage 2 agents review/validate/improve Stage 1 work and write SELF-DIGESTs\nhybrid\n\nStage 1: Parallel persistent sessions (mode=\"session\", thread=true), each with a different sub-task.\n\nCustomize Round 1 prompt to specify each agent's sub-task:\n\n\"Your specific task for this stage: [task from workflow design]\"\n\nAgents write SELF-DIGESTs\n\nStage 2: 1-2 new persistent sessions (mode=\"session\", thread=true) with all Stage 1 outputs embedded.\n\nBuild prompt: prompts/round1.md base + \"You are integrating and synthesizing the work of multiple agents. Their outputs: [all Stage 1 outputs]\"\nStage 2 produces the integrated output\nStep 3: Consensus Scoring\n\nAfter Round 2 (parallel_debate) or Stage 2 (sequential/hybrid):\n\nExtract AGREEMENT SCORES from each agent's Round 2 response. 
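As a minimal illustration (not part of the skill package; the helper name is hypothetical), the consensus calculation for this step can be sketched in Python:

```python
def consensus_pct(matrix):
    # matrix shape: { agent_role: { peer_role: score_1_to_5 } }
    # Consensus % = (sum of all scores / (n_scores * 5)) * 100
    scores = [s for peers in matrix.values() for s in peers.values()]
    if not scores:
        return None  # quick mode / sequential: report as 'N/A'
    return round(sum(scores) / (len(scores) * 5) * 100)

# Example: two agents each scoring two peers
matrix = {
    'attacker': {'defender': 4, 'analyst': 3},
    'defender': {'attacker': 2, 'analyst': 5},
}
print(consensus_pct(matrix))  # (4+3+2+5) / (4 * 5) * 100 -> 70
```

An empty matrix maps to the documented N/A case rather than a divide-by-zero.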
Build score matrix: { agent_role: { peer_role: score_1_to_5 } }\nConsensus % = (sum of all scores / (n_scores × 5)) × 100\nIf no Round 2 scores (quick mode / sequential): omit consensus %, mark as \"N/A\"\n\nNote on Round 3: Round 3 validation uses ACCURATE/PARTIALLY/INACCURATE — this is a separate metric from consensus %. Round 3 checks synthesis fidelity, not inter-agent agreement. Do NOT mix these two metrics. Consensus % comes only from Round 2 scores; Round 3 result appears separately in the META block as Validated: yes/no/partial.\n\nStep 4: Round 3 — Validation (--validate flag only)\n\nWhen to recommend --validate to the user:\n\nConsensus % < 40% (high disagreement — synthesis risks distortion)\nRedteam mode (adversarial stakes — synthesis must be bulletproof)\nBuild mode with 3+ Stage 2 models (complex integration, easy to misrepresent)\nUser explicitly mentions \"high-stakes\", \"final decision\", or \"publishing this\"\n\nWhen NOT to use it: Quick mode, debate on subjective topics, or when time matters more than precision.\n\nDraft synthesis first (Step 5 below), but do NOT post.\n\nSpawn validation agents:\n\nsessions_spawn(\n  task = filled prompts/round3-validation.md,\n  model = original agent model,\n  mode = \"run\",\n  label = \"rt-r3-validate-[role]\",\n  runTimeoutSeconds = 60\n)\n\n\nTally:\n\n2+ INACCURATE → rewrite synthesis incorporating corrections\n1 INACCURATE → note in META: ⚠️ [Role] flagged misrepresentation: [correction summary]\nAll ACCURATE/PARTIALLY → mark Validated: yes or Validated: partial in META\nStep 5: Synthesis — Spawned Neutral Model\n\nNever write synthesis yourself.\n\nsessions_spawn(\n  task = filled prompts/final-synthesis.md,\n  model = [synthesis model from meta-panel recommendation, or anthropic/claude-opus-4-6 as default],\n  label = \"rt-synthesis\",\n  mode = \"run\",\n  runTimeoutSeconds = 180\n)\n\n\nFill prompts/final-synthesis.md placeholders:\n\n[ROUND1_SUMMARIES] → all self-digests: \"[ROLE] ([model]): 
[digest]\"\n[ROUND2_SUMMARIES] → critiques: \"[ROLE] criticized [peer]'s [claim] because [reason]\"\n[CONSENSUS_SCORES] → full score matrix + calculated %\n[DISCORD_THREAD_ID] → the THREAD_ID from Step -1 (synthesis agent posts here)\n\nPost to Discord using THREAD_ID from Step -1 (not the channel ID). All round outputs and the final synthesis go into the same thread.\n\nStep 6: Persist Results\n\nSave to {workspace}/memory/roundtables/YYYY-MM-DD-[topic-slug].json:\n\n{\n  \"date\": \"YYYY-MM-DD\",\n  \"topic\": \"[prompt]\",\n  \"mode\": \"[mode]\",\n  \"workflow_type\": \"parallel_debate|sequential|hybrid\",\n  \"stages\": [{ \"model\": \"...\", \"role\": \"...\", \"task\": \"...\" }],\n  \"meta_panel_recommendation\": \"[summary of meta votes]\",\n  \"panel_degraded\": false,\n  \"panel_degradation_notes\": \"\",\n  \"consensus_pct\": \"XX% or N/A\",\n  \"synthesis_model\": \"[model]\",\n  \"validated\": \"yes|no|partial\",\n  \"elapsed_time_sec\": 0,\n  \"synthesis\": \"[final synthesis text]\"\n}\n\n\nAlso append one JSONL line to {workspace}/memory/roundtables/scorecard.jsonl with: ts, topic, mode, workflow_type, elapsed_time_sec, consensus_pct, validated, panel_degraded.\n\nEdge Cases\nSituation\tAction\nWeb search fails\tContinue with note \"No real-time context available\" in all prompts\n--no-search flag\tSkip Step 0 web search entirely\nMeta-panel all fail\tUse default panel for detected mode, log warning\n--quick\tSkip meta-panel + round 2. Always uses parallel_debate workflow. Spawns default panel for detected mode (3 models). Synthesizes after round 1 only.\n--panel override\tSkip meta-panel, use specified models, default to parallel_debate\nFallback = same family\tContinue + log PANEL DEGRADED warning in META\nBoth model and fallback fail\tSkip agent, note in META — do not wait, do not block\nNo blockrun configured\tWarn user: \"Blockrun not available. Using budget panel. 
Full panel requires Blockrun at localhost:8402.\" Auto-switch to budget profile from panels.json.\nAgent timeout (any round)\tFAIL-CONTINUE: treat as absent, mark [TIMEOUT] in META, proceed with surviving agents\nAgent fails mid-Round 2\tUse its Round 1 digest as final position, omit its scores from consensus calculation\nSynthesis agent fails\tOrchestrator writes synthesis, note: \"Synthesis by orchestrator (bias risk — no neutral model available)\"\nStage 2 agent fails\tNote in META, synthesize with Stage 1 only\n0 agents respond\tReport failure, suggest retry\n1 agent responds\tSkip Round 2 (no peers), synthesize from Round 1 only, mark consensus \"N/A\"\n--context-from SLUG\tLoad {workspace}/memory/roundtables/[slug].json, extract synthesis field, prepend to CURRENT_CONTEXT as \"PRIOR ROUNDTABLE CONTEXT: [synthesis]\". If file not found: warn and continue without prior context.\nPlaceholder Contract\n\nWhen filling prompt templates, apply this rule for every [PLACEHOLDER]:\n\nPlaceholder\tIf missing/failed\tAction\n[CURRENT_CONTEXT]\tWeb search failed\tInsert: \"No real-time context available.\"\n[SELF_DIGEST]\tAgent timed out R1\tSkip agent entirely from R2\n[PEER_DIGESTS]\tAll peers failed\tSkip R2, go to synthesis directly\n[ROUND1_SUMMARIES]\tNo R1 outputs\tAbort with error: \"0 agents responded\"\n[ROUND2_SUMMARIES]\tQuick mode / no R2\tInsert: \"No cross-critique (quick mode or single round)\"\n[CONSENSUS_SCORES]\tNo scores extracted\tInsert: \"N/A — scores not available\"\n[SYNTHESIS_DRAFT]\tSynthesis failed\tSkip R3, note in META\n\nNever leave a [PLACEHOLDER] unfilled in a prompt. Unfilled placeholders confuse models and produce garbage output.\n\nScore Parsing (Round 2)\n\nAgents write scores in free text. 
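One defensive way to implement that extraction (a sketch only; the function name and regexes are illustrative, not part of the package, and follow this skill's SCORES: block convention):

```python
import re

def extract_score(response_text, role):
    # Best-effort extraction of '- [Role]: X/5' from a free-text
    # SCORES: block. Returns (score, inferred); inferred=True means
    # the fallback path fired and META should note [SCORE INFERRED].
    m = re.search(rf'-\s*{re.escape(role)}\s*:\s*([1-5])\s*/\s*5',
                  response_text, re.IGNORECASE)
    if m:
        return int(m.group(1)), False
    # No clean X/5: take the digit 1-5 nearest to the role name
    pos = response_text.lower().find(role.lower())
    if pos != -1:
        digits = [(abs(d.start() - pos), int(d.group()))
                  for d in re.finditer(r'[1-5]', response_text)]
        if digits:
            return min(digits)[1], True
    return 3, True  # still ambiguous: neutral score, flag in META

text = 'SCORES:\n- Defender: 4/5 (solid threat model)\n- Analyst: weak, maybe a 2'
print(extract_score(text, 'Defender'))  # -> (4, False)
print(extract_score(text, 'Analyst'))   # -> (2, True)
```

Every failure path returns a usable score, so a malformed block never crashes the workflow.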
Extract scores with this heuristic:\n\nLook for the SCORES: block\nMatch pattern: - [Role]: X/5 — extract integer X (1–5)\nIf no clean integer found, scan for the digit 1–5 nearest to the role name\nIf still ambiguous → assign 3 (neutral) and note [SCORE INFERRED] in META\n\nDo NOT crash the workflow on a malformed score block.\nQuick Reference: Default Panels (fallback if meta-panel fails)\ndebate:  [opus-4.6, gpt-5.3-codex, gemini-3.1-pro, grok-4] → Advocate / Devil's Advocate / Analyst / Contrarian\nbuild:   [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex] → Architect / Reviewer / Engineer / Implementer\nredteam: [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex] → Defender / Analyst / Attacker / Red Teamer\nvote:    [opus-4.6, gemini-3.1-pro, grok-4, gpt-5.3-codex]  → 4-way vote panel\n(all via blockrun/ prefix — see panels.json for exact model IDs and fallbacks)"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/JimmyClanker/roundtable-adaptive",
    "publisherUrl": "https://clawhub.ai/JimmyClanker/roundtable-adaptive",
    "owner": "JimmyClanker",
    "version": "2.9.1",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/roundtable-adaptive",
    "downloadUrl": "https://openagent3.xyz/downloads/roundtable-adaptive",
    "agentUrl": "https://openagent3.xyz/skills/roundtable-adaptive/agent",
    "manifestUrl": "https://openagent3.xyz/skills/roundtable-adaptive/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/roundtable-adaptive/agent.md"
  }
}