{
  "schemaVersion": "1.0",
  "item": {
    "slug": "moa",
    "name": "Mixture of Agents",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/jscianna/moa",
    "canonicalUrl": "https://clawhub.ai/jscianna/moa",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/moa",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=moa",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "manifest.json",
      "scripts/moa-paid.js",
      "scripts/moa.js"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "slug": "moa",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-02T10:36:55.062Z",
      "expiresAt": "2026-05-09T10:36:55.062Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=moa",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=moa",
        "contentDisposition": "attachment; filename=\"moa-1.2.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "moa"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/moa"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/moa",
    "agentPageUrl": "https://openagent3.xyz/skills/moa/agent",
    "manifestUrl": "https://openagent3.xyz/skills/moa/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/moa/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Mixture of Agents (MoA)",
        "body": "TL;DR: Make 3 AI models argue with each other. Get an answer better than any single model. Cost: ~$0.03."
      },
      {
        "title": "A. Standalone CLI (Node.js)",
        "body": "export OPENROUTER_API_KEY=\"your-key\"\nnode scripts/moa.js \"Your complex question\""
      },
      {
        "title": "B. OpenClaw Skill (Agent-orchestrated)",
        "body": "# Install\nclawhub install moa\n\n# Or copy to ~/clawd/skills/moa/\n\nThe agent can then invoke MoA for complex analysis tasks."
      },
      {
        "title": "Origin Story",
        "body": "The concept of \"Mixture of Agents\" comes from research showing LLMs can improve each other's outputs through collaboration. I built this for VC deal analysis—when evaluating startups, you want multiple perspectives, not one model's opinion.\n\nThe journey:\n\nStarted with 5 free OpenRouter models (Llama, Gemini, Mistral, Qwen, Nemotron)\nRate limits killed me at 2am during peak hours\nSwitched to 3 paid frontier specialists\nResult: ~$0.03/query, answers better than any single model"
      },
      {
        "title": "When to Use",
        "body": "Complex analysis — due diligence, market research, technical evaluation\nBrainstorming — get diverse ideas, synthesize the best\nFact-checking — cross-reference across models with different training data\nHigh-stakes decisions — when one model's blind spots could hurt you\nContrarian thinking — different models have different biases\n\nWhen NOT to use:\n\nQuick Q&A (too slow, 30-90s latency)\nReal-time chat (not designed for streaming)\nSimple lookups (overkill)"
      },
      {
        "title": "Paid Tier (Default) — Recommended",
        "body": "RoleModel~LatencyStrengthProposer 1moonshotai/kimi-k2.523sLong context, strong reasoningProposer 2z-ai/glm-536sTechnical depth, different training corpusProposer 3minimax/minimax-m2.564sNuance catching, thorough analysisAggregatormoonshotai/kimi-k2.515sFast synthesis\n\nWhy these models?\n\nFrontier-class but less congested than GPT-4/Claude\nDifferent training data = genuinely different perspectives\nChinese models excel at certain reasoning tasks\nCombined cost still cheaper than single Opus call\n\nCost breakdown:\n\n3 proposers × ~$0.008 = $0.024\n1 aggregator × ~$0.005 = $0.005\n─────────────────────────────\nTotal: ~$0.029/query"
      },
      {
        "title": "Free Tier (Fallback)",
        "body": "5 models: Llama 3.3 70B, Gemini 2.0 Flash, Mistral Small, Nemotron 70B, Qwen 2.5 72B\n\n⚠️ Warning: Free tier hits rate limits during peak hours. Use --free flag only for testing."
      },
      {
        "title": "How It Works",
        "body": "┌─────────────┐\n        │   PROMPT    │\n        └──────┬──────┘\n               │\n    ┌──────────┼──────────┐\n    ▼          ▼          ▼\n┌────────┐ ┌────────┐ ┌────────┐\n│Kimi 2.5│ │ GLM 5  │ │MiniMax │  ← Parallel (they \"argue\")\n│(reason)│ │(depth) │ │(nuance)│\n└───┬────┘ └───┬────┘ └───┬────┘\n    │          │          │\n    └──────────┼──────────┘\n               ▼\n       ┌──────────────┐\n       │  AGGREGATOR  │\n       │  (Kimi 2.5)  │\n       │              │\n       │ • Best of 3  │\n       │ • Resolve    │\n       │   conflicts  │\n       │ • Synthesize │\n       └──────┬───────┘\n              ▼\n       ┌──────────────┐\n       │ FINAL ANSWER │\n       │ (Synthesized)│\n       └──────────────┘"
      },
      {
        "title": "Function Signature",
        "body": "interface MoAOptions {\n  prompt: string;           // Required: The question to analyze\n  tier?: 'paid' | 'free';   // Default: 'paid'\n}\n\ninterface MoAResult {\n  synthesis: string;        // The final aggregated answer\n}\n\n// Throws on complete failure (all models down, invalid key)\n// Returns partial synthesis if 1-2 models fail\nasync function handle(options: MoAOptions): Promise<string>"
      },
      {
        "title": "CLI Usage",
        "body": "# Paid tier (default)\nnode scripts/moa.js \"Your complex question\"\n\n# Free tier\nnode scripts/moa.js \"Your question\" --free"
      },
      {
        "title": "Programmatic Usage",
        "body": "const { handle } = require('./scripts/moa.js');\n\nconst synthesis = await handle({ \n  prompt: \"Analyze the competitive moats in AI code generation\",\n  tier: 'paid'\n});\n\nconsole.log(synthesis);"
      },
      {
        "title": "Failure Modes",
        "body": "ScenarioBehavior1 proposer failsSynthesis from remaining 2 models2 proposers failSynthesis from 1 model (degraded)All proposers failReturns error messageInvalid API keyImmediate error with setup instructionsRate limit (free tier)Returns rate limit error\n\nThe system is designed to degrade gracefully. A 2/3 response is still valuable."
      },
      {
        "title": "VC Due Diligence",
        "body": "node scripts/moa.js \"Analyze the competitive landscape for AI code generation. \\\nWho has defensible moats? Who's likely to be commoditized? Be specific.\""
      },
      {
        "title": "Technical Evaluation",
        "body": "node scripts/moa.js \"Compare RLHF vs DPO vs RLAIF for LLM alignment. \\\nWhich scales better? What are the failure modes of each?\""
      },
      {
        "title": "Market Research",
        "body": "node scripts/moa.js \"What are the emerging use cases for embodied AI in 2026? \\\nFocus on robotics, drones, and autonomous systems. Include specific companies.\""
      },
      {
        "title": "Performance Expectations",
        "body": "MetricPaid TierFree TierP50 Latency~45s~60sP95 Latency~90s~120s+Success Rate>99%~80% (rate limits)Cost/Query~$0.03$0.00"
      },
      {
        "title": "Tips",
        "body": "Be specific — Vague prompts get vague synthesis\nAsk for structure — \"Give me pros/cons\" or \"List top 5\" helps the aggregator\nUse for analysis, not chat — MoA shines for complex reasoning\nBatch your queries — 30-90s per query, so plan accordingly"
      },
      {
        "title": "Via ClawHub (Recommended)",
        "body": "clawhub install moa"
      },
      {
        "title": "Manual",
        "body": "Copy skills/moa/ to your ~/clawd/skills/ directory\nSet OPENROUTER_API_KEY in your environment\nThe agent can now invoke MoA for complex queries"
      },
      {
        "title": "Environment Variables",
        "body": "VariableRequiredDescriptionOPENROUTER_API_KEYYesYour OpenRouter API key\n\nGet your key at: https://openrouter.ai/keys"
      },
      {
        "title": "Credits",
        "body": "MoA concept: Together AI Research\nImplementation: @Scianna\nBuilt for: OpenClaw"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/jscianna/moa",
    "publisherUrl": "https://clawhub.ai/jscianna/moa",
    "owner": "jscianna",
    "version": "1.2.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/moa",
    "downloadUrl": "https://openagent3.xyz/downloads/moa",
    "agentUrl": "https://openagent3.xyz/skills/moa/agent",
    "manifestUrl": "https://openagent3.xyz/skills/moa/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/moa/agent.md"
  }
}