Tencent SkillHub · AI

Mixture of Agents

Mixture of Agents: Make 3 frontier models argue, then synthesize their best insights into one superior answer. ~$0.03/query.




Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: SKILL.md, manifest.json, scripts/moa-paid.js, scripts/moa.js

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than walking through the install steps yourself.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 1.2.0

Documentation

Primary doc: SKILL.md (21 sections)

Mixture of Agents (MoA)

TL;DR: Make 3 AI models argue with each other. Get an answer better than any single model. Cost: ~$0.03.

A. Standalone CLI (Node.js)

export OPENROUTER_API_KEY="your-key"
node scripts/moa.js "Your complex question"

B. OpenClaw Skill (Agent-orchestrated)

# Install
clawhub install moa

# Or copy to ~/clawd/skills/moa/

The agent can then invoke MoA for complex analysis tasks.

Origin Story

The concept of "Mixture of Agents" comes from research showing LLMs can improve each other's outputs through collaboration. I built this for VC deal analysis: when evaluating startups, you want multiple perspectives, not one model's opinion.

The journey:
  • Started with 5 free OpenRouter models (Llama, Gemini, Mistral, Qwen, Nemotron)
  • Rate limits killed me at 2am during peak hours
  • Switched to 3 paid frontier specialists
  • Result: ~$0.03/query, answers better than any single model

When to Use

  • Complex analysis — due diligence, market research, technical evaluation
  • Brainstorming — get diverse ideas, synthesize the best
  • Fact-checking — cross-reference across models with different training data
  • High-stakes decisions — when one model's blind spots could hurt you
  • Contrarian thinking — different models have different biases

When NOT to use:
  • Quick Q&A (too slow, 30-90s latency)
  • Real-time chat (not designed for streaming)
  • Simple lookups (overkill)

Paid Tier (Default) β€” Recommended

Role         Model                  ~Latency  Strength
Proposer 1   moonshotai/kimi-k2.5   23s       Long context, strong reasoning
Proposer 2   z-ai/glm-5             36s       Technical depth, different training corpus
Proposer 3   minimax/minimax-m2.5   64s       Nuance catching, thorough analysis
Aggregator   moonshotai/kimi-k2.5   15s       Fast synthesis

Why these models?
  • Frontier-class but less congested than GPT-4/Claude
  • Different training data = genuinely different perspectives
  • Chinese models excel at certain reasoning tasks
  • Combined cost still cheaper than a single Opus call

Cost breakdown:
  3 proposers  × ~$0.008 = $0.024
  1 aggregator × ~$0.005 = $0.005
  ─────────────────────────────
  Total: ~$0.029/query
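As a sanity check, the cost breakdown is simple arithmetic. The per-call figures below are the estimates quoted in the table, not guaranteed billing rates:

```javascript
// Estimated per-call costs from the breakdown above (approximate, not billed rates)
const proposerCost = 0.008;   // per proposer call
const aggregatorCost = 0.005; // one synthesis call

const totalPerQuery = 3 * proposerCost + aggregatorCost;
console.log(totalPerQuery.toFixed(3)); // → 0.029
```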

Free Tier (Fallback)

5 models: Llama 3.3 70B, Gemini 2.0 Flash, Mistral Small, Nemotron 70B, Qwen 2.5 72B

⚠️ Warning: Free tier hits rate limits during peak hours. Use the --free flag only for testing.

How It Works

            ┌─────────────┐
            │   PROMPT    │
            └──────┬──────┘
                   │
        ┌──────────┼──────────┐
        ▼          ▼          ▼
   ┌────────┐ ┌────────┐ ┌────────┐
   │Kimi 2.5│ │ GLM 5  │ │MiniMax │  ← Parallel (they "argue")
   │(reason)│ │(depth) │ │(nuance)│
   └───┬────┘ └───┬────┘ └───┬────┘
       │          │          │
       └──────────┼──────────┘
                  ▼
          ┌──────────────┐
          │  AGGREGATOR  │
          │  (Kimi 2.5)  │
          │              │
          │ • Best of 3  │
          │ • Resolve    │
          │   conflicts  │
          │ • Synthesize │
          └──────┬───────┘
                 ▼
          ┌──────────────┐
          │ FINAL ANSWER │
          │ (Synthesized)│
          └──────────────┘
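The flow above can be sketched in a few lines of Node.js. This is a minimal illustration of the pattern, not the actual scripts/moa.js code; the proposer and aggregator functions here are hypothetical stand-ins for the real OpenRouter calls:

```javascript
// Sketch of the MoA flow: fan out to proposers in parallel, then synthesize.
// Model callers are injected, so this runs without any API key.
async function mixtureOfAgents(prompt, proposers, aggregate) {
  // Proposers run concurrently (the "argue" step)
  const drafts = await Promise.all(proposers.map((p) => p(prompt)));
  // The aggregator sees every draft and produces one synthesis
  return aggregate(prompt, drafts);
}

// Toy stand-ins for Kimi 2.5 / GLM 5 / MiniMax:
const proposers = [
  async (q) => `reasoning take on: ${q}`,
  async (q) => `technical take on: ${q}`,
  async (q) => `nuanced take on: ${q}`,
];
const aggregate = async (q, drafts) => `synthesis of ${drafts.length} drafts`;

mixtureOfAgents("test question", proposers, aggregate).then(console.log);
// → synthesis of 3 drafts
```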

Function Signature

interface MoAOptions {
  prompt: string;          // Required: the question to analyze
  tier?: 'paid' | 'free';  // Default: 'paid'
}

interface MoAResult {
  synthesis: string;       // The final aggregated answer
}

// Throws on complete failure (all models down, invalid key)
// Returns partial synthesis if 1-2 models fail
async function handle(options: MoAOptions): Promise<string>

CLI Usage

# Paid tier (default)
node scripts/moa.js "Your complex question"

# Free tier
node scripts/moa.js "Your question" --free
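One plausible way the CLI could separate the prompt from the --free flag (an assumption for illustration; check scripts/moa.js for the actual argument handling):

```javascript
// Hypothetical argument parsing for the CLI shown above:
// everything that isn't the --free flag is treated as the prompt.
function parseArgs(argv) {
  const tier = argv.includes("--free") ? "free" : "paid";
  const prompt = argv.filter((a) => a !== "--free").join(" ");
  return { prompt, tier };
}

console.log(parseArgs(["Your question", "--free"]));
// → { prompt: 'Your question', tier: 'free' }
```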

Programmatic Usage

const { handle } = require('./scripts/moa.js');

const synthesis = await handle({
  prompt: "Analyze the competitive moats in AI code generation",
  tier: 'paid'
});
console.log(synthesis);

Failure Modes

Scenario                 Behavior
1 proposer fails         Synthesis from remaining 2 models
2 proposers fail         Synthesis from 1 model (degraded)
All proposers fail       Returns error message
Invalid API key          Immediate error with setup instructions
Rate limit (free tier)   Returns rate limit error

The system is designed to degrade gracefully. A 2/3 response is still valuable.
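The degradation policy in the table can be sketched with Promise.allSettled: keep whatever proposers succeeded, and throw only when all of them fail. This is an assumed implementation for illustration, not necessarily how scripts/moa.js does it:

```javascript
// Assumed degradation logic: collect successful proposer drafts,
// throw only if every proposer failed.
async function runProposers(prompt, proposers) {
  const settled = await Promise.allSettled(proposers.map((p) => p(prompt)));
  const drafts = settled
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
  if (drafts.length === 0) throw new Error("All proposers failed");
  return drafts; // 1 or 2 drafts still yield a (degraded) synthesis
}

// One model down out of three still returns two usable drafts:
const up = async () => "draft";
const down = async () => { throw new Error("model down"); };
runProposers("q", [up, down, up]).then((d) => console.log(d.length)); // → 2
```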

VC Due Diligence

node scripts/moa.js "Analyze the competitive landscape for AI code generation. \ Who has defensible moats? Who's likely to be commoditized? Be specific."

Technical Evaluation

node scripts/moa.js "Compare RLHF vs DPO vs RLAIF for LLM alignment. \ Which scales better? What are the failure modes of each?"

Market Research

node scripts/moa.js "What are the emerging use cases for embodied AI in 2026? \ Focus on robotics, drones, and autonomous systems. Include specific companies."

Performance Expectations

Metric         Paid Tier   Free Tier
P50 Latency    ~45s        ~60s
P95 Latency    ~90s        ~120s+
Success Rate   >99%        ~80% (rate limits)
Cost/Query     ~$0.03      $0.00

Tips

  • Be specific — vague prompts get vague synthesis
  • Ask for structure — "Give me pros/cons" or "List top 5" helps the aggregator
  • Use for analysis, not chat — MoA shines for complex reasoning
  • Batch your queries — 30-90s per query, so plan accordingly
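The batching tip amounts to running prompts sequentially, so each 30-90s call completes before the next one starts. A minimal sketch with an injected runner (the `run` parameter is a hypothetical stand-in for the skill's handle function):

```javascript
// Run MoA queries one at a time instead of firing them all at once.
async function batch(prompts, run) {
  const results = [];
  for (const prompt of prompts) {
    results.push(await run(prompt)); // sequential: each query finishes first
  }
  return results;
}

batch(["question A", "question B"], async (p) => `answer to ${p}`)
  .then(console.log); // → [ 'answer to question A', 'answer to question B' ]
```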

Via ClawHub (Recommended)

clawhub install moa

Manual

  1. Copy skills/moa/ to your ~/clawd/skills/ directory
  2. Set OPENROUTER_API_KEY in your environment
  3. The agent can now invoke MoA for complex queries

Environment Variables

Variable             Required   Description
OPENROUTER_API_KEY   Yes        Your OpenRouter API key

Get your key at: https://openrouter.ai/keys
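The "immediate error with setup instructions" behavior for a missing key could look like this (an assumed sketch; the real check lives in the scripts):

```javascript
// Fail fast with setup instructions when the key is missing (assumed behavior).
function requireApiKey(env) {
  if (!env.OPENROUTER_API_KEY) {
    throw new Error(
      'OPENROUTER_API_KEY is not set. Get a key at https://openrouter.ai/keys, ' +
      'then run: export OPENROUTER_API_KEY="your-key"'
    );
  }
  return env.OPENROUTER_API_KEY;
}

// requireApiKey(process.env) would throw with instructions if the key is unset.
```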

Credits

MoA concept: Together AI Research
Implementation: @Scianna
Built for: OpenClaw

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 scripts · 1 doc · 1 config
  • SKILL.md Primary doc
  • scripts/moa-paid.js Scripts
  • scripts/moa.js Scripts
  • manifest.json Config