Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Automatically generates and saves alternative perspectives from a secondary LLM for every user message to enhance reasoning and response quality.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install prompt:
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade prompt:
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
What it does: Provides cognitive diversity by automatically generating perspectives from a secondary LLM (different from your primary model) for every user message.
Before responding to any user message, check for a dual-brain perspective:

cat ~/.dual-brain/perspectives/{your-agent-id}-latest.md

If the file exists and contains a recent perspective (check the timestamp in the comment), consider it alongside your own reasoning. The perspective may highlight:
- Angles you might miss
- Alternative approaches
- Important things to verify

Integration:
- Perspectives are generated automatically by a background daemon
- No manual action is required on your part
- Simply read the file before forming your response
- Synthesize both viewpoints (yours + the secondary LLM's)

Example workflow:
1. User sends a message
2. The daemon intercepts it and sends it to the secondary LLM (e.g., Kimi, GPT-4, Llama)
3. The secondary LLM generates a 2-3 sentence perspective
4. The perspective is written to ~/.dual-brain/perspectives/{agent-id}-latest.md
5. You (the primary agent) read it before responding
6. You synthesize both perspectives into your answer
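As a rough illustration of that freshness check (the timestamp-comment format and agent-id handling below are assumptions for illustration, not the package's documented behavior), a wrapper around the primary agent might do something like:

```typescript
import { existsSync, readFileSync } from "fs";
import * as os from "os";
import * as path from "path";

// Return a recent perspective for this agent, or null if it is missing or stale.
// The agent id and the timestamp-comment format are assumptions for illustration.
function readPerspective(agentId: string, maxAgeMs = 5 * 60 * 1000): string | null {
  const file = path.join(os.homedir(), ".dual-brain", "perspectives", `${agentId}-latest.md`);
  if (!existsSync(file)) return null;

  const text = readFileSync(file, "utf8");
  // Assumed convention: the daemon embeds e.g. <!-- generated: 2025-01-01T12:00:00Z -->
  const stamp = text.match(/<!--\s*generated:\s*(.+?)\s*-->/);
  if (!stamp) return null;

  const age = Date.now() - Date.parse(stamp[1]);
  return age >= 0 && age <= maxAgeMs ? text : null; // ignore stale perspectives
}
```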
Setup:

npm install -g openclaw-dual-brain
dual-brain setup   # Interactive configuration
dual-brain start   # Start daemon

Providers:
- ollama - Local models (zero cost, requires Ollama)
- moonshot - Kimi/Moonshot API (Chinese LLM, fast)
- openai - GPT-4o, GPT-4-turbo, etc.
- groq - Fast inference with Llama models

Commands:
- dual-brain setup - Configure provider, model, API key
- dual-brain start - Run the daemon (foreground)
- dual-brain stop - Stop the daemon
- dual-brain status - Check running status
- dual-brain logs - View recent activity
- dual-brain install-daemon - Install as a system service

Config location: ~/.dual-brain/config.json
Perspectives location: ~/.dual-brain/perspectives/
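The config schema is not shown above; as a sketch of what `dual-brain setup` plausibly writes to ~/.dual-brain/config.json, with field names assumed only from what setup is described as configuring (provider, model, API key):

```typescript
// Assumed shape of ~/.dual-brain/config.json; field names are illustrative,
// based only on what `dual-brain setup` is described as configuring.
interface DualBrainConfig {
  provider: "ollama" | "moonshot" | "openai" | "groq"; // secondary-LLM backend
  model: string;     // e.g. "llama3" for ollama, "gpt-4o" for openai
  apiKey?: string;   // omitted for local providers such as ollama
}

const example: DualBrainConfig = {
  provider: "ollama",
  model: "llama3",
};
```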
User Message → OpenClaw Session (JSONL)
        ↓
Dual-Brain Daemon (polling)
        ↓
Secondary LLM Provider (ollama/moonshot/openai/groq)
        ↓
Perspective Generated (2-3 sentences)
        ↓
~/.dual-brain/perspectives/{agent}-latest.md
        ↓
Primary Agent reads & synthesizes
        ↓
Response to User
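To make the pipeline concrete, here is a minimal sketch of what such a polling daemon could look like; the session directory, JSONL field names, file naming, and provider call are all assumptions for illustration, not the package's actual implementation:

```typescript
import { promises as fs } from "fs";
import * as path from "path";
import * as os from "os";

// Illustrative only: the session path, JSONL fields, and file naming are assumptions.
const SESSIONS_DIR = path.join(os.homedir(), ".openclaw", "sessions");        // hypothetical
const PERSPECTIVES_DIR = path.join(os.homedir(), ".dual-brain", "perspectives");

// Stand-in for the secondary-LLM call (ollama/moonshot/openai/groq).
async function askSecondaryLLM(userMessage: string): Promise<string> {
  return `Alternative perspective on: ${userMessage.slice(0, 80)}`;           // stub
}

async function pollOnce(lastSeen: Map<string, number>): Promise<void> {
  for (const file of await fs.readdir(SESSIONS_DIR)) {
    if (!file.endsWith(".jsonl")) continue;
    const lines = (await fs.readFile(path.join(SESSIONS_DIR, file), "utf8"))
      .split("\n")
      .filter((l) => l.trim());
    const seen = lastSeen.get(file) ?? 0;

    for (const line of lines.slice(seen)) {
      const entry = JSON.parse(line);
      if (entry.role !== "user") continue;               // react only to user messages
      const perspective = await askSecondaryLLM(entry.content);
      const agentId = path.basename(file, ".jsonl");     // assumed naming scheme
      await fs.mkdir(PERSPECTIVES_DIR, { recursive: true });
      await fs.writeFile(
        path.join(PERSPECTIVES_DIR, `${agentId}-latest.md`),
        `<!-- generated: ${new Date().toISOString()} -->\n${perspective}\n`
      );
    }
    lastSeen.set(file, lines.length);
  }
}

// Poll every few seconds, per the "(polling)" step in the diagram.
const lastSeen = new Map<string, number>();
setInterval(() => pollOnce(lastSeen).catch(console.error), 3000);
```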
- Cognitive diversity - Two AI models = broader perspective
- Bias mitigation - Different training data/approaches
- Quality assurance - Second opinion catches issues
- Zero agent overhead - Runs in background, <1s latency
- Provider flexibility - Choose cost vs. quality tradeoff
If Engram (semantic memory) is running on localhost:3400, perspectives are also stored as memories for long-term recall.

Source: https://github.com/yourusername/openclaw-dual-brain
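Engram's API is not documented in this package; purely as an illustration of that integration, the daemon could forward each perspective to a hypothetical endpoint on localhost:3400:

```typescript
// Purely illustrative: Engram's real API is not documented here.
// The endpoint path and payload shape below are hypothetical.
async function storeInEngram(agentId: string, perspective: string): Promise<void> {
  try {
    await fetch("http://localhost:3400/memories", {      // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ source: "dual-brain", agentId, text: perspective }),
    });
  } catch {
    // Engram not running; perspectives still work without long-term recall.
  }
}
```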