Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Self-governance protocol for autonomous agents: WAL (Write-Ahead Log), VBR (Verify Before Reporting), ADL (Anti-Divergence Limit), and VFM (Value-For-Money)....
This item's download entry currently bounces back to a listing or homepage instead of returning a package file. Use the source page and any available docs to guide a manual install.
I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.
I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.
Five protocols that prevent agent failure modes: losing context, false completion claims, persona drift, wasteful spending, and infrastructure amnesia.
Rule: Write before you respond. If something is worth remembering, WAL it first.

| Trigger | Action Type | Example |
|---|---|---|
| User corrects you | correction | "No, use Podman not Docker" |
| Key decision | decision | "Using CogVideoX-2B for text-to-video" |
| Important analysis | analysis | "WAL patterns should be core infra not skills" |
| State change | state_change | "GPU server SSH key auth configured" |

```bash
# Write before responding
python3 scripts/wal.py append <agent_id> correction "Use Podman not Docker"

# Working buffer (batch, flush before compaction)
python3 scripts/wal.py buffer-add <agent_id> decision "Some decision"
python3 scripts/wal.py flush-buffer <agent_id>

# Session start: replay lost context
python3 scripts/wal.py replay <agent_id>

# After incorporating a replayed entry
python3 scripts/wal.py mark-applied <agent_id> <entry_id>

# Maintenance
python3 scripts/wal.py status <agent_id>
python3 scripts/wal.py prune <agent_id> --keep 50
```
- Session start → replay to recover lost context
- User correction → append BEFORE responding
- Pre-compaction flush → flush-buffer, then write daily memory
- During conversation → buffer-add for less critical items
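The wal.py script itself is not reproduced on this page. As a rough sketch only, an append-only JSONL file per agent is enough to support append and replay; the file layout and field names below are assumptions, not the packaged implementation.

```python
# Hypothetical WAL backend sketch: one append-only JSONL log per agent.
# Paths and field names are assumptions, not the shipped wal.py.
import json
import time
from pathlib import Path

WAL_DIR = Path("memory/wal")  # assumed location

def append(agent_id: str, entry_type: str, text: str) -> dict:
    """Write the entry to disk BEFORE the agent responds."""
    WAL_DIR.mkdir(parents=True, exist_ok=True)
    entry = {
        "ts": time.time(),
        "type": entry_type,   # correction | decision | analysis | state_change
        "text": text,
        "applied": False,     # flipped after the entry is replayed and incorporated
    }
    with open(WAL_DIR / f"{agent_id}.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def replay(agent_id: str) -> list[dict]:
    """Return unapplied entries so a fresh session can recover lost context."""
    path = WAL_DIR / f"{agent_id}.jsonl"
    if not path.exists():
        return []
    entries = [json.loads(line) for line in path.read_text().splitlines() if line]
    return [e for e in entries if not e.get("applied")]

if __name__ == "__main__":
    append("agent-01", "correction", "Use Podman not Docker")
    print(replay("agent-01"))
```

Appending a line per entry keeps writes cheap enough to happen before every response, which is the whole point of a write-ahead log.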
Rule: Don't say "done" until verified. Run a check before claiming completion.

```bash
# Verify a file exists
python3 scripts/vbr.py check task123 file_exists /path/to/output.py

# Verify a file was recently modified
python3 scripts/vbr.py check task123 file_changed /path/to/file.go

# Verify a command succeeds
python3 scripts/vbr.py check task123 command "cd /tmp/repo && go test ./..."

# Verify git is pushed
python3 scripts/vbr.py check task123 git_pushed /tmp/repo

# Log verification result
python3 scripts/vbr.py log <agent_id> task123 true "All tests pass"

# View pass/fail stats
python3 scripts/vbr.py stats <agent_id>
```
- After code changes → check command "go test ./..."
- After file creation → check file_exists /path
- After git push → check git_pushed /repo
- After sub-agent task → verify the claimed output exists
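The four check types above map naturally onto small predicates. This sketch assumes that shape; it is not the shipped vbr.py, and the recency window is an invented parameter.

```python
# Hypothetical sketch of the VBR check types; the real vbr.py may differ.
import subprocess
import time
from pathlib import Path

def check(kind: str, target: str, recent_seconds: int = 3600) -> bool:
    """Run one verification before claiming a task is done."""
    if kind == "file_exists":
        return Path(target).exists()
    if kind == "file_changed":
        p = Path(target)
        return p.exists() and (time.time() - p.stat().st_mtime) < recent_seconds
    if kind == "command":
        # Only claim "done" if the command exits 0 (e.g. "go test ./...").
        return subprocess.run(target, shell=True).returncode == 0
    if kind == "git_pushed":
        # Empty output from "git log @{u}.." means nothing is ahead of the remote.
        out = subprocess.run(
            ["git", "-C", target, "log", "--oneline", "@{u}.."],
            capture_output=True, text=True,
        )
        return out.returncode == 0 and out.stdout.strip() == ""
    raise ValueError(f"unknown check type: {kind}")

if __name__ == "__main__":
    print(check("file_exists", "/tmp/output.py"))
```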
Rule: Stay true to your persona. Track behavioral drift from SOUL.md.

```bash
# Analyze a response for anti-patterns
python3 scripts/adl.py analyze "Great question! I'd be happy to help you with that!"

# Log a behavioral observation
python3 scripts/adl.py log <agent_id> anti_sycophancy "Used 'Great question!' in response"
python3 scripts/adl.py log <agent_id> persona_direct "Shipped fix without asking permission"

# Calculate divergence score (0=aligned, 1=fully drifted)
python3 scripts/adl.py score <agent_id>

# Check against threshold
python3 scripts/adl.py check <agent_id> --threshold 0.7

# Reset after recalibration
python3 scripts/adl.py reset <agent_id>
```
Drift anti-patterns:
- Sycophancy → "Great question!", "I'd be happy to help!"
- Passivity → "Would you like me to", "Shall I", "Let me know if"
- Hedging → "I think maybe", "It might be possible"
- Verbosity → response length exceeding expected bounds

On-persona signals:
- Direct → "Done", "Fixed", "Ship", "Built"
- Opinionated → "I'd argue", "Better to", "The right call"
- Action-oriented → "Spawning", "On it", "Kicking off"
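One plausible reading of analyze and score is phrase matching against the lists above plus a simple drift ratio. The phrases come from this doc; the weighting and everything else below are assumptions, not the packaged adl.py.

```python
# Hypothetical sketch of ADL phrase matching and a crude divergence score.
ANTI_PATTERNS = {
    "sycophancy": ["great question", "i'd be happy to help"],
    "passivity":  ["would you like me to", "shall i", "let me know if"],
    "hedging":    ["i think maybe", "it might be possible"],
}
ON_PERSONA = ["done", "fixed", "ship", "built", "i'd argue", "on it", "spawning"]

def analyze(response: str) -> dict[str, list[str]]:
    """Return which anti-pattern phrases appear in a response."""
    text = response.lower()
    return {
        name: [p for p in phrases if p in text]
        for name, phrases in ANTI_PATTERNS.items()
        if any(p in text for p in phrases)
    }

def score(responses: list[str]) -> float:
    """Share of responses that drift without any on-persona signal
    (0 = aligned, 1 = fully drifted)."""
    if not responses:
        return 0.0
    drifted = sum(
        1 for r in responses
        if analyze(r) and not any(p in r.lower() for p in ON_PERSONA)
    )
    return drifted / len(responses)

if __name__ == "__main__":
    print(analyze("Great question! I'd be happy to help you with that!"))
    print(score(["Great question!", "Done. Shipped the fix."]))  # 0.5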
Rule: Track cost vs. value. Don't burn premium tokens on budget tasks.

```bash
# Log a completed task with cost
python3 scripts/vfm.py log <agent_id> monitoring glm-4.7 37000 0.03 0.8

# Calculate VFM scores
python3 scripts/vfm.py score <agent_id>

# Cost breakdown by model and task
python3 scripts/vfm.py report <agent_id>

# Get optimization suggestions
python3 scripts/vfm.py suggest <agent_id>
```
| Task Type | Recommended Tier | Models |
|---|---|---|
| Monitoring, formatting, summarization | Budget | GLM, DeepSeek, Haiku |
| Code generation, debugging, creative | Standard | Sonnet, Gemini Pro |
| Architecture, complex analysis | Premium | Opus, Sonnet + thinking |
- After spawning sub-agents → log cost and outcome
- During heartbeat → run suggest for optimization tips
- Weekly review → run report for cost breakdown
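The log fields in the example (task type, model, tokens, cost, outcome rating) suggest VFM is roughly outcome delivered per dollar spent. The sketch below assumes that formula; the real vfm.py scoring may differ.

```python
# Hypothetical VFM scoring sketch: average outcome-per-dollar per model.
# Fields mirror "vfm.py log <agent_id> monitoring glm-4.7 37000 0.03 0.8".
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TaskLog:
    task_type: str   # e.g. "monitoring"
    model: str       # e.g. "glm-4.7"
    tokens: int      # e.g. 37000
    cost_usd: float  # e.g. 0.03
    outcome: float   # 0..1 quality/value rating, e.g. 0.8

def vfm_score(logs: list[TaskLog]) -> dict[str, float]:
    """Low scores hint that a cheaper tier would have done the job."""
    per_model: dict[str, list[float]] = defaultdict(list)
    for log in logs:
        if log.cost_usd > 0:
            per_model[log.model].append(log.outcome / log.cost_usd)
    return {model: sum(v) / len(v) for model, v in per_model.items()}

if __name__ == "__main__":
    logs = [
        TaskLog("monitoring", "glm-4.7", 37_000, 0.03, 0.8),
        TaskLog("architecture", "opus", 120_000, 1.80, 0.9),
    ]
    print(vfm_score(logs))  # e.g. {'glm-4.7': 26.7, 'opus': 0.5}
```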
Rule: Log infrastructure facts immediately. When you discover hardware specs, service configs, or network topology, write it down BEFORE continuing.
| Discovery Type | Log To | Example |
|---|---|---|
| Hardware specs | TOOLS.md | "GPU server has 3 GPUs: RTX 3090 + 3080 + 2070 SUPER" |
| Service configs | TOOLS.md | "ComfyUI runs on port 8188, uses /data/ai-stack" |
| Network topology | TOOLS.md | "Pi at 192.168.99.25, GPU server at 10.0.0.44" |
| Credentials/auth | memory/encrypted/ | "SSH key: ~/.ssh/id_ed25519_alexchen" |
| API endpoints | TOOLS.md or skill | "Moltbook API: POST /api/v1/posts" |
```bash
# Hardware discovery
nvidia-smi --query-gpu=index,name,memory.total --format=csv
lscpu | grep -E "Model name|CPU\(s\)|Thread"
free -h
df -h

# Service discovery
systemctl list-units --type=service --state=running
docker ps  # or podman ps
ss -tlnp | grep LISTEN

# Network discovery
ip addr show
cat /etc/hosts
```
- SSH to new server → run hardware/service discovery commands
- Before responding → update TOOLS.md with specs
- New service discovered → log port, path, config location
- Credentials obtained → encrypt and store in memory/encrypted/
- ❌ "The GPU server has 3 GPUs" (only in conversation)
- ✅ "The GPU server has 3 GPUs" → update TOOLS.md → then continue

Memory is limited. Files are permanent. IKL before you forget.
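A tiny helper can make the habit automatic: run a discovery command and append its output to TOOLS.md before moving on. The file name comes from this doc; the helper itself is an assumption, not part of the package.

```python
# Hypothetical IKL helper: capture a discovery command's output and persist it.
import subprocess
from datetime import date
from pathlib import Path

TOOLS_MD = Path("TOOLS.md")  # assumed to live in the agent workspace root

def log_discovery(label: str, command: str) -> None:
    """Run a discovery command and append its output to TOOLS.md immediately."""
    out = subprocess.run(command, shell=True, capture_output=True, text=True)
    entry = f"\n## {label} ({date.today().isoformat()})\n$ {command}\n{out.stdout.strip()}\n"
    with open(TOOLS_MD, "a") as f:
        f.write(entry)

if __name__ == "__main__":
    log_discovery(
        "GPU inventory",
        "nvidia-smi --query-gpu=index,name,memory.total --format=csv",
    )
```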