Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Verification gating for AI-generated artifacts. Policy checks and code verification to catch dangerous patterns before execution.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Moltblock provides verification gating for AI-generated artifacts. It runs policy checks to catch dangerous patterns before they reach production.

What moltblock does:
- Generates artifacts via LLM API calls, then runs policy checks against the output
- Returns a structured verification result (pass/fail with evidence)
- Reads its own config files (moltblock.json, ~/.moltblock/moltblock.json) if present
- Reads API keys from environment variables at runtime and sends them only to the configured LLM provider endpoint
- Performs no code execution; moltblock only runs policy checks on generated artifacts
Use moltblock when the task involves:
- High risk (always verify): destructive operations (rm -rf, DROP TABLE), privilege escalation (sudo, chmod 777), credential/key access (.ssh/, id_rsa, .env), system modification, raw disk writes
- Medium risk (verify when uncertain): network requests (curl, wget, fetch), file writes, database modifications, subprocess spawning, dynamic code evaluation
- Low risk (skip verification): simple text responses, math, reading public information, code that doesn't touch the filesystem or network
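The risk tiers above can be pre-screened with a simple keyword filter before paying for an LLM-backed verification call. This is an illustrative sketch only: the `classifyRisk` helper and its pattern lists are assumptions for this example, not part of moltblock's API.

```javascript
// Hypothetical pre-filter: decide whether a task warrants a moltblock run.
// Pattern lists mirror the high/medium risk examples above; extend as needed.
const RISK_TIERS = [
  { tier: "high",   patterns: [/rm\s+-rf/, /DROP\s+TABLE/i, /\bsudo\b/, /chmod\s+777/, /\.ssh\//, /id_rsa/, /\.env\b/] },
  { tier: "medium", patterns: [/\bcurl\b/, /\bwget\b/, /fetch\(/, /subprocess/, /eval\(/] },
];

function classifyRisk(task) {
  for (const { tier, patterns } of RISK_TIERS) {
    if (patterns.some((p) => p.test(task))) return tier;
  }
  return "low"; // low risk: skip verification entirely
}
```

A wrapper script could then call `npx moltblock` only when `classifyRisk` returns `high`, or `high`/`medium` in stricter setups.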
Verify a task before execution.
npx moltblock@0.11.4 "<task description>" --provider <provider> --json
| Parameter | Required | Description |
| --- | --- | --- |
| task | Yes | The task description to verify |
| --provider | No | LLM provider: openai, google, zai, local (auto-detected from env) |
| --model | No | Model override |
| --json | No | Output structured JSON result |
Moltblock auto-detects the LLM provider from whichever API key is set. If no key is set, it falls back to a local LLM at localhost:1234. Set one of these for a cloud provider:
- OPENAI_API_KEY: OpenAI (primary)
- ANTHROPIC_API_KEY: Anthropic/Claude (optional)
- GOOGLE_API_KEY: Google/Gemini (optional)
- ZAI_API_KEY: ZAI (optional)
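The auto-detection behavior can be sketched as a simple precedence check over environment variables. The `detectProvider` helper and its exact ordering are assumptions based on the list above (OpenAI is described as primary); moltblock's real implementation may differ.

```javascript
// Hypothetical sketch of provider auto-detection from environment variables.
// Order is assumed: the "primary" key wins, then the optional providers.
function detectProvider(env) {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.GOOGLE_API_KEY) return "google";
  if (env.ZAI_API_KEY) return "zai";
  return "local"; // no key set: fall back to a local LLM at localhost:1234
}
```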
# Verify a task
npx moltblock@0.11.4 "implement a function that validates email addresses" --json
{
  "verification_passed": true,
  "verification_evidence": "All policy rules passed.",
  "authoritative_artifact": "...",
  "draft": "...",
  "critique": "...",
  "final_candidate": "..."
}
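A caller can gate execution on this JSON result. The sketch below parses the result shape shown above; `gateOnVerification` is an assumed helper name for this example, and in practice the JSON text would come from capturing the stdout of the `npx moltblock ... --json` command.

```javascript
// Hypothetical consumer: parse moltblock's --json output and refuse to
// proceed unless verification passed. Field names match the example result.
function gateOnVerification(jsonText) {
  const result = JSON.parse(jsonText);
  if (!result.verification_passed) {
    throw new Error("verification failed: " + result.verification_evidence);
  }
  return result.authoritative_artifact; // safe to hand downstream
}
```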
Use directly with npx (recommended, no install needed):

npx moltblock@0.11.4 "your task" --json

Or install globally:

npm install -g moltblock@0.11.4
No configuration file is required. Moltblock auto-detects your LLM provider from environment variables and falls back to sensible defaults. Optionally, place moltblock.json in your project root or ~/.moltblock/moltblock.json to customize model bindings:

{
  "agent": {
    "bindings": {
      "generator": { "backend": "google", "model": "gemini-2.0-flash" },
      "critic": { "backend": "google", "model": "gemini-2.0-flash" },
      "judge": { "backend": "google", "model": "gemini-2.0-flash" }
    }
  }
}

See the full configuration docs for policy rules and advanced options.
- Repository: github.com/moltblock/moltblock
- npm: npmjs.com/package/moltblock
- License: MIT
When used as a skill, moltblock performs policy checks only; no code is generated, written to disk, or executed. The tool analyzes task descriptions against configurable policy rules and returns a pass/fail verification result. The CLI additionally supports a --test flag for direct user invocation that executes code verification via vitest. This flag is not exposed to agents through this skill and should only be used directly by developers in sandboxed environments. See the CLI documentation for details.
Moltblock reduces risk but does not eliminate it. Verification is best-effort; policy rules and LLM-based checks can miss dangerous patterns. Always review generated artifacts before executing them. The authors and contributors are not responsible for any damage, data loss, or security incidents resulting from the use of this tool. Use at your own risk.