Tencent SkillHub · AI

wreckit-ralph

Bulletproof AI code verification. The agent IS the engine — no external tools required. Spawns parallel verification workers that slop-scan, type-check, muta...

Skill · openclawclawhub · Free
0 downloads · 0 stars · 0 installs · score 0 · High Signal


Unverified but indexed.

Install for OpenClaw

Item is unstable.

This item is timing out or returning errors right now. Review the source page and try again later.

Quick setup
  1. Wait for the source to recover or retry later.
  2. Review SKILL.md only after the source returns a real package.
  3. Do not rely on this source for automated install yet.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Manual review
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
GAMEPLAN.md, SKILL.md, TODO.md, assets/dashboard/index.html, assets/dashboard/server.mjs, references/gates/behavior-capture.md

Validation

  • Wait for the source to recover or retry later.
  • Review SKILL.md only after the download returns a real package.
  • Treat this source as transient until the upstream errors clear.

Install with your agent

Agent handoff

Because the item is currently unstable or timing out, use the source page and any available docs to guide the install.

  1. Open the source page via Review source status.
  2. If you can obtain the package, extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the source page and extracted files.
New install

I tried to install a skill package from Yavira, but the item is currently unstable or timing out. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.

Upgrade existing

I tried to upgrade a skill package from Yavira, but the item is currently unstable or timing out. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
2.4.0

Documentation

Primary doc: SKILL.md (13 sections)

Wreckit — Bulletproof AI Code Verification

Build it. Break it. Prove it works.

Philosophy

AI can't verify itself, so structure the pipeline so it can't silently agree with itself: separate Builder/Tester/Breaker roles across fresh contexts, and use independent oracles. Full 14-step framework: references/verification-framework.md

Modes

Auto-detected from context:

| Mode | Trigger | Description |
| --- | --- | --- |
| 🟢 BUILD | Empty repo + PRD | Full pipeline for greenfield |
| 🟡 REBUILD | Existing code + migration spec | BUILD + behavior capture + replay |
| 🔴 FIX | Existing code + bug report | Fix, verify, check regressions |
| 🔵 AUDIT | Existing code, no changes | Verify and report only |
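The trigger logic in the table above can be sketched as a small helper. Everything here is an assumption for illustration: `detect_mode`, its inputs, and the artifact names are hypothetical, not the skill's actual detection code.

```shell
# Hypothetical sketch of mode auto-detection, not the shipped logic.
# Inputs: "yes"/"no" for whether the repo already has code, plus the
# spec artifact present (prd | migration-spec | bug-report | none).
detect_mode() {
  has_code="$1"
  artifact="$2"
  if [ "$has_code" = "no" ] && [ "$artifact" = "prd" ]; then
    echo "BUILD"          # empty repo + PRD: greenfield pipeline
  elif [ "$has_code" = "yes" ] && [ "$artifact" = "migration-spec" ]; then
    echo "REBUILD"        # existing code + migration spec
  elif [ "$has_code" = "yes" ] && [ "$artifact" = "bug-report" ]; then
    echo "FIX"            # existing code + bug report
  else
    echo "AUDIT"          # existing code, no change requested
  fi
}

detect_mode no prd        # prints BUILD
detect_mode yes none      # prints AUDIT
```

The fallthrough to AUDIT matches the table: when code exists and no change is requested, the pipeline only verifies and reports.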

Gates

Read the gate file before executing it. Each contains: question, checks, pass/fail criteria.

| Gate | BUILD | REBUILD | FIX | AUDIT | File |
| --- | --- | --- | --- | --- | --- |
| AI Slop Scan | ✅ | ✅ | ✅ | ✅ | references/gates/slop-scan.md |
| Type Check | ✅ | ✅ | ✅ | ✅ | references/gates/type-check.md |
| Ralph Loop | ✅ | ✅ | ✅ | ❌ | references/gates/ralph-loop.md |
| Test Quality | ✅ | ✅ | ✅ | ✅ | references/gates/test-quality.md |
| Mutation Kill | ✅ | ✅ | ✅ | ✅ | references/gates/mutation-kill.md |
| Cross-Verify | ✅ | ❌ | ❌ | ❌ | references/gates/cross-verify.md |
| Behavior Capture | ❌ | ✅ | ❌ | ❌ | references/gates/behavior-capture.md |
| Regression | ❌ | ✅ | ✅ | ❌ | references/gates/regression.md |
| SAST | ❌ | ❌ | ✅ | ✅ | references/gates/sast.md |
| LLM-as-Judge | opt | opt | opt | opt | references/gates/llm-judge.md |
| Design Review | ❌ | ❌ | ❌ | ✅ | references/gates/design-review.md |
| CI Integration | ✅ | ✅ | ❌ | ✅ | references/gates/ci-integration.md |
| Proof Bundle | ✅ | ✅ | ✅ | ✅ | references/gates/proof-bundle.md |
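The mode columns above amount to a mode-to-gate-list mapping. A minimal sketch, assuming a hypothetical `gates_for_mode` helper (LLM-as-Judge is optional in every mode and omitted here):

```shell
# Hypothetical helper: list the required gates for a mode, transcribed
# from the Gates table. Not part of the shipped scripts.
gates_for_mode() {
  case "$1" in
    BUILD)   echo "slop-scan type-check ralph-loop test-quality mutation-kill cross-verify ci-integration proof-bundle" ;;
    REBUILD) echo "slop-scan type-check ralph-loop test-quality mutation-kill behavior-capture regression ci-integration proof-bundle" ;;
    FIX)     echo "slop-scan type-check ralph-loop test-quality mutation-kill regression sast proof-bundle" ;;
    AUDIT)   echo "slop-scan type-check test-quality mutation-kill sast design-review ci-integration proof-bundle" ;;
    *)       return 1 ;;
  esac
}

gates_for_mode AUDIT
```

An orchestrator could iterate this list and read `references/gates/<name>.md` before running each gate.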

Scripts

Deterministic helpers — run these, don't rewrite them:

Core (all modes):
  • scripts/project-type.sh [path] — classify project context + calibration profile (skip_gates, thresholds, tolerated warns)
  • scripts/detect-stack.sh [path] — auto-detect language, framework, test runner → JSON
  • scripts/check-deps.sh [path] — verify all deps exist in registries (hallucination check)
  • scripts/slop-scan.sh [path] — semantic slop scan (tracked vs untracked debt, categorized output) → JSON
  • scripts/type-check.sh [path] — run type checker (tsc/mypy/cargo/go vet) → JSON
  • scripts/ralph-loop.sh [path] — validate IMPLEMENTATION_PLAN.md structure → JSON
  • scripts/coverage-stats.sh [path] — extract raw coverage numbers from test runner
  • scripts/mutation-test.sh [path] [test-cmd] — mutation testing (mutmut/cargo-mutants/Stryker/AI)
  • scripts/mutation-test-stryker.sh [path] — Stryker-specific mutation testing → JSON
  • scripts/red-team.sh [path] — SAST + 20+ vulnerability patterns → JSON
  • scripts/regex-complexity.sh [path] [--context library|app] — targeted ReDoS analysis → JSON
  • scripts/proof-bundle.sh [path] [mode] — corroboration-based aggregation + proof bundle writer
  • scripts/run-all-gates.sh [path] [mode] [--log-file] — sequential gate runner with telemetry + adaptive skipping/tolerance

Mode-specific:
  • scripts/behavior-capture.sh [path] — capture golden fixtures before rebuild (REBUILD)
  • scripts/design-review.sh [path] — dep graph, coupling, circular deps (AUDIT/REBUILD) → JSON
  • scripts/ci-integration.sh [path] — CI config detection and scoring → JSON
  • scripts/differential-test.sh [path] — oracle comparison, golden tests (BUILD/REBUILD) → JSON

Extended verification:
  • scripts/dynamic-analysis.sh [path] — memory leaks, race conditions, FD leaks → JSON
  • scripts/perf-benchmark.sh [path] — benchmark detection + regression vs baseline → JSON
  • scripts/property-test.sh [path] — property-based/fuzz testing, generates stubs → JSON

Bootstrap:
  • scripts/run-audit.sh [path] [mode] [--spawn] — generate orchestrator task + optional spawn
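Most gate scripts emit JSON and the aggregation step consumes a JSON array, so a driver only has to collect each script's stdout. A minimal sketch of that collection loop, with hypothetical `gate_slop`/`gate_types` stubs standing in for the real scripts/*.sh helpers and an assumed result shape:

```shell
# Hypothetical stubs standing in for scripts/slop-scan.sh and
# scripts/type-check.sh; the {"gate":...,"status":...} shape is an
# assumption, not the documented schema.
gate_slop()  { echo '{"gate":"slop-scan","status":"pass"}'; }
gate_types() { echo '{"gate":"type-check","status":"pass"}'; }

# Collect each gate's JSON output into one array.
results="["
sep=""
for gate in gate_slop gate_types; do
  results="$results$sep$("$gate")"
  sep=","
done
results="$results]"

echo "$results"
```

The resulting array is exactly what the final aggregation step expects on stdin (see the proof-bundle step in the audit walkthrough below).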

Swarm Architecture

For multi-gate parallel execution, read references/swarm/orchestrator.md. Quick overview:

```
Main agent → wreckit orchestrator (depth 1)
├─ Planning: Architect worker
├─ Building: Sequential Implementer workers
├─ Verification: Parallel gate workers
├─ Sequential: Cross-verify / regression / judge
└─ Decision: Proof bundle → Ship / Caution / Blocked
```

Critical: Read references/swarm/collect.md before spawning workers. Never fabricate results. Wait for all workers to report back. Worker output format: references/swarm/handoff.md.

Config required:

```json
{ "agents.defaults.subagents": { "maxSpawnDepth": 2, "maxChildrenPerAgent": 8 } }
```

Decision Framework

| Verdict | Criteria |
| --- | --- |
| Ship ✅ | No hard blocks; no corroborated multi-domain fail evidence above block threshold |
| Caution ⚠️ | Single non-hard fail, warning-only risk, or corroboration below block threshold |
| Blocked 🚫 | Any hard block OR corroborated non-hard failure pattern (multi-signal, multi-domain, high-confidence) |

Hard-block + corroboration rule details: references/gates/corroboration.md
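The verdict table can be read as a short decision function. A hedged sketch, assuming simple failure counts; the real corroboration scoring lives in references/gates/corroboration.md:

```shell
# Hypothetical sketch of the Ship/Caution/Blocked decision. Inputs are
# assumptions for illustration: a count of hard-blocking gates, a count
# of non-hard failures, and "yes"/"no" for whether those failures
# corroborate across domains.
verdict() {
  hard_blocks="$1"
  soft_fails="$2"
  corroborated="$3"
  if [ "$hard_blocks" -gt 0 ]; then
    echo "BLOCKED"        # any hard block is terminal
  elif [ "$soft_fails" -gt 0 ] && [ "$corroborated" = "yes" ]; then
    echo "BLOCKED"        # corroborated multi-domain failure pattern
  elif [ "$soft_fails" -gt 0 ]; then
    echo "CAUTION"        # isolated non-hard fail or warning-only risk
  else
    echo "SHIP"
  fi
}

verdict 0 1 no            # prints CAUTION
```

Note the asymmetry the table encodes: non-hard failures only escalate to Blocked when they corroborate each other.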

Supported Languages & Stacks

| Language | Gates Available | Notes |
| --- | --- | --- |
| TypeScript/JS | 11/11 | Full support via Stryker, tsc, vitest/jest |
| Python | 11/11 | Full support via mutmut, mypy/pyright, pytest |
| Rust | 11/11 | Full support via cargo-mutants, cargo check/test |
| Go | 11/11 | Full support via go vet, go test |
| Swift (SPM) | 9/11 | mutation = AI-estimated CAUTION, cross-verify = manual |
| Swift (Xcode) | 7/11 | type-check = xcodebuild, mutation = AI-estimated, coverage = limited |
| iOS apps | 7/11 | Same as Xcode projects |
| Java/Kotlin | 10/11 | Gradle/Maven, mutation via PIT (manual setup) |
| Shell | 8/11 | shellcheck, limited mutation testing |

Swift Notes

Mutation testing requires manual verification — no automated mutation testing tool exists for Swift as of 2026. The mutation gate uses AI-estimated analysis (counts mutation surface, compares to test count) and always outputs CAUTION, never SHIP. SPM projects get high-confidence type checking via swift build (the compiler IS the type checker). Xcode projects get medium-confidence type checking via xcodebuild with auto-detected schemes. Dependency checking lists SPM dependencies but notes that no automated CVE database exists for Swift packages — manual review is always recommended. CocoaPods projects: pod outdated is checked if Podfile present. Build systems detected: SPM, xcodebuild, CocoaPods, Carthage, mixed.

Running an Audit (Single-Agent, No Swarm)

For small projects or when swarm isn't needed, run gates sequentially:
  1. scripts/detect-stack.sh → know your target (language, test cmd, type checker)
  2. scripts/check-deps.sh → verify deps are real (not hallucinated)
  3. scripts/slop-scan.sh → find placeholders, template artifacts, empty stubs
  4. Run type checker (from detect-stack output) → references/gates/type-check.md
  5. Run tests + scripts/coverage-stats.sh → references/gates/test-quality.md
  6. scripts/mutation-test.sh → references/gates/mutation-kill.md (uses mutmut/cargo-mutants/Stryker if available)
  7. scripts/red-team.sh → references/gates/sast.md (20+ vulnerability patterns, JSON report)
  8. scripts/design-review.sh → references/gates/design-review.md (dep graph, circular deps, god modules)
  9. scripts/ci-integration.sh → references/gates/ci-integration.md (CI config detection + scoring)
  10. scripts/dynamic-analysis.sh → references/gates/dynamic-analysis.md (memory leaks, race conditions)
  11. scripts/perf-benchmark.sh → references/gates/performance.md (benchmark detection + regression)
  12. scripts/property-test.sh → references/gates/property-based.md (fuzzing, invariant checks)
  13. scripts/differential-test.sh → references/gates/differential.md (oracle comparison, metamorphic tests)
  14. echo '[...gate-results-json...]' | scripts/proof-bundle.sh [path] [mode] → writes .wreckit/proof.json, dashboard.json, decision.md
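The final step pipes a JSON array of gate results into scripts/proof-bundle.sh. A sketch of that stdin contract, with a hypothetical `proof_bundle_stub` standing in for the real script (which writes the proof files rather than just counting failures), and an assumed result shape:

```shell
# Hypothetical stand-in for scripts/proof-bundle.sh: reads a JSON array
# of gate results on stdin and reports the failure count. The
# {"gate":...,"status":...} shape is an assumption for illustration.
proof_bundle_stub() {
  fails=$(grep -o '"status":"fail"' | wc -l)
  echo "fails=$((fails))"
}

echo '[{"gate":"type-check","status":"pass"},{"gate":"sast","status":"fail"}]' \
  | proof_bundle_stub   # prints fails=1
```

In the real pipeline, the array on stdin is the accumulated output of steps 1-13, and the script writes .wreckit/proof.json, dashboard.json, and decision.md.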

Quick Start

  • "Use wreckit to audit [project]. Don't change anything."
  • "Use wreckit to build [project] from this PRD."
  • "Use wreckit to fix [bug]. Prove nothing else breaks."
  • "Use wreckit to rebuild [project] in [framework]."

Dashboard

assets/dashboard/ contains a local web dashboard for viewing proof bundles across repos. Run: node assets/dashboard/server.mjs (port 3939). Reads .wreckit/dashboard.json from projects.

Codex CLI Notes (2026-02-22)

When using Codex CLI to build/run projects:
  • The --full-auto sandbox blocks npm install network access (ENOTFOUND registry.npmjs.org). Fix: use the --dangerously-bypass-approvals-and-sandbox flag instead.
  • Auth: echo "$OPENAI_API_KEY" | codex login --with-api-key stores credentials to ~/.codex/auth.json.
  • Config: ~/.codex/config.toml with model = "gpt-5.2-codex" and [shell_environment_policy] inherit = "all".
  • gpt-5.3-codex is Copilot/VS Code only — not available via direct API. Use gpt-5.2-codex.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
4 docs · 1 script · 1 asset
  • SKILL.md Primary doc
  • GAMEPLAN.md Docs
  • references/gates/behavior-capture.md Docs
  • TODO.md Docs
  • assets/dashboard/server.mjs Scripts
  • assets/dashboard/index.html Assets