Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Run a structured quality control audit on any codebase. Use when asked to QC, audit, review, or check code quality for a project. Supports Python, TypeScript...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Structured quality control audit for codebases. Delegates static analysis to proper tools (ruff, eslint, gdlint) and focuses on what AI adds: semantic understanding, cross-module consistency, and dynamic smoke test generation.
1. Detect the project type (read the profile for that language)
2. Load `.qc-config.yaml` if present (for custom thresholds/exclusions)
3. Run the 8-phase audit (or a subset with `--quick`)
4. Generate a report with a verdict
5. Save a baseline for future comparison
Optional project-level config for monorepos and custom settings:

```yaml
# .qc-config.yaml
thresholds:
  test_failure_rate: 0.05   # >5% = FAIL, 0-5% = WARN, 0% = PASS
  lint_errors_max: 0        # Max lint errors before FAIL
  lint_warnings_max: 50     # Max warnings before WARN
  type_errors_max: 0        # Max type errors before FAIL (strict by default)
exclude:
  dirs: [vendor, third_party, generated]
  files: ["*_generated.py", "*.pb.go"]
changed_only: false         # Only check git-changed files (CI mode)
fail_fast: false            # Stop on first failure
quick_mode: false           # Only run Phases 1, 3, 3.5, 6
languages:
  python:
    min_coverage: 80
    ignore_rules: [T201]    # Allow print in this project
  typescript:
    strict_mode: true       # Require tsconfig strict: true
    ignore_rules: []        # eslint rules to ignore
  gdscript:
    godot_version: "4.2"
```
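As a rough sketch of how such a config could be consumed, the snippet below overlays user-supplied keys on built-in defaults. The `DEFAULTS` dict and the `load_qc_config`/`merge` helpers are illustrative assumptions, not part of the skill itself:

```python
from pathlib import Path

# Built-in defaults mirroring the sample config (illustrative values).
DEFAULTS = {
    "thresholds": {
        "test_failure_rate": 0.05,
        "lint_errors_max": 0,
        "lint_warnings_max": 50,
        "type_errors_max": 0,
    },
    "changed_only": False,
    "fail_fast": False,
    "quick_mode": False,
}

def merge(base: dict, override: dict) -> dict:
    """Recursively overlay user-supplied keys onto the defaults."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

def load_qc_config(project_root: str) -> dict:
    """Return merged config; fall back to defaults when no file exists."""
    path = Path(project_root) / ".qc-config.yaml"
    if not path.exists():
        return dict(DEFAULTS)
    import yaml  # PyYAML, assumed available when a config file is present
    return merge(DEFAULTS, yaml.safe_load(path.read_text()) or {})
```

Unset keys fall through to the defaults, so a monorepo can override only `thresholds.lint_warnings_max` without restating the rest.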
| Mode | Phases Run | Use Case |
|---|---|---|
| Full (default) | All 8 phases | Thorough audit |
| `--quick` | 1, 3, 3.5, 6 | Fast sanity check |
| `--changed-only` | All, filtered | CI on pull requests |
| `--fail-fast` | All, stops early | Find first issue fast |
| `--fix` | 3 with autofix | Apply automatic fixes |
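The mode-to-phase mapping above can be captured in a small dispatch table; the `phases_for` helper is an assumption about how the skill might dispatch internally, not its actual CLI:

```python
ALL_PHASES = ["1", "2", "3", "3.5", "4", "5", "6", "7"]

def phases_for(mode: str) -> list[str]:
    """Map an audit mode to the list of phases it runs."""
    if mode == "--quick":
        return ["1", "3", "3.5", "6"]   # fast sanity check
    if mode == "--fix":
        return ["3"]                    # static analysis with autofix
    # full, --changed-only, and --fail-fast all visit every phase;
    # the latter two change file filtering / early exit, not the list.
    return list(ALL_PHASES)
```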
| # | Phase | What | Tools |
|---|---|---|---|
| 1 | Test Suite | Run existing tests + coverage | pytest --cov / jest --coverage |
| 2 | Import Integrity | Verify all modules load | scripts/import_check.py |
| 3 | Static Analysis | Lint with proper tools | ruff / eslint / gdlint |
| 3.5 | Type Checking | Static type verification | mypy / tsc --noEmit / (N/A for GDScript) |
| 4 | Smoke Tests | Verify business logic works | AI-generated per project |
| 5 | UI/Frontend | Verify UI components load | Framework-specific |
| 6 | File Consistency | Syntax + git state | scripts/syntax_check.py + git |
| 7 | Documentation | Docstrings + docs accuracy | scripts/docstring_check.py |
Run the project's test suite with coverage. Auto-detect the test runner:

- pytest.ini / pyproject.toml `[tool.pytest]` → `pytest --cov`
- package.json `scripts.test` → `npm test` (or `npx vitest --coverage`)
- Cargo.toml → `cargo test`
- project.godot → GUT if present, else manual

Record: total, passed, failed, errors, skipped, duration, coverage %.

Verdict contribution:
- No tests found → SKIP (not FAIL; the project may be config-only)
- Failure rate = 0% → PASS
- Failure rate ≤ threshold (default 5%) → WARN
- Failure rate > threshold → FAIL

Coverage reporting (Python): `pytest --cov=<package> --cov-report=term-missing --cov-report=json`
Python: Run scripts/import_check.py against the project root. GDScript: Verify scene/preload references are valid (see gdscript-profile.md).

Critical vs. optional import classification: use these heuristics to classify import failures.

| Pattern | Classification | Rationale |
|---|---|---|
| `__init__.py`, `main.py`, `app.py`, `cli.py` | Critical | Core entry points |
| Module in `src/`, `lib/`, or top-level package | Critical | Core functionality |
| `*_test.py`, `test_*.py`, `conftest.py` | Optional | Test infrastructure |
| Modules in `examples/`, `scripts/`, `tools/` | Optional | Auxiliary code |
| Import error mentions cuml, triton, tensorrt | Optional | Hardware-specific |
| Import error mentions a missing system lib | Optional | Environment-specific |
| Dependency in `[project.optional-dependencies]` | Optional | Declared optional |
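A subset of the heuristics above can be sketched as a small classifier (names like `classify_import_failure` are illustrative; the system-lib and optional-dependency rows would need pyproject parsing and are omitted here):

```python
import re

OPTIONAL_DIR_PREFIXES = ("examples/", "scripts/", "tools/")
OPTIONAL_DEPS = ("cuml", "triton", "tensorrt")

def classify_import_failure(path: str, error: str) -> str:
    """Return 'optional' or 'critical' for a failed import (sketch)."""
    name = path.rsplit("/", 1)[-1]
    if re.fullmatch(r"(test_.*|.*_test|conftest)\.py", name):
        return "optional"                       # test infrastructure
    if path.startswith(OPTIONAL_DIR_PREFIXES):
        return "optional"                       # auxiliary code
    if any(dep in error for dep in OPTIONAL_DEPS):
        return "optional"                       # hardware-specific dep
    # Entry points and src/lib modules are critical; default to critical
    # so unknown failures are surfaced rather than silently downgraded.
    return "critical"
```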
Do NOT use grep. Use the language's standard linter.

Standard mode:

```bash
# Python
ruff check --select E722,T201,B006,F401,F841,UP,I --statistics <project>
# TypeScript
npx eslint . --format json
# GDScript
gdlint <project>
```

Fix mode (`--fix`): when `--fix` is specified, apply automatic corrections:

```bash
# Python: safe auto-fixes
ruff check --fix --select E,F,I,UP <project>
ruff format <project>
# TypeScript
npx eslint . --fix
# GDScript
gdformat <project>
```

Important: after `--fix`, re-run the check to report remaining issues that couldn't be auto-fixed.
Run static type analysis before proceeding to runtime checks.

Python:

```bash
mypy <package> --ignore-missing-imports --no-error-summary
# or, if pyproject.toml has [tool.pyright]:
pyright <package>
```

TypeScript: `npx tsc --noEmit`

GDScript: Godot 4 has built-in static typing but no standalone checker. Estimate type coverage manually:

```bash
# Find untyped declarations
grep -rn "var \w\+ =" --include="*.gd" .                  # Untyped variables
grep -rn "func \w\+(" --include="*.gd" . | grep -v ":"    # Untyped functions
```

Use the `estimate_type_coverage()` function from gdscript-profile.md to calculate coverage per file (see the full implementation there). Also check for `@warning_ignore` annotations, which may hide type issues.

Record: total errors, categorized by severity.
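A plausible shape for that coverage estimate is counting annotated versus unannotated declarations. This is a hedged sketch only; the actual implementation and signature live in gdscript-profile.md, and it presumably takes a file path rather than source text:

```python
import re

# Typed declarations carry ": Type" (vars) or "-> Type" (funcs).
VAR_PAT = re.compile(r"\s*var\s+\w+\s*(:)?")
FUNC_PAT = re.compile(r"\s*func\s+\w+\([^)]*\)\s*(->)?")

def estimate_type_coverage(gd_source: str) -> float:
    """Fraction of var/func declarations carrying a type annotation."""
    typed = total = 0
    for line in gd_source.splitlines():
        m = VAR_PAT.match(line) or FUNC_PAT.match(line)
        if m:
            total += 1
            typed += m.group(1) is not None
    return typed / total if total else 1.0
```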
Test backend/core functionality, NOT UI components (that's Phase 5).

API discovery heuristics:
- Entry points: look for `main()`, `cli()`, `app`, `create_app()`, `__main__.py`
- Service layer: find classes/modules named `*Service`, `*Manager`, `*Handler`
- Public API: check `__all__` exports in `__init__.py`
- FastAPI/Flask: find route decorators (`@app.get`, `@router.post`)
- CLI: find typer/click `@app.command()` decorators
- SDK: look for client classes and public methods without a `_` prefix

For each discovered API, generate a minimal test:

```python
def smoke_test_user_service():
    """Test UserService basic CRUD."""
    from myproject.services.user import UserService
    svc = UserService(db=":memory:")
    user = svc.create(name="test")
    assert user.id is not None
    fetched = svc.get(user.id)
    assert fetched.name == "test"
    return "PASS"
```

Guidelines:
- Import + instantiate + call one method with minimal valid input
- Use in-memory/temp resources (`:memory:`, tempdir)
- Keep each test under 5 seconds
- Catch exceptions and report them clearly
Test UI components separately from business logic.

| Framework | Test Method |
|---|---|
| Gradio | `from project.ui import create_ui` (no `launch()`) |
| Streamlit | `streamlit run app.py --headless` exits cleanly |
| PyQt/PySide | Set `QT_QPA_PLATFORM=offscreen`, import widget modules |
| React | `npm run build` succeeds |
| Vue | `npm run build` succeeds |
| Godot | Scene files parse without error; required scripts exist |
| CLI | `--help` on all subcommands returns 0 |

Boundary: Phase 4 tests "does the logic work?"; Phase 5 tests "does the UI render?"
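The CLI row above can be automated with a small subprocess loop; `cli_help_smoke` and its return shape are illustrative assumptions:

```python
import subprocess

def cli_help_smoke(executable: list[str], subcommands: list[str]) -> dict[str, bool]:
    """Run --help on the root command and each subcommand; True = exit 0."""
    results: dict[str, bool] = {}
    for sub in [None] + list(subcommands):
        cmd = executable + ([sub] if sub else []) + ["--help"]
        proc = subprocess.run(cmd, capture_output=True)
        results[sub or "<root>"] = proc.returncode == 0
    return results
```

For example, `cli_help_smoke(["mytool"], ["init", "run"])` would check `mytool --help`, `mytool init --help`, and `mytool run --help` (the `mytool` command is hypothetical).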
Run scripts/syntax_check.py, which compiles all source files to verify there are no syntax errors.

Note: Phase 2 (Import Integrity) tests runtime import behavior, including initialization code; Phase 6 tests static syntax correctness. Both are needed: a file can have valid syntax but fail to import (e.g., a missing dependency), or vice versa (a syntax error in a module that is never imported).

Check git state:

```bash
git status --short   # Should be clean (or report uncommitted changes)
git diff --check     # No conflict markers
```
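For Python files, the core of such a check is `py_compile`, which compiles without executing. This sketch shows the idea only; the real scripts/syntax_check.py may differ in scope and output format:

```python
import py_compile
from pathlib import Path

def syntax_check(root: str) -> list[tuple[str, str]]:
    """Return (path, error) pairs for Python files that fail to compile."""
    failures = []
    for path in Path(root).rglob("*.py"):
        try:
            # Compiles without running module code: catches syntax errors
            # only, unlike Phase 2's runtime import check.
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError as exc:
            failures.append((str(path), str(exc)))
    return failures
```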
Run scripts/docstring_check.py (now checks `__init__.py` by default). Also verify:
- README exists and is non-empty
- Key docs (CHANGELOG, CONTRIBUTING) exist if referenced
- No stale TODO markers in docs claiming completion
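The docstring portion of that script likely rests on the `ast` module. A minimal sketch of the idea follows (the real script's rules, such as its `__init__.py` handling, may differ):

```python
import ast

def missing_docstrings(source: str) -> list[str]:
    """Names of public functions/classes in `source` lacking a docstring."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if node.name.startswith("_"):
                continue  # treat underscore-prefixed names as private
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing
```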
```python
# Calculate test failure rate
failure_rate = test_failures / total_tests

# Default thresholds (override in .qc-config.yaml)
FAIL_THRESHOLD = 0.05   # 5%
WARN_THRESHOLD = 0.00   # 0%
TYPE_ERRORS_MAX = 0     # Default: strict (any type error = FAIL)

# Verdict determination
if any([
    failure_rate > FAIL_THRESHOLD,
    critical_import_failure,
    type_check_errors > thresholds.type_errors_max,  # Configurable threshold
    lint_errors > thresholds.lint_errors_max,
]):
    verdict = "FAIL"
elif any([
    0 < failure_rate <= FAIL_THRESHOLD,
    optional_import_failures > 0,
    lint_warnings > thresholds.lint_warnings_max,
    missing_docstrings > 0,
    smoke_test_failures > 0,
]):
    verdict = "PASS WITH WARNINGS"
else:
    verdict = "PASS"
```
Save results to .qc-baseline.json:

```json
{
  "timestamp": "2026-02-15T15:00:00Z",
  "commit": "abc123",
  "verdict": "PASS WITH WARNINGS",
  "config": {
    "mode": "full",
    "thresholds": {"test_failure_rate": 0.05}
  },
  "phases": {
    "tests": {"total": 134, "passed": 134, "failed": 0, "coverage": 87.5},
    "imports": {"total": 50, "failed": 0, "optional_failed": 1, "critical_failed": 0},
    "types": {"errors": 0, "warnings": 5},
    "lint": {"errors": 0, "warnings": 12, "fixed": 8},
    "smoke": {"total": 14, "passed": 14},
    "docs": {"missing_docstrings": 3}
  }
}
```

On subsequent runs, report the delta:

```
Tests: 134 → 140 (+6 ✅)
Coverage: 87% → 91% (+4% ✅)
Type errors: 0 → 0 (✅)
Lint warnings: 12 → 5 (-7 ✅)
```
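Computing that delta is a straightforward dictionary diff over two baseline payloads; `baseline_delta`, the metric list, and the plain-ASCII arrow format are illustrative:

```python
def baseline_delta(old: dict, new: dict) -> list[str]:
    """Render comparison lines for two .qc-baseline.json payloads (sketch)."""
    metrics = [
        ("Tests", "tests", "total"),
        ("Coverage", "tests", "coverage"),
        ("Type errors", "types", "errors"),
        ("Lint warnings", "lint", "warnings"),
    ]
    lines = []
    for label, phase, key in metrics:
        a = old["phases"][phase][key]
        b = new["phases"][phase][key]
        diff = b - a
        mark = "=" if diff == 0 else f"{diff:+g}"  # signed change, or "="
        lines.append(f"{label}: {a} -> {b} ({mark})")
    return lines
```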
Generate in three formats:
- Markdown (qc-report.md): full detailed report for humans
- JSON (.qc-baseline.json): machine-readable for CI/comparison
- Summary (chat message): 10-line digest for Discord/Slack
Read the appropriate profile before running:
- Python: references/python-profile.md
- TypeScript: references/typescript-profile.md
- GDScript: references/gdscript-profile.md
- General (any language): references/general-profile.md