{
  "schemaVersion": "1.0",
  "item": {
    "slug": "code-qc",
    "name": "Code QC",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/IsonaEi/code-qc",
    "canonicalUrl": "https://clawhub.ai/IsonaEi/code-qc",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/code-qc",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=code-qc",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "references/gdscript-profile.md",
      "references/general-profile.md",
      "references/python-profile.md",
      "references/ruff-rules.md",
      "references/typescript-profile.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "slug": "code-qc",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-01T22:22:13.833Z",
      "expiresAt": "2026-05-08T22:22:13.833Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=code-qc",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=code-qc",
        "contentDisposition": "attachment; filename=\"code-qc-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "code-qc"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/code-qc"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/code-qc",
    "agentPageUrl": "https://openagent3.xyz/skills/code-qc/agent",
    "manifestUrl": "https://openagent3.xyz/skills/code-qc/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/code-qc/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Code QC",
        "body": "Structured quality control audit for codebases. Delegates static analysis to proper tools (ruff, eslint, gdlint) and focuses on what AI adds: semantic understanding, cross-module consistency, and dynamic smoke test generation."
      },
      {
        "title": "Quick Start",
        "body": "Detect project type (read the profile for that language)\nLoad .qc-config.yaml if present (for custom thresholds/exclusions)\nRun the 8-phase audit (or subset with --quick)\nGenerate report with verdict\nSave baseline for future comparison"
      },
      {
        "title": "Configuration (.qc-config.yaml)",
        "body": "Optional project-level config for monorepos and custom settings:\n\n# .qc-config.yaml\nthresholds:\n  test_failure_rate: 0.05    # >5% = FAIL, 0-5% = WARN, 0% = PASS\n  lint_errors_max: 0         # Max lint errors before FAIL\n  lint_warnings_max: 50      # Max warnings before WARN\n  type_errors_max: 0         # Max type errors before FAIL (strict by default)\n\nexclude:\n  dirs: [vendor, third_party, generated]\n  files: [\"*_generated.py\", \"*.pb.go\"]\n\nchanged_only: false          # Only check git-changed files (CI mode)\nfail_fast: false             # Stop on first failure\nquick_mode: false            # Only run Phase 1, 3, 3.5, 6\n\nlanguages:\n  python:\n    min_coverage: 80\n    ignore_rules: [T201]     # Allow print in this project\n  typescript:\n    strict_mode: true        # Require tsconfig strict: true\n    ignore_rules: []         # eslint rules to ignore\n  gdscript:\n    godot_version: \"4.2\""
      },
      {
        "title": "Execution Modes",
        "body": "ModePhases RunUse CaseFull (default)All 8 phasesThorough audit--quick1, 3, 3.5, 6Fast sanity check--changed-onlyAll, filteredCI on pull requests--fail-fastAll, stops earlyFind first issue fast--fix3 with autofixApply automatic fixes"
      },
      {
        "title": "Phase Overview",
        "body": "#PhaseWhatTools1Test SuiteRun existing tests + coveragepytest --cov / jest --coverage2Import IntegrityVerify all modules loadscripts/import_check.py3Static AnalysisLint with proper toolsruff / eslint / gdlint3.5Type CheckingStatic type verificationmypy / tsc --noEmit / (N/A for GDScript)4Smoke TestsVerify business logic worksAI-generated per project5UI/FrontendVerify UI components loadFramework-specific6File ConsistencySyntax + git statescripts/syntax_check.py + git7DocumentationDocstrings + docs accuracyscripts/docstring_check.py"
      },
      {
        "title": "Phase 1: Test Suite",
        "body": "Run the project's test suite with coverage. Auto-detect the test runner:\n\npytest.ini / pyproject.toml [tool.pytest] → pytest --cov\npackage.json scripts.test → npm test (or npx vitest --coverage)\nCargo.toml → cargo test\nproject.godot → (GUT if present, else manual)\n\nRecord: total, passed, failed, errors, skipped, duration, coverage %.\n\nVerdict contribution:\n\nNo tests found → SKIP (not FAIL; project may be config-only)\nFailure rate = 0% → PASS\n0% < failure rate ≤ threshold (default 5%) → WARN\nFailure rate > threshold → FAIL\n\nCoverage reporting (Python):\n\npytest --cov=<package> --cov-report=term-missing --cov-report=json"
      },
      {
        "title": "Phase 2: Import Integrity (Python/GDScript)",
        "body": "Python: Run scripts/import_check.py against the project root.\n\nGDScript: Verify scene/preload references are valid (see gdscript-profile.md).\n\nCritical vs Optional Import Classification\n\nUse these heuristics to classify import failures:\n\nPatternClassificationRationale__init__.py, main.py, app.py, cli.pyCriticalCore entry pointsModule in src/, lib/, or top-level packageCriticalCore functionality*_test.py, test_*.py, conftest.pyOptionalTest infrastructureModules in examples/, scripts/, tools/OptionalAuxiliary codeImport error mentions cuml, triton, tensorrtOptionalHardware-specificImport error mentions missing system libOptionalEnvironment-specificDependency in [project.optional-dependencies]OptionalDeclared optional"
      },
      {
        "title": "Phase 3: Static Analysis",
        "body": "Do NOT use grep. Use the language's standard linter.\n\nStandard Mode\n\n# Python\nruff check --select E722,T201,B006,F401,F841,UP,I --statistics <project>\n\n# TypeScript  \nnpx eslint . --format json\n\n# GDScript\ngdlint <project>\n\nFix Mode (--fix)\n\nWhen --fix is specified, apply automatic corrections:\n\n# Python — safe auto-fixes\nruff check --fix --select E,F,I,UP <project>\nruff format <project>\n\n# TypeScript\nnpx eslint . --fix\n\n# GDScript\ngdformat <project>\n\nImportant: After --fix, re-run the check to report remaining issues that couldn't be auto-fixed."
      },
      {
        "title": "Phase 3.5: Type Checking (NEW)",
        "body": "Run static type analysis before proceeding to runtime checks.\n\nPython:\n\nmypy <package> --ignore-missing-imports --no-error-summary\n# or if pyproject.toml has [tool.pyright]:\npyright <package>\n\nTypeScript:\n\nnpx tsc --noEmit\n\nGDScript: Godot 4 has built-in static typing but no standalone checker. Estimate type coverage manually:\n\n# Find untyped declarations\ngrep -rn \"var \\w\\+ =\" --include=\"*.gd\" .       # Untyped variables\ngrep -rn \"func \\w\\+(\" --include=\"*.gd\" . | grep -v \":\"  # Untyped functions\n\nUse the estimate_type_coverage() function from gdscript-profile.md to calculate coverage per file:\n\n# From gdscript-profile.md\ndef estimate_type_coverage(gd_file: str) -> float:\n    \"\"\"Count typed vs untyped declarations.\"\"\"\n    # See full implementation in gdscript-profile.md\n\nAlso check for @warning_ignore annotations which may hide type issues.\n\nRecord: Total errors, categorized by severity."
      },
      {
        "title": "Phase 4: Smoke Tests (Business Logic)",
        "body": "Test backend/core functionality — NOT UI components (that's Phase 5).\n\nAPI Discovery Heuristics:\n\nEntry points: Look for main(), cli(), app, create_app(), __main__.py\nService layer: Find classes/modules named *Service, *Manager, *Handler\nPublic API: Check __all__ exports in __init__.py\nFastAPI/Flask: Find route decorators (@app.get, @router.post)\nCLI: Find typer/click @app.command() decorators\nSDK: Look for client classes, public methods without _ prefix\n\nFor each discovered API, generate a minimal test:\n\ndef smoke_test_user_service():\n    \"\"\"Test UserService basic CRUD.\"\"\"\n    from myproject.services.user import UserService\n    svc = UserService(db=\":memory:\")\n    user = svc.create(name=\"test\")\n    assert user.id is not None\n    fetched = svc.get(user.id)\n    assert fetched.name == \"test\"\n    return \"PASS\"\n\nGuidelines:\n\nImport + instantiate + call one method with minimal valid input\nUse in-memory/temp resources (:memory:, tempdir)\nEach test < 5 seconds\nCatch exceptions, report clearly"
      },
      {
        "title": "Phase 5: UI/Frontend Verification",
        "body": "Test UI components separately from business logic.\n\nFrameworkTest MethodGradiofrom project.ui import create_ui (no launch())Streamlitstreamlit run app.py --headless exits cleanlyPyQt/PySideSet QT_QPA_PLATFORM=offscreen, import widget modulesReactnpm run build succeedsVuenpm run build succeedsGodotScene files parse without error, required scripts existCLI--help on all subcommands returns 0\n\nBoundary: Phase 4 tests \"does the logic work?\" Phase 5 tests \"does the UI render?\""
      },
      {
        "title": "Phase 6: File Consistency",
        "body": "Run scripts/syntax_check.py — compiles all source files to verify no syntax errors.\n\nNote: Phase 2 (Import Integrity) tests runtime import behavior including initialization code. Phase 6 tests static syntax correctness. Both are needed: a file can have valid syntax but fail to import (e.g., missing dependency), or vice versa (syntax error in a module that's never imported).\n\nCheck git state:\n\ngit status --short      # Should be clean (or report uncommitted changes)\ngit diff --check        # No conflict markers"
      },
      {
        "title": "Phase 7: Documentation",
        "body": "Run scripts/docstring_check.py (now checks __init__.py by default).\n\nAlso verify:\n\nREADME exists and is non-empty\nKey docs (CHANGELOG, CONTRIBUTING) exist if referenced\nNo stale TODO markers in docs claiming completion"
      },
      {
        "title": "Verdict Logic",
        "body": "# Calculate test failure rate\nfailure_rate = test_failures / total_tests\n\n# Default thresholds (override in .qc-config.yaml)\nFAIL_THRESHOLD = 0.05  # 5%\nWARN_THRESHOLD = 0.00  # 0%\nTYPE_ERRORS_MAX = 0    # Default: strict (any type error = FAIL)\n\n# Verdict determination\nif any([\n    failure_rate > FAIL_THRESHOLD,\n    critical_import_failure,\n    type_check_errors > thresholds.type_errors_max,  # Configurable threshold\n    lint_errors > thresholds.lint_errors_max,\n]):\n    verdict = \"FAIL\"\nelif any([\n    0 < failure_rate <= FAIL_THRESHOLD,\n    optional_import_failures > 0,\n    lint_warnings > thresholds.lint_warnings_max,\n    missing_docstrings > 0,\n    smoke_test_failures > 0,\n]):\n    verdict = \"PASS WITH WARNINGS\"\nelse:\n    verdict = \"PASS\""
      },
      {
        "title": "Baseline Comparison",
        "body": "Save results to .qc-baseline.json:\n\n{\n  \"timestamp\": \"2026-02-15T15:00:00Z\",\n  \"commit\": \"abc123\",\n  \"verdict\": \"PASS WITH WARNINGS\",\n  \"config\": {\n    \"mode\": \"full\",\n    \"thresholds\": {\"test_failure_rate\": 0.05}\n  },\n  \"phases\": {\n    \"tests\": {\"total\": 134, \"passed\": 134, \"failed\": 0, \"coverage\": 87.5},\n    \"imports\": {\"total\": 50, \"failed\": 0, \"optional_failed\": 1, \"critical_failed\": 0},\n    \"types\": {\"errors\": 0, \"warnings\": 5},\n    \"lint\": {\"errors\": 0, \"warnings\": 12, \"fixed\": 8},\n    \"smoke\": {\"total\": 14, \"passed\": 14},\n    \"docs\": {\"missing_docstrings\": 3}\n  }\n}\n\nOn subsequent runs, report delta:\n\nTests:      134 → 140 (+6 ✅)\nCoverage:   87% → 91% (+4% ✅)\nType errors: 0 → 0 (✅)\nLint warnings: 12 → 5 (-7 ✅)"
      },
      {
        "title": "Report Output",
        "body": "Generate in 3 formats:\n\nMarkdown (qc-report.md) — full detailed report for humans\nJSON (.qc-baseline.json) — machine-readable for CI/comparison\nSummary (chat message) — 10-line digest for Discord/Slack"
      },
      {
        "title": "Summary Format Example",
        "body": "📊 QC Report: my-project @ abc123\nVerdict: ✅ PASS WITH WARNINGS\n\nTests:    134/134 passed (100%) | Coverage: 87%\nTypes:    0 errors\nLint:     0 errors, 12 warnings\nImports:  50/50 (1 optional failed)\nSmoke:    14/14 passed\n\n⚠️ Warnings:\n- 3 missing docstrings\n- 12 lint warnings (run with --fix)"
      },
      {
        "title": "Language-Specific Profiles",
        "body": "Read the appropriate profile before running:\n\nPython: references/python-profile.md\nTypeScript: references/typescript-profile.md\nGDScript: references/gdscript-profile.md\nGeneral (any language): references/general-profile.md"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/IsonaEi/code-qc",
    "publisherUrl": "https://clawhub.ai/IsonaEi/code-qc",
    "owner": "IsonaEi",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/code-qc",
    "downloadUrl": "https://openagent3.xyz/downloads/code-qc",
    "agentUrl": "https://openagent3.xyz/skills/code-qc/agent",
    "manifestUrl": "https://openagent3.xyz/skills/code-qc/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/code-qc/agent.md"
  }
}