{
  "schemaVersion": "1.0",
  "item": {
    "slug": "tcc-quality-gates",
    "name": "Generic Quality Gateways for Unattended Agent Development",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/TheCyberCore/tcc-quality-gates",
    "canonicalUrl": "https://clawhub.ai/TheCyberCore/tcc-quality-gates",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/tcc-quality-gates",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=tcc-quality-gates",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "templ/quality-gateway-definition-template.json"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/tcc-quality-gates"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/tcc-quality-gates",
    "agentPageUrl": "https://openagent3.xyz/skills/tcc-quality-gates/agent",
    "manifestUrl": "https://openagent3.xyz/skills/tcc-quality-gates/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/tcc-quality-gates/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Purpose",
        "body": "This skill defines and applies 6 universal quality gateways for typical application projects that include:\n\nBackend API services (any stack)\nWeb frontends (any stack)\nCI/CD pipelines (any provider)\n\nThe gateways are written in LLM-friendly operational language: how to check, calculate, evaluate, and document results consistently.\n\nThis skill is language-agnostic and can be used on any repository. It relies on a central configuration file:\n\n.defs/quality-gateway-definition.json (MUST be stored in the repository, not the workspace)"
      },
      {
        "title": "Non-Negotiable Storage Rules (openClaw)",
        "body": "The gateway definition file MUST be placed in: REPO_ROOT/.defs/quality-gateway-definition.json\nTemporary files MUST go to: REPO_ROOT/.tmp/quality-gates/ (do not create or delete other workspace directories)\nReports MUST be written to repository paths defined in the JSON config (default suggested below)"
      },
      {
        "title": "Inputs",
        "body": "Repository root path (REPO_ROOT)\nOptional CI artifacts path (if provided by the runtime)\nOptional commit range (for PR-focused evaluation)\nOptional environment notes (target load, environments, risk level)"
      },
      {
        "title": "Outputs",
        "body": "A human-readable report (Markdown)\nA machine-readable report (JSON) containing raw metrics + per-check scores\nEvidence references (paths, snippets, CI links if available)\n\nRecommended default output paths (override via JSON config):\n\ndocs/quality/quality-gate-report.md\ndocs/quality/quality-gate-report.json\nEvidence directory: docs/quality/evidence/"
      },
      {
        "title": "The 6 Quality Gateways",
        "body": "Each gateway produces:\n\nScore: 0–100\nStatus: PASS / WARN / FAIL\nBlocking behavior: some gateways are “blocking” (FAIL blocks release)\n\nAll gateway thresholds and weights come from:\n\n.defs/quality-gateway-definition.json"
      },
      {
        "title": "Goal",
        "body": "Ensure the system can be built and packaged reliably, and dependencies are manageable and safe to ship."
      },
      {
        "title": "What to Check (typical checks)",
        "body": "CI pipeline status (green on default branch / PR)\nReproducible build or deterministic packaging indicators\nDependency freshness (stale/outdated dependencies)\nLicense policy compliance (allowlist/denylist)\nSBOM presence (if required)"
      },
      {
        "title": "How to Measure / Calculate",
        "body": "Boolean checks: PASS=100, FAIL=0\nRatio checks (e.g., “outdated deps %”): scale 0–100 using thresholds\nPolicy checks: hard FAIL if a forbidden license is detected (if enabled)"
      },
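      {
        "title": "Worked Example: Ratio Check Scoring (illustrative)",
        "body": "A sketch of the ratio-check mapping above with made-up thresholds (not defaults shipped with this skill): suppose the outdated-dependencies check uses target=5 (%), warn=20 (%), direction=lower_is_better, and the measured value is 12%. The value sits between target and warn, so it maps linearly into the 70–99 band: score = 70 + (20 - 12) / (20 - 5) * 29 ≈ 85. At or below 5% the score is 100; above 20% it falls into the 0–69 band."
      },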
      {
        "title": "Evidence to Collect",
        "body": "CI job summary (or local build logs)\nDependency list report output (tool-specific, but keep the report file)\nSBOM artifact path (if present)\nLicense scan output (if used)"
      },
      {
        "title": "How to Document",
        "body": "In the report, include:\n\nBuild command/pipeline name\nArtifact identifiers / versions\nSummary of dependency deltas and policy results"
      },
      {
        "title": "Goal",
        "body": "Prove correctness through automated tests and prevent regression."
      },
      {
        "title": "What to Check",
        "body": "Unit tests pass\nIntegration/API tests pass (or contract tests)\nE2E/smoke tests pass (for web apps)\nCode coverage meets thresholds (overall + critical components)\nFlaky test rate is controlled (if CI provides retries/flakes)"
      },
      {
        "title": "How to Measure / Calculate",
        "body": "Test pass: boolean\nCoverage: numeric percentage\n\nScore mapping example:\n\n\n\n= target: 100\n\n\nbetween warn and target: linear 70–99\nbelow warn: linear 0–69\n\n\n\n\nOptional “critical path coverage” gets extra weight"
      },
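      {
        "title": "Worked Example: Coverage Scoring (illustrative)",
        "body": "Applying the mapping above with assumed thresholds target=80 (%), warn=60 (%), direction=higher_is_better: measured overall coverage of 72% lies between warn and target, so score = 70 + (72 - 60) / (80 - 60) * 29 ≈ 87. Coverage of 80% or more scores 100; below 60% the score scales linearly within 0–69. Whether 87 counts as PASS or WARN depends on the configured passScore and warnScore."
      },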
      {
        "title": "Evidence to Collect",
        "body": "Test run outputs (JUnit/TRX/etc.)\nCoverage summary files\nList of failed tests (if any) + links"
      },
      {
        "title": "How to Document",
        "body": "Test suites executed\nCoverage numbers (overall + key areas)\nNotes on skipped tests (if allowed) and rationale"
      },
      {
        "title": "Goal",
        "body": "Prevent known vulnerabilities, secrets leakage, insecure configs, and supply-chain risks."
      },
      {
        "title": "What to Check",
        "body": "Dependency vulnerabilities (Critical/High/Medium counts)\nSecret scanning results (must be zero leaked secrets)\nBasic secure configuration checks (CSP, TLS, auth boundaries) where applicable\nSAST findings severity counts (if tooling exists)\nContainer image scan (if containers exist)"
      },
      {
        "title": "How to Measure / Calculate",
        "body": "Vulnerability gating (typical):\n\nCritical = 0 required (FAIL otherwise)\nHigh = 0 required (or <= allowedHigh)\nMedium allowed up to a budget (WARN if above warn)\n\n\nSecrets: any secret finding => FAIL (blocking)\nScore: start at 100 and subtract penalties by severity and count (config-driven)"
      },
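      {
        "title": "Worked Example: Penalty Scoring (illustrative)",
        "body": "A sketch of penalty_by_count scoring with assumed per-severity penalties (the real values are config-driven): 25 points per High and 5 points per Medium finding. With 0 Critical, 1 High, and 4 Medium findings, score = 100 - (1 * 25) - (4 * 5) = 55. Any leaked secret or Critical vulnerability would instead force a blocking FAIL regardless of the numeric score."
      },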
      {
        "title": "Evidence to Collect",
        "body": "Vulnerability scan report files\nSecret scan output (including file paths and fingerprint IDs, not actual secrets)\nSAST report snippet/summary"
      },
      {
        "title": "How to Document",
        "body": "Severity counts and whether exceptions exist\nAny exception MUST include: reason, owner, expiry date (if your org uses waivers)"
      },
      {
        "title": "Goal",
        "body": "Ensure the system meets baseline performance and user experience targets."
      },
      {
        "title": "What to Check",
        "body": "API (typical):\n\np95 latency under target\nError rate under target\nThroughput meets expected load (if known)\n\nWeb (typical):\n\nCore Web Vitals (LCP, CLS, INP) on a reference device/profile\nBundle size / asset weight thresholds (optional)"
      },
      {
        "title": "How to Measure / Calculate",
        "body": "Latency score:\n\np95 <= target: 100\nbetween target and warn: linear 70–99\n\n\nwarn: 0–69 (linear), with hard FAIL if beyond “max”\n\n\n\n\nError rate:\n\n<= target: 100\n<= warn: 70–99\n\n\nwarn: 0–69, FAIL if beyond max\n\n\n\n\nWeb vitals:\n\nEach metric scored independently; weighted into a single web score"
      },
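      {
        "title": "Worked Example: Latency Scoring (illustrative)",
        "body": "With assumed thresholds target=300 ms, warn=500 ms, max=1000 ms (direction=lower_is_better): a measured p95 of 420 ms falls between target and warn, so score = 70 + (500 - 420) / (500 - 300) * 29 ≈ 82. A p95 at or under 300 ms scores 100; between 500 ms and 1000 ms the score scales within 0–69; beyond 1000 ms the check hard-FAILs."
      },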
      {
        "title": "Evidence to Collect",
        "body": "Load test or benchmark outputs (k6/JMeter/etc.)\nAPM snapshots (if available)\nLighthouse or Web Vitals report exports"
      },
      {
        "title": "How to Document",
        "body": "Test conditions: environment, dataset size, concurrency, device profile\nKey p95 / error rate / vitals values\nNotable regressions vs baseline"
      },
      {
        "title": "Goal",
        "body": "Keep the codebase understandable, changeable, and reviewable over time."
      },
      {
        "title": "What to Check",
        "body": "Static analysis quality (lint errors, rule violations)\nComplexity thresholds (cyclomatic complexity, large functions/classes)\nDuplication rate\n“Change risk” signals (hotspots: frequent churn + complexity)\nDocumentation coverage for public APIs (e.g., endpoint docs, component docs)"
      },
      {
        "title": "How to Measure / Calculate",
        "body": "Issue density: findings per KLOC (or per file for smaller repos)\nComplexity score: percentage of units exceeding complexity threshold\nDuplication: % duplicated lines\nScore: weighted average of normalized sub-scores (config-driven)"
      },
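      {
        "title": "Worked Example: Issue Density (illustrative)",
        "body": "Issue density is findings divided by thousands of lines of code: 120 static-analysis findings in a 24 KLOC repository give 120 / 24 = 5 findings per KLOC. That raw metric is then normalized with a threshold_range mapping (for example, an assumed target of 2 and warn of 8 per KLOC) before entering the weighted gateway average."
      },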
      {
        "title": "Evidence to Collect",
        "body": "Static analysis summaries\nComplexity and duplication reports (any tool is fine; store outputs)\nList of top hotspots and why (files + metrics)"
      },
      {
        "title": "How to Document",
        "body": "Top 10 problems by impact\nConcrete refactoring suggestions only if asked; otherwise just findings"
      },
      {
        "title": "Goal",
        "body": "Make sure the system can be operated safely in production."
      },
      {
        "title": "What to Check",
        "body": "Health endpoints exist and are meaningful\nLogging is structured and includes correlation IDs\nMetrics and dashboards exist for key signals (latency, error rate, saturation)\nAlerts configured for SLO breaches / error budget burn (if applicable)\nRunbooks for major failure modes exist (deploy rollback, incident triage)\nVersioning and changelog/release notes exist"
      },
      {
        "title": "How to Measure / Calculate",
        "body": "Mostly “presence + completeness” scoring:\n\nEach required artifact is a boolean check\nOptional maturity rubric: 0 (missing), 50 (partial), 100 (complete)\nBlocking if “minimum operability” is not met (config-driven)"
      },
      {
        "title": "Evidence to Collect",
        "body": "Paths to runbooks, dashboards-as-code, alert configs\nSample log/metric/tracing docs\nOn-call/ops notes (if present)"
      },
      {
        "title": "How to Document",
        "body": "List missing operational artifacts\nMinimum go-live checklist status"
      },
      {
        "title": "Step 1: Load configuration",
        "body": "Read REPO_ROOT/.defs/quality-gateway-definition.json\nValidate it against the schema description (see below)\nIf fields are missing, use documented defaults from the JSON"
      },
      {
        "title": "Step 2: Collect metrics per check",
        "body": "For each gate:\n\nFor each check:\n\nIdentify data source:\n\nPrefer CI artifacts if provided\nOtherwise use repository files and local commands (if allowed by runtime)\n\n\nProduce a metric value (number/boolean/string) and evidence references"
      },
      {
        "title": "Step 3: Score each check (0–100)",
        "body": "Use the scoring method defined per check:\n\nboolean: pass => 100, fail => 0\nthreshold_range: linear scoring between warn and target\npenalty_by_count: start at 100 and subtract per issue\nrubric: map {missing/partial/complete} to {0/50/100}"
      },
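      {
        "title": "Example: Scored Checks (illustrative)",
        "body": "A minimal sketch of scored checks, one per scoring method. The record shape and field names are assumptions for illustration; the skill does not prescribe an exact per-check format:\n\n[\n  { \"id\": \"ci_green\", \"scoringMethod\": \"boolean\", \"metric\": true, \"score\": 100 },\n  { \"id\": \"coverage_overall\", \"scoringMethod\": \"threshold_range\", \"metric\": 72, \"score\": 87 },\n  { \"id\": \"medium_vulns\", \"scoringMethod\": \"penalty_by_count\", \"metric\": 4, \"score\": 80 },\n  { \"id\": \"runbooks\", \"scoringMethod\": \"rubric\", \"metric\": \"partial\", \"score\": 50 }\n]"
      },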
      {
        "title": "Step 4: Score each gateway",
        "body": "Compute weighted average of its checks\nDetermine gateway status using configured thresholds:\n\nScore >= passScore => PASS\nScore >= warnScore => WARN\nelse => FAIL\n\n\nIf gateway is marked blockingOnFail=true, any FAIL blocks release"
      },
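      {
        "title": "Worked Example: Gateway Score (illustrative)",
        "body": "Weighted average with assumed numbers: three checks scoring 100, 87, and 60 with weights 2, 1, and 1 give (100*2 + 87*1 + 60*1) / (2 + 1 + 1) = 347 / 4 ≈ 87. With passScore=85 and warnScore=70 the gateway is PASS; a score of 84 would be WARN; below 70 it would be FAIL, and that FAIL blocks the release only when blockingOnFail=true."
      },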
      {
        "title": "Step 5: Produce reports",
        "body": "Write:\n\nMarkdown report (human)\nJSON report (machine)\r\nInclude:\n\nper-gateway score/status\nper-check metrics + evidence paths\noverall score and overall status\nexplicit “BLOCKERS” list if any"
      },
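      {
        "title": "Example: JSON Report Skeleton (illustrative)",
        "body": "A sketch of the machine-readable report. The skill leaves the exact schema open, so these field names and the evidence path are assumptions:\n\n{\n  \"overallScore\": 87,\n  \"overallStatus\": \"PASS\",\n  \"blockers\": [],\n  \"gateways\": [\n    {\n      \"id\": \"testing_coverage\",\n      \"score\": 87,\n      \"status\": \"PASS\",\n      \"checks\": [\n        { \"id\": \"coverage_overall\", \"metric\": 72, \"score\": 87, \"evidence\": [\"docs/quality/evidence/coverage-summary.txt\"] }\n      ]\n    }\n  ]\n}"
      },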
      {
        "title": "Report Template (Markdown)",
        "body": "Use this outline in docs/quality/quality-gate-report.md unless JSON overrides paths:"
      },
      {
        "title": "Summary",
        "body": "Overall Score:\nOverall Status:\nBlocking Failures:\nDate/Commit:"
      },
      {
        "title": "Gateway Results",
        "body": "GatewayScoreStatusKey MetricsEvidence"
      },
      {
        "title": "<Gateway Name>",
        "body": "Score/Status\nChecks:\n\n<Check>: metric=..., score=..., evidence=...\n\n\nNotes / Exceptions"
      },
      {
        "title": "quality-gateway-definition.json — JSON Schema Description",
        "body": "The configuration file is a normal JSON document with:"
      },
      {
        "title": "Root",
        "body": "schemaVersion (string) — version of this config layout\nprojectProfile (object) — context used for defaults\nscoring (object) — global pass/warn thresholds and aggregation rules\nreporting (object) — output paths and evidence folder\ngates (array) — list of gateway definitions (exactly 6 for this skill)"
      },
      {
        "title": "projectProfile (object)",
        "body": "applicationType (string) — e.g. \"web_api_and_web\"\nriskLevel (string) — \"low\"|\"medium\"|\"high\"\nreleaseCadence (string) — e.g. \"daily\"|\"weekly\"|\"monthly\"\nexpectedLoad (object, optional)\n\napiRps (number)\nconcurrency (number)"
      },
      {
        "title": "scoring (object)",
        "body": "passScore (number 0–100)\nwarnScore (number 0–100)\noverallAggregation (string) — \"weighted_average\"\nblockIfAnyBlockingGateFails (boolean)"
      },
      {
        "title": "reporting (object)",
        "body": "markdownReportPath (string, repo-relative)\njsonReportPath (string, repo-relative)\nevidenceDir (string, repo-relative)\ntempDir (string, repo-relative; MUST be inside .tmp/quality-gates/)"
      },
      {
        "title": "gates (array of objects)",
        "body": "Each gate:\n\nid (string) — stable identifier\nname (string)\ndescription (string)\nweight (number) — relative importance in overall score\nblockingOnFail (boolean)\nchecks (array)"
      },
      {
        "title": "checks (array of objects)",
        "body": "Each check:\n\nid (string)\nname (string)\ndescription (string)\nweight (number)\nmetricType (string) — \"boolean\"|\"percentage\"|\"count\"|\"duration_ms\"|\"rubric\"\nscoringMethod (string) — \"boolean\"|\"threshold_range\"|\"penalty_by_count\"|\"rubric\"\nthresholds (object) — depends on scoringMethod:\n\nfor threshold_range:\n\ntarget (number)\nwarn (number)\nmax (number, optional hard-fail)\ndirection (string) — \"higher_is_better\"|\"lower_is_better\"\n\n\nfor penalty_by_count:\n\nallowed (number)\nwarnAbove (number)\nfailAbove (number)\npenaltyPerUnit (number)\n\n\n\n\nevidenceHints (array of strings) — where to find evidence in a generic repo/CI\nnotes (string, optional)"
      },
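      {
        "title": "Example Configuration (illustrative)",
        "body": "A minimal sketch of .defs/quality-gateway-definition.json following the field descriptions above. A real file needs all 6 gates; the ids, names, and numbers here are placeholders, not recommended defaults:\n\n{\n  \"schemaVersion\": \"1.0\",\n  \"projectProfile\": {\n    \"applicationType\": \"web_api_and_web\",\n    \"riskLevel\": \"medium\",\n    \"releaseCadence\": \"weekly\"\n  },\n  \"scoring\": {\n    \"passScore\": 85,\n    \"warnScore\": 70,\n    \"overallAggregation\": \"weighted_average\",\n    \"blockIfAnyBlockingGateFails\": true\n  },\n  \"reporting\": {\n    \"markdownReportPath\": \"docs/quality/quality-gate-report.md\",\n    \"jsonReportPath\": \"docs/quality/quality-gate-report.json\",\n    \"evidenceDir\": \"docs/quality/evidence/\",\n    \"tempDir\": \".tmp/quality-gates/\"\n  },\n  \"gates\": [\n    {\n      \"id\": \"testing_coverage\",\n      \"name\": \"Automated Testing & Coverage\",\n      \"description\": \"Tests pass and coverage meets thresholds\",\n      \"weight\": 1,\n      \"blockingOnFail\": true,\n      \"checks\": [\n        {\n          \"id\": \"coverage_overall\",\n          \"name\": \"Overall coverage\",\n          \"description\": \"Line coverage across the repository\",\n          \"weight\": 1,\n          \"metricType\": \"percentage\",\n          \"scoringMethod\": \"threshold_range\",\n          \"thresholds\": { \"target\": 80, \"warn\": 60, \"direction\": \"higher_is_better\" },\n          \"evidenceHints\": [\"coverage summary artifact from CI\"]\n        }\n      ]\n    }\n  ]\n}"
      },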
      {
        "title": "Operational Notes",
        "body": "If a metric cannot be measured, do NOT invent numbers.\n\nMark the check as \"unknown\" in the JSON report and score it using the config’s fallback rule (recommended: treat unknown as WARN with score 70 unless the check is security/secrets, where unknown should be FAIL).\n\n\nAlways include evidence references (paths or CI artifact names).\nKeep all temp work inside .tmp/quality-gates/."
      },
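      {
        "title": "Example: Recording an Unknown Check (illustrative)",
        "body": "A sketch of how an unmeasurable check might appear in the JSON report under the fallback rule above; the field names are assumptions:\n\n{\n  \"id\": \"flaky_test_rate\",\n  \"metric\": null,\n  \"status\": \"unknown\",\n  \"score\": 70,\n  \"notes\": \"CI does not expose retry/flake data; scored with the unknown-as-WARN fallback\"\n}"
      },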
      {
        "title": "JSON references",
        "body": "templ/quality-gateway-definition-template.json (template settings file. Can be copied to REPO_ROOT/.defs/quality-gateway-definition.json if missing)"
      }
    ],
    "body": "openClaw Skill: Quality Gateways (Generic Web + API Applications)\nPurpose\n\nThis skill defines and applies 6 universal quality gateways for typical application projects that include:\n\nBackend API services (any stack)\nWeb frontends (any stack)\nCI/CD pipelines (any provider)\n\nThe gateways are written in LLM-friendly operational language: how to check, calculate, evaluate, and document results consistently.\n\nThis skill is language-agnostic and can be used on any repository. It relies on a central configuration file:\n\n.defs/quality-gateway-definition.json (MUST be stored in the repository, not the workspace)\nNon-Negotiable Storage Rules (openClaw)\nThe gateway definition file MUST be placed in: REPO_ROOT/.defs/quality-gateway-definition.json\nTemporary files MUST go to: REPO_ROOT/.tmp/quality-gates/ (do not create or delete other workspace directories)\nReports MUST be written to repository paths defined in the JSON config (default suggested below)\nInputs\nRepository root path (REPO_ROOT)\nOptional CI artifacts path (if provided by the runtime)\nOptional commit range (for PR-focused evaluation)\nOptional environment notes (target load, environments, risk level)\nOutputs\nA human-readable report (Markdown)\nA machine-readable report (JSON) containing raw metrics + per-check scores\nEvidence references (paths, snippets, CI links if available)\n\nRecommended default output paths (override via JSON config):\n\ndocs/quality/quality-gate-report.md\ndocs/quality/quality-gate-report.json\nEvidence directory: docs/quality/evidence/\nThe 6 Quality Gateways\n\nEach gateway produces:\n\nScore: 0–100\nStatus: PASS / WARN / FAIL\nBlocking behavior: some gateways are “blocking” (FAIL blocks release)\n\nAll gateway thresholds and weights come from:\n\n.defs/quality-gateway-definition.json\nGateway 1 — Build & Dependency Health\nGoal\n\nEnsure the system can be built and packaged reliably, and dependencies are manageable and safe to ship.\n\nWhat to Check (typical checks)\nCI pipeline status (green on default branch / PR)\nReproducible build or deterministic packaging indicators\nDependency freshness (stale/outdated dependencies)\nLicense policy compliance (allowlist/denylist)\nSBOM presence (if required)\nHow to Measure / Calculate\nBoolean checks: PASS=100, FAIL=0\nRatio checks (e.g., “outdated deps %”): scale 0–100 using thresholds\nPolicy checks: hard FAIL if a forbidden license is detected (if enabled)\nEvidence to Collect\nCI job summary (or local build logs)\nDependency list report output (tool-specific, but keep the report file)\nSBOM artifact path (if present)\nLicense scan output (if used)\nHow to Document\n\nIn the report, include:\n\nBuild command/pipeline name\nArtifact identifiers / versions\nSummary of dependency deltas and policy results\nGateway 2 — Automated Testing & Coverage\nGoal\n\nProve correctness through automated tests and prevent regression.\n\nWhat to Check\nUnit tests pass\nIntegration/API tests pass (or contract tests)\nE2E/smoke tests pass (for web apps)\nCode coverage meets thresholds (overall + critical components)\nFlaky test rate is controlled (if CI provides retries/flakes)\nHow to Measure / Calculate\nTest pass: boolean\nCoverage: numeric percentage\nScore mapping example:\n\n= target: 100\n\nbetween warn and target: linear 70–99\nbelow warn: linear 0–69\nOptional “critical path coverage” gets extra weight\nEvidence to Collect\nTest run outputs (JUnit/TRX/etc.)\nCoverage summary files\nList of failed tests (if any) + links\nHow to Document\nTest 
suites executed\nCoverage numbers (overall + key areas)\nNotes on skipped tests (if allowed) and rationale\nGateway 3 — Security & Supply-Chain\nGoal\n\nPrevent known vulnerabilities, secrets leakage, insecure configs, and supply-chain risks.\n\nWhat to Check\nDependency vulnerabilities (Critical/High/Medium counts)\nSecret scanning results (must be zero leaked secrets)\nBasic secure configuration checks (CSP, TLS, auth boundaries) where applicable\nSAST findings severity counts (if tooling exists)\nContainer image scan (if containers exist)\nHow to Measure / Calculate\nVulnerability gating (typical):\nCritical = 0 required (FAIL otherwise)\nHigh = 0 required (or <= allowedHigh)\nMedium allowed up to a budget (WARN if above warn)\nSecrets: any secret finding => FAIL (blocking)\nScore: start at 100 and subtract penalties by severity and count (config-driven)\nEvidence to Collect\nVulnerability scan report files\nSecret scan output (including file paths and fingerprint IDs, not actual secrets)\nSAST report snippet/summary\nHow to Document\nSeverity counts and whether exceptions exist\nAny exception MUST include: reason, owner, expiry date (if your org uses waivers)\nGateway 4 — Performance & Efficiency (API + Web)\nGoal\n\nEnsure the system meets baseline performance and user experience targets.\n\nWhat to Check\n\nAPI (typical):\n\np95 latency under target\nError rate under target\nThroughput meets expected load (if known)\n\nWeb (typical):\n\nCore Web Vitals (LCP, CLS, INP) on a reference device/profile\nBundle size / asset weight thresholds (optional)\nHow to Measure / Calculate\nLatency score:\np95 <= target: 100\nbetween target and warn: linear 70–99\n\nwarn: 0–69 (linear), with hard FAIL if beyond “max”\n\nError rate:\n<= target: 100\n<= warn: 70–99\n\nwarn: 0–69, FAIL if beyond max\n\nWeb vitals:\nEach metric scored independently; weighted into a single web score\nEvidence to Collect\nLoad test or benchmark outputs (k6/JMeter/etc.)\nAPM snapshots (if available)\nLighthouse or Web Vitals report exports\nHow to Document\nTest conditions: environment, dataset size, concurrency, device profile\nKey p95 / error rate / vitals values\nNotable regressions vs baseline\nGateway 5 — Maintainability & Code Health\nGoal\n\nKeep the codebase understandable, changeable, and reviewable over time.\n\nWhat to Check\nStatic analysis quality (lint errors, rule violations)\nComplexity thresholds (cyclomatic complexity, large functions/classes)\nDuplication rate\n“Change risk” signals (hotspots: frequent churn + complexity)\nDocumentation coverage for public APIs (e.g., endpoint docs, component docs)\nHow to Measure / Calculate\nIssue density: findings per KLOC (or per file for smaller repos)\nComplexity score: percentage of units exceeding complexity threshold\nDuplication: % duplicated lines\nScore: weighted average of normalized sub-scores (config-driven)\nEvidence to Collect\nStatic analysis summaries\nComplexity and duplication reports (any tool is fine; store outputs)\nList of top hotspots and why (files + metrics)\nHow to Document\nTop 10 problems by impact\nConcrete refactoring suggestions only if asked; otherwise just findings\nGateway 6 — Release Readiness & Operability (Observability + Runbooks)\nGoal\n\nMake sure the system can be operated safely in production.\n\nWhat to Check\nHealth endpoints exist and are meaningful\nLogging is structured and includes correlation IDs\nMetrics and dashboards exist for key signals (latency, error rate, saturation)\nAlerts configured for SLO breaches / error 
budget burn (if applicable)\nRunbooks for major failure modes exist (deploy rollback, incident triage)\nVersioning and changelog/release notes exist\nHow to Measure / Calculate\n\nMostly “presence + completeness” scoring:\n\nEach required artifact is a boolean check\nOptional maturity rubric: 0 (missing), 50 (partial), 100 (complete)\nBlocking if “minimum operability” is not met (config-driven)\nEvidence to Collect\nPaths to runbooks, dashboards-as-code, alert configs\nSample log/metric/tracing docs\nOn-call/ops notes (if present)\nHow to Document\nList missing operational artifacts\nMinimum go-live checklist status\nStandard Evaluation Algorithm (LLM-Executable)\nStep 1: Load configuration\nRead REPO_ROOT/.defs/quality-gateway-definition.json\nValidate it against the schema description (see below)\nIf fields are missing, use documented defaults from the JSON\nStep 2: Collect metrics per check\n\nFor each gate:\n\nFor each check:\nIdentify data source:\nPrefer CI artifacts if provided\nOtherwise use repository files and local commands (if allowed by runtime)\nProduce a metric value (number/boolean/string) and evidence references\nStep 3: Score each check (0–100)\n\nUse the scoring method defined per check:\n\nboolean: pass => 100, fail => 0\nthreshold_range: linear scoring between warn and target\npenalty_by_count: start at 100 and subtract per issue\nrubric: map {missing/partial/complete} to {0/50/100}\nStep 4: Score each gateway\nCompute weighted average of its checks\nDetermine gateway status using configured thresholds:\nScore >= passScore => PASS\nScore >= warnScore => WARN\nelse => FAIL\nIf gateway is marked blockingOnFail=true, any FAIL blocks release\nStep 5: Produce reports\n\nWrite:\n\nMarkdown report (human)\nJSON report (machine) Include:\nper-gateway score/status\nper-check metrics + evidence paths\noverall score and overall status\nexplicit “BLOCKERS” list if any\nReport Template (Markdown)\n\nUse this outline in docs/quality/quality-gate-report.md unless JSON overrides paths:\n\nSummary\nOverall Score:\nOverall Status:\nBlocking Failures:\nDate/Commit:\nGateway Results\nGateway\tScore\tStatus\tKey Metrics\tEvidence\nDetails (per Gateway)\n<Gateway Name>\nScore/Status\nChecks:\n<Check>: metric=..., score=..., evidence=...\nNotes / Exceptions\nquality-gateway-definition.json — JSON Schema Description\n\nThe configuration file is a normal JSON document with:\n\nRoot\nschemaVersion (string) — version of this config layout\nprojectProfile (object) — context used for defaults\nscoring (object) — global pass/warn thresholds and aggregation rules\nreporting (object) — output paths and evidence folder\ngates (array) — list of gateway definitions (exactly 6 for this skill)\nprojectProfile (object)\napplicationType (string) — e.g. \"web_api_and_web\"\nriskLevel (string) — \"low\"|\"medium\"|\"high\"\nreleaseCadence (string) — e.g. 
\"daily\"|\"weekly\"|\"monthly\"\nexpectedLoad (object, optional)\napiRps (number)\nconcurrency (number)\nscoring (object)\npassScore (number 0–100)\nwarnScore (number 0–100)\noverallAggregation (string) — \"weighted_average\"\nblockIfAnyBlockingGateFails (boolean)\nreporting (object)\nmarkdownReportPath (string, repo-relative)\njsonReportPath (string, repo-relative)\nevidenceDir (string, repo-relative)\ntempDir (string, repo-relative; MUST be inside .tmp/quality-gates/)\ngates (array of objects)\n\nEach gate:\n\nid (string) — stable identifier\nname (string)\ndescription (string)\nweight (number) — relative importance in overall score\nblockingOnFail (boolean)\nchecks (array)\nchecks (array of objects)\n\nEach check:\n\nid (string)\nname (string)\ndescription (string)\nweight (number)\nmetricType (string) — \"boolean\"|\"percentage\"|\"count\"|\"duration_ms\"|\"rubric\"\nscoringMethod (string) — \"boolean\"|\"threshold_range\"|\"penalty_by_count\"|\"rubric\"\nthresholds (object) — depends on scoringMethod:\nfor threshold_range:\ntarget (number)\nwarn (number)\nmax (number, optional hard-fail)\ndirection (string) — \"higher_is_better\"|\"lower_is_better\"\nfor penalty_by_count:\nallowed (number)\nwarnAbove (number)\nfailAbove (number)\npenaltyPerUnit (number)\nevidenceHints (array of strings) — where to find evidence in a generic repo/CI\nnotes (string, optional)\nOperational Notes\nIf a metric cannot be measured, do NOT invent numbers.\nMark the check as \"unknown\" in the JSON report and score it using the config’s fallback rule (recommended: treat unknown as WARN with score 70 unless the check is security/secrets, where unknown should be FAIL).\nAlways include evidence references (paths or CI artifact names).\nKeep all temp work inside .tmp/quality-gates/.\nJSON references\ntempl/quality-gateway-definition-template.json (template settings file. Can be copied to REPO_ROOT/.defs/quality-gateway-definition.json if missing)"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/TheCyberCore/tcc-quality-gates",
    "publisherUrl": "https://clawhub.ai/TheCyberCore/tcc-quality-gates",
    "owner": "TheCyberCore",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/tcc-quality-gates",
    "downloadUrl": "https://openagent3.xyz/downloads/tcc-quality-gates",
    "agentUrl": "https://openagent3.xyz/skills/tcc-quality-gates/agent",
    "manifestUrl": "https://openagent3.xyz/skills/tcc-quality-gates/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/tcc-quality-gates/agent.md"
  }
}