{
  "schemaVersion": "1.0",
  "item": {
    "slug": "quorum",
    "name": "Quorum",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/dacervera/quorum",
    "canonicalUrl": "https://clawhub.ai/dacervera/quorum",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/quorum",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=quorum",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "CLAUDE.md",
      "CONTRIBUTING.md",
      "README.md",
      "SHIPPING.md",
      "SKILL.md",
      "SPEC.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
        "contentDisposition": "attachment; filename=\"afrexai-annual-report-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/quorum"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/quorum",
    "agentPageUrl": "https://openagent3.xyz/skills/quorum/agent",
    "manifestUrl": "https://openagent3.xyz/skills/quorum/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/quorum/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Quorum — Multi-Agent Validation",
        "body": "Quorum validates AI agent outputs by spawning multiple independent critics that evaluate artifacts against rubrics. Every criticism must cite evidence. You get a structured verdict."
      },
      {
        "title": "Quick Start",
        "body": "Clone the repository and install:\n\ngit clone https://github.com/SharedIntellect/quorum.git\ncd quorum/reference-implementation\npip install -r requirements.txt\n\nRun a quorum check on any file:\n\npython -m quorum.cli run --target <path-to-artifact> --rubric <rubric-name>"
      },
      {
        "title": "Built-in Rubrics",
        "body": "research-synthesis — Research reports, literature reviews, technical analyses\nagent-config — Agent configurations, YAML specs, system prompts\npython-code — Python source files (25 criteria, PC-001–PC-025; auto-detected on .py files)"
      },
      {
        "title": "Depth Profiles",
        "body": "quick — 2 critics (correctness, completeness) + pre-screen, ~5-10 min\nstandard — 4 active (correctness, completeness, security + tester) + pre-screen, ~15-30 min (default)\nthorough — 5 active (+ code_hygiene) + pre-screen + fix loops, ~30-60 min\n\n†Cross-Consistency requires --relationships flag with a relationships manifest.\n\nAll depth profiles include the deterministic pre-screen (10 checks: credentials, PII, syntax errors, broken links, TODOs, and more) before any LLM critic runs."
      },
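      {
        "title": "Choosing a Depth (Sketch)",
        "body": "A minimal sketch of selecting a profile per run. --depth quick and --depth standard appear in the Examples section below; --depth thorough is assumed from the profile name above, so confirm it against CONFIG_REFERENCE.md:\n\n# Fast pre-merge sanity check (~5-10 min)\nquorum run --target my-report.md --rubric research-synthesis --depth quick\n\n# Default four-critic pass (~15-30 min)\nquorum run --target my-report.md --rubric research-synthesis --depth standard\n\n# Full pass with fix loops (~30-60 min); flag value assumed, not documented here\nquorum run --target my-report.md --rubric research-synthesis --depth thorough"
      },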
      {
        "title": "Examples",
        "body": "# Validate a research report\nquorum run --target my-report.md --rubric research-synthesis\n\n# Quick check (faster, fewer critics)\nquorum run --target my-report.md --rubric research-synthesis --depth quick\n\n# Batch: validate all markdown files in a directory\nquorum run --target ./docs/ --pattern \"*.md\" --rubric research-synthesis\n\n# Cross-artifact consistency check\nquorum run --target ./src/ --relationships quorum-relationships.yaml --depth standard\n\n# Use a custom rubric\nquorum run --target my-spec.md --rubric ./my-rubric.json\n\n# List available rubrics\nquorum rubrics list\n\n# Initialize config interactively\nquorum config init"
      },
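      {
        "title": "Relationships Manifest (Hypothetical Sketch)",
        "body": "The cross-artifact example above passes --relationships quorum-relationships.yaml, but this page does not document the manifest schema; SPEC.md does. Every key below is an assumption for illustration only, not the documented format:\n\n# quorum-relationships.yaml -- all keys here are hypothetical\nrelationships:\n  - source: src/api.py      # hypothetical key: artifact making a claim\n    target: docs/api.md     # hypothetical key: artifact that must agree with it\n    kind: implements        # hypothetical key: expected relationship\n\nTreat this as a mental model only and confirm the real schema in SPEC.md before use."
      },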
      {
        "title": "Configuration",
        "body": "On first run, Quorum prompts for your preferred models and writes quorum-config.yaml. You can also create it manually:\n\nmodels:\n  tier_1: anthropic/claude-sonnet-4-6    # Judgment roles\n  tier_2: anthropic/claude-sonnet-4-6    # Evaluation roles\ndepth: standard\n\nSet your API key:\n\nexport ANTHROPIC_API_KEY=sk-ant-...\n# or\nexport OPENAI_API_KEY=sk-..."
      },
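      {
        "title": "Non-Interactive Setup (Sketch)",
        "body": "Since quorum-config.yaml can be created manually, a CI job can avoid the first-run prompt by writing the file before invoking Quorum. A minimal sketch, assuming a pre-existing config file suppresses the interactive prompt:\n\ncat > quorum-config.yaml <<'EOF'\nmodels:\n  tier_1: anthropic/claude-sonnet-4-6\n  tier_2: anthropic/claude-sonnet-4-6\ndepth: standard\nEOF\nexport ANTHROPIC_API_KEY=sk-ant-...   # placeholder; inject via your CI secret store\nquorum run --target ./docs/ --pattern \"*.md\" --rubric research-synthesis"
      },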
      {
        "title": "Output",
        "body": "Quorum produces a structured verdict:\n\nPASS — No significant issues found\nPASS_WITH_NOTES — Minor issues, artifact is usable\nREVISE — High/critical issues that need rework before proceeding\nREJECT — Unfixable problems; restart required\n\nExit codes: 0 = PASS/PASS_WITH_NOTES, 1 = error, 2 = REVISE/REJECT.\n\nEach finding includes: severity (CRITICAL/HIGH/MEDIUM/LOW), evidence citations pointing to specific locations in the artifact, and remediation suggestions. The run directory contains prescreen.json, per-critic finding JSONs, verdict.json, and a human-readable report.md."
      },
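      {
        "title": "Gating on the Verdict (Sketch)",
        "body": "Because the exit codes above distinguish pass, error, and revise/reject, a shell step can gate a pipeline directly on the verdict. A minimal sketch using only the documented codes:\n\nquorum run --target my-report.md --rubric research-synthesis\ncase $? in\n  0) echo \"PASS or PASS_WITH_NOTES -- continue\" ;;\n  2) echo \"REVISE or REJECT -- see report.md in the run directory\"; exit 1 ;;\n  *) echo \"Quorum itself failed to run\"; exit 1 ;;\nesac"
      },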
      {
        "title": "More Information",
        "body": "SPEC.md — Full architectural specification\nMODEL_REQUIREMENTS.md — Supported models and tiers\nCONFIG_REFERENCE.md — All configuration options\nFOR_BEGINNERS.md — New to agent validation? Start here\n\n⚖️ LICENSE — Not part of the operational specification above.\nThis file is part of Quorum.\nCopyright 2026 SharedIntellect. MIT License.\nSee LICENSE for full terms."
      }
    ],
    "body": "Quorum — Multi-Agent Validation\n\nQuorum validates AI agent outputs by spawning multiple independent critics that evaluate artifacts against rubrics. Every criticism must cite evidence. You get a structured verdict.\n\nQuick Start\n\nClone the repository and install:\n\ngit clone https://github.com/SharedIntellect/quorum.git\ncd quorum/reference-implementation\npip install -r requirements.txt\n\n\nRun a quorum check on any file:\n\npython -m quorum.cli run --target <path-to-artifact> --rubric <rubric-name>\n\nBuilt-in Rubrics\nresearch-synthesis — Research reports, literature reviews, technical analyses\nagent-config — Agent configurations, YAML specs, system prompts\npython-code — Python source files (25 criteria, PC-001–PC-025; auto-detected on .py files)\nDepth Profiles\nquick — 2 critics (correctness, completeness) + pre-screen, ~5-10 min\nstandard — 4 active (correctness, completeness, security + tester) + pre-screen, ~15-30 min (default)\nthorough — 5 active (+ code_hygiene) + pre-screen + fix loops, ~30-60 min\n\n†Cross-Consistency requires --relationships flag with a relationships manifest.\n\nAll depth profiles include the deterministic pre-screen (10 checks: credentials, PII, syntax errors, broken links, TODOs, and more) before any LLM critic runs.\n\nExamples\n# Validate a research report\nquorum run --target my-report.md --rubric research-synthesis\n\n# Quick check (faster, fewer critics)\nquorum run --target my-report.md --rubric research-synthesis --depth quick\n\n# Batch: validate all markdown files in a directory\nquorum run --target ./docs/ --pattern \"*.md\" --rubric research-synthesis\n\n# Cross-artifact consistency check\nquorum run --target ./src/ --relationships quorum-relationships.yaml --depth standard\n\n# Use a custom rubric\nquorum run --target my-spec.md --rubric ./my-rubric.json\n\n# List available rubrics\nquorum rubrics list\n\n# Initialize config interactively\nquorum config init\n\nConfiguration\n\nOn first run, Quorum prompts for your preferred models and writes quorum-config.yaml. You can also create it manually:\n\nmodels:\n  tier_1: anthropic/claude-sonnet-4-6    # Judgment roles\n  tier_2: anthropic/claude-sonnet-4-6    # Evaluation roles\ndepth: standard\n\n\nSet your API key:\n\nexport ANTHROPIC_API_KEY=sk-ant-...\n# or\nexport OPENAI_API_KEY=sk-...\n\nOutput\n\nQuorum produces a structured verdict:\n\nPASS — No significant issues found\nPASS_WITH_NOTES — Minor issues, artifact is usable\nREVISE — High/critical issues that need rework before proceeding\nREJECT — Unfixable problems; restart required\n\nExit codes: 0 = PASS/PASS_WITH_NOTES, 1 = error, 2 = REVISE/REJECT.\n\nEach finding includes: severity (CRITICAL/HIGH/MEDIUM/LOW), evidence citations pointing to specific locations in the artifact, and remediation suggestions. The run directory contains prescreen.json, per-critic finding JSONs, verdict.json, and a human-readable report.md.\n\nMore Information\nSPEC.md — Full architectural specification\nMODEL_REQUIREMENTS.md — Supported models and tiers\nCONFIG_REFERENCE.md — All configuration options\nFOR_BEGINNERS.md — New to agent validation? Start here\n\n⚖️ LICENSE — Not part of the operational specification above. This file is part of Quorum. Copyright 2026 SharedIntellect. MIT License. See LICENSE for full terms."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/dacervera/quorum",
    "publisherUrl": "https://clawhub.ai/dacervera/quorum",
    "owner": "dacervera",
    "version": "0.7.3",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/quorum",
    "downloadUrl": "https://openagent3.xyz/downloads/quorum",
    "agentUrl": "https://openagent3.xyz/skills/quorum/agent",
    "manifestUrl": "https://openagent3.xyz/skills/quorum/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/quorum/agent.md"
  }
}