# Send Quorum to your agent
The item is currently unstable or timing out, so use the source page and any available docs to guide the install.
## Fast path
- Open the source page via Review source status.
- If you can obtain the package, extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the source page and extracted files.
## Suggested prompts
### New install

```text
I tried to install a skill package from Yavira, but the item is currently unstable or timing out. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required. Then review README.md for any prerequisites, environment setup, or post-install checks.
```
### Upgrade existing

```text
I tried to upgrade a skill package from Yavira, but the item is currently unstable or timing out. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need. Then review README.md for any prerequisites, environment setup, or post-install checks.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "quorum",
    "name": "Quorum",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/dacervera/quorum",
    "canonicalUrl": "https://clawhub.ai/dacervera/quorum",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/quorum",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=quorum",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "CLAUDE.md",
      "CONTRIBUTING.md",
      "README.md",
      "SHIPPING.md",
      "SKILL.md",
      "SPEC.md"
    ],
    "downloadMode": "manual_only",
    "sourceHealth": {
      "source": "tencent",
      "slug": "quorum",
      "status": "unstable",
      "reason": "timeout",
      "recommendedAction": "retry_later",
      "checkedAt": "2026-05-07T18:57:12.121Z",
      "expiresAt": "2026-05-08T06:57:12.121Z",
      "httpStatus": null,
      "finalUrl": null,
      "contentType": null,
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=quorum",
        "error": "Timed out after 5000ms",
        "slug": "quorum"
      },
      "scope": "item",
      "summary": "Item is unstable.",
      "detail": "This item is timing out or returning errors right now. Review the source page and try again later.",
      "primaryActionLabel": "Review source status",
      "primaryActionHref": "https://clawhub.ai/dacervera/quorum"
    },
    "validation": {
      "installChecklist": [
        "Wait for the source to recover or retry later.",
        "Review SKILL.md only after the download returns a real package.",
        "Treat this source as transient until the upstream errors clear."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/quorum",
    "downloadUrl": "https://openagent3.xyz/downloads/quorum",
    "agentUrl": "https://openagent3.xyz/skills/quorum/agent",
    "manifestUrl": "https://openagent3.xyz/skills/quorum/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/quorum/agent.md"
  }
}
```
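The `sourceHealth` record above is machine-checkable. A minimal sketch, assuming only the fields shown in the manifest (`status`, `recommendedAction`, `expiresAt`), of how an agent might decide whether a retry is due yet; the helper name `should_retry_now` is illustrative, not part of any published API:

```python
import json
from datetime import datetime, timezone

# sourceHealth fragment copied from the manifest above.
manifest = json.loads("""
{
  "install": {
    "sourceHealth": {
      "status": "unstable",
      "recommendedAction": "retry_later",
      "checkedAt": "2026-05-07T18:57:12.121Z",
      "expiresAt": "2026-05-08T06:57:12.121Z"
    }
  }
}
""")

def should_retry_now(health: dict, now: datetime) -> bool:
    """True once the cached health record has expired and a retry is advised."""
    expires = datetime.fromisoformat(health["expiresAt"].replace("Z", "+00:00"))
    return health["recommendedAction"] == "retry_later" and now >= expires

health = manifest["install"]["sourceHealth"]
print(should_retry_now(health, datetime(2026, 5, 8, 7, 0, tzinfo=timezone.utc)))   # True
print(should_retry_now(health, datetime(2026, 5, 7, 19, 0, tzinfo=timezone.utc)))  # False
```

Before `expiresAt`, honor the record and skip re-probing; after it, a fresh download attempt is reasonable.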
## Documentation

### Quorum — Multi-Agent Validation

Quorum validates AI agent outputs by spawning multiple independent critics that evaluate artifacts against rubrics. Every criticism must cite evidence. You get a structured verdict.

### Quick Start

Clone the repository and install:

```shell
git clone https://github.com/SharedIntellect/quorum.git
cd quorum/reference-implementation
pip install -r requirements.txt
```

Run a quorum check on any file:

```shell
python -m quorum.cli run --target <path-to-artifact> --rubric <rubric-name>
```

### Built-in Rubrics

- `research-synthesis` — Research reports, literature reviews, technical analyses
- `agent-config` — Agent configurations, YAML specs, system prompts
- `python-code` — Python source files (25 criteria, PC-001–PC-025; auto-detected on `.py` files)

### Depth Profiles

- `quick` — 2 critics (correctness, completeness) + pre-screen, ~5-10 min
- `standard` — 4 active critics (correctness, completeness, security + tester) + pre-screen, ~15-30 min (default)
- `thorough` — 5 active critics (+ code_hygiene) + pre-screen + fix loops, ~30-60 min

† Cross-Consistency requires the `--relationships` flag with a relationships manifest.

All depth profiles include the deterministic pre-screen (10 checks: credentials, PII, syntax errors, broken links, TODOs, and more) before any LLM critic runs.
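As an illustration of what a deterministic pre-screen check can look like, here is a sketch covering two of the ten check categories (TODO markers and credential-like strings). This is not Quorum's implementation; the patterns, function name, and finding shape are all assumptions for illustration:

```python
import re

def prescreen(text: str) -> list[dict]:
    """Scan an artifact line by line for deterministic red flags."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Check 1: leftover TODO markers.
        if re.search(r"\bTODO\b", line):
            findings.append({"check": "todo", "line": lineno})
        # Check 2: credential-like strings (API-key / AWS-key shapes).
        if re.search(r"sk-[A-Za-z0-9-]{8,}|AKIA[0-9A-Z]{16}", line):
            findings.append({"check": "credential", "line": lineno})
    return findings

sample = "Intro paragraph.\nTODO: tighten the claims here\ntoken = sk-ant-abcdefgh1234\n"
print(prescreen(sample))
# [{'check': 'todo', 'line': 2}, {'check': 'credential', 'line': 3}]
```

Because checks like these are pure string matching, they run in milliseconds and cost nothing before any LLM critic is invoked.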

### Examples

```shell
# Validate a research report
quorum run --target my-report.md --rubric research-synthesis

# Quick check (faster, fewer critics)
quorum run --target my-report.md --rubric research-synthesis --depth quick

# Batch: validate all markdown files in a directory
quorum run --target ./docs/ --pattern "*.md" --rubric research-synthesis

# Cross-artifact consistency check
quorum run --target ./src/ --relationships quorum-relationships.yaml --depth standard

# Use a custom rubric
quorum run --target my-spec.md --rubric ./my-rubric.json

# List available rubrics
quorum rubrics list

# Initialize config interactively
quorum config init
```

### Configuration

On first run, Quorum prompts for your preferred models and writes `quorum-config.yaml`. You can also create it manually:

```yaml
models:
  tier_1: anthropic/claude-sonnet-4-6    # Judgment roles
  tier_2: anthropic/claude-sonnet-4-6    # Evaluation roles
depth: standard
```

Set your API key:

```shell
export ANTHROPIC_API_KEY=sk-ant-...
# or
export OPENAI_API_KEY=sk-...
```

### Output

Quorum produces a structured verdict:

- `PASS` — No significant issues found
- `PASS_WITH_NOTES` — Minor issues; the artifact is usable
- `REVISE` — High/critical issues that need rework before proceeding
- `REJECT` — Unfixable problems; a restart is required

Exit codes: `0` = PASS/PASS_WITH_NOTES, `1` = error, `2` = REVISE/REJECT.

Each finding includes a severity (CRITICAL/HIGH/MEDIUM/LOW), evidence citations pointing to specific locations in the artifact, and remediation suggestions. The run directory contains `prescreen.json`, per-critic finding JSONs, `verdict.json`, and a human-readable `report.md`.
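A minimal sketch of consuming the verdict downstream, for example in a CI step. The exact `verdict.json` schema is defined by Quorum (see SPEC.md); the field names used here (`verdict`, `findings`, `severity`) are assumptions shaped after the description above:

```python
import json
from collections import Counter

# Hypothetical verdict.json contents, shaped after the description above.
verdict = json.loads("""
{
  "verdict": "REVISE",
  "findings": [
    {"severity": "HIGH", "evidence": "sec. 2, para. 3", "remediation": "cite the benchmark"},
    {"severity": "LOW", "evidence": "line 12", "remediation": "fix the typo"}
  ]
}
""")

# Tally findings by severity and decide whether the result blocks a merge.
by_severity = Counter(f["severity"] for f in verdict["findings"])
blocking = verdict["verdict"] in ("REVISE", "REJECT")
print(dict(by_severity))      # {'HIGH': 1, 'LOW': 1}
print("blocking:", blocking)  # blocking: True
```

In a pipeline you could instead rely purely on the documented exit codes (`2` = REVISE/REJECT) and only open `verdict.json` when a run blocks.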

### More Information

- `SPEC.md` — Full architectural specification
- `MODEL_REQUIREMENTS.md` — Supported models and tiers
- `CONFIG_REFERENCE.md` — All configuration options
- `FOR_BEGINNERS.md` — New to agent validation? Start here

⚖️ LICENSE — Not part of the operational specification above.
This file is part of Quorum.
Copyright 2026 SharedIntellect. MIT License.
See LICENSE for full terms.
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: dacervera
- Version: 0.7.3
## Source health
- Status: unstable
- Item is unstable.
- This item is timing out or returning errors right now. Review the source page and try again later.
- Health scope: item
- Reason: timeout
- Checked at: 2026-05-07T18:57:12.121Z
- Expires at: 2026-05-08T06:57:12.121Z
- Recommended action: Review source status
## Links
- [Detail page](https://openagent3.xyz/skills/quorum)
- [Send to Agent page](https://openagent3.xyz/skills/quorum/agent)
- [JSON manifest](https://openagent3.xyz/skills/quorum/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/quorum/agent.md)
- [Download page](https://openagent3.xyz/downloads/quorum)