{
  "schemaVersion": "1.0",
  "item": {
    "slug": "cross-model-review",
    "name": "Cross-Model Review",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/Don-GBot/cross-model-review",
    "canonicalUrl": "https://clawhub.ai/Don-GBot/cross-model-review",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/cross-model-review",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=cross-model-review",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "CHANGELOG.md",
      "CONTRIBUTING.md",
      "README.md",
      "SECURITY.md",
      "SKILL.md",
      "package.json"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=cross-model-review",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=cross-model-review",
        "contentDisposition": "attachment; filename=\"cross-model-review-2.1.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/cross-model-review"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/cross-model-review",
    "agentPageUrl": "https://openagent3.xyz/skills/cross-model-review/agent",
    "manifestUrl": "https://openagent3.xyz/skills/cross-model-review/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/cross-model-review/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Metadata",
        "body": "name: cross-model-review\nversion: 2.0.0\ndescription: >\n  Adversarial plan review using two different AI models.\n  v2: Alternating mode — models swap writer/reviewer each round.\n  Fully autonomous loop — no human input between rounds.\n  Use when: building features touching auth/payments/data models,\n  plans that will take >1hr to implement.\n  NOT for: simple one-file fixes, research tasks, quick scripts.\ntriggers:\n  - \"review this plan\"\n  - \"cross review\"\n  - \"challenge this\"\n  - \"is this plan solid?\"\n  - \"adversarial review\""
      },
      {
        "title": "When to Activate",
        "body": "Activate this skill when the user:\n\nSays any trigger phrase above\nShares a plan and asks for adversarial/second-opinion review\nAsks you to \"sanity check\" a multi-step implementation plan\n\nDo NOT activate for: simple fixes, one-liners, pure research tasks."
      },
      {
        "title": "Static Mode (v1 — backward compatible)",
        "body": "Fixed roles: planner always writes, reviewer always reviews. Requires human to trigger each round."
      },
      {
        "title": "Alternating Mode (v2 — recommended)",
        "body": "Models swap roles each round. Fully autonomous — no human input between rounds.\n\nFlow:\n\nRound 1: Model A writes the plan. Model B reviews.\nRound 2: Model B rewrites (based on its own review). Model A reviews.\nRound 3: Model A rewrites (based on its own review). Model B reviews.\n...continues alternating until both agree (reviewer says APPROVED) or max rounds hit.\n\nWhy this works:\n\nEach model must implement its own critique — can't nitpick without owning the fix\nThe other model catches over-engineering or proportionality issues\nNatural convergence: each round addresses the other's concerns"
      },
      {
        "title": "Autonomous Orchestration (Alternating Mode)",
        "body": "You (the main agent) run this loop. It's fully autonomous after kickoff."
      },
      {
        "title": "Step 1 — Save the plan and init",
        "body": "node review.js init \\\n  --plan /path/to/plan.md \\\n  --mode alternating \\\n  --model-a \"anthropic/claude-opus-4-6\" \\\n  --model-b \"openai-codex/gpt-5.3-codex\" \\\n  --project-context \"Brief description for reviewer calibration\" \\\n  --out /home/ubuntu/clawd/tasks/reviews\n\nCapture the workspace path from stdout."
      },
      {
        "title": "Step 2 — The autonomous loop",
        "body": "while true:\n  step = next-step(workspace)\n\n  if step.action == \"done\":\n    break  # APPROVED!\n\n  if step.action == \"max-rounds\":\n    ask user: override or manual fix\n    break\n\n  if step.action == \"review\":\n    spawn sub-agent with step.model, step.prompt\n    save response to workspace/round-N-response.json\n    parse-round(workspace, round, response)\n    continue\n\n  if step.action == \"revise\":\n    spawn sub-agent with step.model, step.prompt\n    save output plan to temp file\n    save-plan(workspace, temp file, version)\n    continue"
      },
      {
        "title": "Step 3 — Finalize",
        "body": "When the loop exits with APPROVED:\n\nnode review.js finalize --workspace <workspace>\n\nPresent: rounds taken, issues found/resolved, rubric scores, plan-final.md location."
      },
      {
        "title": "CLI Reference",
        "body": "Commands:\n  init           Create a review workspace\n  next-step      Get next action for autonomous loop\n  parse-round    Parse a reviewer response, update issue tracker\n  save-plan      Save a revised plan version from writer output\n  finalize       Generate plan-final.md, changelog.md, summary.json\n  status         Print current workspace state\n\ninit options:\n  --plan <file>            Path to plan file (required)\n  --mode <m>               \"static\" (default) or \"alternating\"\n  --model-a <m>            Model A — writes first (alternating mode, required)\n  --model-b <m>            Model B — reviews first (alternating mode, required)\n  --reviewer-model <m>     Reviewer model (static mode, required)\n  --planner-model <m>      Planner model (static mode, required)\n  --project-context <s>    Brief project context for reviewer calibration\n  --out <dir>              Output base dir (default: tasks/reviews)\n  --max-rounds <n>         Max rounds (default: 5 static, 8 alternating)\n  --token-budget <n>       Token budget for context (default: 8000)\n\nnext-step options:\n  --workspace <dir>        Path to review workspace (required)\n  Returns JSON: { action, model, round, prompt, planVersion, saveTo }\n  Actions: \"review\", \"revise\", \"done\", \"max-rounds\"\n\nparse-round options:\n  --workspace <dir>        Path to review workspace (required)\n  --round <n>              Round number (required)\n  --response <file>        Path to raw reviewer response (required)\n\nsave-plan options:\n  --workspace <dir>        Path to review workspace (required)\n  --plan <file>            Path to revised plan markdown (required)\n  --version <n>            Plan version number (required)\n\nfinalize options:\n  --workspace <dir>        Path to review workspace (required)\n  --override-reason <s>    Reason for force-approving with open issues\n  --ci-force               Required in non-TTY mode when overriding\n\nstatus options:\n  --workspace 
<dir>        Path to review workspace (required)\n\nExit codes:\n  0   Approved / OK\n  1   Revise / max-rounds\n  2   Error"
      },
      {
        "title": "Spawning reviewers",
        "body": "step = next-step(workspace)  # action: \"review\"\nresponse = sessions_spawn(model=step.model, task=step.prompt, timeout=120s)\n# Save raw response to workspace/round-{step.round}-response.json\nparse-round(workspace, step.round, response_file)\n\nSystem instruction for reviewer: \"You are a senior engineering reviewer. Output ONLY valid JSON matching the schema. No tool calls. No markdown fences. No preamble.\""
      },
      {
        "title": "Spawning writers",
        "body": "step = next-step(workspace)  # action: \"revise\"\nrevised_plan = sessions_spawn(model=step.model, task=step.prompt, timeout=300s)\n# Save raw output as temp file\nsave-plan(workspace, temp_file, step.planVersion)\n\nSystem instruction for writer: none needed — the prompt is self-contained."
      },
      {
        "title": "Error handling",
        "body": "Reviewer timeout/failure: retry once, then ask user\nWriter timeout/failure: retry once, then ask user\nParse error on review JSON: re-prompt reviewer once with \"Your response was not valid JSON\"\nMax rounds hit: present status to user, ask for override or manual fix"
      },
      {
        "title": "Convergence",
        "body": "The loop converges when the reviewer says APPROVED with no open CRITICAL/HIGH blockers. The script enforces this — if reviewer says APPROVED but blockers remain, it overrides to REVISE."
      },
      {
        "title": "Static Mode (v1 — backward compatible)",
        "body": "For static mode, the original orchestration from v1 still works:"
      },
      {
        "title": "Step 1 — Init",
        "body": "node review.js init --plan <file> --reviewer-model <m> --planner-model <m>"
      },
      {
        "title": "Step 2 — Manual loop",
        "body": "For each round: build reviewer prompt from template, spawn reviewer, parse-round, revise plan yourself, continue."
      },
      {
        "title": "Step 3 — Finalize",
        "body": "Same as alternating mode."
      },
      {
        "title": "Integration with coding-agent",
        "body": "Before dispatching any plan to coding-agent that:\n\nTouches auth, payments, or data models\nHas 3+ implementation steps\nHasn't already been adversarially reviewed by the user\n\nRun cross-model-review first. Only proceed if exit code 0."
      },
      {
        "title": "Notes",
        "body": "Workspace persists in tasks/reviews/ — referenceable later\nissues.json tracks full lifecycle of all issues\nmeta.json stores mode, models, current round, verdict, needsRevision flag\nnext-step is the state machine — always call it to determine what to do\nDedup warnings help catch semantic drift across rounds\nModels must be from different provider families (cross-provider enforcement)\n--project-context is injected into reviewer prompts for calibration"
      }
    ],
    "body": "cross-model-review\nMetadata\nname: cross-model-review\nversion: 2.0.0\ndescription: >\n  Adversarial plan review using two different AI models.\n  v2: Alternating mode — models swap writer/reviewer each round.\n  Fully autonomous loop — no human input between rounds.\n  Use when: building features touching auth/payments/data models,\n  plans that will take >1hr to implement.\n  NOT for: simple one-file fixes, research tasks, quick scripts.\ntriggers:\n  - \"review this plan\"\n  - \"cross review\"\n  - \"challenge this\"\n  - \"is this plan solid?\"\n  - \"adversarial review\"\n\nWhen to Activate\n\nActivate this skill when the user:\n\nSays any trigger phrase above\nShares a plan and asks for adversarial/second-opinion review\nAsks you to \"sanity check\" a multi-step implementation plan\n\nDo NOT activate for: simple fixes, one-liners, pure research tasks.\n\nModes\nStatic Mode (v1 — backward compatible)\n\nFixed roles: planner always writes, reviewer always reviews. Requires human to trigger each round.\n\nAlternating Mode (v2 — recommended)\n\nModels swap roles each round. Fully autonomous — no human input between rounds.\n\nFlow:\n\nRound 1: Model A writes the plan. Model B reviews.\nRound 2: Model B rewrites (based on its own review). Model A reviews.\nRound 3: Model A rewrites (based on its own review). Model B reviews.\n...continues alternating until both agree (reviewer says APPROVED) or max rounds hit.\n\nWhy this works:\n\nEach model must implement its own critique — can't nitpick without owning the fix\nThe other model catches over-engineering or proportionality issues\nNatural convergence: each round addresses the other's concerns\nAutonomous Orchestration (Alternating Mode)\n\nYou (the main agent) run this loop. 
It's fully autonomous after kickoff.\n\nStep 1 — Save the plan and init\nnode review.js init \\\n  --plan /path/to/plan.md \\\n  --mode alternating \\\n  --model-a \"anthropic/claude-opus-4-6\" \\\n  --model-b \"openai-codex/gpt-5.3-codex\" \\\n  --project-context \"Brief description for reviewer calibration\" \\\n  --out /home/ubuntu/clawd/tasks/reviews\n\n\nCaptures workspace path from stdout.\n\nStep 2 — The autonomous loop\nwhile true:\n  step = next-step(workspace)\n\n  if step.action == \"done\":\n    break  # APPROVED!\n\n  if step.action == \"max-rounds\":\n    ask user: override or manual fix\n    break\n\n  if step.action == \"review\":\n    spawn sub-agent with step.model, step.prompt\n    save response to workspace/round-N-response.json\n    parse-round(workspace, round, response)\n    continue\n\n  if step.action == \"revise\":\n    spawn sub-agent with step.model, step.prompt\n    save output plan to temp file\n    save-plan(workspace, temp file, version)\n    continue\n\nStep 3 — Finalize\n\nWhen the loop exits with APPROVED:\n\nnode review.js finalize --workspace <workspace>\n\n\nPresent: rounds taken, issues found/resolved, rubric scores, plan-final.md location.\n\nCLI Reference\nCommands:\n  init           Create a review workspace\n  next-step      Get next action for autonomous loop\n  parse-round    Parse a reviewer response, update issue tracker\n  save-plan      Save a revised plan version from writer output\n  finalize       Generate plan-final.md, changelog.md, summary.json\n  status         Print current workspace state\n\ninit options:\n  --plan <file>            Path to plan file (required)\n  --mode <m>               \"static\" (default) or \"alternating\"\n  --model-a <m>            Model A — writes first (alternating mode, required)\n  --model-b <m>            Model B — reviews first (alternating mode, required)\n  --reviewer-model <m>     Reviewer model (static mode, required)\n  --planner-model <m>      Planner model (static mode, 
required)\n  --project-context <s>    Brief project context for reviewer calibration\n  --out <dir>              Output base dir (default: tasks/reviews)\n  --max-rounds <n>         Max rounds (default: 5 static, 8 alternating)\n  --token-budget <n>       Token budget for context (default: 8000)\n\nnext-step options:\n  --workspace <dir>        Path to review workspace (required)\n  Returns JSON: { action, model, round, prompt, planVersion, saveTo }\n  Actions: \"review\", \"revise\", \"done\", \"max-rounds\"\n\nparse-round options:\n  --workspace <dir>        Path to review workspace (required)\n  --round <n>              Round number (required)\n  --response <file>        Path to raw reviewer response (required)\n\nsave-plan options:\n  --workspace <dir>        Path to review workspace (required)\n  --plan <file>            Path to revised plan markdown (required)\n  --version <n>            Plan version number (required)\n\nfinalize options:\n  --workspace <dir>        Path to review workspace (required)\n  --override-reason <s>    Reason for force-approving with open issues\n  --ci-force               Required in non-TTY mode when overriding\n\nstatus options:\n  --workspace <dir>        Path to review workspace (required)\n\nExit codes:\n  0   Approved / OK\n  1   Revise / max-rounds\n  2   Error\n\nDetailed Orchestration (for agent implementation)\nSpawning reviewers\nstep = next-step(workspace)  # action: \"review\"\nresponse = sessions_spawn(model=step.model, task=step.prompt, timeout=120s)\n# Save raw response to workspace/round-{step.round}-response.json\nparse-round(workspace, step.round, response_file)\n\n\nSystem instruction for reviewer: \"You are a senior engineering reviewer. Output ONLY valid JSON matching the schema. No tool calls. No markdown fences. 
No preamble.\"\n\nSpawning writers\nstep = next-step(workspace)  # action: \"revise\"\nrevised_plan = sessions_spawn(model=step.model, task=step.prompt, timeout=300s)\n# Save raw output as temp file\nsave-plan(workspace, temp_file, step.planVersion)\n\n\nSystem instruction for writer: none needed — the prompt is self-contained.\n\nError handling\nReviewer timeout/failure: retry once, then ask user\nWriter timeout/failure: retry once, then ask user\nParse error on review JSON: re-prompt reviewer once with \"Your response was not valid JSON\"\nMax rounds hit: present status to user, ask for override or manual fix\nConvergence\n\nThe loop converges when the reviewer says APPROVED with no open CRITICAL/HIGH blockers. The script enforces this — if reviewer says APPROVED but blockers remain, it overrides to REVISE.\n\nStatic Mode (v1 — backward compatible)\n\nFor static mode, the original orchestration from v1 still works:\n\nStep 1 — Init\nnode review.js init --plan <file> --reviewer-model <m> --planner-model <m>\n\nStep 2 — Manual loop\n\nFor each round: build reviewer prompt from template, spawn reviewer, parse-round, revise plan yourself, continue.\n\nStep 3 — Finalize\n\nSame as alternating mode.\n\nIntegration with coding-agent\n\nBefore dispatching any plan to coding-agent that:\n\nTouches auth, payments, or data models\nHas 3+ implementation steps\nThe user hasn't already reviewed adversarially\n\nRun cross-model-review first. Only proceed if exit code 0.\n\nNotes\nWorkspace persists in tasks/reviews/ — referenceable later\nissues.json tracks full lifecycle of all issues\nmeta.json stores mode, models, current round, verdict, needsRevision flag\nnext-step is the state machine — always call it to determine what to do\nDedup warnings help catch semantic drift across rounds\nModels must be from different provider families (cross-provider enforcement)\n--project-context is injected into reviewer prompts for calibration"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Don-GBot/cross-model-review",
    "publisherUrl": "https://clawhub.ai/Don-GBot/cross-model-review",
    "owner": "Don-GBot",
    "version": "2.1.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/cross-model-review",
    "downloadUrl": "https://openagent3.xyz/downloads/cross-model-review",
    "agentUrl": "https://openagent3.xyz/skills/cross-model-review/agent",
    "manifestUrl": "https://openagent3.xyz/skills/cross-model-review/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/cross-model-review/agent.md"
  }
}