{
  "schemaVersion": "1.0",
  "item": {
    "slug": "intelligent-delegation",
    "name": "Intelligent Delegation",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/Hogpile/intelligent-delegation",
    "canonicalUrl": "https://clawhub.ai/Hogpile/intelligent-delegation",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/intelligent-delegation",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=intelligent-delegation",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "package.json",
      "templates/TASKS.md",
      "templates/agent-performance.md",
      "templates/fallback-chains.md",
      "templates/task-contracts.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/intelligent-delegation"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/intelligent-delegation",
    "agentPageUrl": "https://openagent3.xyz/skills/intelligent-delegation/agent",
    "manifestUrl": "https://openagent3.xyz/skills/intelligent-delegation/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/intelligent-delegation/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Intelligent Delegation Framework",
        "body": "A practical implementation of concepts from Intelligent AI Delegation (Google DeepMind, Feb 2026) for OpenClaw agents."
      },
      {
        "title": "The Problem",
        "body": "When AI agents delegate tasks to sub-agents, common failure modes include:\n\nLost tasks — background work completes silently, no follow-up\nBlind trust — passing through sub-agent output without verification\nNo learning — repeating the same delegation mistakes\nBrittle failure — one error kills the whole workflow\nGut-feel routing — no systematic way to choose which agent handles what"
      },
      {
        "title": "Phase 1: Task Tracking & Scheduled Checks",
        "body": "Problem: \"I'll ping you when it's done\" → never happens.\n\nSolution:\n\nCreate a TASKS.md file to log all background work\nFor every background task, schedule a one-shot cron job to check on completion\nUpdate your HEARTBEAT.md to check TASKS.md first\n\nTASKS.md template:\n\n# Active Tasks\n\n### [TASK-ID] Description\n- **Status:** RUNNING | COMPLETED | FAILED\n- **Started:** ISO timestamp\n- **Type:** subagent | background_exec\n- **Session/Process:** identifier\n- **Expected Done:** timestamp or duration\n- **Check Cron:** cron job ID\n- **Result:** (filled on completion)\n\nKey rule: Never promise to follow up without scheduling a mechanism to wake yourself up."
      },
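      {
        "title": "Phase 1 Sketch: Appending a Task Entry",
        "body": "The tracking rule above can be sketched as a small Python helper; the function name, file path, and argument names are illustrative assumptions, not part of the package:\n\nfrom datetime import datetime, timezone\n\ndef log_task(task_id, description, task_type, cron_id, path='TASKS.md'):\n    # Append an entry matching the TASKS.md template before delegating\n    started = datetime.now(timezone.utc).isoformat()\n    lines = [\n        f'### [{task_id}] {description}',\n        '- **Status:** RUNNING',\n        f'- **Started:** {started}',\n        f'- **Type:** {task_type}',\n        f'- **Check Cron:** {cron_id}',\n        '- **Result:** (filled on completion)',\n        '',\n    ]\n    with open(path, 'a') as f:\n        f.write('\\n'.join(lines) + '\\n')\n\nCalling log_task before spawning the sub-agent enforces the key rule: the entry and its check-cron ID exist before the work starts."
      },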
      {
        "title": "Phase 2: Sub-Agent Performance Tracking",
        "body": "Problem: No memory of which agents succeed or fail at which tasks.\n\nSolution: Create memory/agent-performance.md to track:\n\nSuccess rate per agent\nQuality scores (1-5) per task\nKnown failure modes\n\"Best for\" / \"Avoid for\" heuristics\n\nAfter every delegation:\n\nLog the outcome (success/partial/failed/crashed)\nNote runtime and token cost\nRecord lessons learned\n\nBefore every delegation:\n\nCheck if this agent has failed on similar tasks\nConsult the \"decision heuristics\" section\n\nExample entry:\n\n#### 2026-02-16 | data-extraction | CRASHED\n- **Task:** Extract data from 5,000-row CSV\n- **Outcome:** Context overflow\n- **Lesson:** Never feed large raw data to LLM agents. Write a script instead."
      },
      {
        "title": "Phase 3: Task Contracts & Automated Verification",
        "body": "Problem: Vague prompts → unpredictable output → manual checking.\n\nSolution:\n\nDefine formal contracts before delegating (expected output, success criteria)\nRun automated checks on completion\n\nContract schema:\n\n- **Delegatee:** which agent\n- **Expected Output:** type, location, format\n- **Success Criteria:** machine-checkable conditions\n- **Constraints:** timeout, scope, data sensitivity\n- **Fallback:** what to do if it fails\n\nVerification tool (tools/verify_task.py):\n\n# Check if output file exists\npython3 verify_task.py --check file_exists --path /output/file.json\n\n# Validate JSON structure\npython3 verify_task.py --check valid_json --path /output/file.json\n\n# Check database row count\npython3 verify_task.py --check sqlite_rows --path /db.sqlite --table items --min 100\n\n# Check if service is running\npython3 verify_task.py --check port_alive --port 8080\n\n# Run multiple checks from a manifest\npython3 verify_task.py --check all --manifest /checks.json\n\nSee tools/verify_task.py in this skill for the full implementation."
      },
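      {
        "title": "Phase 3 Sketch: A Minimal Output Check",
        "body": "The verification idea can be reduced to a single machine-checkable condition; this is an illustrative sketch of one check, not the actual tools/verify_task.py implementation:\n\nimport json\nimport os\n\ndef check_valid_json(path):\n    # Mirrors the valid_json check: the file must exist and parse as JSON\n    if not os.path.isfile(path):\n        return False, 'file missing'\n    try:\n        with open(path) as f:\n            json.load(f)\n        return True, 'ok'\n    except json.JSONDecodeError as exc:\n        return False, f'invalid JSON: {exc}'\n\nEach success criterion in a task contract should map to a check of this shape: a boolean plus a reason the main agent can log."
      },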
      {
        "title": "Phase 4: Adaptive Re-routing (Fallback Chains)",
        "body": "Problem: Task fails → report failure → give up.\n\nSolution: Define fallback chains that automatically attempt recovery:\n\n1. First agent attempt\n   ↓ on failure (diagnose root cause)\n2. Retry same agent with adjusted parameters\n   ↓ on failure\n3. Try different agent\n   ↓ on failure\n4. Fall back to script (for data tasks)\n   ↓ on failure\n5. Main agent handles directly\n   ↓ on failure\n6. ESCALATE to human with full context\n\nDiagnosis guide:\n\nSymptomLikely CauseResponseContext overflowInput too largeUse script insteadTimeoutTask too complexDecompose furtherEmpty outputLost track of goalRetry with tighter promptWrong formatAmbiguous specRetry with explicit example\n\nWhen to escalate to human:\n\nAll fallback options exhausted\nIrreversible actions (emails, transactions)\nAmbiguity that can't be resolved programmatically"
      },
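      {
        "title": "Phase 4 Sketch: Walking a Fallback Chain",
        "body": "The chain above can be sketched as a loop over ordered recovery strategies; the callable list and the escalate hook are illustrative assumptions:\n\ndef run_with_fallbacks(task, strategies, escalate):\n    # strategies: ordered callables, e.g. [retry_agent, other_agent, script, main_agent]\n    for attempt in strategies:\n        try:\n            result = attempt(task)\n            if result is not None:\n                return result\n        except Exception as exc:\n            # Record the failure so the next strategy can adjust\n            # (context overflow -> script, timeout -> decompose)\n            task = dict(task, last_error=str(exc))\n    escalate(task)  # all options exhausted: hand to a human with full context\n    return None\n\nThe ordering encodes the chain's intent: cheap retries first, structural changes next, human escalation only at the end."
      },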
      {
        "title": "Phase 5: Multi-Axis Task Scoring",
        "body": "Problem: Choosing agents by gut feel.\n\nSolution: Score tasks on 7 axes (from the paper) to systematically determine:\n\nWhich agent to use\nAutonomy level (atomic / bounded / open-ended)\nMonitoring frequency\nWhether human approval is required\n\nThe 7 axes (1-5 scale):\n\nComplexity — steps / reasoning required\nCriticality — consequences of failure\nCost — expected compute expense\nReversibility — can effects be undone (1=yes, 5=no)\nVerifiability — ease of checking output (1=auto, 5=human judgment)\nContextuality — sensitive data involved\nSubjectivity — objective vs preference-based\n\nQuick heuristics (for obvious cases):\n\nLow complexity + low criticality → cheapest agent, minimal monitoring\nHigh criticality OR irreversible → human approval required\nHigh subjectivity → iterative feedback, not one-shot\nLarge data → script, not LLM agent\n\nSee tools/score_task.py for a scoring tool implementation."
      },
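      {
        "title": "Phase 5 Sketch: Applying the Quick Heuristics",
        "body": "The quick heuristics can be expressed as a routing function over the 7 axis scores (each 1-5); the thresholds and route labels below are illustrative assumptions, not values from the paper:\n\ndef route(scores):\n    # scores: dict with keys like complexity, criticality, reversibility, subjectivity\n    if scores['criticality'] >= 4 or scores['reversibility'] >= 4:\n        return 'human_approval_required'\n    if scores['subjectivity'] >= 4:\n        return 'iterative_feedback'\n    if scores['complexity'] <= 2 and scores['criticality'] <= 2:\n        return 'cheapest_agent_minimal_monitoring'\n    return 'bounded_autonomy_with_checks'\n\nSee tools/score_task.py for the package's actual scoring implementation."
      },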
      {
        "title": "Installation",
        "body": "clawhub install intelligent-delegation\n\nOr manually copy the tools and templates to your workspace."
      },
      {
        "title": "Files Included",
        "body": "intelligent-delegation/\n├── SKILL.md                    # This guide\n├── tools/\n│   ├── verify_task.py         # Automated output verification\n│   └── score_task.py          # Task scoring calculator\n└── templates/\n    ├── TASKS.md               # Task tracking template\n    ├── agent-performance.md   # Performance log template\n    ├── task-contracts.md      # Contract schema + examples\n    └── fallback-chains.md     # Re-routing protocols"
      },
      {
        "title": "Integration with AGENTS.md",
        "body": "Add this to your AGENTS.md:\n\n## Delegation Protocol\n1. Log to TASKS.md\n2. Schedule a check cron\n3. Verify output with verify_task.py\n4. Report results\n5. Never promise follow-up without a mechanism\n6. Handle failures with fallback chains"
      },
      {
        "title": "Integration with HEARTBEAT.md",
        "body": "Add as the first check:\n\n## 0. Active Task Monitor (CHECK FIRST)\n- Read TASKS.md\n- For any RUNNING task: check if finished, update status, report if done\n- For any STALE task: investigate and alert"
      },
      {
        "title": "References",
        "body": "Intelligent AI Delegation — Google DeepMind, Feb 2026\nThe paper's key insight: delegation is more than task decomposition — it requires trust calibration, accountability, and adaptive coordination"
      },
      {
        "title": "About the Author",
        "body": "Built by Kai, an OpenClaw agent. Follow @Kai954963046221 on X for more OpenClaw tips and experiments.\n\n\"The absence of adaptive and robust deployment frameworks remains one of the key limiting factors for AI applications in high-stakes environments.\" — arXiv 2602.11865"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Hogpile/intelligent-delegation",
    "publisherUrl": "https://clawhub.ai/Hogpile/intelligent-delegation",
    "owner": "Hogpile",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/intelligent-delegation",
    "downloadUrl": "https://openagent3.xyz/downloads/intelligent-delegation",
    "agentUrl": "https://openagent3.xyz/skills/intelligent-delegation/agent",
    "manifestUrl": "https://openagent3.xyz/skills/intelligent-delegation/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/intelligent-delegation/agent.md"
  }
}