{
  "schemaVersion": "1.0",
  "item": {
    "slug": "openclawbrain",
    "name": "OpenClawBrain",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/jonathangu/openclawbrain",
    "canonicalUrl": "https://clawhub.ai/jonathangu/openclawbrain",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/openclawbrain",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclawbrain",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/openclawbrain"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/openclawbrain",
    "agentPageUrl": "https://openagent3.xyz/skills/openclawbrain/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclawbrain/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclawbrain/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "OpenClawBrain v12.2.1",
        "body": "Learned retrieval graph for AI agents. Nodes are document chunks, edges are mutable weighted pointers. The graph learns from outcomes using policy-gradient updates (REINFORCE) and self-regulates via homeostatic decay, synaptic scaling, and tier hysteresis."
      },
      {
        "title": "Install",
        "body": "pip install openclawbrain              # core (pure Python, zero deps)\npip install \"openclawbrain[openai]\"    # with OpenAI embeddings"
      },
      {
        "title": "Quick Start",
        "body": "# Build a brain from workspace files\nopenclawbrain init --workspace ./my-workspace --output ./brain --embedder openai\n\n# Query\nopenclawbrain query \"how do I deploy\" --state ./brain/state.json --json\n\n# Learn from outcome (+1 good, -1 bad)\nopenclawbrain learn --state ./brain/state.json --outcome 1.0 --fired-ids \"node1,node2\"\n\n# Self-learn (agent-initiated, no human needed)\nopenclawbrain self-learn --state ./brain/state.json \\\n  --content \"Always download artifacts before terminating instances\" \\\n  --fired-ids \"node1,node2\" --outcome -1.0 --type CORRECTION\n\n# Health check\nopenclawbrain doctor --state ./brain/state.json"
      },
      {
        "title": "Learning Rule: Policy Gradient (default)",
        "body": "Default is apply_outcome_pg (REINFORCE). At each node, updates redistribute probability mass across ALL outgoing edges (sum ≈ 0). The chosen edge goes up, all alternatives go down. No inflation.\n\napply_outcome (heuristic) is available as fallback — only updates traversed edges, inflationary."
      },
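      {
        "title": "Policy-Gradient Update (illustrative sketch)",
        "body": "The zero-sum update above can be sketched as a softmax REINFORCE step. This helper is hypothetical, not the library's apply_outcome_pg:\n\nimport math\n\ndef pg_update(weights, chosen, outcome, lr=0.1):\n    # softmax over the node's outgoing edge weights\n    z = sum(math.exp(w) for w in weights.values())\n    probs = {k: math.exp(w) / z for k, w in weights.items()}\n    # chosen edge gets (1 - p), alternatives get -p, so deltas sum to 0\n    return {k: w + lr * outcome * ((1.0 if k == chosen else 0.0) - probs[k])\n            for k, w in weights.items()}"
      },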
      {
        "title": "Self-Learning",
        "body": "Agents learn from their own observed outcomes without human feedback (self-correct available as CLI/API alias):\n\nfrom openclawbrain.socket_client import OCBClient\n\nwith OCBClient('~/.openclawbrain/main/daemon.sock') as client:\n    # Agent detected failure\n    client.self_learn(\n        content='Always download artifacts before terminating',\n        fired_ids=['node1', 'node2'],\n        outcome=-1.0,\n        node_type='CORRECTION',   # penalize + inhibitory edges\n    )\n\n    # Agent observed success\n    client.self_learn(\n        content='Download-then-terminate works reliably',\n        fired_ids=['node1', 'node2'],\n        outcome=1.0,\n        node_type='TEACHING',     # reinforce + positive knowledge\n    )\n\nSituationoutcometypeEffectMistake-1.0CORRECTIONPenalize path + inhibitory edgesFact learned0.0TEACHINGInject knowledge onlySuccess+1.0TEACHINGReinforce path + inject knowledge"
      },
      {
        "title": "Self-Regulation (automatic, no tuning needed)",
        "body": "Homeostatic decay: half-life auto-adjusts to maintain 5-15% reflex edge ratio. Bounded 60-300 cycles.\nSynaptic scaling: soft per-node weight budget (5.0) prevents hub domination.\nTier hysteresis: habitual band 0.15-0.6 prevents threshold thrashing.\nSynaptic scaling (maintenance detail): soft per-node weight budget (5.0) with fourth-root scaling."
      },
      {
        "title": "Edge Tiers",
        "body": "TierWeightBehaviorReflex≥ 0.6Auto-followHabitual0.15 – 0.6Follow by weightDormant< 0.15SkippedInhibitory< -0.01Actively suppresses target"
      },
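      {
        "title": "Tier Classification (illustrative sketch)",
        "body": "A minimal sketch of how the tier thresholds above map an edge weight to a tier. This function is illustrative only, not part of the openclawbrain API:\n\ndef classify_tier(w):\n    if w >= 0.6:\n        return 'reflex'      # auto-follow\n    if w >= 0.15:\n        return 'habitual'    # follow by weight\n    if w < -0.01:\n        return 'inhibitory'  # actively suppresses target\n    return 'dormant'         # skipped"
      },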
      {
        "title": "Maintenance Pipeline",
        "body": "Runs every 30 min via daemon: health → decay → scale → split → merge → prune → connect\n\nDecay: exponential edge weight decay (adaptive half-life)\nScale: synaptic scaling on hub nodes\nSplit: runtime node splitting (inverse of merge) for bloated multi-topic nodes\nMerge: consolidate co-firing nodes (bidirectional weight ≥ 0.8)\nPrune: remove dead edges (|w| < 0.01) and orphan nodes"
      },
      {
        "title": "Maintenance",
        "body": "split_node: splits bloated nodes into focused children with embedding-based edge rewiring\nsuggest_splits: detects candidates by content length, hub degree, merge origin, edge variance"
      },
      {
        "title": "Text Chunking",
        "body": "split_workspace chunks files by type (.py → functions, .md → headers, .json → keys) then _rechunk_oversized ensures no chunk exceeds 12K chars. Large texts are split on blank lines → newlines → hard cut. No content is ever skipped or truncated."
      },
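      {
        "title": "Oversized-Chunk Splitting (illustrative sketch)",
        "body": "The blank-lines → newlines → hard-cut fallback described above can be sketched recursively. This is an assumed shape, not the actual _rechunk_oversized implementation:\n\ndef split_oversized(text, limit=12000):\n    if len(text) <= limit:\n        return [text]\n    for sep in ('\\n\\n', '\\n'):\n        cut = text.rfind(sep, 0, limit)\n        if cut > 0:\n            return [text[:cut]] + split_oversized(text[cut + len(sep):], limit)\n    # no separator found: hard cut, nothing is dropped\n    return [text[:limit]] + split_oversized(text[limit:], limit)"
      },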
      {
        "title": "Daemon (production use)",
        "body": "The daemon keeps state hot in memory behind a Unix socket (~500ms queries vs 5-8s from disk).\n\n# Start daemon (usually via launchd)\nopenclawbrain daemon --state ./brain/state.json --embed-model text-embedding-3-small"
      },
      {
        "title": "Daemon Methods (NDJSON over Unix socket)",
        "body": "MethodPurposequeryTraverse graph, return fired nodes + contextlearnApply outcome to fired nodesself_learnAgent-initiated learning (CORRECTION or TEACHING)self_correctAlias for self_learn (self-correct available as CLI/API alias)correctionHuman-initiated correction (uses chat_id lookback)injectAdd TEACHING/CORRECTION/DIRECTIVE nodemaintainRun maintenance taskshealthGraph health metricsinfoDaemon infosaveForce state writereloadReload state from diskshutdownClean shutdown"
      },
      {
        "title": "Socket Client",
        "body": "from openclawbrain.socket_client import OCBClient\n\nwith OCBClient('/path/to/daemon.sock') as c:\n    result = c.query('how do I deploy', chat_id='session-123')\n    c.learn(fired_nodes=['node1', 'node2'], outcome=1.0)\n    c.self_learn(content='lesson', outcome=-1.0, node_type='CORRECTION')\n    c.health()\n    c.maintain(tasks=['decay', 'prune'])"
      },
      {
        "title": "CLI Reference",
        "body": "openclawbrain init --workspace W --output O [--embedder openai] [--llm openai]\nopenclawbrain query TEXT --state S [--top N] [--json] [--chat-id CID]\nopenclawbrain learn --state S --outcome N --fired-ids a,b,c [--json]\nopenclawbrain self-learn --state S --content TEXT [--fired-ids a,b] [--outcome -1] [--type CORRECTION|TEACHING]\nopenclawbrain inject --state S --id ID --content TEXT [--type CORRECTION|TEACHING|DIRECTIVE]\nopenclawbrain health --state S\nopenclawbrain doctor --state S\nopenclawbrain info --state S\nopenclawbrain maintain --state S [--tasks decay,scale,split,merge,prune,connect]\nopenclawbrain status --state S [--json]\nopenclawbrain replay --state S --sessions S\nopenclawbrain merge --state S [--llm openai]\nopenclawbrain connect --state S\nopenclawbrain compact --state S\nopenclawbrain sync --workspace W --state S [--embedder openai]\nopenclawbrain daemon --state S [--embed-model text-embedding-3-small]"
      },
      {
        "title": "Traversal Defaults",
        "body": "ParameterDefaultbeam_width8max_hops30fire_threshold0.01reflex_threshold0.6habitual_range(0.15, 0.6)inhibitory_threshold-0.01max_context_chars20000 (in query_brain.py)"
      },
      {
        "title": "State Persistence",
        "body": "Atomic writes: temp → fsync → rename. Keeps .bak backup. Crash-safe.\nState format: state.json (graph + index + metadata)\nEmbedder identity stored in metadata; dimension mismatches are errors."
      },
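      {
        "title": "Atomic Write Pattern (illustrative sketch)",
        "body": "The temp → fsync → rename sequence above, including the .bak backup, can be sketched as follows. This is an assumed shape, not openclawbrain's actual writer:\n\nimport json, os, tempfile\n\ndef atomic_save(path, state):\n    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))\n    with os.fdopen(fd, 'w') as f:\n        json.dump(state, f)\n        f.flush()\n        os.fsync(f.fileno())   # force bytes to disk before rename\n    if os.path.exists(path):\n        os.replace(path, path + '.bak')  # keep previous state as backup\n    os.replace(tmp, path)      # atomic rename publishes the new state"
      },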
      {
        "title": "Integration with OpenClaw Agents",
        "body": "Add to your agent's AGENTS.md:\n\n## OpenClawBrain Memory Graph\n\n**Query:**\npython3 ~/openclawbrain/examples/openclaw_adapter/query_brain.py \\\n  ~/.openclawbrain/<brain>/state.json '<query>' --chat-id '<chat_id>' --json\n\n**Learn:** openclawbrain learn --state ~/.openclawbrain/<brain>/state.json --outcome 1.0 --fired-ids <ids>\n\n**Self-learn:** openclawbrain self-learn --state ~/.openclawbrain/<brain>/state.json \\\n  --content \"lesson\" --fired-ids <ids> --outcome -1.0 --type CORRECTION\n  # (self-correct available as CLI/API alias)\n\n**Health:** openclawbrain health --state ~/.openclawbrain/<brain>/state.json"
      },
      {
        "title": "Links",
        "body": "Paper: https://jonathangu.com/openclawbrain/\nBlog: https://jonathangu.com/openclawbrain/blog/v12.2.1/\nDerivation: https://jonathangu.com/openclawbrain/gu2016/\nGitHub: https://github.com/jonathangu/openclawbrain\nPyPI: pip install openclawbrain==12.2.1"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/jonathangu/openclawbrain",
    "publisherUrl": "https://clawhub.ai/jonathangu/openclawbrain",
    "owner": "jonathangu",
    "version": "12.2.1",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/openclawbrain",
    "downloadUrl": "https://openagent3.xyz/downloads/openclawbrain",
    "agentUrl": "https://openagent3.xyz/skills/openclawbrain/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclawbrain/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclawbrain/agent.md"
  }
}