# Send OpenClawBrain to your agent
Because this item does not currently return a direct package file, use the source page and any available docs to guide the install.
## Fast path
- Open the source page via Open source listing.
- If you can obtain the package, extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the source page and extracted files.
## Suggested prompts
### New install

```text
I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.
```
### Upgrade existing

```text
I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "openclawbrain",
    "name": "OpenClawBrain",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/jonathangu/openclawbrain",
    "canonicalUrl": "https://clawhub.ai/jonathangu/openclawbrain",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/openclawbrain",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclawbrain",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md"
    ],
    "downloadMode": "manual_only",
    "sourceHealth": {
      "source": "tencent",
      "slug": "openclawbrain",
      "status": "source_issue",
      "reason": "not_found",
      "recommendedAction": "review_source",
      "checkedAt": "2026-05-02T22:04:17.563Z",
      "expiresAt": "2026-05-03T22:04:17.563Z",
      "httpStatus": 404,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclawbrain",
      "contentType": "text/plain",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclawbrain",
        "contentDisposition": null,
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "openclawbrain"
      },
      "scope": "item",
      "summary": "Known item issue.",
      "detail": "This item's current download entry is known to bounce back to a listing or homepage instead of returning a package file.",
      "primaryActionLabel": "Open source listing",
      "primaryActionHref": "https://clawhub.ai/jonathangu/openclawbrain"
    },
    "validation": {
      "installChecklist": [
        "Open the source listing and confirm there is a real package or setup artifact available.",
        "Review SKILL.md before asking your agent to continue.",
        "Treat this source as manual setup until the upstream download flow is fixed."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/openclawbrain",
    "downloadUrl": "https://openagent3.xyz/downloads/openclawbrain",
    "agentUrl": "https://openagent3.xyz/skills/openclawbrain/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclawbrain/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclawbrain/agent.md"
  }
}
```
## Documentation

### OpenClawBrain v12.2.1

Learned retrieval graph for AI agents. Nodes are document chunks, edges are mutable weighted pointers. The graph learns from outcomes using policy-gradient updates (REINFORCE) and self-regulates via homeostatic decay, synaptic scaling, and tier hysteresis.

### Install

```shell
pip install openclawbrain              # core (pure Python, zero deps)
pip install "openclawbrain[openai]"    # with OpenAI embeddings
```

### Quick Start

```shell
# Build a brain from workspace files
openclawbrain init --workspace ./my-workspace --output ./brain --embedder openai

# Query
openclawbrain query "how do I deploy" --state ./brain/state.json --json

# Learn from outcome (+1 good, -1 bad)
openclawbrain learn --state ./brain/state.json --outcome 1.0 --fired-ids "node1,node2"

# Self-learn (agent-initiated, no human needed)
openclawbrain self-learn --state ./brain/state.json \
  --content "Always download artifacts before terminating instances" \
  --fired-ids "node1,node2" --outcome -1.0 --type CORRECTION

# Health check
openclawbrain doctor --state ./brain/state.json
```

### Learning Rule: Policy Gradient (default)

The default rule is `apply_outcome_pg` (REINFORCE). At each node, the update redistributes probability mass across ALL outgoing edges (the deltas sum ≈ 0): the chosen edge goes up and every alternative goes down, so total weight does not inflate.

`apply_outcome` (heuristic) is available as a fallback; it only updates traversed edges and is inflationary.
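The zero-sum redistribution property can be sketched with a toy softmax-policy update. This is a hypothetical helper illustrating the idea, not the library's actual `apply_outcome_pg`:

```python
import math

def pg_update(weights, chosen, outcome, lr=0.1):
    """Toy REINFORCE-style update over one node's outgoing edges.

    Turn the weights into a softmax policy, then move the chosen edge
    by (1 - p_chosen) and every alternative by -p_alt. Because the
    gradients sum to exactly zero, the per-node deltas cancel and
    total weight never inflates.
    """
    exps = [math.exp(w) for w in weights]
    z = sum(exps)
    probs = [e / z for e in exps]
    new_weights = []
    for i, (w, p) in enumerate(zip(weights, probs)):
        grad = (1.0 - p) if i == chosen else -p
        new_weights.append(w + lr * outcome * grad)
    return new_weights
```

With a positive outcome the chosen edge rises while all alternatives fall by the same total amount; a negative outcome inverts this.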

### Self-Learning

Agents learn from their own observed outcomes without human feedback (self-correct available as CLI/API alias):

```python
from openclawbrain.socket_client import OCBClient

with OCBClient('~/.openclawbrain/main/daemon.sock') as client:
    # Agent detected failure
    client.self_learn(
        content='Always download artifacts before terminating',
        fired_ids=['node1', 'node2'],
        outcome=-1.0,
        node_type='CORRECTION',   # penalize + inhibitory edges
    )

    # Agent observed success
    client.self_learn(
        content='Download-then-terminate works reliably',
        fired_ids=['node1', 'node2'],
        outcome=1.0,
        node_type='TEACHING',     # reinforce + positive knowledge
    )
```

| Situation | Outcome | Type | Effect |
| --- | --- | --- | --- |
| Mistake | -1.0 | CORRECTION | Penalize path + inhibitory edges |
| Fact learned | 0.0 | TEACHING | Inject knowledge only |
| Success | +1.0 | TEACHING | Reinforce path + inject knowledge |

### Self-Regulation (automatic, no tuning needed)

- Homeostatic decay: the half-life auto-adjusts to maintain a 5-15% reflex edge ratio, bounded to 60-300 cycles.
- Synaptic scaling: a soft per-node weight budget (5.0) with fourth-root scaling prevents hub domination.
- Tier hysteresis: the habitual band (0.15-0.6) prevents threshold thrashing.
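A minimal sketch of how such a homeostatic half-life controller could work, assuming a simple multiplicative adjustment (the real controller's mechanics are not documented here; `adapt_half_life` is a hypothetical helper):

```python
def adapt_half_life(half_life, reflex_ratio,
                    lo=0.05, hi=0.15, min_hl=60, max_hl=300):
    """Toy homeostatic controller for the decay half-life.

    If too many edges sit in the reflex tier, decay faster (shorter
    half-life); if too few, decay slower. Bounded to the 60-300 cycle
    range stated in the docs.
    """
    if reflex_ratio > hi:
        half_life *= 0.9   # too many reflex edges -> forget faster
    elif reflex_ratio < lo:
        half_life *= 1.1   # too few reflex edges -> retain longer
    return max(min_hl, min(max_hl, half_life))
```

The point of the design is that no one tunes the half-life by hand: the reflex-edge ratio is the feedback signal.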

### Edge Tiers

| Tier | Weight | Behavior |
| --- | --- | --- |
| Reflex | ≥ 0.6 | Auto-follow |
| Habitual | 0.15 – 0.6 | Follow by weight |
| Dormant | < 0.15 | Skipped |
| Inhibitory | < -0.01 | Actively suppresses target |
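The tier semantics can be sketched as a hop-selection function. This is assumed behavior under the thresholds above; `select_hops` is a hypothetical helper, not the library's traversal code:

```python
def select_hops(edges, beam_width=8, reflex=0.6,
                habitual_lo=0.15, inhib=-0.01):
    """Toy tier-based hop selection.

    `edges` is a list of (target, weight) pairs, possibly contributed
    by several fired source nodes. Inhibitory edges (< -0.01) suppress
    their targets entirely; reflex edges (>= 0.6) are auto-followed;
    habitual edges (0.15-0.6) compete by weight within the beam;
    dormant edges (< 0.15) are skipped.
    """
    suppressed = {t for t, w in edges if w < inhib}
    reflexes = [t for t, w in edges if w >= reflex and t not in suppressed]
    habituals = [(w, t) for t, w in edges
                 if habitual_lo <= w < reflex and t not in suppressed]
    ranked = reflexes + [t for w, t in sorted(habituals, reverse=True)]
    return ranked[:beam_width]
```

Note how an inhibitory edge can veto a target even when another source node points at it with reflex-level weight.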

### Maintenance Pipeline

Runs every 30 min via the daemon: health → decay → scale → split → merge → prune → connect

- Decay: exponential edge-weight decay (adaptive half-life)
- Scale: synaptic scaling on hub nodes
- Split: runtime node splitting (inverse of merge) for bloated multi-topic nodes
- Merge: consolidate co-firing nodes (bidirectional weight ≥ 0.8)
- Prune: remove dead edges (|w| < 0.01) and orphan nodes
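The decay and prune steps compose naturally; a sketch under the documented constants (hypothetical helper, not the library's maintenance code):

```python
def decay_and_prune(edges, cycles=1, half_life=150.0, floor=0.01):
    """Toy version of the decay + prune maintenance steps.

    Decay multiplies every edge weight by 0.5 ** (cycles / half_life);
    prune then drops edges whose magnitude falls below the documented
    0.01 floor. `edges` maps edge id -> weight.
    """
    factor = 0.5 ** (cycles / half_life)
    decayed = {e: w * factor for e, w in edges.items()}
    return {e: w for e, w in decayed.items() if abs(w) >= floor}
```

After one full half-life of cycles, an edge at 0.8 sits at 0.4, while a near-dead edge slips under the floor and disappears.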

### Maintenance

- split_node: splits bloated nodes into focused children with embedding-based edge rewiring
- suggest_splits: detects candidates by content length, hub degree, merge origin, edge variance

### Text Chunking

`split_workspace` chunks files by type (.py → functions, .md → headers, .json → keys), then `_rechunk_oversized` ensures no chunk exceeds 12K chars. Large texts are split on blank lines → newlines → hard cut. No content is ever skipped or truncated.
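The blank-lines → newlines → hard-cut fallback can be sketched like this. It is a simplified stand-in for the described behavior, not the actual `_rechunk_oversized` implementation:

```python
def rechunk_oversized(text, limit=12_000):
    """Toy oversized-chunk splitter: blank lines, then newlines,
    then a hard character cut, so nothing is skipped or truncated."""
    if len(text) <= limit:
        return [text]
    for sep in ("\n\n", "\n"):
        parts = text.split(sep)
        if all(len(p) <= limit for p in parts):
            # greedily repack parts up to the limit
            chunks, cur = [], ""
            for p in parts:
                cand = cur + sep + p if cur else p
                if len(cand) <= limit:
                    cur = cand
                else:
                    chunks.append(cur)
                    cur = p
            if cur:
                chunks.append(cur)
            return chunks
    # no separator keeps every part under the limit: hard cut
    return [text[i:i + limit] for i in range(0, len(text), limit)]
```

Every character of the input survives into some chunk; only separators at chunk boundaries are consumed.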

### Daemon (production use)

The daemon keeps state hot in memory behind a Unix socket (~500ms queries vs 5-8s from disk).

```shell
# Start daemon (usually via launchd)
openclawbrain daemon --state ./brain/state.json --embed-model text-embedding-3-small
```

### Daemon Methods (NDJSON over Unix socket)

| Method | Purpose |
| --- | --- |
| query | Traverse graph, return fired nodes + context |
| learn | Apply outcome to fired nodes |
| self_learn | Agent-initiated learning (CORRECTION or TEACHING) |
| self_correct | Alias for self_learn |
| correction | Human-initiated correction (uses chat_id lookback) |
| inject | Add TEACHING/CORRECTION/DIRECTIVE node |
| maintain | Run maintenance tasks |
| health | Graph health metrics |
| info | Daemon info |
| save | Force state write |
| reload | Reload state from disk |
| shutdown | Clean shutdown |

### Socket Client

```python
from openclawbrain.socket_client import OCBClient

with OCBClient('/path/to/daemon.sock') as c:
    result = c.query('how do I deploy', chat_id='session-123')
    c.learn(fired_nodes=['node1', 'node2'], outcome=1.0)
    c.self_learn(content='lesson', outcome=-1.0, node_type='CORRECTION')
    c.health()
    c.maintain(tasks=['decay', 'prune'])
```

### CLI Reference

```shell
openclawbrain init --workspace W --output O [--embedder openai] [--llm openai]
openclawbrain query TEXT --state S [--top N] [--json] [--chat-id CID]
openclawbrain learn --state S --outcome N --fired-ids a,b,c [--json]
openclawbrain self-learn --state S --content TEXT [--fired-ids a,b] [--outcome -1] [--type CORRECTION|TEACHING]
openclawbrain inject --state S --id ID --content TEXT [--type CORRECTION|TEACHING|DIRECTIVE]
openclawbrain health --state S
openclawbrain doctor --state S
openclawbrain info --state S
openclawbrain maintain --state S [--tasks decay,scale,split,merge,prune,connect]
openclawbrain status --state S [--json]
openclawbrain replay --state S --sessions S
openclawbrain merge --state S [--llm openai]
openclawbrain connect --state S
openclawbrain compact --state S
openclawbrain sync --workspace W --state S [--embedder openai]
openclawbrain daemon --state S [--embed-model text-embedding-3-small]
```

### Traversal Defaults

| Parameter | Default |
| --- | --- |
| beam_width | 8 |
| max_hops | 30 |
| fire_threshold | 0.01 |
| reflex_threshold | 0.6 |
| habitual_range | (0.15, 0.6) |
| inhibitory_threshold | -0.01 |
| max_context_chars | 20000 (in query_brain.py) |

### State Persistence

- Atomic writes: temp → fsync → rename; keeps a .bak backup; crash-safe.
- State format: state.json (graph + index + metadata)
- Embedder identity is stored in metadata; dimension mismatches are errors.
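The temp → fsync → rename sequence can be sketched in a few lines. This is a simplified stand-in for the described behavior (in particular, the two-step backup rename here leaves a brief window between renames that a real implementation may handle differently):

```python
import json
import os

def atomic_save(state, path):
    """Toy crash-safe save: write to a temp file, fsync so the bytes
    reach disk, keep the previous state as .bak, then rename into
    place (os.replace is atomic on POSIX filesystems)."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    if os.path.exists(path):
        os.replace(path, path + ".bak")  # keep one backup generation
    os.replace(tmp, path)
```

A crash before the final rename leaves the previous state (or its .bak) intact rather than a half-written file.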

### Integration with OpenClaw Agents

Add to your agent's AGENTS.md:

```markdown
## OpenClawBrain Memory Graph

**Query:**
python3 ~/openclawbrain/examples/openclaw_adapter/query_brain.py \
  ~/.openclawbrain/<brain>/state.json '<query>' --chat-id '<chat_id>' --json

**Learn:** openclawbrain learn --state ~/.openclawbrain/<brain>/state.json --outcome 1.0 --fired-ids <ids>

**Self-learn:** openclawbrain self-learn --state ~/.openclawbrain/<brain>/state.json \
  --content "lesson" --fired-ids <ids> --outcome -1.0 --type CORRECTION
  # (self-correct available as CLI/API alias)

**Health:** openclawbrain health --state ~/.openclawbrain/<brain>/state.json
```

### Links

- Paper: https://jonathangu.com/openclawbrain/
- Blog: https://jonathangu.com/openclawbrain/blog/v12.2.1/
- Derivation: https://jonathangu.com/openclawbrain/gu2016/
- GitHub: https://github.com/jonathangu/openclawbrain
- PyPI: pip install openclawbrain==12.2.1
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: jonathangu
- Version: 12.2.1
## Source health
- Status: source_issue
- Summary: Known item issue.
- Detail: This item's current download entry is known to bounce back to a listing or homepage instead of returning a package file.
- Health scope: item
- Reason: not_found
- Checked at: 2026-05-02T22:04:17.563Z
- Expires at: 2026-05-03T22:04:17.563Z
- Recommended action: Open source listing
## Links
- [Detail page](https://openagent3.xyz/skills/openclawbrain)
- [Send to Agent page](https://openagent3.xyz/skills/openclawbrain/agent)
- [JSON manifest](https://openagent3.xyz/skills/openclawbrain/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/openclawbrain/agent.md)
- [Download page](https://openagent3.xyz/downloads/openclawbrain)