
OpenClawBrain

Learned memory graph for AI agents. Policy-gradient routing over document chunks with self-learning, self-regulation, and autonomous correction. Pure Python...


⬇ 0 downloads · ★ 0 stars · Unverified but indexed


Known item issue

This item's current download entry is known to bounce back to a listing or homepage instead of returning a package file.

Quick setup
  1. Open the source page and confirm the package flow manually.
  2. Review SKILL.md if you can obtain the files.
  3. Treat this source as manual setup until the download is verified.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Manual review
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: SKILL.md

Validation

  • Open the source listing and confirm there is a real package or setup artifact available.
  • Review SKILL.md before asking your agent to continue.
  • Treat this source as manual setup until the upstream download flow is fixed.

Install with your agent

Agent handoff

Use the source page and any available docs to guide the install because the item currently does not return a direct package file.

  1. Open the source page via the Open source listing link.
  2. If you can obtain the package, extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the source page and extracted files.
New install

I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.

Upgrade existing

I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 12.2.1

Documentation

Primary doc: SKILL.md (18 sections)

OpenClawBrain v12.2.1

Learned retrieval graph for AI agents. Nodes are document chunks; edges are mutable, weighted pointers. The graph learns from outcomes using policy-gradient updates (REINFORCE) and self-regulates via homeostatic decay, synaptic scaling, and tier hysteresis.

Install

pip install openclawbrain             # core (pure Python, zero deps)
pip install "openclawbrain[openai]"   # with OpenAI embeddings

Quick Start

# Build a brain from workspace files
openclawbrain init --workspace ./my-workspace --output ./brain --embedder openai

# Query
openclawbrain query "how do I deploy" --state ./brain/state.json --json

# Learn from outcome (+1 good, -1 bad)
openclawbrain learn --state ./brain/state.json --outcome 1.0 --fired-ids "node1,node2"

# Self-learn (agent-initiated, no human needed)
openclawbrain self-learn --state ./brain/state.json \
  --content "Always download artifacts before terminating instances" \
  --fired-ids "node1,node2" --outcome -1.0 --type CORRECTION

# Health check
openclawbrain doctor --state ./brain/state.json

Learning Rule: Policy Gradient (default)

Default is apply_outcome_pg (REINFORCE). At each node, the update redistributes probability mass across ALL outgoing edges (the deltas sum to ≈ 0): the chosen edge goes up and every alternative goes down, so weights never inflate. apply_outcome (heuristic) remains available as a fallback; it updates only the traversed edges and is inflationary.
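The redistribution is easy to state concretely. The sketch below is a minimal illustration of that update rule, not the library's internals: pg_update is a hypothetical helper, and apply_outcome_pg is the actual entry point.

```python
import math

def pg_update(out_weights, chosen, reward, lr=0.1):
    """REINFORCE-style update over ALL outgoing edges of one node.

    Treat softmax(weights) as the routing policy. The gradient for
    edge i is reward * (1[i == chosen] - p_i), so the per-node deltas
    sum to ~0: the chosen edge rises, alternatives fall, no inflation.
    """
    mx = max(out_weights.values())  # subtract max for numerical stability
    exps = {e: math.exp(w - mx) for e, w in out_weights.items()}
    z = sum(exps.values())
    probs = {e: v / z for e, v in exps.items()}
    return {
        e: w + lr * reward * ((1.0 if e == chosen else 0.0) - probs[e])
        for e, w in out_weights.items()
    }

# Deltas cancel exactly: sum over edges of (1[i == chosen] - p_i) = 0.
print(pg_update({"a": 0.5, "b": 0.2, "c": 0.1}, chosen="a", reward=1.0))
```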

Self-Learning

Agents learn from their own observed outcomes, with no human feedback required (self-correct is available as a CLI/API alias):

from openclawbrain.socket_client import OCBClient

with OCBClient('~/.openclawbrain/main/daemon.sock') as client:
    # Agent detected failure
    client.self_learn(
        content='Always download artifacts before terminating',
        fired_ids=['node1', 'node2'],
        outcome=-1.0,
        node_type='CORRECTION',  # penalize + inhibitory edges
    )

    # Agent observed success
    client.self_learn(
        content='Download-then-terminate works reliably',
        fired_ids=['node1', 'node2'],
        outcome=1.0,
        node_type='TEACHING',  # reinforce + positive knowledge
    )

| Situation | outcome | type | Effect |
| --- | --- | --- | --- |
| Mistake | -1.0 | CORRECTION | Penalize path + inhibitory edges |
| Fact learned | 0.0 | TEACHING | Inject knowledge only |
| Success | +1.0 | TEACHING | Reinforce path + inject knowledge |

Self-Regulation (automatic, no tuning needed)

  • Homeostatic decay: half-life auto-adjusts to maintain a 5-15% reflex edge ratio, bounded to 60-300 cycles.
  • Synaptic scaling: a soft per-node weight budget (5.0) with fourth-root scaling prevents hub domination.
  • Tier hysteresis: the habitual band (0.15-0.6) prevents threshold thrashing.
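A minimal sketch of how such a homeostatic controller could work, assuming a simple multiplicative nudge (the 0.9/1.1 gains are illustrative assumptions; only the 5-15% target and the 60-300 bounds come from the docs):

```python
def adjust_half_life(half_life, reflex_ratio, lo=0.05, hi=0.15, bounds=(60, 300)):
    """Nudge the decay half-life toward a 5-15% reflex-edge ratio.

    Too many reflex edges -> decay faster (shorter half-life);
    too few -> decay slower. Clamped to the documented 60-300 cycles.
    """
    if reflex_ratio > hi:
        half_life *= 0.9
    elif reflex_ratio < lo:
        half_life *= 1.1
    return min(bounds[1], max(bounds[0], half_life))
```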

Edge Tiers

| Tier | Weight | Behavior |
| --- | --- | --- |
| Reflex | ≥ 0.6 | Auto-follow |
| Habitual | 0.15 – 0.6 | Follow by weight |
| Dormant | < 0.15 | Skipped |
| Inhibitory | < -0.01 | Actively suppresses target |
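The thresholds map straight onto a classifier; a sketch using the documented cutoffs (edge_tier is a hypothetical name, not a library function):

```python
def edge_tier(weight, reflex=0.6, habitual_low=0.15, inhibitory=-0.01):
    """Map an edge weight to its tier using the documented thresholds."""
    if weight < inhibitory:
        return "inhibitory"  # actively suppresses its target
    if weight >= reflex:
        return "reflex"      # auto-followed
    if weight >= habitual_low:
        return "habitual"    # followed by weight
    return "dormant"         # skipped during traversal
```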

Maintenance Pipeline

Runs every 30 min via the daemon: health → decay → scale → split → merge → prune → connect

  • Decay: exponential edge-weight decay (adaptive half-life)
  • Scale: synaptic scaling on hub nodes
  • Split: runtime node splitting (inverse of merge) for bloated multi-topic nodes
  • Merge: consolidate co-firing nodes (bidirectional weight ≥ 0.8)
  • Prune: remove dead edges (|w| < 0.01) and orphan nodes
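The daemon runs this schedule for you. If you are not running the daemon, a loop over the documented maintain CLI approximates it; the state path and the bare-bones scheduling below are placeholders:

```python
import subprocess
import time

STATE = "./brain/state.json"  # placeholder path
TASKS = "decay,scale,split,merge,prune,connect"  # task names from the CLI reference

while True:
    subprocess.run(
        ["openclawbrain", "maintain", "--state", STATE, "--tasks", TASKS],
        check=True,
    )
    time.sleep(30 * 60)  # same 30-minute cadence as the daemon
```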

Maintenance

  • split_node: splits bloated nodes into focused children with embedding-based edge rewiring
  • suggest_splits: detects split candidates by content length, hub degree, merge origin, and edge variance

Text Chunking

split_workspace chunks files by type (.py → functions, .md → headers, .json → keys), then _rechunk_oversized ensures no chunk exceeds 12K chars. Oversized texts are split on blank lines, then on newlines, then with a hard cut. No content is ever skipped or truncated.
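A sketch of that fallback strategy, assuming a hypothetical rechunk helper (the real splitter is _rechunk_oversized and may differ in detail):

```python
def rechunk(text, limit=12_000):
    """Split oversized text: blank lines first, then newlines, then hard cut.

    Every character ends up in some chunk; nothing is skipped or truncated.
    """
    if len(text) <= limit:
        return [text]
    for sep in ("\n\n", "\n"):
        parts = text.split(sep)
        if len(parts) == 1:
            continue  # separator absent; try the finer one
        chunks, buf = [], ""
        for part in parts:
            cand = f"{buf}{sep}{part}" if buf else part
            if len(cand) > limit and buf:
                chunks.extend(rechunk(buf, limit))  # flush what fits so far
                buf = part
            else:
                buf = cand
        if buf:
            chunks.extend(rechunk(buf, limit))
        return chunks
    return [text[i:i + limit] for i in range(0, len(text), limit)]  # hard cut
```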

Daemon (production use)

The daemon keeps state hot in memory behind a Unix socket (~500 ms queries vs 5-8 s from disk).

# Start daemon (usually via launchd)
openclawbrain daemon --state ./brain/state.json --embed-model text-embedding-3-small

Daemon Methods (NDJSON over Unix socket)

| Method | Purpose |
| --- | --- |
| query | Traverse graph, return fired nodes + context |
| learn | Apply outcome to fired nodes |
| self_learn | Agent-initiated learning (CORRECTION or TEACHING) |
| self_correct | Alias for self_learn |
| correction | Human-initiated correction (uses chat_id lookback) |
| inject | Add TEACHING/CORRECTION/DIRECTIVE node |
| maintain | Run maintenance tasks |
| health | Graph health metrics |
| info | Daemon info |
| save | Force state write |
| reload | Reload state from disk |
| shutdown | Clean shutdown |
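In practice you should use the bundled OCBClient (next section), but the wire format is plain enough to sketch. The {"method": ...} request shape below is an assumption, not a documented schema:

```python
import json
import socket

def raw_call(sock_path, method, **params):
    """One NDJSON request/response over the daemon's Unix socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall((json.dumps({"method": method, **params}) + "\n").encode())
        buf = b""
        while not buf.endswith(b"\n"):  # NDJSON: one JSON object per line
            data = s.recv(65536)
            if not data:
                break
            buf += data
    return json.loads(buf)

print(raw_call("/path/to/daemon.sock", "health"))
```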

Socket Client

from openclawbrain.socket_client import OCBClient

with OCBClient('/path/to/daemon.sock') as c:
    result = c.query('how do I deploy', chat_id='session-123')
    c.learn(fired_nodes=['node1', 'node2'], outcome=1.0)
    c.self_learn(content='lesson', outcome=-1.0, node_type='CORRECTION')
    c.health()
    c.maintain(tasks=['decay', 'prune'])

CLI Reference

openclawbrain init --workspace W --output O [--embedder openai] [--llm openai]
openclawbrain query TEXT --state S [--top N] [--json] [--chat-id CID]
openclawbrain learn --state S --outcome N --fired-ids a,b,c [--json]
openclawbrain self-learn --state S --content TEXT [--fired-ids a,b] [--outcome -1] [--type CORRECTION|TEACHING]
openclawbrain inject --state S --id ID --content TEXT [--type CORRECTION|TEACHING|DIRECTIVE]
openclawbrain health --state S
openclawbrain doctor --state S
openclawbrain info --state S
openclawbrain maintain --state S [--tasks decay,scale,split,merge,prune,connect]
openclawbrain status --state S [--json]
openclawbrain replay --state S --sessions S
openclawbrain merge --state S [--llm openai]
openclawbrain connect --state S
openclawbrain compact --state S
openclawbrain sync --workspace W --state S [--embedder openai]
openclawbrain daemon --state S [--embed-model text-embedding-3-small]

Traversal Defaults

| Parameter | Default |
| --- | --- |
| beam_width | 8 |
| max_hops | 30 |
| fire_threshold | 0.01 |
| reflex_threshold | 0.6 |
| habitual_range | (0.15, 0.6) |
| inhibitory_threshold | -0.01 |
| max_context_chars | 20000 (in query_brain.py) |
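To make the thresholds concrete, here is an illustrative single-hop filter under the defaults. The library's actual beam traversal is more involved; candidate_edges is a hypothetical name:

```python
def candidate_edges(out_weights, fire=0.01, reflex=0.6, inhibitory=-0.01, beam=8):
    """One illustrative hop: which outgoing edges can fire.

    Dormant edges (< fire_threshold) are skipped, reflex edges (>= 0.6)
    are auto-followed, inhibitory edges (< -0.01) suppress their targets,
    and the rest compete by weight within the beam.
    """
    suppressed = {e for e, w in out_weights.items() if w < inhibitory}
    live = sorted(
        ((e, w) for e, w in out_weights.items() if w >= fire),
        key=lambda ew: -ew[1],
    )
    auto = [e for e, w in live if w >= reflex]
    ranked = [e for e, _ in live if e not in auto]
    return (auto + ranked)[:beam], suppressed
```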

State Persistence

Atomic writes: temp → fsync → rename. Keeps a .bak backup. Crash-safe.
State format: state.json (graph + index + metadata).
Embedder identity is stored in metadata; dimension mismatches are errors.
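The temp → fsync → rename recipe is standard; a minimal sketch of it (save_state is a hypothetical helper, not the library's writer):

```python
import json
import os
import shutil
import tempfile

def save_state(state, path):
    """Atomic save: temp file in the same dir -> fsync -> rename, keeping .bak."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())               # data hits disk before the rename
        if os.path.exists(path):
            shutil.copy2(path, path + ".bak")  # keep the previous state
        os.replace(tmp, path)                  # atomic rename on POSIX
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)                     # clean up only if rename failed
```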

Integration with OpenClaw Agents

Add to your agent's AGENTS.md:

## OpenClawBrain Memory Graph

**Query:**
python3 ~/openclawbrain/examples/openclaw_adapter/query_brain.py \
  ~/.openclawbrain/<brain>/state.json '<query>' --chat-id '<chat_id>' --json

**Learn:**
openclawbrain learn --state ~/.openclawbrain/<brain>/state.json --outcome 1.0 --fired-ids <ids>

**Self-learn:** (self-correct is available as a CLI/API alias)
openclawbrain self-learn --state ~/.openclawbrain/<brain>/state.json \
  --content "lesson" --fired-ids <ids> --outcome -1.0 --type CORRECTION

**Health:**
openclawbrain health --state ~/.openclawbrain/<brain>/state.json

Links

Paper: https://jonathangu.com/openclawbrain/
Blog: https://jonathangu.com/openclawbrain/blog/v12.2.1/
Derivation: https://jonathangu.com/openclawbrain/gu2016/
GitHub: https://github.com/jonathangu/openclawbrain
PyPI: pip install openclawbrain==12.2.1

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 doc
  • SKILL.md (primary doc)