Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Neuroscience-based multi-layer memory system for OpenClaw agents that improves context efficiency using semantic schemas, vector stores, and sleep-cycle consolidation.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install prompt:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade prompt:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
A neuroscience-inspired memory architecture for OpenClaw agents. Replaces flat file injection with sparse, semantic, frequency-gated memory loading.
```
memory/
├── INDEX.md          ← Hippocampus: topic router + cross-links
├── ANCHORS.md        ← Permanent high-significance event store
└── schemas/          ← Domain-specific semantic schemas (you create these)
memory_brain/
├── index_memory.py   ← Embeds schemas into LanceDB vector store
├── query_memory.py   ← Semantic similarity search
├── nrem.py           ← NREM sleep cycle (compression + anchor promotion)
├── rem.py            ← REM sleep cycle (LLM consolidation via Ollama)
└── vectorstore/      ← LanceDB database (auto-created)
```
```bash
# 1. Run the installer
python3 ~/.openclaw/workspace/skills/brain-cms/install.py

# 2. Index your schemas
cd ~/.openclaw/workspace/memory_brain
.venv/bin/python3 index_memory.py

# 3. Test retrieval
.venv/bin/python3 query_memory.py "your topic here" --sources-only
```
Boot sequence: Load MEMORY.md (lean core) + today's daily log. Nothing else.

When a topic appears: Read memory/INDEX.md → load only the relevant schemas (spreading activation). Check memory/ANCHORS.md for high-significance events.

For ambiguous topics, run semantic search:

```bash
memory_brain/.venv/bin/python3 memory_brain/query_memory.py "message text" --sources-only
```

Auto-schema creation, when a new significant project or domain appears:

1. Create memory/<topic>.md
2. Add it to INDEX.md with triggers + priority + cross-links
3. Re-index: `memory_brain/.venv/bin/python3 memory_brain/index_memory.py`

Sleep cycles:

```bash
# NREM - run on shutdown (~30s, no LLM)
cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 nrem.py

# REM - run weekly (2-5 min, uses local llama3.2:3b, free)
cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 rem.py
```
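The INDEX.md routing step above (spreading activation) boils down to matching a message against per-schema triggers and loading only the hits. This is an illustrative stdlib-only sketch, not the package's actual logic; the schema names, trigger keywords, and the dictionary format are all assumptions.

```python
# Hypothetical INDEX.md contents: schema file -> trigger keywords.
# The real INDEX.md format may differ; this mapping is an assumption.
INDEX = {
    "memory/project-alpha.md": ["alpha", "demo pipeline"],
    "memory/infra.md": ["server", "deploy", "docker"],
}

def route(message: str) -> list[str]:
    """Return only the schema files whose triggers appear in the message."""
    text = message.lower()
    return [schema for schema, triggers in INDEX.items()
            if any(trigger in text for trigger in triggers)]

hits = route("The demo pipeline broke after the last deploy")
# Only the matching schemas get loaded, instead of injecting every memory file.
```

The point of the design is that a session touching neither topic loads zero schema lines.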
| Layer | Files | When loaded | Purpose |
|---|---|---|---|
| Working | MEMORY.md + today's log | Every session | Core context |
| Episodic | memory/YYYY-MM-DD.md | Session boot | Recent events |
| Semantic | memory/*.md schemas | On trigger | Domain knowledge |
| Anchors | memory/ANCHORS.md | On CRITICAL topics | Permanent ground truth |
| Vector | memory_brain/vectorstore/ | On demand | Semantic search |
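To make the Vector layer concrete: semantic search is a nearest-neighbour lookup over schema embeddings. A stdlib-only sketch with toy 3-dimensional vectors; in the real system the embeddings come from Ollama and live in LanceDB, and the file names here are hypothetical.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for the LanceDB vector store.
store = {
    "memory/project-alpha.md": [0.9, 0.1, 0.0],
    "memory/infra.md": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]  # embedding of an ambiguous message
best = max(store, key=lambda name: cosine(store[name], query))
# `best` names the schema most semantically similar to the message.
```

This is why ambiguous topics still resolve to the right schema even when no literal trigger word matches.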
In any daily log, tag high-significance events:

```
[ANCHOR] Major demo success - full pipeline working end-to-end
```

NREM auto-promotes these to ANCHORS.md on the next shutdown.
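The promotion step can be pictured as a scan for `[ANCHOR]`-tagged lines. This is a minimal sketch of the idea only; the real nrem.py also compresses logs, and its actual parsing rules are not documented here.

```python
def promote_anchors(daily_log: str) -> list[str]:
    """Collect [ANCHOR]-tagged lines from a daily log (sketch of NREM promotion)."""
    anchors = []
    for line in daily_log.splitlines():
        line = line.strip()
        if line.startswith("[ANCHOR]"):
            # Strip the tag; the remainder is appended to ANCHORS.md.
            anchors.append(line.removeprefix("[ANCHOR]").strip())
    return anchors

log = """Worked on schema indexing.
[ANCHOR] Major demo success - full pipeline working end-to-end
Misc cleanup."""
promoted = promote_anchors(log)
```

Untagged lines stay in the episodic layer and are subject to compression.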
A typical MEMORY.md injects 150-300 lines into every session. With Brain CMS: a ~50-line core plus schemas loaded only when relevant. Estimated savings: 40-60% fewer context tokens per session.
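A quick sanity check on the savings estimate, using illustrative numbers (the average schema load is an assumption; only the 150-300 line and ~50-line figures come from the text above):

```python
typical = 225        # midpoint of the 150-300 lines injected every session
core = 50            # lean MEMORY.md core
schemas_loaded = 60  # assumed average schema lines pulled in on trigger

savings = 1 - (core + schemas_loaded) / typical
# About half the context lines saved, inside the claimed 40-60% range.
```

Sessions that trigger no schemas save even more, since only the 50-line core is injected.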
- Python 3.10+
- Ollama (for embeddings + REM consolidation)
- 500MB+ storage for the vector store and models
- lancedb, numpy, pyarrow, requests (auto-installed)
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.