Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Context-aware memory for AI agents with dual retrieval modes — fast vector search or curated Focus Agent synthesis. SQLite backend, zero configuration, local embeddings.
Smart Memory v2 is a persistent cognitive memory runtime, not a legacy vector-memory CLI.

Core runtime:
- Node adapter: smart-memory/index.js
- Local API: server.py (FastAPI)
- Orchestrator: cognitive_memory_system.py
- Structured long-term memory (episodic, semantic, belief, goal)
- Entity-aware retrieval and reranking
- Hot working memory
- Background cognition (reflection, consolidation, decay, conflict resolution)
- Strict token-bounded prompt composition
- Observability endpoints (/health, /memories, /memory/{id}, /insights/pending)
Use the native OpenClaw skill package:
- Skill package: skills/smart-memory-v25/index.js
- Optional hook helper: skills/smart-memory-v25/openclaw-hooks.js
- Skill descriptor: skills/smart-memory-v25/SKILL.md

Primary exports:
- createSmartMemorySkill(options)
- createOpenClawHooks({ skill, agentIdentity, summarizeWithLLM })
memory_search
- Purpose: query long-term memory.
- Input:
  - query (string, required)
  - type (all|semantic|episodic|belief|goal, default all)
  - limit (number, default 5)
  - min_relevance (number, default 0.6)
- Behavior: checks /health first, then retrieves via /retrieve and returns formatted memory results.

memory_commit
- Purpose: explicitly persist important facts/decisions/beliefs/goals.
- Input:
  - content (string, required)
  - type (semantic|episodic|belief|goal, required)
  - importance (1-10, default 5)
  - tags (string array, optional)
- Behavior:
  - checks /health first
  - auto-tags if tags are missing (working_question, decision heuristics)
  - commits are serialized (sequential) to protect local CPU embedding throughput
  - if the server is unreachable, the payload is queued to .memory_retry_queue.json
  - the unreachable response is explicit: "Memory commit failed - server unreachable. Queued for retry."

memory_insights
- Purpose: surface pending background insights.
- Input:
  - limit (number, default 10)
- Behavior: checks /health first, calls /insights/pending, returns a formatted insight list.
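For illustration, the memory_search input contract can be expressed as a small normalizer that applies the documented defaults. This helper is hypothetical (normalizeSearchInput is not an export of the package); it only encodes the schema listed above.

```javascript
// Hypothetical normalizer for the memory_search input described above,
// applying the documented defaults. Not part of the shipped package.
function normalizeSearchInput(input) {
  if (typeof input.query !== "string" || input.query.length === 0) {
    throw new Error("query is required");
  }
  return {
    query: input.query,
    type: input.type ?? "all", // all|semantic|episodic|belief|goal
    limit: input.limit ?? 5,
    min_relevance: input.min_relevance ?? 0.6,
  };
}
```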
- Mandatory health gate before each tool call (GET /health).
- The retry queue flushes automatically on healthy tool calls and on heartbeat.
- Heartbeat supports automatic retry recovery and background maintenance.
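Taken together, the health gate, serialized commits, and retry-queue flush could be modeled roughly like this. This is an illustrative sketch, not the package source: transport stands in for the real HTTP layer, and an in-memory array stands in for .memory_retry_queue.json.

```javascript
// Sketch of the commit path: a health gate before each call, strictly
// sequential commits, a retry queue for failures, and a flush of the
// queue on the next healthy call. `transport` is a stand-in.
function createCommitter(transport) {
  const retryQueue = []; // stands in for .memory_retry_queue.json
  let chain = Promise.resolve(); // serializes commits

  async function commitOnce(payload) {
    // Mandatory health gate before the tool call.
    if (!(await transport.health())) {
      retryQueue.push(payload);
      return "Memory commit failed - server unreachable. Queued for retry.";
    }
    // Healthy call: flush anything queued by earlier failures first.
    while (retryQueue.length > 0) {
      await transport.commit(retryQueue.shift());
    }
    await transport.commit(payload);
    return "ok";
  }

  return {
    // Each commit waits for the previous one (protects CPU embedding throughput).
    commit(payload) {
      chain = chain.then(() => commitOnce(payload));
      return chain;
    },
    queued: () => retryQueue.length,
  };
}
```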
The v2.5 skill supports episodic session arc capture:
- checkpoint capture every 20 turns
- session-end capture during teardown/reset

Flow:
1. Extract recent conversation turns (up to 20).
2. Run summarization with the prompt: "Summarize this session arc: What was the goal? What approaches were tried? What decisions were made? What remains open?"
3. Persist the summary through an internal memory_commit as:
   - type: "episodic"
   - tags: ["session_arc", "YYYY-MM-DD"]
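The checkpoint flow above might look roughly like this in code. createArcTracker, summarize, and commit are stand-ins invented for this sketch; only the 20-turn window, the prompt text, and the persisted type/tags come from the description above.

```javascript
// Illustrative session-arc tracker: every `windowSize` turns (and at
// session end) it summarizes the recent window and persists the summary
// as an episodic memory tagged with "session_arc" and today's date.
function createArcTracker({ summarize, commit, windowSize = 20 }) {
  const turns = [];
  let sinceCheckpoint = 0;

  async function capture() {
    const recent = turns.slice(-windowSize); // up to 20 recent turns
    const summary = await summarize(
      "Summarize this session arc: What was the goal? " +
        "What approaches were tried? What decisions were made? What remains open?",
      recent.join("\n")
    );
    await commit({
      content: summary,
      type: "episodic",
      tags: ["session_arc", new Date().toISOString().slice(0, 10)],
    });
    sinceCheckpoint = 0;
  }

  return {
    async onTurn(text) {
      turns.push(text);
      sinceCheckpoint += 1;
      if (sinceCheckpoint >= windowSize) await capture(); // checkpoint
    },
    onSessionEnd: () => capture(), // teardown/reset capture
  };
}
```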
```javascript
const {
  createSmartMemorySkill,
  createOpenClawHooks,
} = require("./skills/smart-memory-v25");

const memory = createSmartMemorySkill({
  baseUrl: "http://127.0.0.1:8000",
  summarizeSessionArc: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

const hooks = createOpenClawHooks({
  skill: memory.skill,
  agentIdentity: "OpenClaw Agent",
  summarizeWithLLM: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

// Register memory.tools as callable tools:
// - memory_search
// - memory_commit
// - memory_insights
// and call hooks.beforeModelResponse / hooks.onTurn / hooks.onSessionEnd
// at the matching lifecycle points.
```
- start() / init()
- ingestMessage(interaction)
- retrieveContext({ user_message, conversation_history })
- getPromptContext(promptComposerRequest)
- runBackground(scheduled)
- stop()
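A host embedding the adapter would typically drive these methods in roughly this order. The sketch assumes only the method names listed above; runSession and the argument shapes beyond those shown are illustrative, not part of the adapter.

```javascript
// Illustrative call order for the adapter API above. The real adapter
// lives in smart-memory/index.js; argument shapes here are assumptions.
async function runSession(adapter) {
  await adapter.start(); // or init(), depending on the host
  await adapter.ingestMessage({ role: "user", content: "hello" });
  const ctx = await adapter.retrieveContext({
    user_message: "hello",
    conversation_history: [],
  });
  // The prompt-composer request shape is adapter-specific; empty here.
  const prompt = await adapter.getPromptContext({});
  await adapter.runBackground(true); // scheduled maintenance pass
  await adapter.stop();
  return { ctx, prompt };
}
```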
- GET /health
- POST /ingest
- POST /retrieve
- POST /compose
- POST /run_background
- GET /memories
- GET /memory/{memory_id}
- GET /insights/pending
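As a quick reference, the HTTP surface above can be mapped in code. Only the methods and paths come from the list; the request-name keys and the body handling in this buildRequest helper are assumptions for illustration.

```javascript
// Hypothetical helper mapping friendly names to the documented
// method/path pairs. Body shapes are not specified here; callers pass
// whatever payload the server expects.
function buildRequest(name, params = {}) {
  switch (name) {
    case "health":         return { method: "GET",  path: "/health" };
    case "ingest":         return { method: "POST", path: "/ingest", body: params };
    case "retrieve":       return { method: "POST", path: "/retrieve", body: params };
    case "compose":        return { method: "POST", path: "/compose", body: params };
    case "run_background": return { method: "POST", path: "/run_background", body: params };
    case "memories":       return { method: "GET",  path: "/memories" };
    case "memory":         return { method: "GET",  path: `/memory/${params.id}` };
    case "insights":       return { method: "GET",  path: "/insights/pending" };
    default: throw new Error(`unknown endpoint: ${name}`);
  }
}
```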
For Docker, WSL, and laptops without NVIDIA GPUs, use CPU-only PyTorch.

```bash
# from repository root
cd smart-memory

# Create Python venv
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate

# Install CPU-only PyTorch FIRST
pip install torch --index-url https://download.pytorch.org/whl/cpu

# Then install remaining dependencies
pip install -r requirements-cognitive.txt

# Finally, install Node dependencies
npm install
```
Smart Memory v2 supports only CPU-only PyTorch. Do not install GPU/CUDA PyTorch builds for this project. Use the bundled installer flow (npm install -> postinstall.js) so that CPU wheels are always used.
Legacy vector-memory CLI artifacts (smart_memory.js, vector_memory_local.js, focus_agent.js) are removed in v2.