Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Long-term memory via ChromaDB with local Ollama embeddings. Auto-recall injects relevant context every turn. No cloud APIs required — fully self-hosted.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install brief: "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."

Upgrade brief: "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
Long-term semantic memory backed by ChromaDB and local Ollama embeddings. Zero cloud dependencies.
- Auto-recall: before every agent turn, the plugin queries ChromaDB with the user's message and injects relevant context automatically
- chromadb_search tool: manual semantic search over your ChromaDB collection
- 100% local: Ollama (nomic-embed-text) for embeddings, ChromaDB for vector storage
- ChromaDB running (Docker recommended): `docker run -d --name chromadb -p 8100:8000 chromadb/chroma:latest`
- Ollama with an embedding model: `ollama pull nomic-embed-text`
- Indexed documents in ChromaDB. Use any ChromaDB-compatible indexer to populate your collection (a minimal indexing sketch follows below).
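The plugin only reads from the collection; populating it is left to you. As a rough illustration, one way to index documents is with the `chromadb` npm client plus Ollama's local embeddings endpoint. The client import, the `embed` helper, and the sample document below are assumptions made for this sketch, not part of the skill package.

```ts
// Illustrative indexer (not part of the skill package): populate the collection
// the plugin reads from, using the `chromadb` npm client and Ollama embeddings.
// Assumes ChromaDB on :8100 and Ollama on :11434, as in the prerequisites above.
import { ChromaClient } from "chromadb";

// Embed text with Ollama's local embeddings endpoint.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const { embedding } = await res.json();
  return embedding;
}

async function indexDocuments(docs: { id: string; text: string }[]): Promise<void> {
  const chroma = new ChromaClient({ path: "http://localhost:8100" });
  // The name must match `collectionName` in the plugin config.
  const collection = await chroma.getOrCreateCollection({ name: "longterm_memory" });
  await collection.add({
    ids: docs.map((d) => d.id),
    documents: docs.map((d) => d.text),
    embeddings: await Promise.all(docs.map((d) => embed(d.text))),
  });
}

indexDocuments([{ id: "note-1", text: "Prefers strict TypeScript in all projects." }])
  .catch(console.error);
```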
```
# 1. Copy the plugin extension
mkdir -p ~/.openclaw/extensions/chromadb-memory
cp {baseDir}/scripts/index.ts ~/.openclaw/extensions/chromadb-memory/
cp {baseDir}/scripts/openclaw.plugin.json ~/.openclaw/extensions/chromadb-memory/

# 2. Add to your OpenClaw config (~/.openclaw/openclaw.json):
{
  "plugins": {
    "entries": {
      "chromadb-memory": {
        "enabled": true,
        "config": {
          "chromaUrl": "http://localhost:8100",
          "collectionName": "longterm_memory",
          "ollamaUrl": "http://localhost:11434",
          "embeddingModel": "nomic-embed-text",
          "autoRecall": true,
          "autoRecallResults": 3,
          "minScore": 0.5
        }
      }
    }
  }
}

# 3. Restart the gateway
openclaw gateway restart
```
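Before restarting the gateway, it can help to confirm that both backing services answer on the URLs you just configured. A minimal check is sketched below; it assumes ChromaDB's v1 heartbeat endpoint and Ollama's model listing endpoint, and is not part of the plugin.

```ts
// Illustrative pre-flight check: confirm ChromaDB and Ollama answer on the URLs
// used in the config above before restarting the gateway. The heartbeat path
// assumes ChromaDB's v1 REST API; adjust if your server version differs.
async function preflight(): Promise<void> {
  const chroma = await fetch("http://localhost:8100/api/v1/heartbeat");
  console.log("ChromaDB:", chroma.ok ? "ok" : `unreachable (HTTP ${chroma.status})`);

  // Ollama lists locally pulled models; nomic-embed-text should be among them.
  const ollama = await fetch("http://localhost:11434/api/tags");
  const { models = [] } = await ollama.json();
  const hasModel = models.some((m: { name: string }) => m.name.startsWith("nomic-embed-text"));
  console.log("Ollama:", ollama.ok ? "ok" : "unreachable", "| nomic-embed-text pulled:", hasModel);
}

preflight().catch(console.error);
```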
| Option | Default | Description |
|---|---|---|
| `chromaUrl` | `http://localhost:8100` | ChromaDB server URL |
| `collectionName` | `longterm_memory` | Collection name (auto-resolves UUID, survives reindexing) |
| `collectionId` | (none) | Collection UUID (optional fallback) |
| `ollamaUrl` | `http://localhost:11434` | Ollama API URL |
| `embeddingModel` | `nomic-embed-text` | Ollama embedding model |
| `autoRecall` | `true` | Auto-inject relevant memories each turn |
| `autoRecallResults` | `3` | Max auto-recall results per turn |
| `minScore` | `0.5` | Minimum similarity score (0-1) |
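For readers who prefer types to tables, the options map onto roughly the shape below. This is an illustrative interface written for this document, not the plugin's own type definitions.

```ts
// Illustrative shape of the plugin's `config` block; names mirror the table above,
// defaults are shown as comments. A sketch, not the plugin's actual source.
interface ChromaDbMemoryConfig {
  chromaUrl: string;          // default: http://localhost:8100
  collectionName: string;     // default: longterm_memory (resolved to a UUID at runtime)
  collectionId?: string;      // optional fallback to pin a specific collection UUID
  ollamaUrl: string;          // default: http://localhost:11434
  embeddingModel: string;     // default: nomic-embed-text
  autoRecall: boolean;        // default: true
  autoRecallResults: number;  // default: 3
  minScore: number;           // default: 0.5, similarity threshold in [0, 1]
}
```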
1. You send a message.
2. The plugin embeds your message via Ollama (nomic-embed-text, 768 dimensions).
3. It queries ChromaDB for nearest neighbors.
4. Results above `minScore` are injected into the agent's context as `<chromadb-memories>`.
5. The agent responds with relevant long-term context available.
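The sketch below condenses that loop into one function for orientation. The real implementation lives in the plugin's index.ts; the hard-coded URLs, the `chromadb` client calls, and the distance-to-similarity conversion here are assumptions made for the example.

```ts
// Condensed sketch of the auto-recall loop above; illustrative only, the plugin's
// own index.ts may differ in detail.
import { ChromaClient } from "chromadb";

async function recall(userMessage: string): Promise<string> {
  const minScore = 0.5;
  const autoRecallResults = 3;

  // 1. Embed the user's message with Ollama.
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: userMessage }),
  });
  const { embedding } = await res.json();

  // 2. Query ChromaDB for the nearest neighbors.
  const chroma = new ChromaClient({ path: "http://localhost:8100" });
  const collection = await chroma.getCollection({ name: "longterm_memory" });
  const hits = await collection.query({ queryEmbeddings: [embedding], nResults: autoRecallResults });

  // 3. Keep results above minScore. Treating similarity as 1 - distance is an
  //    assumption for this sketch, not the plugin's exact scoring formula.
  const docs = (hits.documents?.[0] ?? []).filter(
    (doc, i) => doc != null && 1 - (hits.distances?.[0]?.[i] ?? 1) >= minScore
  );

  // 4. Wrap surviving memories for injection into the agent's context.
  return docs.length ? `<chromadb-memories>\n${docs.join("\n")}\n</chromadb-memories>` : "";
}
```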
Auto-recall adds roughly 275 tokens per turn in the worst case (3 results × ~300 chars, plus the wrapper). Against a 200K+ context window, this is negligible.
- Too noisy? Raise `minScore` to 0.6 or 0.7.
- Missing context? Lower `minScore` to 0.4 and increase `autoRecallResults` to 5.
- Want manual only? Set `autoRecall: false` and use the `chromadb_search` tool.
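For orientation, those adjustments correspond to config fragments like the following, shown as TypeScript object literals purely for illustration; in practice the keys go into the plugin's `config` block in `~/.openclaw/openclaw.json`.

```ts
// Illustrative tuning presets; merge the relevant keys into the plugin's config block.
const quieter = { minScore: 0.7 };                        // fewer, higher-confidence memories
const broader = { minScore: 0.4, autoRecallResults: 5 };  // wider recall, more noise
const manualOnly = { autoRecall: false };                 // rely on the chromadb_search tool
```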
```
User Message → Ollama (embed) → ChromaDB (query) → Context Injection
                                                          ↓
                                                    Agent Response
```

No OpenAI. No cloud. Your memories stay on your hardware.