Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Local-first agent memory with Ebbinghaus decay, hybrid search, and MCP tools. Import files, extract facts, search with BM25 + semantic, track confidence over...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
The memory layer OpenClaw should have built in. Cortex is an open-source, import-first memory system for AI agents: a single Go binary with SQLite storage and zero cloud dependencies. It solves the #1 complaint about OpenClaw: agents forget everything after compaction.

GitHub: https://github.com/hurttlocker/cortex
Install: `brew install hurttlocker/cortex/cortex` or download from Releases
OpenClaw's default memory is Markdown files. When context fills up, compaction summarizes and destroys specifics. Cortex fixes this:

| Problem | Cortex Solution |
|---|---|
| Compaction loses details | Persistent SQLite DB survives any session |
| No search, just dumping files into context | Hybrid BM25 + semantic search (~16ms keyword, ~52ms semantic) |
| Everything has equal weight | Ebbinghaus decay: important facts stay, noise fades naturally |
| Can't import existing files | Import-first: Markdown, text, any file. 8 connectors (GitHub, Gmail, Calendar, Drive, Slack, Notion, Discord, Telegram) |
| Multi-agent memory leaks | Per-agent scoping built in |
| Expensive cloud memory services | $0/month. Forever. Local SQLite. |
```sh
# macOS/Linux (Homebrew)
brew install hurttlocker/cortex/cortex

# Or download binary directly
# https://github.com/hurttlocker/cortex/releases/latest
```
```sh
# Import OpenClaw's memory files
cortex import ~/clawd/memory/ --extract

# Import specific files
cortex import ~/clawd/MEMORY.md --extract
cortex import ~/clawd/USER.md --extract
```
```sh
# Fast keyword search
cortex search "wedding venue" --limit 5

# Semantic search (requires ollama with nomic-embed-text)
cortex search "what decisions did I make about the project" --mode semantic

# Hybrid (recommended)
cortex search "trading strategy" --mode hybrid
```
```sh
# Add to your MCP config: Cortex exposes 17 tools + 4 resources
cortex mcp              # stdio mode
cortex mcp --port 8080  # HTTP+SSE mode
```
Facts decay at different rates based on type. Identity facts (names, roles) last ~2 years. Temporal facts (events, dates) fade in ~1 week. State facts (status, mood) fade in ~2 weeks. This means search results naturally prioritize what matters, without manual curation.
- BM25: instant keyword matching via SQLite FTS5 (~16ms)
- Semantic: meaning-based via local embeddings (~52ms)
- Hybrid: combines both with reciprocal rank fusion
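Reciprocal rank fusion itself is simple enough to sketch: each document scores 1/(k + rank) in every list it appears in, and the summed scores decide the merged order. The k = 60 constant is the conventional default from the RRF literature; whether Cortex uses the same value is an assumption.

```go
package main

import (
	"fmt"
	"sort"
)

// rrf merges ranked result lists with reciprocal rank fusion:
// each document accumulates 1/(k + rank) per list it appears in,
// then documents are sorted by total score, highest first.
func rrf(k float64, rankings ...[]string) []string {
	scores := map[string]float64{}
	for _, ranking := range rankings {
		for rank, doc := range ranking {
			scores[doc] += 1.0 / (k + float64(rank+1))
		}
	}
	docs := make([]string, 0, len(scores))
	for doc := range scores {
		docs = append(docs, doc)
	}
	sort.Slice(docs, func(i, j int) bool { return scores[docs[i]] > scores[docs[j]] })
	return docs
}

func main() {
	bm25 := []string{"a", "b", "c"}     // keyword ranking
	semantic := []string{"c", "a", "d"} // embedding ranking
	fmt.Println(rrf(60, bm25, semantic)) // [a c b d]
}
```

Documents that appear near the top of both lists ("a", "c") beat documents that rank well in only one, which is the property that makes hybrid search more robust than either mode alone.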
Every imported file gets facts extracted automatically:
- Rule-based extraction (zero cost, instant)
- Optional LLM enrichment (Grok, Gemini, or any provider) to find facts the rules miss
- Auto-classification into 9 types: identity, relationship, preference, decision, temporal, location, state, config, kv
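For illustration, rule-based extraction can be as simple as a table of regex patterns mapped to fact types. The patterns below are hypothetical examples, not the rules Cortex ships:

```go
package main

import (
	"fmt"
	"regexp"
)

// rule pairs a fact type from the taxonomy above with a surface pattern.
type rule struct {
	factType string
	re       *regexp.Regexp
}

// Illustrative patterns only; real extractors would have many more.
var rules = []rule{
	{"identity", regexp.MustCompile(`(?i)my name is (\w+)`)},
	{"preference", regexp.MustCompile(`(?i)i prefer ([\w ]+)`)},
	{"decision", regexp.MustCompile(`(?i)we decided to ([\w ]+)`)},
}

// extract returns the first match per fact type found in the text.
func extract(text string) map[string]string {
	facts := map[string]string{}
	for _, r := range rules {
		if m := r.re.FindStringSubmatch(text); m != nil {
			facts[r.factType] = m[1]
		}
	}
	return facts
}

func main() {
	facts := extract("My name is Ada. We decided to ship on Friday.")
	fmt.Println(facts["identity"], "|", facts["decision"]) // Ada | ship on Friday
}
```

This is why the rule pass is free and instant: it is plain pattern matching with no model calls, and the optional LLM pass only has to cover what patterns like these cannot express.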
Pull memory from external sources:

```sh
cortex connect sync --provider github --extract
cortex connect sync --provider gmail --extract
cortex connect sync --all --extract
```
Explore your memory visually:

```sh
cortex graph --serve --port 8090  # Opens interactive 2D graph explorer in browser
```
```sh
cortex cleanup --purge-noise    # Remove garbage + duplicates
cortex stale 30                 # Find facts not accessed in 30 days
cortex conflicts                # Detect contradictions
cortex conflicts --resolve llm  # Auto-resolve with LLM
```
memory_search → Cortex → QMD → ripgrep → web search

Use OpenClaw's built-in memory_search for conversation history, then Cortex for deep knowledge retrieval.
The included scripts/cortex.sh provides shortcuts:

```sh
scripts/cortex.sh search "query" 5  # Hybrid search
scripts/cortex.sh stats             # Memory health
scripts/cortex.sh stale 30          # Stale fact detection
scripts/cortex.sh conflicts         # Contradiction detection
scripts/cortex.sh sync              # Incremental import
scripts/cortex.sh reimport          # Full wipe + re-import
scripts/cortex.sh compaction        # Pre-compaction state brief
```
```sh
# Auto-import sessions + sync connectors every 30 min
cortex connect schedule --every 30m --install
```
- Language: Go (62,300+ lines, 1,081 tests)
- Storage: SQLite + FTS5 + WAL mode
- Binary: 19MB, pure Go, zero CGO, zero runtime dependencies
- Platforms: macOS (arm64/amd64), Linux (arm64/amd64), Windows (amd64)
- MCP: 17 tools + 4 resources (stdio or HTTP+SSE)
- Embeddings: Local via Ollama (nomic-embed-text), or OpenAI/DeepSeek/custom
- LLM: Optional enrichment via any provider (Grok, Gemini, DeepSeek, OpenRouter)
- Scale: Tested to 100K+ memories. At ~20-50/day, you won't hit the ceiling for 5+ years.
- License: MIT
| | Cortex | Mem0 | Zep | LangMem |
|---|---|---|---|---|
| Deploy | Single binary | Cloud or K8s | Cloud | Python lib |
| Cost | $0 | $19-249/mo | $25/mo+ | Infra costs |
| Privacy | 100% local | Cloud by default | Cloud | Depends |
| Decay | Ebbinghaus (7 rates) | TTL only | Temporal | None |
| Import | Files + 8 connectors | Chat extraction | Chat/docs | Chat extraction |
| Search | BM25 + semantic | Vector + graph | Temporal KG | JSON docs |
| MCP | 17 tools native | No | No | No |
| Dependencies | Zero | Python + cloud | Cloud + credits | Python + LangGraph |
- Cortex binary: install via Homebrew or download from GitHub Releases
- Optional: Ollama with nomic-embed-text for semantic search
- Optional: LLM API key for enrichment (Grok, Gemini, etc.)

No Python. No Node. No Docker. No cloud account. Just the binary.
- answer: "What do I know about X?" / "Who is Y?" / synthesis questions → single coherent response with citations
- search: "Find the file where X is mentioned" / debugging / exploring what exists → ranked result list
Add to ~/.cortex/config.yaml:

```yaml
search:
  source_boost:
    - prefix: "memory/"
      weight: 1.5
    - prefix: "file:MEMORY"
      weight: 1.6
    - prefix: "github"
      weight: 1.3
    - prefix: "session:"
      weight: 0.9
```

Higher weight = more trusted. Daily notes and core files rank above auto-imported sessions.
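One plausible way such prefix weights get applied, sketched with the same values as the config above. The longest-matching-prefix tie-break is an assumption, not Cortex's documented behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// boosts mirrors the source_boost config shown above.
var boosts = map[string]float64{
	"memory/":     1.5,
	"file:MEMORY": 1.6,
	"github":      1.3,
	"session:":    0.9,
}

// boosted multiplies a raw relevance score by the weight of the
// longest boost prefix matching the result's source identifier.
func boosted(source string, score float64) float64 {
	bestLen, weight := 0, 1.0
	for prefix, w := range boosts {
		if strings.HasPrefix(source, prefix) && len(prefix) > bestLen {
			bestLen, weight = len(prefix), w
		}
	}
	return score * weight
}

func main() {
	fmt.Printf("%.2f\n", boosted("memory/notes.md", 0.80)) // 0.80 boosted to 1.20
	fmt.Printf("%.2f\n", boosted("session:2024-01", 0.80)) // 0.80 demoted to 0.72
}
```

A multiplicative boost preserves the relative ordering within a source while letting trusted sources outrank auto-imported noise with the same raw score.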
Use --intent when you know where the answer lives:
- --intent memory: personal decisions, preferences, people
- --intent connector: code, PRs, emails, external data
- --intent import: imported files and documents

No flag = search everything (default, good for discovery).
```sh
# Nightly dry-run + apply (launchd or cron)
cortex lifecycle run --dry-run > /tmp/lifecycle-plan.log 2>&1

# If anything found, apply:
cortex lifecycle run
```

Recommended: 3:30 AM daily. First week: dry-run only, review logs.
Fresh agent (< 500 facts):

```yaml
policies:
  reinforce_promote:
    min_reinforcements: 3
    min_sources: 2
  decay_retire:
    inactive_days: 90
    confidence_below: 0.25
  conflict_supersede:
    min_confidence_delta: 0.20
```

Mature agent (2000+ facts):

```yaml
policies:
  reinforce_promote:
    min_reinforcements: 5
    min_sources: 3
  decay_retire:
    inactive_days: 45
    confidence_below: 0.35
  conflict_supersede:
    min_confidence_delta: 0.10
```
After any bulk import, run:

```sh
cortex cleanup --dedup-facts     # Remove near-duplicates
cortex conflicts --auto-resolve  # Resolve contradictions
```
memory_search → cortex answer (synthesis) → cortex search (pointers) → QMD → ripgrep → web