Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
EvoClaw Tiered Memory Architecture v2.1.0 - LLM-powered three-tier memory system with structured metadata extraction, URL preservation, validation, and cloud...
The download entry for this item currently bounces to a listing or homepage instead of returning a package file. Use the source page and any available docs to guide the install.
> A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep.

🧠 EvoClaw-compatible three-tier memory system inspired by human cognition and PageIndex tree retrieval.
**Automatic Daily Note Ingestion**
- Consolidation (daily/monthly/full modes) now auto-runs ingest-daily
- Bridges memory/YYYY-MM-DD.md files → tiered memory system
- No more manual ingestion required; facts flow automatically
- Fixes the "two disconnected data paths" problem
**Structured Metadata Extraction**
- Automatic extraction of URLs, shell commands, and file paths from facts
- Preserved during distillation and consolidation
- Searchable by URL fragment

**Memory Completeness Validation**
- Checks daily notes for missing URLs, commands, and next steps
- Proactive warnings for incomplete information
- Actionable suggestions for improvement

**Enhanced Search**
- Search facts by URL fragment
- Get all stored URLs from warm memory
- Metadata-aware fact storage

**URL Preservation**
- URLs explicitly preserved during LLM distillation
- Fallback metadata extraction if the LLM misses them
- Command-line support for adding metadata manually
```
┌───────────────────────────────────────────────────────┐
│              AGENT CONTEXT (~8-15KB)                  │
│                                                       │
│  ┌──────────┐   ┌────────────────────────────────┐    │
│  │ Tree     │   │ Retrieved Memory Nodes         │    │
│  │ Index    │   │ (on-demand, 1-3KB)             │    │
│  │ (~2KB)   │   │                                │    │
│  │          │   │ Fetched per conversation       │    │
│  │ Always   │   │ based on tree reasoning        │    │
│  │ loaded   │   │                                │    │
│  └─────┬────┘   └────────────────────────────────┘    │
└────────┼──────────────────────────────────────────────┘
         │ LLM-powered tree search
┌────────┼──────────────────────────────────────────────┐
│        ▼            MEMORY TIERS                      │
│                                                       │
│  🔴 HOT (5KB)      🟡 WARM (50KB)     🟢 COLD (∞)     │
│                                                       │
│  Core memory       Scored facts       Full archive    │
│  - Identity        - 30-day           - Turso DB      │
│  - Owner profile   - Decaying         - Queryable     │
│  - Active context  - On-device        - 10-year       │
│  - Lessons (20 max)                                   │
│                                                       │
│  Always in         Retrieved via      Retrieved via   │
│  context           tree search        tree search     │
└───────────────────────────────────────────────────────┘
```
- Consolidation: short-term → long-term transfer happens during consolidation cycles
- Relevance Decay: unused memories fade; accessed memories strengthen
- Strategic Forgetting: not remembering everything is a feature
- Hierarchical Organization: navigate categories, don't scan linearly
- Vectorless Retrieval: LLM reasoning instead of embedding similarity
- Tree-Structured Index: O(log n) navigation, not an O(n) scan
- Explainable Results: every retrieval traces a path through categories
- Reasoning-Based Search: "why relevant?", not "how similar?"
- Device is replaceable: the soul lives in the cloud (Turso)
- Critical sync: hot state + tree sync after every conversation
- Disaster recovery: full restore in <2 minutes
- Multi-device: same agent across phone/desktop/embedded
Purpose: Core identity and active context, always in the agent's context window.

Structure:
```json
{
  "identity": {
    "agent_name": "Agent",
    "owner_name": "User",
    "owner_preferred_name": "User",
    "relationship_start": "2026-01-15",
    "trust_level": 0.95
  },
  "owner_profile": {
    "personality": "technical, direct communication",
    "family": ["Sarah (wife)", "Luna (daughter, 3yo)"],
    "topics_loved": ["AI architecture", "blockchain", "system design"],
    "topics_avoid": ["small talk about weather"],
    "timezone": "Australia/Sydney",
    "work_hours": "9am-6pm"
  },
  "active_context": {
    "projects": [
      {
        "name": "EvoClaw",
        "description": "Self-evolving agent framework",
        "status": "Active - BSC integration for hackathon"
      }
    ],
    "events": [
      {"text": "Hackathon deadline Feb 15", "timestamp": 1707350400}
    ],
    "tasks": [
      {"text": "Deploy to BSC testnet", "status": "pending", "timestamp": 1707350400}
    ]
  },
  "critical_lessons": [
    {
      "text": "Always test on testnet before mainnet",
      "category": "blockchain",
      "importance": 0.9,
      "timestamp": 1707350400
    }
  ]
}
```

Auto-pruning:
- Lessons: max 20; removes the lowest-importance entries when full
- Events: keeps the last 10 only
- Tasks: max 10 pending
- Total size: hard limit at 5KB; progressively prunes if exceeded

Generates: MEMORY.md, auto-rebuilt from the structured hot state.
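The pruning rules are mechanical enough to sketch. A minimal illustration in Python, assuming the hot state is held as a plain dict shaped like the structure above (`prune_hot_state` is a hypothetical helper, not the skill's actual code):

```python
import json

HOT_LIMIT_BYTES = 5120  # the hard 5KB cap from the config defaults

def prune_hot_state(hot: dict) -> dict:
    """Apply the documented auto-pruning rules to a hot-state dict."""
    # Lessons: keep at most 20, dropping the lowest-importance entries first
    lessons = sorted(hot.get("critical_lessons", []),
                     key=lambda l: l["importance"], reverse=True)
    hot["critical_lessons"] = lessons[:20]

    ctx = hot.setdefault("active_context", {})
    # Events: keep only the last 10
    ctx["events"] = ctx.get("events", [])[-10:]
    # Tasks: at most 10 pending (completed tasks are left alone here)
    pending = [t for t in ctx.get("tasks", []) if t.get("status") == "pending"]
    done = [t for t in ctx.get("tasks", []) if t.get("status") != "pending"]
    ctx["tasks"] = done + pending[:10]

    # Size: progressively drop the least important lesson until under 5KB
    while len(json.dumps(hot).encode()) > HOT_LIMIT_BYTES and hot["critical_lessons"]:
        hot["critical_lessons"].pop()
    return hot
```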
Purpose: Recent distilled facts with decay scoring.

Entry format:
```json
{
  "id": "abc123def456",
  "text": "Decided to use zero go-ethereum deps for EvoClaw to keep binary small",
  "category": "projects/evoclaw/architecture",
  "importance": 0.8,
  "created_at": 1707350400,
  "access_count": 3,
  "score": 0.742,
  "tier": "warm"
}
```

Scoring:
```
score = importance × recency_decay(age) × reinforcement(access_count)
recency_decay(age)    = exp(-age_days / 30)
reinforcement(access) = 1 + 0.1 × access_count
```

Tier classification:
- score >= 0.7 → Hot (promote to hot state)
- score >= 0.3 → Warm (keep)
- score >= 0.05 → Cold (archive)
- score < 0.05 → Frozen (delete after retention period)

Eviction triggers:
- Age > 30 days AND score < 0.3
- Total warm size > 50KB (evicts lowest-scored)
- Manual consolidation
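To make the formula concrete, here is a small sketch of the scoring math with a worked example (illustrative only; the real scorer lives in the skill's scripts):

```python
import math

def warm_score(importance: float, age_days: float, access_count: int) -> float:
    """score = importance × recency_decay(age) × reinforcement(access_count)."""
    recency_decay = math.exp(-age_days / 30)   # 30-day decay constant from config
    reinforcement = 1 + 0.1 * access_count     # each access adds a 10% boost
    return importance * recency_decay * reinforcement

# A fact with importance 0.8, 20 days old, accessed 3 times:
# 0.8 × e^(-20/30) × 1.3 ≈ 0.534 → classified Warm (0.3 ≤ score < 0.7)
print(warm_score(0.8, 20, 3))
```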
Purpose: Long-term archive, queryable but never bulk-loaded.

Schema:
```sql
CREATE TABLE cold_memories (
    id TEXT PRIMARY KEY,
    agent_id TEXT NOT NULL,
    text TEXT NOT NULL,
    category TEXT NOT NULL,
    importance REAL DEFAULT 0.5,
    created_at INTEGER NOT NULL,
    access_count INTEGER DEFAULT 0
);

CREATE TABLE critical_state (
    agent_id TEXT PRIMARY KEY,
    data TEXT NOT NULL,      -- {hot_state, tree_nodes, timestamp}
    updated_at INTEGER NOT NULL
);
```

Retention: 10 years (configurable)
Cleanup: monthly consolidation removes frozen entries older than the retention period
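Because the schema is plain SQL, ad-hoc lookups can also run against the Turso database directly rather than through the CLI. An illustrative category-scoped query using the columns above:

```sql
-- Most recent facts for one agent under a category subtree
SELECT text, importance, created_at
FROM cold_memories
WHERE agent_id = 'default'
  AND category LIKE 'projects/evoclaw/%'
ORDER BY created_at DESC
LIMIT 10;
```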
Purpose: Hierarchical category map for O(log n) retrieval.

Constraints:
- Max 50 nodes
- Max depth 4 levels
- Max 2KB serialized
- Max 10 children per node

Example:
```
Memory Tree Index
==================================================
📁 Root (warm:15, cold:234)
  📁 owner → Owner profile and preferences
     Memories: warm=5, cold=89
  📁 projects → Active projects
     Memories: warm=8, cold=67
    📁 projects/evoclaw → EvoClaw framework
       Memories: warm=6, cold=45
      📁 projects/evoclaw/bsc → BSC integration
         Memories: warm=3, cold=12
  📁 technical → Technical setup and config
     Memories: warm=2, cold=34
  📁 lessons → Learned lessons and rules
     Memories: warm=0, cold=44

Nodes: 7/50
Size: 1842 / 2048 bytes
```

Operations:
- `--add PATH DESC`: add a category node
- `--remove PATH`: remove a node (only if it holds no data)
- `--prune`: remove dead nodes (no activity in 60+ days)
- `--show`: pretty-print the tree
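A sketch of how those constraints could be validated, assuming a hypothetical node shape of nested dicts with a `children` map; the skill's real serialization may differ:

```python
import json

MAX_NODES, MAX_DEPTH, MAX_BYTES, MAX_CHILDREN = 50, 4, 2048, 10

def check_tree(root: dict) -> list[str]:
    """Validate a tree against the index constraints; returns violations."""
    problems, count = [], 0

    def walk(node: dict, depth: int):
        nonlocal count
        count += 1
        children = node.get("children", {})
        if len(children) > MAX_CHILDREN:
            problems.append(f"{node.get('path', '?')}: more than {MAX_CHILDREN} children")
        if depth > MAX_DEPTH:
            problems.append(f"{node.get('path', '?')}: deeper than {MAX_DEPTH} levels")
        for child in children.values():
            walk(child, depth + 1)

    walk(root, 1)
    if count > MAX_NODES:
        problems.append(f"{count} nodes (max {MAX_NODES})")
    if len(json.dumps(root).encode()) > MAX_BYTES:
        problems.append(f"serialized size exceeds {MAX_BYTES} bytes")
    return problems
```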
Purpose: Three-stage compression of conversations.

Pipeline:
```
Raw conversation (500B)
    │  Stage 1→2: extract structured info
    ▼
Distilled fact (80B)
    │  Stage 2→3: generate one-line summary
    ▼
Core summary (20B)
```
Input: raw conversation text
Output: structured JSON

```json
{
  "fact": "User decided to use raw JSON-RPC for BSC to avoid go-ethereum dependency",
  "emotion": "determined",
  "people": ["User"],
  "topics": ["blockchain", "architecture", "dependencies"],
  "actions": ["decided to use raw JSON-RPC", "avoid go-ethereum"],
  "outcome": "positive"
}
```

Modes:
- rule: regex/heuristic extraction (fast, no LLM)
- llm: LLM-powered extraction (accurate, requires an endpoint)

Usage:
```bash
# Rule-based (default)
distiller.py --text "Had a productive chat about the BSC integration..." --mode rule

# LLM-powered
distiller.py --text "..." --mode llm --llm-endpoint http://localhost:8080/complete

# With core summary
distiller.py --text "..." --mode rule --core-summary
```
Purpose: One-line summary for the tree index.

Example:
```
Distilled:    {"fact": "User decided raw JSON-RPC for BSC, no go-ethereum", "outcome": "positive"}
Core summary: "BSC integration: raw JSON-RPC (no deps)"
```

Target: <30 bytes
Purpose: Semantic search through the tree structure using LLM reasoning.

How it works:
1. Build a prompt with the tree structure + query
2. The LLM reasons about which categories are relevant
3. It returns category paths with relevance scores
4. Memories are fetched from those categories

Example:
```
Query: "What did we decide about the hackathon deadline?"

Keyword search returns:
  projects/evoclaw (0.8)
  technical/deployment (0.4)

LLM search reasons:
  projects/evoclaw/bsc (0.95)   ← "BSC integration for hackathon"
  active_context/events (0.85)  ← "Deadline mentioned here"
```

LLM prompt template:
```
You are a memory retrieval system. Given a memory tree index and a query,
identify which categories are relevant.

Memory Tree Index:
projects/evoclaw → EvoClaw framework (warm:6, cold:45)
projects/evoclaw/bsc → BSC integration (warm:3, cold:12)
...

User Query: What did we decide about the hackathon deadline?

Output (JSON):
[
  {"path": "projects/evoclaw/bsc", "relevance": 0.95, "reason": "BSC work for hackathon"},
  {"path": "active_context/events", "relevance": 0.85, "reason": "deadline tracking"}
]
```

Usage:
```bash
# Keyword search (fast)
tree_search.py --query "BSC integration" --tree-file memory-tree.json --mode keyword

# LLM search (accurate)
tree_search.py --query "what did we decide about hackathon?" \
  --tree-file memory-tree.json --mode llm --llm-endpoint http://localhost:8080/complete

# Generate prompt for an external LLM
tree_search.py --query "..." --tree-file memory-tree.json \
  --mode llm --llm-prompt-file prompt.txt
```
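The endpoint contract matches the Flask example later in this document (POST /complete with a JSON prompt, returning `{"text": ...}`). Under that assumption, the LLM leg of the search is a short round-trip; `llm_tree_search` below is an illustrative helper, not part of the skill:

```python
import json
import urllib.request

def llm_tree_search(prompt: str, endpoint: str = "http://localhost:8080/complete"):
    """POST the tree-search prompt and parse the JSON list of category paths."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        text = json.loads(resp.read())["text"]
    # The model is asked to answer with a JSON array of {path, relevance, reason}
    return sorted(json.loads(text), key=lambda c: c["relevance"], reverse=True)
```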
Agent ID scoping: all operations support an --agent-id flag.

File layout:
```
memory/
  default/
    warm-memory.json
    memory-tree.json
    hot-memory-state.json
    metrics.json
  agent-2/
    warm-memory.json
    memory-tree.json
    ...
MEMORY.md          # default agent
MEMORY-agent-2.md  # agent-2
```

Cold storage: agent-scoped queries via the agent_id column.

Usage:
```bash
# Store for agent-2
memory_cli.py store --text "..." --category "..." --agent-id agent-2

# Retrieve for agent-2
memory_cli.py retrieve --query "..." --agent-id agent-2

# Consolidate agent-2
memory_cli.py consolidate --mode daily --agent-id agent-2
```
Purpose: Periodic memory maintenance and optimization.
**Quick**
- Warm eviction (score-based)
- Archive expired entries to cold
- Recalculate all scores
- Rebuild MEMORY.md

**Daily**
- Everything in Quick
- Tree prune (remove dead nodes with no activity in 60+ days)

**Monthly**
- Everything in Daily
- Tree rebuild (LLM-powered restructuring, future)
- Cold cleanup (delete frozen entries older than the retention period)

**Full**
- Everything in Monthly
- Full recalculation of all scores
- Deep tree analysis
- Generate a consolidation report

Usage:
```bash
# Quick consolidation (default)
memory_cli.py consolidate

# Daily (run via cron)
memory_cli.py consolidate --mode daily

# Monthly (run via cron)
memory_cli.py consolidate --mode monthly --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"
```

Recommended schedule:
- Quick: every 2-4 hours (heartbeat)
- Daily: midnight via cron
- Monthly: 1st of the month via cron (see the crontab sketch below)
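A crontab matching that schedule might look like the following; the install path is an assumption, and `$TURSO_URL`/`$TURSO_TOKEN` must be defined in the crontab or a wrapper script since cron does not inherit your shell environment:

```cron
# Quick consolidation every 2 hours (heartbeat)
0 */2 * * *  python3 /path/to/skills/tiered-memory/scripts/memory_cli.py consolidate

# Daily consolidation at midnight
0 0 * * *    python3 /path/to/skills/tiered-memory/scripts/memory_cli.py consolidate --mode daily

# Monthly consolidation on the 1st
0 0 1 * *    python3 /path/to/skills/tiered-memory/scripts/memory_cli.py consolidate --mode monthly --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"
```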
Purpose: Cloud backup of hot state + tree after every conversation.

What syncs:
- Hot memory state (identity, owner profile, active context, lessons)
- Tree index (structure + counts)
- Timestamp

Recovery: if the device is lost, restore from the cloud in <2 minutes.

Usage:
```bash
# Manual critical sync
memory_cli.py sync-critical --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN" --agent-id default
```

Automatic: call after every important conversation. In agent code:
1. Process the conversation
2. Store distilled facts
3. Call sync-critical

Retry strategy: exponential backoff if the cloud is unreachable (5s, 10s, 20s, 40s).
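The documented 5/10/20/40s backoff is straightforward to wrap around the CLI call; a minimal sketch (hypothetical wrapper, assuming a zero exit code means the sync succeeded):

```python
import subprocess
import time

def sync_critical_with_retry(args: list[str]) -> bool:
    """Retry sync-critical with the documented 5/10/20/40s backoff."""
    for delay in (0, 5, 10, 20, 40):   # first attempt immediately, then back off
        time.sleep(delay)
        result = subprocess.run(
            ["python3", "skills/tiered-memory/scripts/memory_cli.py",
             "sync-critical", *args],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True
    return False  # cloud still unreachable; caller decides what to do next
```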
Tracked metrics:
```json
{
  "tree_index_size_bytes": 1842,
  "tree_node_count": 37,
  "hot_memory_size_bytes": 4200,
  "warm_memory_count": 145,
  "warm_memory_size_kb": 38.2,
  "retrieval_count": 234,
  "evictions_today": 12,
  "reinforcements_today": 67,
  "consolidation_count": 8,
  "last_consolidation": 1707350400,
  "context_tokens_saved": 47800,
  "timestamp": "2026-02-10T14:30:00"
}
```

Usage:
```bash
memory_cli.py metrics --agent-id default
```

Key metrics:
- context_tokens_saved: estimated tokens saved vs. a flat MEMORY.md
- retrieval_count: how often memories are accessed
- evictions_today: memory-pressure indicator
- warm_memory_size_kb: storage usage
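Because metrics land in metrics.json under the agent's memory directory (see the multi-agent file layout above), a simple watchdog can flag memory pressure. The thresholds below are assumptions for illustration, not part of the skill:

```python
import json
from pathlib import Path

def check_memory_pressure(agent_id: str = "default") -> list[str]:
    """Warn when warm storage nears its 50KB cap or evictions spike."""
    metrics = json.loads(Path(f"memory/{agent_id}/metrics.json").read_text())
    warnings = []
    if metrics.get("warm_memory_size_kb", 0) > 45:   # 90% of the 50KB limit
        warnings.append("warm memory near the 50KB cap; run a consolidation")
    if metrics.get("evictions_today", 0) > 50:       # assumed threshold
        warnings.append("high eviction rate; consider storing fewer low-importance facts")
    return warnings
```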
```bash
memory_cli.py store --text "Fact text" --category "path/to/category" [--importance 0.8] [--agent-id default]
```

Importance guide:
- 0.9-1.0: critical decisions, credentials, core identity
- 0.7-0.8: project decisions, architecture, preferences
- 0.5-0.6: general facts, daily events
- 0.3-0.4: casual mentions, low priority

Example:
```bash
memory_cli.py store \
  --text "Decided to deploy EvoClaw on BSC testnet before mainnet" \
  --category "projects/evoclaw/deployment" \
  --importance 0.85 \
  --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

# Store with explicit metadata (v2.1.0+)
memory_cli.py store \
  --text "Z-Image ComfyUI model for photorealistic images" \
  --category "tools/image-generation" \
  --importance 0.8 \
  --url "https://docs.comfy.org/tutorials/image/z-image/z-image" \
  --command "huggingface-cli download Tongyi-MAI/Z-Image" \
  --path "/home/user/models/"
```
```bash
memory_cli.py validate [--file PATH] [--agent-id default]
```

Purpose: check daily notes for incomplete information (missing URLs, commands, next steps).

Example:
```bash
# Validate today's daily notes
memory_cli.py validate

# Validate a specific file
memory_cli.py validate --file memory/2026-02-13.md
```

Output:
```json
{
  "status": "warning",
  "warnings_count": 2,
  "warnings": [
    "Tool 'Z-Image' mentioned without URL/documentation link",
    "Action 'install' mentioned without command example"
  ],
  "suggestions": [
    "Add URLs for mentioned tools/services",
    "Include command examples for setup/installation steps",
    "Document next steps after decisions"
  ]
}
```
```bash
memory_cli.py extract-metadata --file PATH
```

Purpose: extract structured metadata (URLs, commands, paths) from a file.

Example:
```bash
memory_cli.py extract-metadata --file memory/2026-02-13.md
```

Output:
```json
{
  "file": "memory/2026-02-13.md",
  "metadata": {
    "urls": [
      "https://docs.comfy.org/tutorials/image/z-image/z-image",
      "https://github.com/Lightricks/LTX-Video"
    ],
    "commands": [
      "huggingface-cli download Tongyi-MAI/Z-Image",
      "git clone https://github.com/Lightricks/LTX-Video.git"
    ],
    "paths": [
      "/home/peter/ai-stack/comfyui/models",
      "./configs/ltx-video-2-config.yaml"
    ]
  },
  "summary": {
    "urls_count": 2,
    "commands_count": 2,
    "paths_count": 2
  }
}
```
```bash
memory_cli.py search-url --url FRAGMENT [--limit 5] [--agent-id default]
```

Purpose: search facts by URL fragment.

Example:
```bash
# Find all facts with comfy.org URLs
memory_cli.py search-url --url "comfy.org"

# Find GitHub repos
memory_cli.py search-url --url "github.com" --limit 10
```

Output:
```json
{
  "query": "comfy.org",
  "results_count": 1,
  "results": [
    {
      "id": "abc123",
      "text": "Z-Image ComfyUI model for photorealistic images",
      "category": "tools/image-generation",
      "metadata": {
        "urls": ["https://docs.comfy.org/tutorials/image/z-image/z-image"],
        "commands": ["huggingface-cli download Tongyi-MAI/Z-Image"],
        "paths": []
      }
    }
  ]
}
```
```bash
memory_cli.py retrieve --query "search query" [--limit 5] [--llm] [--llm-endpoint URL] [--agent-id default]
```

Modes:
- Default: keyword-based tree + warm + cold search
- --llm: LLM-powered semantic tree search

Example:
```bash
# Keyword search
memory_cli.py retrieve --query "BSC deployment decision" --limit 5

# LLM search (more accurate)
memory_cli.py retrieve \
  --query "what did we decide about blockchain integration?" \
  --llm --llm-endpoint http://localhost:8080/complete \
  --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"
```
```bash
memory_cli.py distill --text "raw conversation" [--llm] [--llm-endpoint URL]
```

Example:
```bash
# Rule-based distillation
memory_cli.py distill --text "User: Let's deploy to testnet first. Agent: Good idea, safer that way."

# LLM distillation
memory_cli.py distill \
  --text "Long conversation with nuance..." \
  --llm --llm-endpoint http://localhost:8080/complete
```

Output:
```json
{
  "distilled": {
    "fact": "Decided to deploy to testnet before mainnet",
    "emotion": "cautious",
    "people": [],
    "topics": ["deployment", "testnet", "safety"],
    "actions": ["deploy to testnet"],
    "outcome": "positive"
  },
  "mode": "rule",
  "original_size": 87,
  "distilled_size": 156
}
```
```bash
# Update hot state
memory_cli.py hot --update KEY JSON [--agent-id default]

# Rebuild MEMORY.md
memory_cli.py hot --rebuild [--agent-id default]

# Show current hot state
memory_cli.py hot [--agent-id default]
```

Keys:
- identity: agent/owner identity info
- owner_profile: owner preferences, personality
- lesson: add a critical lesson
- event: add an event to active context
- task: add a task to active context
- project: add/update a project

Examples:
```bash
# Update owner profile
memory_cli.py hot --update owner_profile '{"timezone": "Australia/Sydney", "work_hours": "9am-6pm"}'

# Add lesson
memory_cli.py hot --update lesson '{"text": "Always test on testnet first", "category": "blockchain", "importance": 0.9}'

# Add project
memory_cli.py hot --update project '{"name": "EvoClaw", "status": "Active", "description": "Self-evolving agent framework"}'

# Rebuild MEMORY.md
memory_cli.py hot --rebuild
```
```bash
# Show tree
memory_cli.py tree --show [--agent-id default]

# Add node
memory_cli.py tree --add "path/to/category" "Description" [--agent-id default]

# Remove node
memory_cli.py tree --remove "path/to/category" [--agent-id default]

# Prune dead nodes
memory_cli.py tree --prune [--agent-id default]
```

Examples:
```bash
# Add category
memory_cli.py tree --add "projects/evoclaw/bsc" "BSC blockchain integration"

# Remove empty category
memory_cli.py tree --remove "old/unused/path"

# Prune dead nodes (60+ days no activity)
memory_cli.py tree --prune
```
```bash
# Initialize Turso tables
memory_cli.py cold --init --db-url URL --auth-token TOKEN

# Query cold storage
memory_cli.py cold --query "search term" [--limit 10] [--agent-id default] --db-url URL --auth-token TOKEN
```

Examples:
```bash
# Init tables (once)
memory_cli.py cold --init --db-url "https://your-db.turso.io" --auth-token "your-token"

# Query cold archive
memory_cli.py cold --query "blockchain decision" --limit 10 --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"
```
File: config.json (optional; uses defaults if not present)

```json
{
  "agent_id": "default",
  "hot": {
    "max_bytes": 5120,
    "max_lessons": 20,
    "max_events": 10,
    "max_tasks": 10
  },
  "warm": {
    "max_kb": 50,
    "retention_days": 30,
    "eviction_threshold": 0.3
  },
  "cold": {
    "backend": "turso",
    "retention_years": 10
  },
  "scoring": {
    "half_life_days": 30,
    "reinforcement_boost": 0.1
  },
  "tree": {
    "max_nodes": 50,
    "max_depth": 4,
    "max_size_bytes": 2048
  },
  "distillation": {
    "aggression": 0.7,
    "max_distilled_bytes": 100,
    "mode": "rule"
  },
  "consolidation": {
    "warm_eviction": "hourly",
    "tree_prune": "daily",
    "tree_rebuild": "monthly"
  }
}
```
```python
import json
import os
import subprocess

def process_conversation(user_message, agent_response, category="conversations"):
    # 1. Distill the conversation
    text = f"User: {user_message}\nAgent: {agent_response}"
    result = subprocess.run(
        ["python3", "skills/tiered-memory/scripts/memory_cli.py", "distill", "--text", text],
        capture_output=True, text=True
    )
    distilled = json.loads(result.stdout)

    # 2. Determine importance
    importance = 0.7 if "decision" in distilled["distilled"]["outcome"] else 0.5

    # 3. Store the distilled fact
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py", "store",
        "--text", distilled["distilled"]["fact"],
        "--category", category,
        "--importance", str(importance),
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])

    # 4. Critical sync
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py", "sync-critical",
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])
```
```python
def get_relevant_context(query):
    # (uses the json/os/subprocess imports from the snippet above)
    result = subprocess.run(
        [
            "python3", "skills/tiered-memory/scripts/memory_cli.py", "retrieve",
            "--query", query,
            "--limit", "5",
            "--llm", "--llm-endpoint", "http://localhost:8080/complete",
            "--db-url", os.getenv("TURSO_URL"),
            "--auth-token", os.getenv("TURSO_TOKEN")
        ],
        capture_output=True, text=True
    )
    memories = json.loads(result.stdout)
    return "\n".join(f"- {m['text']}" for m in memories)
```
```python
import datetime
import os
import subprocess

import schedule

# Quick consolidation every 2 hours
schedule.every(2).hours.do(lambda: subprocess.run([
    "python3", "skills/tiered-memory/scripts/memory_cli.py",
    "consolidate", "--mode", "quick",
    "--db-url", os.getenv("TURSO_URL"), "--auth-token", os.getenv("TURSO_TOKEN")
]))

# Daily consolidation (includes tree prune) at midnight
schedule.every().day.at("00:00").do(lambda: subprocess.run([
    "python3", "skills/tiered-memory/scripts/memory_cli.py",
    "consolidate", "--mode", "daily",
    "--db-url", os.getenv("TURSO_URL"), "--auth-token", os.getenv("TURSO_TOKEN")
]))

# Monthly full consolidation: the `schedule` library has no month interval,
# so run a daily job that only fires on the 1st of the month
def monthly_consolidation():
    if datetime.date.today().day == 1:
        subprocess.run([
            "python3", "skills/tiered-memory/scripts/memory_cli.py",
            "consolidate", "--mode", "monthly",
            "--db-url", os.getenv("TURSO_URL"), "--auth-token", os.getenv("TURSO_TOKEN")
        ])

schedule.every().day.at("00:30").do(monthly_consolidation)
```
For distillation & tree search:
- Claude 3 Haiku (fast, cheap, excellent structure)
- GPT-4o-mini (good balance)
- Gemini 1.5 Flash (very fast)

For tree rebuilding:
- Claude 3.5 Sonnet (better reasoning)
- GPT-4o (strong planning)
- Use cheaper models for frequent operations (distill, search)
- Batch distillation: queue conversations, distill in batch (see the sketch below)
- Cache tree prompts: the tree structure doesn't change often
- Skip the LLM for simple cases: use rule-based distillation for short conversations
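A minimal sketch of the batching idea, assuming each queued conversation is distilled via the CLI once the queue is full; `DistillQueue`, the batch size, and the 200-character "short conversation" cutoff are all illustrative choices, not part of the skill:

```python
import json
import subprocess

class DistillQueue:
    """Queue conversations and distill them in one pass to amortize LLM cost."""

    def __init__(self, batch_size: int = 10):
        self.batch_size = batch_size
        self.pending: list[str] = []

    def add(self, conversation: str):
        self.pending.append(conversation)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self) -> list[dict]:
        results = []
        for text in self.pending:
            # Short conversations: rule-based is enough, skip the LLM entirely
            mode = "rule" if len(text) < 200 else "llm"
            cmd = ["python3", "skills/tiered-memory/scripts/memory_cli.py",
                   "distill", "--text", text]
            if mode == "llm":
                cmd += ["--llm", "--llm-endpoint", "http://localhost:8080/complete"]
            out = subprocess.run(cmd, capture_output=True, text=True)
            results.append(json.loads(out.stdout))
        self.pending.clear()
        return results
```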
```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/complete", methods=["POST"])
def complete():
    data = request.json
    prompt = data["prompt"]
    # Call your LLM here (OpenAI, Anthropic, local model, etc.);
    # `llm_client` is a placeholder for whichever client you wire in.
    response = llm_client.complete(prompt)
    return jsonify({"text": response})

if __name__ == "__main__":
    app.run(port=8080)
```
Context size:
- Hot: ~5KB (always loaded)
- Tree: ~2KB (always loaded)
- Retrieved: ~1-3KB per query
- Total: ~8-15KB (constant, regardless of agent age)

Retrieval speed:
- Keyword: 10-20ms
- LLM tree search: 300-600ms
- Cold query: 50-100ms

5-year scenario:
- Hot: still 5KB (living document)
- Warm: last 30 days (~50KB)
- Cold: ~50MB in Turso (compressed distilled facts)
- Tree: still 2KB (different nodes, same size)
- Context per session: same as day 1
| System | Memory model | Scaling | Accuracy | Cost |
|---|---|---|---|---|
| Flat MEMORY.md | Linear text | ❌ Months | ⚠️ Degrades | ❌ Linear |
| Vector RAG | Embeddings | ⚠️ Years | ⚠️ Similarity ≠ relevance | ⚠️ Moderate |
| EvoClaw Tiered | Tree + tiers | ✅ Decades | ✅ Reasoning-based | ✅ Fixed |

Why tree > vectors:
- Accuracy: 98%+ vs. 70-80% (PageIndex benchmark)
- Explainable: "Projects → EvoClaw → BSC" vs. "cosine 0.73"
- Multi-hop: natural vs. poor
- False positives: low vs. high
Tree index too large:
```bash
# Prune dead nodes
memory_cli.py tree --prune

# Check which nodes are largest
memory_cli.py tree --show | grep "Memories:"

# Manually remove unused categories
memory_cli.py tree --remove "unused/category"
```

Warm memory over 50KB:
```bash
# Run consolidation
memory_cli.py consolidate --mode daily --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

# Check stats
memory_cli.py metrics

# Raise the eviction threshold (keeps less in warm)
# Edit config.json: "eviction_threshold": 0.4
```

Hot state too large:
```bash
# Hot auto-prunes, but check the structure
memory_cli.py hot

# Mark old projects/tasks as finished manually
memory_cli.py hot --update project '{"name": "OldProject", "status": "Completed"}'

# Rebuild to force pruning
memory_cli.py hot --rebuild
```

LLM endpoint unreachable:
```bash
# Fall back to keyword search (automatic)
memory_cli.py retrieve --query "..." --limit 5

# Test the LLM endpoint
curl -X POST http://localhost:8080/complete -d '{"prompt": "test"}'

# Generate a prompt for external testing
tree_search.py --query "..." --tree-file memory/memory-tree.json --mode llm --llm-prompt-file test.txt
```
Backward compatible: existing warm-memory.json and memory-tree.json files work as-is.

New files:
- config.json (optional, uses defaults)
- hot-memory-state.json (auto-created)
- metrics.json (auto-created)

Steps:
1. Update the skill: `clawhub update tiered-memory`
2. Run a consolidation to rebuild the hot state: `memory_cli.py consolidate`
3. Initialize cold storage (optional): `memory_cli.py cold --init --db-url ... --auth-token ...`
4. Configure your agent to use the new commands (see the Integration section)
Fully backward compatible: existing memory files work without changes.

What's new:
- Metadata automatically extracted from existing facts when loaded
- New commands: validate, extract-metadata, search-url
- The store command now accepts --url, --command, and --path flags
- Distillation preserves URLs and technical details
- No action required; just update and use the new features

Testing the upgrade:
```bash
# Update skill
clawhub update tiered-memory

# Test metadata extraction
memory_cli.py extract-metadata --file memory/2026-02-13.md

# Validate your recent notes
memory_cli.py validate

# Search by URL
memory_cli.py search-url --url "github.com"
```
- Design: /docs/TIERED-MEMORY.md (EvoClaw)
- Cloud sync: /docs/CLOUD-SYNC.md (EvoClaw)
- Inspiration: PageIndex (tree-based retrieval)

v2.1.0: A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. Now with structured metadata to remember HOW, not just WHAT. 🧠🌲