
Tiered Memory

EvoClaw Tiered Memory Architecture v2.2.0 - LLM-powered three-tier memory system with structured metadata extraction, URL preservation, validation, and cloud...


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: API_ENDPOINTS.md, METRICS_TRACKER.md, README.md, SKILL.md, config.json, scripts/distiller.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than walking through the install manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 2.2.0

Documentation

Primary doc: SKILL.md (48 sections)

Tiered Memory System v2.2.0

A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep.

🧠 EvoClaw-compatible three-tier memory system inspired by human cognition and PageIndex tree retrieval.

What's New in v2.2.0

🔄 Automatic Daily Note Ingestion

- Consolidation (daily/monthly/full modes) now auto-runs ingest-daily
- Bridges memory/YYYY-MM-DD.md files → the tiered memory system
- No more manual ingestion required: facts flow automatically
- Fixes the "two disconnected data paths" problem

What's New in v2.1.0

🎯 Structured Metadata Extraction

- Automatic extraction of URLs, shell commands, and file paths from facts
- Preserved during distillation and consolidation
- Searchable by URL fragment

✅ Memory Completeness Validation

- Check daily notes for missing URLs, commands, and next steps
- Proactive warnings for incomplete information
- Actionable suggestions for improvement

🔍 Enhanced Search

- Search facts by URL fragment
- Get all stored URLs from warm memory
- Metadata-aware fact storage

🛡️ URL Preservation

- URLs explicitly preserved during LLM distillation
- Fallback metadata extraction if the LLM misses them
- Command-line support for adding metadata manually

Architecture

AGENT CONTEXT (~8-15KB), always resident:

- Tree Index (~2KB): always loaded
- Retrieved Memory Nodes (1-3KB): fetched on demand per conversation, based on tree reasoning

    ↓ LLM-powered tree search

MEMORY TIERS:

- 🔴 HOT (5KB): core memory (identity, owner profile, active context, lessons (20 max)); always in context
- 🟡 WARM (50KB): scored facts (30-day retention, decaying, on-device); retrieved via tree search
- 🟢 COLD (∞): full archive (Turso DB, queryable, 10-year retention); retrieved via tree search

From Human Memory

- Consolidation: short-term → long-term transfer happens during consolidation cycles
- Relevance Decay: unused memories fade; accessed memories strengthen
- Strategic Forgetting: not remembering everything is a feature
- Hierarchical Organization: navigate categories, don't scan linearly

From PageIndex

- Vectorless Retrieval: LLM reasoning instead of embedding similarity
- Tree-Structured Index: O(log n) navigation, not an O(n) scan
- Explainable Results: every retrieval traces a path through categories
- Reasoning-Based Search: "why relevant?" not "how similar?"

Cloud-First (EvoClaw)

- Device is replaceable: the soul lives in the cloud (Turso)
- Critical sync: hot state + tree sync after every conversation
- Disaster recovery: full restore in <2 minutes
- Multi-device: same agent across phone/desktop/embedded

🔴 Hot Memory (5KB max)

Purpose: Core identity and active context, always in the agent's context window.

Structure:

{
  "identity": {
    "agent_name": "Agent",
    "owner_name": "User",
    "owner_preferred_name": "User",
    "relationship_start": "2026-01-15",
    "trust_level": 0.95
  },
  "owner_profile": {
    "personality": "technical, direct communication",
    "family": ["Sarah (wife)", "Luna (daughter, 3yo)"],
    "topics_loved": ["AI architecture", "blockchain", "system design"],
    "topics_avoid": ["small talk about weather"],
    "timezone": "Australia/Sydney",
    "work_hours": "9am-6pm"
  },
  "active_context": {
    "projects": [
      {
        "name": "EvoClaw",
        "description": "Self-evolving agent framework",
        "status": "Active - BSC integration for hackathon"
      }
    ],
    "events": [
      {"text": "Hackathon deadline Feb 15", "timestamp": 1707350400}
    ],
    "tasks": [
      {"text": "Deploy to BSC testnet", "status": "pending", "timestamp": 1707350400}
    ]
  },
  "critical_lessons": [
    {
      "text": "Always test on testnet before mainnet",
      "category": "blockchain",
      "importance": 0.9,
      "timestamp": 1707350400
    }
  ]
}

Auto-pruning:

- Lessons: max 20; removes the lowest-importance entries when full
- Events: keeps the last 10 only
- Tasks: max 10 pending
- Total size: hard limit at 5KB; progressively prunes if exceeded

Generates: MEMORY.md, auto-rebuilt from the structured hot state.
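The pruning rules above are simple enough to state in a few lines. A minimal sketch, assuming the hot state is a plain dict shaped like the structure shown (the shipped implementation lives in the skill's scripts and may differ in detail):

import operator

def prune_hot_state(hot: dict) -> dict:
    ctx = hot.setdefault("active_context", {})
    # Lessons: cap at 20, dropping the lowest-importance entries first
    lessons = sorted(hot.get("critical_lessons", []),
                     key=operator.itemgetter("importance"), reverse=True)
    hot["critical_lessons"] = lessons[:20]
    # Events: keep only the 10 most recent
    events = sorted(ctx.get("events", []), key=operator.itemgetter("timestamp"))
    ctx["events"] = events[-10:]
    # Tasks: keep at most 10 pending tasks
    pending = [t for t in ctx.get("tasks", []) if t.get("status") == "pending"]
    ctx["tasks"] = pending[:10]
    return hot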

🟡 Warm Memory (50KB max, 30-day retention)

Purpose: Recent distilled facts with decay scoring.

Entry format:

{
  "id": "abc123def456",
  "text": "Decided to use zero go-ethereum deps for EvoClaw to keep binary small",
  "category": "projects/evoclaw/architecture",
  "importance": 0.8,
  "created_at": 1707350400,
  "access_count": 3,
  "score": 0.742,
  "tier": "warm"
}

Scoring:

score = importance × recency_decay(age) × reinforcement(access_count)
recency_decay(age) = exp(-age_days / 30)
reinforcement(access) = 1 + 0.1 × access_count

Tier classification:

- score >= 0.7 → Hot (promote to hot state)
- score >= 0.3 → Warm (keep)
- score >= 0.05 → Cold (archive)
- score < 0.05 → Frozen (delete after retention period)

Eviction triggers:

- Age > 30 days AND score < 0.3
- Total warm size > 50KB (evicts lowest-scored)
- Manual consolidation
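The scoring formula is easy to verify in isolation. A runnable sketch of exactly the formula above (illustrative; the shipped scorer is in the skill's scripts):

import math
import time

def warm_score(importance: float, created_at: int, access_count: int, now=None) -> float:
    # score = importance × recency_decay(age) × reinforcement(access_count)
    now = now or time.time()
    age_days = (now - created_at) / 86400
    recency_decay = math.exp(-age_days / 30)   # 30-day exponential decay
    reinforcement = 1 + 0.1 * access_count     # each access adds a 10% boost
    return importance * recency_decay * reinforcement

# Importance 0.8, 10 days old, accessed 3 times → ≈ 0.745 (warm tier)
ts = 1707350400
print(round(warm_score(0.8, ts, 3, now=ts + 10 * 86400), 3))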

🟢 Cold Memory (Unlimited, Turso)

Purpose: Long-term archive, queryable but never bulk-loaded.

Schema:

CREATE TABLE cold_memories (
  id TEXT PRIMARY KEY,
  agent_id TEXT NOT NULL,
  text TEXT NOT NULL,
  category TEXT NOT NULL,
  importance REAL DEFAULT 0.5,
  created_at INTEGER NOT NULL,
  access_count INTEGER DEFAULT 0
);

CREATE TABLE critical_state (
  agent_id TEXT PRIMARY KEY,
  data TEXT NOT NULL, -- {hot_state, tree_nodes, timestamp}
  updated_at INTEGER NOT NULL
);

Retention: 10 years (configurable).
Cleanup: monthly consolidation removes frozen entries older than the retention period.
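The schema can be exercised locally before wiring up Turso. A sketch using Python's stdlib sqlite3 as a stand-in (Turso speaks the SQLite-compatible libSQL dialect; production access goes through the CLI's --db-url/--auth-token, not a local file):

import sqlite3
import time
import uuid

conn = sqlite3.connect("cold-memories.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS cold_memories (
  id TEXT PRIMARY KEY, agent_id TEXT NOT NULL, text TEXT NOT NULL,
  category TEXT NOT NULL, importance REAL DEFAULT 0.5,
  created_at INTEGER NOT NULL, access_count INTEGER DEFAULT 0
)""")
# Archive one fact, then run an agent-scoped query (mirrors the CLI's --agent-id scoping)
conn.execute(
    "INSERT INTO cold_memories VALUES (?, ?, ?, ?, ?, ?, 0)",
    (uuid.uuid4().hex, "default", "Decided raw JSON-RPC for BSC",
     "projects/evoclaw/bsc", 0.8, int(time.time())),
)
conn.commit()
rows = conn.execute(
    "SELECT text, category FROM cold_memories WHERE agent_id = ? AND text LIKE ?",
    ("default", "%BSC%"),
).fetchall()
print(rows)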

Tree Index

Purpose: Hierarchical category map for O(log n) retrieval.

Constraints:

- Max 50 nodes
- Max depth: 4 levels
- Max 2KB serialized
- Max 10 children per node

Example:

Memory Tree Index
==================================================
📂 Root (warm:15, cold:234)
  📁 owner - Owner profile and preferences
     Memories: warm=5, cold=89
  📁 projects - Active projects
     Memories: warm=8, cold=67
    📁 projects/evoclaw - EvoClaw framework
       Memories: warm=6, cold=45
      📁 projects/evoclaw/bsc - BSC integration
         Memories: warm=3, cold=12
  📁 technical - Technical setup and config
     Memories: warm=2, cold=34
  📁 lessons - Learned lessons and rules
     Memories: warm=0, cold=44

Nodes: 7/50
Size: 1842 / 2048 bytes

Operations:

- --add PATH DESC: add a category node
- --remove PATH: remove a node (only if it holds no data)
- --prune: remove dead nodes (no activity in 60+ days)
- --show: pretty-print the tree
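All four constraints can be checked mechanically before a node is added. A hedged sketch, assuming the tree is serialized as a {path: description} mapping (illustrative; the real memory-tree.json layout may differ):

import json

MAX_NODES, MAX_DEPTH, MAX_BYTES, MAX_CHILDREN = 50, 4, 2048, 10

def tree_violations(nodes: dict) -> list:
    """Return human-readable constraint violations for a {path: desc} tree."""
    problems = []
    if len(nodes) > MAX_NODES:
        problems.append(f"{len(nodes)} nodes > {MAX_NODES}")
    size = len(json.dumps(nodes).encode())
    if size > MAX_BYTES:
        problems.append(f"serialized size {size}B > {MAX_BYTES}B")
    for path in nodes:
        if path.count("/") + 1 > MAX_DEPTH:
            problems.append(f"{path}: deeper than {MAX_DEPTH} levels")
        prefix = path + "/"
        children = [p for p in nodes
                    if p.startswith(prefix) and "/" not in p[len(prefix):]]
        if len(children) > MAX_CHILDREN:
            problems.append(f"{path}: {len(children)} children > {MAX_CHILDREN}")
    return problems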

Distillation Engine

Purpose: Three-stage compression of conversations.

Pipeline:

Raw conversation (500B)
  ↓ Stage 1→2: extract structured info
Distilled fact (80B)
  ↓ Stage 2→3: generate one-line summary
Core summary (20B)

Stage 1→2: Raw → Distilled

Input: Raw conversation text
Output: Structured JSON

{
  "fact": "User decided to use raw JSON-RPC for BSC to avoid go-ethereum dependency",
  "emotion": "determined",
  "people": ["User"],
  "topics": ["blockchain", "architecture", "dependencies"],
  "actions": ["decided to use raw JSON-RPC", "avoid go-ethereum"],
  "outcome": "positive"
}

Modes:

- rule: regex/heuristic extraction (fast, no LLM)
- llm: LLM-powered extraction (accurate, requires an endpoint)

Usage:

# Rule-based (default)
distiller.py --text "Had a productive chat about the BSC integration..." --mode rule

# LLM-powered
distiller.py --text "..." --mode llm --llm-endpoint http://localhost:8080/complete

# With core summary
distiller.py --text "..." --mode rule --core-summary

Stage 2→3: Distilled → Core Summary

Purpose: One-line summary for the tree index.

Example:

Distilled:
{
  "fact": "User decided raw JSON-RPC for BSC, no go-ethereum",
  "outcome": "positive"
}

Core summary: "BSC integration: raw JSON-RPC (no deps)"

Target: <30 bytes

LLM-Powered Tree Search

Purpose: Semantic search through the tree structure using LLM reasoning.

How it works:

1. Build a prompt with the tree structure + the query
2. The LLM reasons about which categories are relevant
3. It returns category paths with relevance scores
4. Memories are fetched from those categories

Example:

Query: "What did we decide about the hackathon deadline?"

Keyword search returns:
  projects/evoclaw (0.8)
  technical/deployment (0.4)

LLM search reasons:
  projects/evoclaw/bsc (0.95) - "BSC integration for hackathon"
  active_context/events (0.85) - "Deadline mentioned here"

LLM prompt template:

You are a memory retrieval system. Given a memory tree index and a query, identify which categories are relevant.

Memory Tree Index:
projects/evoclaw - EvoClaw framework (warm:6, cold:45)
projects/evoclaw/bsc - BSC integration (warm:3, cold:12)
...

User Query: What did we decide about the hackathon deadline?

Output (JSON):
[
  {"path": "projects/evoclaw/bsc", "relevance": 0.95, "reason": "BSC work for hackathon"},
  {"path": "active_context/events", "relevance": 0.85, "reason": "deadline tracking"}
]

Usage:

# Keyword search (fast)
tree_search.py --query "BSC integration" --tree-file memory-tree.json --mode keyword

# LLM search (accurate)
tree_search.py --query "what did we decide about hackathon?" \
  --tree-file memory-tree.json --mode llm --llm-endpoint http://localhost:8080/complete

# Generate a prompt for an external LLM
tree_search.py --query "..." --tree-file memory-tree.json \
  --mode llm --llm-prompt-file prompt.txt
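Building the prompt is mostly string assembly. A sketch of producing the template above from a tree file (the {path: {"desc", "warm", "cold"}} layout is assumed for illustration; tree_search.py is the authoritative implementation):

import json

def build_retrieval_prompt(tree_file: str, query: str) -> str:
    with open(tree_file) as f:
        nodes = json.load(f)  # assumed: {path: {"desc": str, "warm": int, "cold": int}}
    index_lines = [
        f'{path} - {n["desc"]} (warm:{n["warm"]}, cold:{n["cold"]})'
        for path, n in nodes.items()
    ]
    return (
        "You are a memory retrieval system. Given a memory tree index and a query, "
        "identify which categories are relevant.\n\n"
        "Memory Tree Index:\n" + "\n".join(index_lines) + "\n\n"
        f"User Query: {query}\n\n"
        'Output (JSON): [{"path": ..., "relevance": 0.0-1.0, "reason": ...}]'
    )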

Multi-Agent Support

Agent ID scoping: all operations support an --agent-id flag.

File layout:

memory/
  default/
    warm-memory.json
    memory-tree.json
    hot-memory-state.json
    metrics.json
  agent-2/
    warm-memory.json
    memory-tree.json
    ...
MEMORY.md          # default agent
MEMORY-agent-2.md  # agent-2

Cold storage: agent-scoped queries via the agent_id column.

Usage:

# Store for agent-2
memory_cli.py store --text "..." --category "..." --agent-id agent-2

# Retrieve for agent-2
memory_cli.py retrieve --query "..." --agent-id agent-2

# Consolidate agent-2
memory_cli.py consolidate --mode daily --agent-id agent-2

Consolidation Modes

Purpose: Periodic memory maintenance and optimization.

Quick (hourly)

- Warm eviction (score-based)
- Archive expired entries to cold
- Recalculate all scores
- Rebuild MEMORY.md

Daily

- Everything in Quick
- Tree prune (remove dead nodes with no activity in 60+ days)

Monthly

- Everything in Daily
- Tree rebuild (LLM-powered restructuring, future)
- Cold cleanup (delete frozen entries older than retention)

Full

- Everything in Monthly
- Full recalculation of all scores
- Deep tree analysis
- Generates a consolidation report

Usage:

# Quick consolidation (default)
memory_cli.py consolidate

# Daily (run via cron)
memory_cli.py consolidate --mode daily

# Monthly (run via cron)
memory_cli.py consolidate --mode monthly --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

Recommended schedule:

- Quick: every 2-4 hours (heartbeat)
- Daily: midnight via cron
- Monthly: 1st of the month via cron

Critical Sync (Cloud-First)

Purpose: Cloud backup of hot state + tree after every conversation.

What syncs:

- Hot memory state (identity, owner profile, active context, lessons)
- Tree index (structure + counts)
- Timestamp

Recovery: if the device is lost, restore from the cloud in <2 minutes.

Usage:

# Manual critical sync
memory_cli.py sync-critical --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN" --agent-id default

# Automatic: call after every important conversation. In agent code:
# 1. Process the conversation
# 2. Store distilled facts
# 3. Call sync-critical

Retry strategy: exponential backoff if the cloud is unreachable (5s, 10s, 20s, 40s); see the sketch below.
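The stated backoff is easy to wrap around the CLI call. A minimal sketch of the 5s/10s/20s/40s retry loop (the command mirrors the manual sync above; the loop itself is illustrative):

import os
import subprocess
import time

def sync_critical_with_retry(agent_id: str = "default") -> bool:
    for delay in (5, 10, 20, 40):  # exponential backoff when the cloud is unreachable
        result = subprocess.run([
            "python3", "skills/tiered-memory/scripts/memory_cli.py", "sync-critical",
            "--db-url", os.getenv("TURSO_URL"),
            "--auth-token", os.getenv("TURSO_TOKEN"),
            "--agent-id", agent_id,
        ])
        if result.returncode == 0:
            return True
        time.sleep(delay)
    return False  # still unreachable; retry on the next heartbeat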

Metrics & Observability

Tracked metrics:

{
  "tree_index_size_bytes": 1842,
  "tree_node_count": 37,
  "hot_memory_size_bytes": 4200,
  "warm_memory_count": 145,
  "warm_memory_size_kb": 38.2,
  "retrieval_count": 234,
  "evictions_today": 12,
  "reinforcements_today": 67,
  "consolidation_count": 8,
  "last_consolidation": 1707350400,
  "context_tokens_saved": 47800,
  "timestamp": "2026-02-10T14:30:00"
}

Usage:

memory_cli.py metrics --agent-id default

Key metrics:

- context_tokens_saved: estimated tokens saved vs. a flat MEMORY.md
- retrieval_count: how often memories are accessed
- evictions_today: memory-pressure indicator
- warm_memory_size_kb: storage usage

Store

memory_cli.py store --text "Fact text" --category "path/to/category" [--importance 0.8] [--agent-id default]

Importance guide:

- 0.9-1.0: critical decisions, credentials, core identity
- 0.7-0.8: project decisions, architecture, preferences
- 0.5-0.6: general facts, daily events
- 0.3-0.4: casual mentions, low priority

Example:

memory_cli.py store \
  --text "Decided to deploy EvoClaw on BSC testnet before mainnet" \
  --category "projects/evoclaw/deployment" \
  --importance 0.85 \
  --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

# Store with explicit metadata (v2.1.0+)
memory_cli.py store \
  --text "Z-Image ComfyUI model for photorealistic images" \
  --category "tools/image-generation" \
  --importance 0.8 \
  --url "https://docs.comfy.org/tutorials/image/z-image/z-image" \
  --command "huggingface-cli download Tongyi-MAI/Z-Image" \
  --path "/home/user/models/"

Validate (v2.1.0)

memory_cli.py validate [--file PATH] [--agent-id default]

Purpose: Check daily notes for incomplete information (missing URLs, commands, next steps).

Example:

# Validate today's daily notes
memory_cli.py validate

# Validate a specific file
memory_cli.py validate --file memory/2026-02-13.md

Output:

{
  "status": "warning",
  "warnings_count": 2,
  "warnings": [
    "Tool 'Z-Image' mentioned without URL/documentation link",
    "Action 'install' mentioned without command example"
  ],
  "suggestions": [
    "Add URLs for mentioned tools/services",
    "Include command examples for setup/installation steps",
    "Document next steps after decisions"
  ]
}

Extract Metadata (v2.1.0)

memory_cli.py extract-metadata --file PATH

Purpose: Extract structured metadata (URLs, commands, paths) from a file.

Example:

memory_cli.py extract-metadata --file memory/2026-02-13.md

Output:

{
  "file": "memory/2026-02-13.md",
  "metadata": {
    "urls": [
      "https://docs.comfy.org/tutorials/image/z-image/z-image",
      "https://github.com/Lightricks/LTX-Video"
    ],
    "commands": [
      "huggingface-cli download Tongyi-MAI/Z-Image",
      "git clone https://github.com/Lightricks/LTX-Video.git"
    ],
    "paths": [
      "/home/peter/ai-stack/comfyui/models",
      "./configs/ltx-video-2-config.yaml"
    ]
  },
  "summary": {
    "urls_count": 2,
    "commands_count": 2,
    "paths_count": 2
  }
}
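Under the hood, this kind of extraction is plain pattern matching. A rough sketch of plausible heuristics (the regexes here are illustrative, not the skill's actual patterns):

import re

URL_RE = re.compile(r'https?://[^\s)"\']+')
CMD_RE = re.compile(r'^(?:\$ )?((?:git|pip|npm|curl|docker|huggingface-cli)\s.+)$', re.MULTILINE)
PATH_RE = re.compile(r'(?<![\w/])(?:~|\.{1,2})?/[\w.\-]+(?:/[\w.\-]+)+')

def extract_metadata(text: str) -> dict:
    no_urls = URL_RE.sub("", text)  # keep URL paths out of the file-path matches
    return {
        "urls": URL_RE.findall(text),
        "commands": [m.strip() for m in CMD_RE.findall(text)],
        "paths": PATH_RE.findall(no_urls),
    }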

Search by URL (v2.1.0)

memory_cli.py search-url --url FRAGMENT [--limit 5] [--agent-id default]

Purpose: Search facts by URL fragment.

Example:

# Find all facts with comfy.org URLs
memory_cli.py search-url --url "comfy.org"

# Find GitHub repos
memory_cli.py search-url --url "github.com" --limit 10

Output:

{
  "query": "comfy.org",
  "results_count": 1,
  "results": [
    {
      "id": "abc123",
      "text": "Z-Image ComfyUI model for photorealistic images",
      "category": "tools/image-generation",
      "metadata": {
        "urls": ["https://docs.comfy.org/tutorials/image/z-image/z-image"],
        "commands": ["huggingface-cli download Tongyi-MAI/Z-Image"],
        "paths": []
      }
    }
  ]
}

Retrieve

memory_cli.py retrieve --query "search query" [--limit 5] [--llm] [--llm-endpoint URL] [--agent-id default]

Modes:

- Default: keyword-based tree + warm + cold search
- --llm: LLM-powered semantic tree search

Example:

# Keyword search
memory_cli.py retrieve --query "BSC deployment decision" --limit 5

# LLM search (more accurate)
memory_cli.py retrieve \
  --query "what did we decide about blockchain integration?" \
  --llm --llm-endpoint http://localhost:8080/complete \
  --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

Distill

memory_cli.py distill --text "raw conversation" [--llm] [--llm-endpoint URL]

Example:

# Rule-based distillation
memory_cli.py distill --text "User: Let's deploy to testnet first. Agent: Good idea, safer that way."

# LLM distillation
memory_cli.py distill \
  --text "Long conversation with nuance..." \
  --llm --llm-endpoint http://localhost:8080/complete

Output:

{
  "distilled": {
    "fact": "Decided to deploy to testnet before mainnet",
    "emotion": "cautious",
    "people": [],
    "topics": ["deployment", "testnet", "safety"],
    "actions": ["deploy to testnet"],
    "outcome": "positive"
  },
  "mode": "rule",
  "original_size": 87,
  "distilled_size": 156
}

Hot Memory

# Update hot state
memory_cli.py hot --update KEY JSON [--agent-id default]

# Rebuild MEMORY.md
memory_cli.py hot --rebuild [--agent-id default]

# Show current hot state
memory_cli.py hot [--agent-id default]

Keys:

- identity: agent/owner identity info
- owner_profile: owner preferences, personality
- lesson: add a critical lesson
- event: add an event to active context
- task: add a task to active context
- project: add/update a project

Examples:

# Update owner profile
memory_cli.py hot --update owner_profile '{"timezone": "Australia/Sydney", "work_hours": "9am-6pm"}'

# Add lesson
memory_cli.py hot --update lesson '{"text": "Always test on testnet first", "category": "blockchain", "importance": 0.9}'

# Add project
memory_cli.py hot --update project '{"name": "EvoClaw", "status": "Active", "description": "Self-evolving agent framework"}'

# Rebuild MEMORY.md
memory_cli.py hot --rebuild

Tree

# Show tree
memory_cli.py tree --show [--agent-id default]

# Add node
memory_cli.py tree --add "path/to/category" "Description" [--agent-id default]

# Remove node
memory_cli.py tree --remove "path/to/category" [--agent-id default]

# Prune dead nodes
memory_cli.py tree --prune [--agent-id default]

Examples:

# Add a category
memory_cli.py tree --add "projects/evoclaw/bsc" "BSC blockchain integration"

# Remove an empty category
memory_cli.py tree --remove "old/unused/path"

# Prune dead nodes (60+ days with no activity)
memory_cli.py tree --prune

Cold Storage

# Initialize Turso tables
memory_cli.py cold --init --db-url URL --auth-token TOKEN

# Query cold storage
memory_cli.py cold --query "search term" [--limit 10] [--agent-id default] --db-url URL --auth-token TOKEN

Examples:

# Init tables (once)
memory_cli.py cold --init --db-url "https://your-db.turso.io" --auth-token "your-token"

# Query the cold archive
memory_cli.py cold --query "blockchain decision" --limit 10 --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

Configuration

File: config.json (optional; defaults are used if it is not present)

{
  "agent_id": "default",
  "hot": { "max_bytes": 5120, "max_lessons": 20, "max_events": 10, "max_tasks": 10 },
  "warm": { "max_kb": 50, "retention_days": 30, "eviction_threshold": 0.3 },
  "cold": { "backend": "turso", "retention_years": 10 },
  "scoring": { "half_life_days": 30, "reinforcement_boost": 0.1 },
  "tree": { "max_nodes": 50, "max_depth": 4, "max_size_bytes": 2048 },
  "distillation": { "aggression": 0.7, "max_distilled_bytes": 100, "mode": "rule" },
  "consolidation": { "warm_eviction": "hourly", "tree_prune": "daily", "tree_rebuild": "monthly" }
}
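Since config.json is optional, a loader needs per-section fallback to these defaults. A minimal sketch (defaults abbreviated; a shallow per-section merge is assumed, which may not match the skill's exact semantics):

import json
import os

DEFAULTS = {
    "agent_id": "default",
    "hot": {"max_bytes": 5120, "max_lessons": 20, "max_events": 10, "max_tasks": 10},
    "warm": {"max_kb": 50, "retention_days": 30, "eviction_threshold": 0.3},
    "scoring": {"half_life_days": 30, "reinforcement_boost": 0.1},
}

def load_config(path: str = "config.json") -> dict:
    cfg = {k: dict(v) if isinstance(v, dict) else v for k, v in DEFAULTS.items()}
    if os.path.exists(path):
        with open(path) as f:
            for key, value in json.load(f).items():
                if isinstance(value, dict) and isinstance(cfg.get(key), dict):
                    cfg[key].update(value)  # override per section, keep other defaults
                else:
                    cfg[key] = value
    return cfg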

After Conversation

import subprocess
import json
import os

def process_conversation(user_message, agent_response, category="conversations"):
    # 1. Distill the conversation
    text = f"User: {user_message}\nAgent: {agent_response}"
    result = subprocess.run(
        ["python3", "skills/tiered-memory/scripts/memory_cli.py",
         "distill", "--text", text],
        capture_output=True, text=True
    )
    distilled = json.loads(result.stdout)

    # 2. Determine importance
    importance = 0.7 if "decision" in distilled["distilled"]["outcome"] else 0.5

    # 3. Store
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py", "store",
        "--text", distilled["distilled"]["fact"],
        "--category", category,
        "--importance", str(importance),
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])

    # 4. Critical sync
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py", "sync-critical",
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])

Before Responding (Retrieval)

# Uses the same subprocess/json/os imports as the snippet above
def get_relevant_context(query):
    result = subprocess.run(
        [
            "python3", "skills/tiered-memory/scripts/memory_cli.py", "retrieve",
            "--query", query, "--limit", "5",
            "--llm", "--llm-endpoint", "http://localhost:8080/complete",
            "--db-url", os.getenv("TURSO_URL"),
            "--auth-token", os.getenv("TURSO_TOKEN")
        ],
        capture_output=True, text=True
    )
    memories = json.loads(result.stdout)
    return "\n".join([f"- {m['text']}" for m in memories])

Heartbeat Consolidation

import datetime
import os
import subprocess
import schedule

def consolidate(mode):
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py",
        "consolidate", "--mode", mode,
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])

# Quick consolidation every 2 hours
schedule.every(2).hours.do(lambda: consolidate("quick"))

# Daily tree prune at midnight
schedule.every().day.at("00:00").do(lambda: consolidate("daily"))

# Monthly full consolidation: the schedule library has no monthly interval,
# so check the date daily and fire on the 1st (or use cron instead)
schedule.every().day.at("00:30").do(
    lambda: consolidate("monthly") if datetime.date.today().day == 1 else None
)

# ...then call schedule.run_pending() periodically from the agent's main loop

Model Recommendations

For Distillation & Tree Search:

- Claude 3 Haiku (fast, cheap, excellent structure)
- GPT-4o-mini (good balance)
- Gemini 1.5 Flash (very fast)

For Tree Rebuilding:

- Claude 3.5 Sonnet (better reasoning)
- GPT-4o (strong planning)

Cost Optimization

- Use cheaper models for frequent operations (distill, search)
- Batch distillation: queue conversations and distill them in batches
- Cache tree prompts: the tree structure doesn't change often (see the sketch below)
- Skip the LLM for simple cases: use rule-based distillation for short conversations
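One way to cache tree prompts is to key the rendered prompt on a hash of the tree file, so it is rebuilt only when the tree actually changes. A sketch (the function names are illustrative, not part of the skill's API):

import hashlib
import pathlib

_PROMPT_CACHE = {}

def cached_tree_block(tree_file: str) -> str:
    """Rebuild the tree portion of the prompt only when the file content changes."""
    raw = pathlib.Path(tree_file).read_bytes()
    digest = hashlib.sha256(raw).hexdigest()
    if digest not in _PROMPT_CACHE:
        _PROMPT_CACHE[digest] = "Memory Tree Index:\n" + raw.decode()
    return _PROMPT_CACHE[digest]

def tree_prompt(tree_file: str, query: str) -> str:
    return cached_tree_block(tree_file) + f"\n\nUser Query: {query}"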

Example LLM Endpoint

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/complete", methods=["POST"])
def complete():
    data = request.json
    prompt = data["prompt"]
    # Call your LLM here (OpenAI, Anthropic, a local model, etc.);
    # llm_client is a placeholder for whichever client you wire in.
    response = llm_client.complete(prompt)
    return jsonify({"text": response})

if __name__ == "__main__":
    app.run(port=8080)

Performance Characteristics

Context Size:

- Hot: ~5KB (always loaded)
- Tree: ~2KB (always loaded)
- Retrieved: ~1-3KB per query
- Total: ~8-15KB (constant, regardless of agent age)

Retrieval Speed:

- Keyword: 10-20ms
- LLM tree search: 300-600ms
- Cold query: 50-100ms

5-Year Scenario:

- Hot: still 5KB (living document)
- Warm: last 30 days (~50KB)
- Cold: ~50MB in Turso (compressed distilled facts)
- Tree: still 2KB (different nodes, same size)
- Context per session: same as day 1

Comparison with Alternatives

System         | Memory Model | Scaling    | Accuracy                  | Cost
Flat MEMORY.md | Linear text  | ❌ Months  | ⚠️ Degrades               | ❌ Linear
Vector RAG     | Embeddings   | ✅ Years   | ⚠️ Similarity ≠ relevance | ⚠️ Moderate
EvoClaw Tiered | Tree + tiers | ✅ Decades | ✅ Reasoning-based        | ✅ Fixed

Why tree > vectors:

- Accuracy: 98%+ vs. 70-80% (PageIndex benchmark)
- Explainable: "Projects → EvoClaw → BSC" vs. "cosine 0.73"
- Multi-hop: natural vs. poor
- False positives: low vs. high

Tree size exceeding limit

# Prune dead nodes
memory_cli.py tree --prune

# Check which nodes are largest
memory_cli.py tree --show | grep "Memories:"

# Manually remove unused categories
memory_cli.py tree --remove "unused/category"

Warm memory filling up

# Run consolidation
memory_cli.py consolidate --mode daily --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

# Check stats
memory_cli.py metrics

# Raise the eviction threshold (keeps less in warm)
# Edit config.json: "eviction_threshold": 0.4

Hot memory exceeding 5KB

# Hot memory auto-prunes, but check the structure
memory_cli.py hot

# Remove old projects/tasks manually
memory_cli.py hot --update project '{"name": "OldProject", "status": "Completed"}'

# Rebuild to force pruning
memory_cli.py hot --rebuild

LLM search failing

# Fall back to keyword search (automatic)
memory_cli.py retrieve --query "..." --limit 5

# Test the LLM endpoint
curl -X POST http://localhost:8080/complete -H "Content-Type: application/json" -d '{"prompt": "test"}'

# Generate a prompt for external testing
tree_search.py --query "..." --tree-file memory/memory-tree.json --mode llm --llm-prompt-file test.txt

Migration from v1.x

Backward compatible: existing warm-memory.json and memory-tree.json files work as-is.

New files:

- config.json (optional, uses defaults)
- hot-memory-state.json (auto-created)
- metrics.json (auto-created)

Steps:

1. Update the skill: clawhub update tiered-memory
2. Run a consolidation to rebuild the hot state: memory_cli.py consolidate
3. Initialize cold storage (optional): memory_cli.py cold --init --db-url ... --auth-token ...
4. Configure your agent to use the new commands (see the integration sections above)

Migration from v2.0 to v2.1

Fully backward compatible: existing memory files work without changes.

What's new:

- ✅ Metadata automatically extracted from existing facts when loaded
- ✅ New commands: validate, extract-metadata, search-url
- ✅ The store command now accepts --url, --command, and --path flags
- ✅ Distillation preserves URLs and technical details
- ✅ No action required: just update and use the new features

Testing the upgrade:

# Update the skill
clawhub update tiered-memory

# Test metadata extraction
memory_cli.py extract-metadata --file memory/2026-02-13.md

# Validate your recent notes
memory_cli.py validate

# Search by URL
memory_cli.py search-url --url "github.com"

References

- Design: /docs/TIERED-MEMORY.md (EvoClaw)
- Cloud Sync: /docs/CLOUD-SYNC.md (EvoClaw)
- Inspiration: PageIndex (tree-based retrieval)

v2.2.0 - A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. Now with structured metadata to remember HOW, not just WHAT. 🧠🌲🔗

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
4 docs, 1 script, 1 config
  • SKILL.md (primary doc)
  • API_ENDPOINTS.md (docs)
  • METRICS_TRACKER.md (docs)
  • README.md (docs)
  • scripts/distiller.py (script)
  • config.json (config)