{
  "schemaVersion": "1.0",
  "item": {
    "slug": "tiered-memory",
    "name": "Tiered Memory",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/bowen31337/tiered-memory",
    "canonicalUrl": "https://clawhub.ai/bowen31337/tiered-memory",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/tiered-memory",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=tiered-memory",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "API_ENDPOINTS.md",
      "METRICS_TRACKER.md",
      "README.md",
      "SKILL.md",
      "config.json",
      "scripts/distiller.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/tiered-memory"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/tiered-memory",
    "agentPageUrl": "https://openagent3.xyz/skills/tiered-memory/agent",
    "manifestUrl": "https://openagent3.xyz/skills/tiered-memory/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/tiered-memory/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Tiered Memory System v2.2.0",
        "body": "A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. 🧠\n\nEvoClaw-compatible three-tier memory system inspired by human cognition and PageIndex tree retrieval."
      },
      {
        "title": "What's New in v2.2.0",
        "body": "🔄 Automatic Daily Note Ingestion\n\nConsolidation (daily/monthly/full modes) now auto-runs ingest-daily\nBridges memory/YYYY-MM-DD.md files → tiered memory system\nNo more manual ingestion required — facts flow automatically\nFixes the \"two disconnected data paths\" problem"
      },
      {
        "title": "What's New in v2.1.0",
        "body": "🎯 Structured Metadata Extraction\n\nAutomatic extraction of URLs, shell commands, and file paths from facts\nPreserved during distillation and consolidation\nSearchable by URL fragment\n\n✅ Memory Completeness Validation\n\nCheck daily notes for missing URLs, commands, and next steps\nProactive warnings for incomplete information\nActionable suggestions for improvement\n\n🔍 Enhanced Search\n\nSearch facts by URL fragment\nGet all stored URLs from warm memory\nMetadata-aware fact storage\n\n🛡️ URL Preservation\n\nURLs explicitly preserved during LLM distillation\nFallback metadata extraction if LLM misses them\nCommand-line support for adding metadata manually"
      },
      {
        "title": "Architecture",
        "body": "┌─────────────────────────────────────────────────────┐\n│              AGENT CONTEXT (~8-15KB)                │\n│                                                     │\n│  ┌──────────┐  ┌────────────────────────────────┐  │\n│  │  Tree    │  │  Retrieved Memory Nodes         │  │\n│  │  Index   │  │  (on-demand, 1-3KB)            │  │\n│  │  (~2KB)  │  │                                │  │\n│  │          │  │  Fetched per conversation      │  │\n│  │  Always  │  │  based on tree reasoning       │  │\n│  │  loaded  │  │                                │  │\n│  └────┬─────┘  └────────────────────────────────┘  │\n│       │                                             │\n└───────┼─────────────────────────────────────────────┘\n        │\n        │ LLM-powered tree search\n        │\n┌───────▼─────────────────────────────────────────────┐\n│              MEMORY TIERS                           │\n│                                                     │\n│  🔴 HOT (5KB)      🟡 WARM (50KB)     🟢 COLD (∞)  │\n│                                                     │\n│  Core memory       Scored facts      Full archive  │\n│  - Identity        - 30-day         - Turso DB     │\n│  - Owner profile   - Decaying       - Queryable    │\n│  - Active context  - On-device      - 10-year      │\n│  - Lessons (20 max)                                │\n│                                                     │\n│  Always in         Retrieved via     Retrieved via │\n│  context           tree search       tree search   │\n└─────────────────────────────────────────────────────┘"
      },
      {
        "title": "From Human Memory",
        "body": "Consolidation — Short-term → long-term happens during consolidation cycles\nRelevance Decay — Unused memories fade; accessed memories strengthen\nStrategic Forgetting — Not remembering everything is a feature\nHierarchical Organization — Navigate categories, not scan linearly"
      },
      {
        "title": "From PageIndex",
        "body": "Vectorless Retrieval — LLM reasoning instead of embedding similarity\nTree-Structured Index — O(log n) navigation, not O(n) scan\nExplainable Results — Every retrieval traces a path through categories\nReasoning-Based Search — \"Why relevant?\" not \"how similar?\""
      },
      {
        "title": "Cloud-First (EvoClaw)",
        "body": "Device is replaceable — Soul lives in cloud (Turso)\nCritical sync — Hot + tree sync after every conversation\nDisaster recovery — Full restore in <2 minutes\nMulti-device — Same agent across phone/desktop/embedded"
      },
      {
        "title": "🔴 Hot Memory (5KB max)",
        "body": "Purpose: Core identity and active context, always in agent's context window.\n\nStructure:\n\n{\n  \"identity\": {\n    \"agent_name\": \"Agent\",\n    \"owner_name\": \"User\",\n    \"owner_preferred_name\": \"User\",\n    \"relationship_start\": \"2026-01-15\",\n    \"trust_level\": 0.95\n  },\n  \"owner_profile\": {\n    \"personality\": \"technical, direct communication\",\n    \"family\": [\"Sarah (wife)\", \"Luna (daughter, 3yo)\"],\n    \"topics_loved\": [\"AI architecture\", \"blockchain\", \"system design\"],\n    \"topics_avoid\": [\"small talk about weather\"],\n    \"timezone\": \"Australia/Sydney\",\n    \"work_hours\": \"9am-6pm\"\n  },\n  \"active_context\": {\n    \"projects\": [\n      {\n        \"name\": \"EvoClaw\",\n        \"description\": \"Self-evolving agent framework\",\n        \"status\": \"Active - BSC integration for hackathon\"\n      }\n    ],\n    \"events\": [\n      {\"text\": \"Hackathon deadline Feb 15\", \"timestamp\": 1707350400}\n    ],\n    \"tasks\": [\n      {\"text\": \"Deploy to BSC testnet\", \"status\": \"pending\", \"timestamp\": 1707350400}\n    ]\n  },\n  \"critical_lessons\": [\n    {\n      \"text\": \"Always test on testnet before mainnet\",\n      \"category\": \"blockchain\",\n      \"importance\": 0.9,\n      \"timestamp\": 1707350400\n    }\n  ]\n}\n\nAuto-pruning:\n\nLessons: Max 20, removes lowest-importance when full\nEvents: Keeps last 10 only\nTasks: Max 10 pending\nTotal size: Hard limit at 5KB, progressively prunes if exceeded\n\nGenerates: MEMORY.md — auto-rebuilt from structured hot state"
      },
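      {
        "title": "Pruning Sketch",
        "body": "A minimal sketch of the lesson-pruning rule above (illustrative Python, not the package implementation):\n\nMAX_LESSONS = 20\n\ndef add_lesson(lessons, new_lesson):\n    lessons.append(new_lesson)\n    if len(lessons) > MAX_LESSONS:\n        # drop the lowest-importance lesson when over the cap\n        lessons.remove(min(lessons, key=lambda l: l[\"importance\"]))\n    return lessons"
      },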
      {
        "title": "🟡 Warm Memory (50KB max, 30-day retention)",
        "body": "Purpose: Recent distilled facts with decay scoring.\n\nEntry format:\n\n{\n  \"id\": \"abc123def456\",\n  \"text\": \"Decided to use zero go-ethereum deps for EvoClaw to keep binary small\",\n  \"category\": \"projects/evoclaw/architecture\",\n  \"importance\": 0.8,\n  \"created_at\": 1707350400,\n  \"access_count\": 3,\n  \"score\": 0.742,\n  \"tier\": \"warm\"\n}\n\nScoring:\n\nscore = importance × recency_decay(age) × reinforcement(access_count)\n\nrecency_decay(age) = exp(-age_days / 30)\nreinforcement(access) = 1 + 0.1 × access_count\n\nTier classification:\n\nscore >= 0.7 → Hot (promote to hot state)\nscore >= 0.3 → Warm (keep)\nscore >= 0.05 → Cold (archive)\nscore < 0.05 → Frozen (delete after retention period)\n\nEviction triggers:\n\nAge > 30 days AND score < 0.3\nTotal warm size > 50KB (evicts lowest-scored)\nManual consolidation"
      },
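      {
        "title": "Worked Scoring Example",
        "body": "A quick sanity check of the scoring formula above (illustrative numbers):\n\nimportance = 0.8, age = 15 days, access_count = 3\nrecency_decay = exp(-15 / 30) ≈ 0.607\nreinforcement = 1 + 0.1 × 3 = 1.3\nscore = 0.8 × 0.607 × 1.3 ≈ 0.631 → stays Warm (0.3 ≤ score < 0.7)"
      },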
      {
        "title": "🟢 Cold Memory (Unlimited, Turso)",
        "body": "Purpose: Long-term archive, queryable but never bulk-loaded.\n\nSchema:\n\nCREATE TABLE cold_memories (\n  id TEXT PRIMARY KEY,\n  agent_id TEXT NOT NULL,\n  text TEXT NOT NULL,\n  category TEXT NOT NULL,\n  importance REAL DEFAULT 0.5,\n  created_at INTEGER NOT NULL,\n  access_count INTEGER DEFAULT 0\n);\n\nCREATE TABLE critical_state (\n  agent_id TEXT PRIMARY KEY,\n  data TEXT NOT NULL,  -- {hot_state, tree_nodes, timestamp}\n  updated_at INTEGER NOT NULL\n);\n\nRetention: 10 years (configurable)\nCleanup: Monthly consolidation removes frozen entries older than retention period"
      },
      {
        "title": "Tree Index",
        "body": "Purpose: Hierarchical category map for O(log n) retrieval.\n\nConstraints:\n\nMax 50 nodes\nMax depth 4 levels\nMax 2KB serialized\nMax 10 children per node\n\nExample:\n\nMemory Tree Index\n==================================================\n📂 Root (warm:15, cold:234)\n  📁 owner — Owner profile and preferences\n     Memories: warm=5, cold=89\n  📁 projects — Active projects\n     Memories: warm=8, cold=67\n    📁 projects/evoclaw — EvoClaw framework\n       Memories: warm=6, cold=45\n      📁 projects/evoclaw/bsc — BSC integration\n         Memories: warm=3, cold=12\n  📁 technical — Technical setup and config\n     Memories: warm=2, cold=34\n  📁 lessons — Learned lessons and rules\n     Memories: warm=0, cold=44\n\nNodes: 7/50\nSize: 1842 / 2048 bytes\n\nOperations:\n\n--add PATH DESC — Add category node\n--remove PATH — Remove node (only if no data)\n--prune — Remove dead nodes (no activity in 60+ days)\n--show — Pretty-print tree"
      },
      {
        "title": "Distillation Engine",
        "body": "Purpose: Three-stage compression of conversations.\n\nPipeline:\n\nRaw conversation (500B)\n  ↓ Stage 1→2: Extract structured info\nDistilled fact (80B)\n  ↓ Stage 2→3: Generate one-line summary\nCore summary (20B)"
      },
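      {
        "title": "Compression Arithmetic",
        "body": "Back-of-envelope numbers for the pipeline above (illustrative): each stage shrinks the text several-fold, roughly 25× overall (500B → 20B). At 100 conversations a day, ~50KB of raw text reduces to ~8KB of distilled facts and ~2KB of core summaries."
      },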
      {
        "title": "Stage 1→2: Raw → Distilled",
        "body": "Input: Raw conversation text\nOutput: Structured JSON\n\n{\n  \"fact\": \"User decided to use raw JSON-RPC for BSC to avoid go-ethereum dependency\",\n  \"emotion\": \"determined\",\n  \"people\": [\"User\"],\n  \"topics\": [\"blockchain\", \"architecture\", \"dependencies\"],\n  \"actions\": [\"decided to use raw JSON-RPC\", \"avoid go-ethereum\"],\n  \"outcome\": \"positive\"\n}\n\nModes:\n\nrule: Regex/heuristic extraction (fast, no LLM)\nllm: LLM-powered extraction (accurate, requires endpoint)\n\nUsage:\n\n# Rule-based (default)\ndistiller.py --text \"Had a productive chat about the BSC integration...\" --mode rule\n\n# LLM-powered\ndistiller.py --text \"...\" --mode llm --llm-endpoint http://localhost:8080/complete\n\n# With core summary\ndistiller.py --text \"...\" --mode rule --core-summary"
      },
      {
        "title": "Stage 2→3: Distilled → Core Summary",
        "body": "Purpose: One-line summary for tree index\n\nExample:\n\nDistilled: {\n  \"fact\": \"User decided raw JSON-RPC for BSC, no go-ethereum\",\n  \"outcome\": \"positive\"\n}\n\nCore summary: \"BSC integration: raw JSON-RPC (no deps)\"\n\nTarget: <30 bytes"
      },
      {
        "title": "LLM-Powered Tree Search",
        "body": "Purpose: Semantic search through tree structure using LLM reasoning.\n\nHow it works:\n\nBuild prompt with tree structure + query\nLLM reasons about which categories are relevant\nReturns category paths with relevance scores\nFetches memories from those categories\n\nExample:\n\nQuery: \"What did we decide about the hackathon deadline?\"\n\nKeyword search returns:\n\nprojects/evoclaw (0.8)\ntechnical/deployment (0.4)\n\nLLM search reasons:\n\nprojects/evoclaw/bsc (0.95) — \"BSC integration for hackathon\"\nactive_context/events (0.85) — \"Deadline mentioned here\"\n\nLLM prompt template:\n\nYou are a memory retrieval system. Given a memory tree index and a query, \nidentify which categories are relevant.\n\nMemory Tree Index:\n  projects/evoclaw — EvoClaw framework (warm:6, cold:45)\n  projects/evoclaw/bsc — BSC integration (warm:3, cold:12)\n  ...\n\nUser Query: What did we decide about the hackathon deadline?\n\nOutput (JSON):\n[\n  {\"path\": \"projects/evoclaw/bsc\", \"relevance\": 0.95, \"reason\": \"BSC work for hackathon\"},\n  {\"path\": \"active_context/events\", \"relevance\": 0.85, \"reason\": \"deadline tracking\"}\n]\n\nUsage:\n\n# Keyword search (fast)\ntree_search.py --query \"BSC integration\" --tree-file memory-tree.json --mode keyword\n\n# LLM search (accurate)\ntree_search.py --query \"what did we decide about hackathon?\" \\\n  --tree-file memory-tree.json --mode llm --llm-endpoint http://localhost:8080/complete\n\n# Generate prompt for external LLM\ntree_search.py --query \"...\" --tree-file memory-tree.json \\\n  --mode llm --llm-prompt-file prompt.txt"
      },
      {
        "title": "Multi-Agent Support",
        "body": "Agent ID scoping — All operations support --agent-id flag.\n\nFile layout:\n\nmemory/\n  default/\n    warm-memory.json\n    memory-tree.json\n    hot-memory-state.json\n    metrics.json\n  agent-2/\n    warm-memory.json\n    memory-tree.json\n    ...\nMEMORY.md              # default agent\nMEMORY-agent-2.md      # agent-2\n\nCold storage: Agent-scoped queries via agent_id column\n\nUsage:\n\n# Store for agent-2\nmemory_cli.py store --text \"...\" --category \"...\" --agent-id agent-2\n\n# Retrieve for agent-2\nmemory_cli.py retrieve --query \"...\" --agent-id agent-2\n\n# Consolidate agent-2\nmemory_cli.py consolidate --mode daily --agent-id agent-2"
      },
      {
        "title": "Consolidation Modes",
        "body": "Purpose: Periodic memory maintenance and optimization."
      },
      {
        "title": "Quick (hourly)",
        "body": "Warm eviction (score-based)\nArchive expired to cold\nRecalculate all scores\nRebuild MEMORY.md"
      },
      {
        "title": "Daily",
        "body": "Everything in Quick\nTree prune (remove dead nodes, 60+ days no activity)"
      },
      {
        "title": "Monthly",
        "body": "Everything in Daily\nTree rebuild (LLM-powered restructuring, future)\nCold cleanup (delete frozen entries older than retention)"
      },
      {
        "title": "Full",
        "body": "Everything in Monthly\nFull recalculation of all scores\nDeep tree analysis\nGenerate consolidation report\n\nUsage:\n\n# Quick consolidation (default)\nmemory_cli.py consolidate\n\n# Daily (run via cron)\nmemory_cli.py consolidate --mode daily\n\n# Monthly (run via cron)\nmemory_cli.py consolidate --mode monthly --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\nRecommended schedule:\n\nQuick: Every 2-4 hours (heartbeat)\nDaily: Midnight via cron\nMonthly: 1st of month via cron"
      },
      {
        "title": "Critical Sync (Cloud-First)",
        "body": "Purpose: Cloud backup of hot state + tree after every conversation.\n\nWhat syncs:\n\nHot memory state (identity, owner profile, active context, lessons)\nTree index (structure + counts)\nTimestamp\n\nRecovery: If device lost, restore from cloud in <2 minutes\n\nUsage:\n\n# Manual critical sync\nmemory_cli.py sync-critical --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\" --agent-id default\n\n# Automatic: Call after every important conversation\n# In agent code:\n#   1. Process conversation\n#   2. Store distilled facts\n#   3. Call sync-critical\n\nRetry strategy: Exponential backoff if cloud unreachable (5s, 10s, 20s, 40s)"
      },
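      {
        "title": "Backoff Sketch",
        "body": "A minimal sketch of the retry strategy above (illustrative Python; the CLI invocation shown is assumed from the usage examples, not part of the package):\n\nimport subprocess\nimport time\n\ndef sync_critical_with_backoff(db_url, auth_token, attempts=4):\n    delay = 5  # then 10s, 20s, 40s\n    for _ in range(attempts):\n        result = subprocess.run([\n            \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"sync-critical\",\n            \"--db-url\", db_url, \"--auth-token\", auth_token\n        ])\n        if result.returncode == 0:\n            return True\n        time.sleep(delay)\n        delay *= 2  # exponential backoff\n    return False"
      },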
      {
        "title": "Metrics & Observability",
        "body": "Tracked metrics:\n\n{\n  \"tree_index_size_bytes\": 1842,\n  \"tree_node_count\": 37,\n  \"hot_memory_size_bytes\": 4200,\n  \"warm_memory_count\": 145,\n  \"warm_memory_size_kb\": 38.2,\n  \"retrieval_count\": 234,\n  \"evictions_today\": 12,\n  \"reinforcements_today\": 67,\n  \"consolidation_count\": 8,\n  \"last_consolidation\": 1707350400,\n  \"context_tokens_saved\": 47800,\n  \"timestamp\": \"2026-02-10T14:30:00\"\n}\n\nUsage:\n\nmemory_cli.py metrics --agent-id default\n\nKey metrics:\n\ncontext_tokens_saved — Estimated tokens saved vs. flat MEMORY.md\nretrieval_count — How often memories are accessed\nevictions_today — Memory pressure indicator\nwarm_memory_size_kb — Storage usage"
      },
      {
        "title": "Store",
        "body": "memory_cli.py store --text \"Fact text\" --category \"path/to/category\" [--importance 0.8] [--agent-id default]\n\nImportance guide:\n\n0.9-1.0 — Critical decisions, credentials, core identity\n0.7-0.8 — Project decisions, architecture, preferences\n0.5-0.6 — General facts, daily events\n0.3-0.4 — Casual mentions, low priority\n\nExample:\n\nmemory_cli.py store \\\n  --text \"Decided to deploy EvoClaw on BSC testnet before mainnet\" \\\n  --category \"projects/evoclaw/deployment\" \\\n  --importance 0.85 \\\n  --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\n# Store with explicit metadata (v2.1.0+)\nmemory_cli.py store \\\n  --text \"Z-Image ComfyUI model for photorealistic images\" \\\n  --category \"tools/image-generation\" \\\n  --importance 0.8 \\\n  --url \"https://docs.comfy.org/tutorials/image/z-image/z-image\" \\\n  --command \"huggingface-cli download Tongyi-MAI/Z-Image\" \\\n  --path \"/home/user/models/\""
      },
      {
        "title": "Validate (v2.1.0)",
        "body": "memory_cli.py validate [--file PATH] [--agent-id default]\n\nPurpose: Check daily notes for incomplete information (missing URLs, commands, next steps).\n\nExample:\n\n# Validate today's daily notes\nmemory_cli.py validate\n\n# Validate specific file\nmemory_cli.py validate --file memory/2026-02-13.md\n\nOutput:\n\n{\n  \"status\": \"warning\",\n  \"warnings_count\": 2,\n  \"warnings\": [\n    \"Tool 'Z-Image' mentioned without URL/documentation link\",\n    \"Action 'install' mentioned without command example\"\n  ],\n  \"suggestions\": [\n    \"Add URLs for mentioned tools/services\",\n    \"Include command examples for setup/installation steps\",\n    \"Document next steps after decisions\"\n  ]\n}"
      },
      {
        "title": "Extract Metadata (v2.1.0)",
        "body": "memory_cli.py extract-metadata --file PATH\n\nPurpose: Extract structured metadata (URLs, commands, paths) from a file.\n\nExample:\n\nmemory_cli.py extract-metadata --file memory/2026-02-13.md\n\nOutput:\n\n{\n  \"file\": \"memory/2026-02-13.md\",\n  \"metadata\": {\n    \"urls\": [\n      \"https://docs.comfy.org/tutorials/image/z-image/z-image\",\n      \"https://github.com/Lightricks/LTX-Video\"\n    ],\n    \"commands\": [\n      \"huggingface-cli download Tongyi-MAI/Z-Image\",\n      \"git clone https://github.com/Lightricks/LTX-Video.git\"\n    ],\n    \"paths\": [\n      \"/home/peter/ai-stack/comfyui/models\",\n      \"./configs/ltx-video-2-config.yaml\"\n    ]\n  },\n  \"summary\": {\n    \"urls_count\": 2,\n    \"commands_count\": 2,\n    \"paths_count\": 2\n  }\n}"
      },
      {
        "title": "Search by URL (v2.1.0)",
        "body": "memory_cli.py search-url --url FRAGMENT [--limit 5] [--agent-id default]\n\nPurpose: Search facts by URL fragment.\n\nExample:\n\n# Find all facts with comfy.org URLs\nmemory_cli.py search-url --url \"comfy.org\"\n\n# Find GitHub repos\nmemory_cli.py search-url --url \"github.com\" --limit 10\n\nOutput:\n\n{\n  \"query\": \"comfy.org\",\n  \"results_count\": 1,\n  \"results\": [\n    {\n      \"id\": \"abc123\",\n      \"text\": \"Z-Image ComfyUI model for photorealistic images\",\n      \"category\": \"tools/image-generation\",\n      \"metadata\": {\n        \"urls\": [\"https://docs.comfy.org/tutorials/image/z-image/z-image\"],\n        \"commands\": [\"huggingface-cli download Tongyi-MAI/Z-Image\"],\n        \"paths\": []\n      }\n    }\n  ]\n}"
      },
      {
        "title": "Retrieve",
        "body": "memory_cli.py retrieve --query \"search query\" [--limit 5] [--llm] [--llm-endpoint URL] [--agent-id default]\n\nModes:\n\nDefault: Keyword-based tree + warm + cold search\n--llm: LLM-powered semantic tree search\n\nExample:\n\n# Keyword search\nmemory_cli.py retrieve --query \"BSC deployment decision\" --limit 5\n\n# LLM search (more accurate)\nmemory_cli.py retrieve \\\n  --query \"what did we decide about blockchain integration?\" \\\n  --llm --llm-endpoint http://localhost:8080/complete \\\n  --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\""
      },
      {
        "title": "Distill",
        "body": "memory_cli.py distill --text \"raw conversation\" [--llm] [--llm-endpoint URL]\n\nExample:\n\n# Rule-based distillation\nmemory_cli.py distill --text \"User: Let's deploy to testnet first. Agent: Good idea, safer that way.\"\n\n# LLM distillation\nmemory_cli.py distill \\\n  --text \"Long conversation with nuance...\" \\\n  --llm --llm-endpoint http://localhost:8080/complete\n\nOutput:\n\n{\n  \"distilled\": {\n    \"fact\": \"Decided to deploy to testnet before mainnet\",\n    \"emotion\": \"cautious\",\n    \"people\": [],\n    \"topics\": [\"deployment\", \"testnet\", \"safety\"],\n    \"actions\": [\"deploy to testnet\"],\n    \"outcome\": \"positive\"\n  },\n  \"mode\": \"rule\",\n  \"original_size\": 87,\n  \"distilled_size\": 156\n}"
      },
      {
        "title": "Hot Memory",
        "body": "# Update hot state\nmemory_cli.py hot --update KEY JSON [--agent-id default]\n\n# Rebuild MEMORY.md\nmemory_cli.py hot --rebuild [--agent-id default]\n\n# Show current hot state\nmemory_cli.py hot [--agent-id default]\n\nKeys:\n\nidentity — Agent/owner identity info\nowner_profile — Owner preferences, personality\nlesson — Add critical lesson\nevent — Add event to active context\ntask — Add task to active context\nproject — Add/update project\n\nExamples:\n\n# Update owner profile\nmemory_cli.py hot --update owner_profile '{\"timezone\": \"Australia/Sydney\", \"work_hours\": \"9am-6pm\"}'\n\n# Add lesson\nmemory_cli.py hot --update lesson '{\"text\": \"Always test on testnet first\", \"category\": \"blockchain\", \"importance\": 0.9}'\n\n# Add project\nmemory_cli.py hot --update project '{\"name\": \"EvoClaw\", \"status\": \"Active\", \"description\": \"Self-evolving agent framework\"}'\n\n# Rebuild MEMORY.md\nmemory_cli.py hot --rebuild"
      },
      {
        "title": "Tree",
        "body": "# Show tree\nmemory_cli.py tree --show [--agent-id default]\n\n# Add node\nmemory_cli.py tree --add \"path/to/category\" \"Description\" [--agent-id default]\n\n# Remove node\nmemory_cli.py tree --remove \"path/to/category\" [--agent-id default]\n\n# Prune dead nodes\nmemory_cli.py tree --prune [--agent-id default]\n\nExamples:\n\n# Add category\nmemory_cli.py tree --add \"projects/evoclaw/bsc\" \"BSC blockchain integration\"\n\n# Remove empty category\nmemory_cli.py tree --remove \"old/unused/path\"\n\n# Prune dead nodes (60+ days no activity)\nmemory_cli.py tree --prune"
      },
      {
        "title": "Cold Storage",
        "body": "# Initialize Turso tables\nmemory_cli.py cold --init --db-url URL --auth-token TOKEN\n\n# Query cold storage\nmemory_cli.py cold --query \"search term\" [--limit 10] [--agent-id default] --db-url URL --auth-token TOKEN\n\nExamples:\n\n# Init tables (once)\nmemory_cli.py cold --init --db-url \"https://your-db.turso.io\" --auth-token \"your-token\"\n\n# Query cold archive\nmemory_cli.py cold --query \"blockchain decision\" --limit 10 --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\""
      },
      {
        "title": "Configuration",
        "body": "File: config.json (optional, uses defaults if not present)\n\n{\n  \"agent_id\": \"default\",\n  \"hot\": {\n    \"max_bytes\": 5120,\n    \"max_lessons\": 20,\n    \"max_events\": 10,\n    \"max_tasks\": 10\n  },\n  \"warm\": {\n    \"max_kb\": 50,\n    \"retention_days\": 30,\n    \"eviction_threshold\": 0.3\n  },\n  \"cold\": {\n    \"backend\": \"turso\",\n    \"retention_years\": 10\n  },\n  \"scoring\": {\n    \"half_life_days\": 30,\n    \"reinforcement_boost\": 0.1\n  },\n  \"tree\": {\n    \"max_nodes\": 50,\n    \"max_depth\": 4,\n    \"max_size_bytes\": 2048\n  },\n  \"distillation\": {\n    \"aggression\": 0.7,\n    \"max_distilled_bytes\": 100,\n    \"mode\": \"rule\"\n  },\n  \"consolidation\": {\n    \"warm_eviction\": \"hourly\",\n    \"tree_prune\": \"daily\",\n    \"tree_rebuild\": \"monthly\"\n  }\n}"
      },
      {
        "title": "After Conversation",
        "body": "import subprocess\nimport json\n\ndef process_conversation(user_message, agent_response, category=\"conversations\"):\n    # 1. Distill conversation\n    text = f\"User: {user_message}\\nAgent: {agent_response}\"\n    result = subprocess.run(\n        [\"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"distill\", \"--text\", text],\n        capture_output=True, text=True\n    )\n    distilled = json.loads(result.stdout)\n    \n    # 2. Determine importance\n    importance = 0.7 if \"decision\" in distilled[\"distilled\"][\"outcome\"] else 0.5\n    \n    # 3. Store\n    subprocess.run([\n        \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"store\",\n        \"--text\", distilled[\"distilled\"][\"fact\"],\n        \"--category\", category,\n        \"--importance\", str(importance),\n        \"--db-url\", os.getenv(\"TURSO_URL\"),\n        \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n    ])\n    \n    # 4. Critical sync\n    subprocess.run([\n        \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"sync-critical\",\n        \"--db-url\", os.getenv(\"TURSO_URL\"),\n        \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n    ])"
      },
      {
        "title": "Before Responding (Retrieval)",
        "body": "def get_relevant_context(query):\n    result = subprocess.run(\n        [\n            \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"retrieve\",\n            \"--query\", query,\n            \"--limit\", \"5\",\n            \"--llm\",\n            \"--llm-endpoint\", \"http://localhost:8080/complete\",\n            \"--db-url\", os.getenv(\"TURSO_URL\"),\n            \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n        ],\n        capture_output=True, text=True\n    )\n    \n    memories = json.loads(result.stdout)\n    return \"\\n\".join([f\"- {m['text']}\" for m in memories])"
      },
      {
        "title": "Heartbeat Consolidation",
        "body": "import schedule\n\n# Hourly quick consolidation\nschedule.every(2).hours.do(lambda: subprocess.run([\n    \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"consolidate\",\n    \"--mode\", \"quick\",\n    \"--db-url\", os.getenv(\"TURSO_URL\"),\n    \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n]))\n\n# Daily tree prune\nschedule.every().day.at(\"00:00\").do(lambda: subprocess.run([\n    \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"consolidate\",\n    \"--mode\", \"daily\",\n    \"--db-url\", os.getenv(\"TURSO_URL\"),\n    \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n]))\n\n# Monthly full consolidation\nschedule.every().month.do(lambda: subprocess.run([\n    \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"consolidate\",\n    \"--mode\", \"monthly\",\n    \"--db-url\", os.getenv(\"TURSO_URL\"),\n    \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n]))"
      },
      {
        "title": "Model Recommendations",
        "body": "For Distillation & Tree Search:\n\nClaude 3 Haiku (fast, cheap, excellent structure)\nGPT-4o-mini (good balance)\nGemini 1.5 Flash (very fast)\n\nFor Tree Rebuilding:\n\nClaude 3.5 Sonnet (better reasoning)\nGPT-4o (strong planning)"
      },
      {
        "title": "Cost Optimization",
        "body": "Use cheaper models for frequent operations (distill, search)\nBatch distillation — Queue conversations, distill in batch\nCache tree prompts — Tree structure doesn't change often\nSkip LLM for simple — Use rule-based for short conversations"
      },
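      {
        "title": "Batch Distillation Sketch",
        "body": "A minimal sketch of the batch-distillation idea above, assuming the memory_cli.py distill interface documented in this skill; the queue and flush threshold are illustrative, not part of the package:\n\nimport subprocess\n\npending = []  # raw conversations waiting to be distilled\n\ndef queue_conversation(text, flush_at=10):\n    pending.append(text)\n    if len(pending) >= flush_at:\n        flush()\n\ndef flush():\n    # One distill call per conversation, batched into a single\n    # maintenance pass instead of blocking every chat turn\n    for text in pending:\n        subprocess.run([\n            \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\",\n            \"distill\", \"--text\", text\n        ], capture_output=True, text=True)\n    pending.clear()"
      },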
      {
        "title": "Example LLM Endpoint",
        "body": "from flask import Flask, request, jsonify\n\napp = Flask(__name__)\n\n@app.route(\"/complete\", methods=[\"POST\"])\ndef complete():\n    data = request.json\n    prompt = data[\"prompt\"]\n    \n    # Call your LLM (OpenAI, Anthropic, local model, etc.)\n    response = llm_client.complete(prompt)\n    \n    return jsonify({\"text\": response})\n\nif __name__ == \"__main__\":\n    app.run(port=8080)"
      },
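      {
        "title": "Calling the Endpoint (Sketch)",
        "body": "For reference, a client call matching the endpoint above. It assumes the {\"prompt\": ...} → {\"text\": ...} contract shown in the Flask example and uses only the Python standard library:\n\nimport json\nimport urllib.request\n\ndef complete(prompt, endpoint=\"http://localhost:8080/complete\"):\n    req = urllib.request.Request(\n        endpoint,\n        data=json.dumps({\"prompt\": prompt}).encode(\"utf-8\"),\n        headers={\"Content-Type\": \"application/json\"},  # required for Flask's request.json\n        method=\"POST\",\n    )\n    with urllib.request.urlopen(req) as resp:\n        return json.loads(resp.read())[\"text\"]"
      },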
      {
        "title": "Performance Characteristics",
        "body": "Context Size:\n\nHot: ~5KB (always loaded)\nTree: ~2KB (always loaded)\nRetrieved: ~1-3KB per query\nTotal: ~8-15KB (constant, regardless of agent age)\n\nRetrieval Speed:\n\nKeyword: 10-20ms\nLLM tree search: 300-600ms\nCold query: 50-100ms\n\n5-Year Scenario:\n\nHot: Still 5KB (living document)\nWarm: Last 30 days (~50KB)\nCold: ~50MB in Turso (compressed distilled facts)\nTree: Still 2KB (different nodes, same size)\nContext per session: Same as day 1"
      },
      {
        "title": "Comparison with Alternatives",
        "body": "SystemMemory ModelScalingAccuracyCostFlat MEMORY.mdLinear text❌ Months⚠️ Degrades❌ LinearVector RAGEmbeddings✅ Years⚠️ Similarity≠relevance⚠️ ModerateEvoClaw TieredTree + tiers✅ Decades✅ Reasoning-based✅ Fixed\n\nWhy tree > vectors:\n\nAccuracy: 98%+ vs. 70-80% (PageIndex benchmark)\nExplainable: \"Projects → EvoClaw → BSC\" vs. \"cosine 0.73\"\nMulti-hop: Natural vs. poor\nFalse positives: Low vs. high"
      },
      {
        "title": "Tree size exceeding limit",
        "body": "# Prune dead nodes\nmemory_cli.py tree --prune\n\n# Check which nodes are largest\nmemory_cli.py tree --show | grep \"Memories:\"\n\n# Manually remove unused categories\nmemory_cli.py tree --remove \"unused/category\""
      },
      {
        "title": "Warm memory filling up",
        "body": "# Run consolidation\nmemory_cli.py consolidate --mode daily --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\n# Check stats\nmemory_cli.py metrics\n\n# Lower eviction threshold (keeps less in warm)\n# Edit config.json: \"eviction_threshold\": 0.4"
      },
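      {
        "title": "Predicting Eviction (Worked Example)",
        "body": "To predict whether a warm fact survives eviction, compute its score with the formula from the Warm Memory section (importance × recency decay × reinforcement); the sample values below are illustrative:\n\nimport math\n\ndef warm_score(importance, age_days, access_count, half_life_days=30):\n    recency = math.exp(-age_days / half_life_days)\n    reinforcement = 1 + 0.1 * access_count\n    return importance * recency * reinforcement\n\n# A 0.8-importance fact, 20 days old, accessed 3 times:\n#   0.8 × exp(-20/30) × 1.3 ≈ 0.53 → stays warm (>= 0.3)\n# The same fact at 60 days with no accesses:\n#   0.8 × exp(-60/30) × 1.0 ≈ 0.11 → archived to cold (< 0.3)"
      },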
      {
        "title": "Hot memory exceeding 5KB",
        "body": "# Hot auto-prunes, but check structure\nmemory_cli.py hot\n\n# Remove old projects/tasks manually\nmemory_cli.py hot --update project '{\"name\": \"OldProject\", \"status\": \"Completed\"}'\n\n# Rebuild to force pruning\nmemory_cli.py hot --rebuild"
      },
      {
        "title": "LLM search failing",
        "body": "# Fallback to keyword search (automatic)\nmemory_cli.py retrieve --query \"...\" --limit 5\n\n# Test LLM endpoint\ncurl -X POST http://localhost:8080/complete -d '{\"prompt\": \"test\"}'\n\n# Generate prompt for external testing\ntree_search.py --query \"...\" --tree-file memory/memory-tree.json --mode llm --llm-prompt-file test.txt"
      },
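      {
        "title": "Explicit Fallback Wrapper (Sketch)",
        "body": "The automatic fallback described above can also be made explicit in agent code. This sketch assumes the retrieve interface documented in this skill and simply retries without --llm when the LLM-backed call fails:\n\nimport json\nimport subprocess\n\nCLI = \"skills/tiered-memory/scripts/memory_cli.py\"\n\ndef retrieve(query, endpoint=\"http://localhost:8080/complete\"):\n    llm_args = [\"--llm\", \"--llm-endpoint\", endpoint]\n    for extra in (llm_args, []):  # try LLM search first, then keyword\n        result = subprocess.run(\n            [\"python3\", CLI, \"retrieve\", \"--query\", query, \"--limit\", \"5\"] + extra,\n            capture_output=True, text=True\n        )\n        if result.returncode == 0:\n            try:\n                return json.loads(result.stdout)\n            except json.JSONDecodeError:\n                continue  # malformed output; fall through to keyword mode\n    return []"
      },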
      {
        "title": "Migration from v1.x",
        "body": "Backward compatible: Existing warm-memory.json and memory-tree.json files work as-is.\n\nNew files:\n\nconfig.json (optional, uses defaults)\nhot-memory-state.json (auto-created)\nmetrics.json (auto-created)\n\nSteps:\n\nUpdate skill: clawhub update tiered-memory\nRun consolidation to rebuild hot state: memory_cli.py consolidate\nInitialize cold storage (optional): memory_cli.py cold --init --db-url ... --auth-token ...\nConfigure agent to use new commands (see Integration section)"
      },
      {
        "title": "Migration from v2.0 to v2.1",
        "body": "Fully backward compatible: Existing memory files work without changes.\n\nWhat's new:\n\n✅ Metadata automatically extracted from existing facts when loaded\n✅ New commands: validate, extract-metadata, search-url\n✅ store command now accepts --url, --command, --path flags\n✅ Distillation preserves URLs and technical details\n✅ No action required - just update and use new features\n\nTesting the upgrade:\n\n# Update skill\nclawhub update tiered-memory\n\n# Test metadata extraction\nmemory_cli.py extract-metadata --file memory/2026-02-13.md\n\n# Validate your recent notes\nmemory_cli.py validate\n\n# Search by URL\nmemory_cli.py search-url --url \"github.com\""
      },
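      {
        "title": "Metadata Extraction (Illustrative Sketch)",
        "body": "As an illustration of the kind of extraction v2.1 performs, here is a rough stand-in; the skill's actual heuristics live in scripts/distiller.py and may differ:\n\nimport re\n\ndef extract_metadata(text):\n    # Rough stand-ins for the skill's real patterns\n    urls = re.findall(r\"https?://\\S+\", text)\n    commands = re.findall(r\"`([^`]+)`\", text)  # backtick-quoted commands\n    paths = re.findall(r\"(?<!\\S)/[\\w./-]+\", text)  # absolute paths at word start\n    return {\"urls\": urls, \"commands\": commands, \"paths\": paths}"
      },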
      {
        "title": "References",
        "body": "Design: /docs/TIERED-MEMORY.md (EvoClaw)\nCloud Sync: /docs/CLOUD-SYNC.md (EvoClaw)\nInspiration: PageIndex (tree-based retrieval)\n\nv2.1.0 — A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. Now with structured metadata to remember HOW, not just WHAT. 🧠🌲🔗"
      }
    ],
    "body": "Tiered Memory System v2.2.0\n\nA mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. 🧠\n\nEvoClaw-compatible three-tier memory system inspired by human cognition and PageIndex tree retrieval.\n\nWhat's New in v2.2.0\n\n🔄 Automatic Daily Note Ingestion\n\nConsolidation (daily/monthly/full modes) now auto-runs ingest-daily\nBridges memory/YYYY-MM-DD.md files → tiered memory system\nNo more manual ingestion required — facts flow automatically\nFixes the \"two disconnected data paths\" problem\nWhat's New in v2.1.0\n\n🎯 Structured Metadata Extraction\n\nAutomatic extraction of URLs, shell commands, and file paths from facts\nPreserved during distillation and consolidation\nSearchable by URL fragment\n\n✅ Memory Completeness Validation\n\nCheck daily notes for missing URLs, commands, and next steps\nProactive warnings for incomplete information\nActionable suggestions for improvement\n\n🔍 Enhanced Search\n\nSearch facts by URL fragment\nGet all stored URLs from warm memory\nMetadata-aware fact storage\n\n🛡️ URL Preservation\n\nURLs explicitly preserved during LLM distillation\nFallback metadata extraction if LLM misses them\nCommand-line support for adding metadata manually\nArchitecture\n┌─────────────────────────────────────────────────────┐\n│              AGENT CONTEXT (~8-15KB)                │\n│                                                     │\n│  ┌──────────┐  ┌────────────────────────────────┐  │\n│  │  Tree    │  │  Retrieved Memory Nodes         │  │\n│  │  Index   │  │  (on-demand, 1-3KB)            │  │\n│  │  (~2KB)  │  │                                │  │\n│  │          │  │  Fetched per conversation      │  │\n│  │  Always  │  │  based on tree reasoning       │  │\n│  │  loaded  │  │                                │  │\n│  └────┬─────┘  └────────────────────────────────┘  │\n│       │                                             
│\n└───────┼─────────────────────────────────────────────┘\n        │\n        │ LLM-powered tree search\n        │\n┌───────▼─────────────────────────────────────────────┐\n│              MEMORY TIERS                           │\n│                                                     │\n│  🔴 HOT (5KB)      🟡 WARM (50KB)     🟢 COLD (∞)  │\n│                                                     │\n│  Core memory       Scored facts      Full archive  │\n│  - Identity        - 30-day         - Turso DB     │\n│  - Owner profile   - Decaying       - Queryable    │\n│  - Active context  - On-device      - 10-year      │\n│  - Lessons (20 max)                                │\n│                                                     │\n│  Always in         Retrieved via     Retrieved via │\n│  context           tree search       tree search   │\n└─────────────────────────────────────────────────────┘\n\nDesign Principles\nFrom Human Memory\nConsolidation — Short-term → long-term happens during consolidation cycles\nRelevance Decay — Unused memories fade; accessed memories strengthen\nStrategic Forgetting — Not remembering everything is a feature\nHierarchical Organization — Navigate categories, not scan linearly\nFrom PageIndex\nVectorless Retrieval — LLM reasoning instead of embedding similarity\nTree-Structured Index — O(log n) navigation, not O(n) scan\nExplainable Results — Every retrieval traces a path through categories\nReasoning-Based Search — \"Why relevant?\" not \"how similar?\"\nCloud-First (EvoClaw)\nDevice is replaceable — Soul lives in cloud (Turso)\nCritical sync — Hot + tree sync after every conversation\nDisaster recovery — Full restore in <2 minutes\nMulti-device — Same agent across phone/desktop/embedded\nMemory Tiers\n🔴 Hot Memory (5KB max)\n\nPurpose: Core identity and active context, always in agent's context window.\n\nStructure:\n\n{\n  \"identity\": {\n    \"agent_name\": \"Agent\",\n    \"owner_name\": \"User\",\n    \"owner_preferred_name\": 
\"User\",\n    \"relationship_start\": \"2026-01-15\",\n    \"trust_level\": 0.95\n  },\n  \"owner_profile\": {\n    \"personality\": \"technical, direct communication\",\n    \"family\": [\"Sarah (wife)\", \"Luna (daughter, 3yo)\"],\n    \"topics_loved\": [\"AI architecture\", \"blockchain\", \"system design\"],\n    \"topics_avoid\": [\"small talk about weather\"],\n    \"timezone\": \"Australia/Sydney\",\n    \"work_hours\": \"9am-6pm\"\n  },\n  \"active_context\": {\n    \"projects\": [\n      {\n        \"name\": \"EvoClaw\",\n        \"description\": \"Self-evolving agent framework\",\n        \"status\": \"Active - BSC integration for hackathon\"\n      }\n    ],\n    \"events\": [\n      {\"text\": \"Hackathon deadline Feb 15\", \"timestamp\": 1707350400}\n    ],\n    \"tasks\": [\n      {\"text\": \"Deploy to BSC testnet\", \"status\": \"pending\", \"timestamp\": 1707350400}\n    ]\n  },\n  \"critical_lessons\": [\n    {\n      \"text\": \"Always test on testnet before mainnet\",\n      \"category\": \"blockchain\",\n      \"importance\": 0.9,\n      \"timestamp\": 1707350400\n    }\n  ]\n}\n\n\nAuto-pruning:\n\nLessons: Max 20, removes lowest-importance when full\nEvents: Keeps last 10 only\nTasks: Max 10 pending\nTotal size: Hard limit at 5KB, progressively prunes if exceeded\n\nGenerates: MEMORY.md — auto-rebuilt from structured hot state\n\n🟡 Warm Memory (50KB max, 30-day retention)\n\nPurpose: Recent distilled facts with decay scoring.\n\nEntry format:\n\n{\n  \"id\": \"abc123def456\",\n  \"text\": \"Decided to use zero go-ethereum deps for EvoClaw to keep binary small\",\n  \"category\": \"projects/evoclaw/architecture\",\n  \"importance\": 0.8,\n  \"created_at\": 1707350400,\n  \"access_count\": 3,\n  \"score\": 0.742,\n  \"tier\": \"warm\"\n}\n\n\nScoring:\n\nscore = importance × recency_decay(age) × reinforcement(access_count)\n\nrecency_decay(age) = exp(-age_days / 30)\nreinforcement(access) = 1 + 0.1 × access_count\n\n\nTier 
classification:\n\nscore >= 0.7 → Hot (promote to hot state)\nscore >= 0.3 → Warm (keep)\nscore >= 0.05 → Cold (archive)\nscore < 0.05 → Frozen (delete after retention period)\n\nEviction triggers:\n\nAge > 30 days AND score < 0.3\nTotal warm size > 50KB (evicts lowest-scored)\nManual consolidation\n🟢 Cold Memory (Unlimited, Turso)\n\nPurpose: Long-term archive, queryable but never bulk-loaded.\n\nSchema:\n\nCREATE TABLE cold_memories (\n  id TEXT PRIMARY KEY,\n  agent_id TEXT NOT NULL,\n  text TEXT NOT NULL,\n  category TEXT NOT NULL,\n  importance REAL DEFAULT 0.5,\n  created_at INTEGER NOT NULL,\n  access_count INTEGER DEFAULT 0\n);\n\nCREATE TABLE critical_state (\n  agent_id TEXT PRIMARY KEY,\n  data TEXT NOT NULL,  -- {hot_state, tree_nodes, timestamp}\n  updated_at INTEGER NOT NULL\n);\n\n\nRetention: 10 years (configurable) Cleanup: Monthly consolidation removes frozen entries older than retention period\n\nTree Index\n\nPurpose: Hierarchical category map for O(log n) retrieval.\n\nConstraints:\n\nMax 50 nodes\nMax depth 4 levels\nMax 2KB serialized\nMax 10 children per node\n\nExample:\n\nMemory Tree Index\n==================================================\n📂 Root (warm:15, cold:234)\n  📁 owner — Owner profile and preferences\n     Memories: warm=5, cold=89\n  📁 projects — Active projects\n     Memories: warm=8, cold=67\n    📁 projects/evoclaw — EvoClaw framework\n       Memories: warm=6, cold=45\n      📁 projects/evoclaw/bsc — BSC integration\n         Memories: warm=3, cold=12\n  📁 technical — Technical setup and config\n     Memories: warm=2, cold=34\n  📁 lessons — Learned lessons and rules\n     Memories: warm=0, cold=44\n\nNodes: 7/50\nSize: 1842 / 2048 bytes\n\n\nOperations:\n\n--add PATH DESC — Add category node\n--remove PATH — Remove node (only if no data)\n--prune — Remove dead nodes (no activity in 60+ days)\n--show — Pretty-print tree\nDistillation Engine\n\nPurpose: Three-stage compression of conversations.\n\nPipeline:\n\nRaw conversation 
(500B)\n  ↓ Stage 1→2: Extract structured info\nDistilled fact (80B)\n  ↓ Stage 2→3: Generate one-line summary\nCore summary (20B)\n\nStage 1→2: Raw → Distilled\n\nInput: Raw conversation text Output: Structured JSON\n\n{\n  \"fact\": \"User decided to use raw JSON-RPC for BSC to avoid go-ethereum dependency\",\n  \"emotion\": \"determined\",\n  \"people\": [\"User\"],\n  \"topics\": [\"blockchain\", \"architecture\", \"dependencies\"],\n  \"actions\": [\"decided to use raw JSON-RPC\", \"avoid go-ethereum\"],\n  \"outcome\": \"positive\"\n}\n\n\nModes:\n\nrule: Regex/heuristic extraction (fast, no LLM)\nllm: LLM-powered extraction (accurate, requires endpoint)\n\nUsage:\n\n# Rule-based (default)\ndistiller.py --text \"Had a productive chat about the BSC integration...\" --mode rule\n\n# LLM-powered\ndistiller.py --text \"...\" --mode llm --llm-endpoint http://localhost:8080/complete\n\n# With core summary\ndistiller.py --text \"...\" --mode rule --core-summary\n\nStage 2→3: Distilled → Core Summary\n\nPurpose: One-line summary for tree index\n\nExample:\n\nDistilled: {\n  \"fact\": \"User decided raw JSON-RPC for BSC, no go-ethereum\",\n  \"outcome\": \"positive\"\n}\n\nCore summary: \"BSC integration: raw JSON-RPC (no deps)\"\n\n\nTarget: <30 bytes\n\nLLM-Powered Tree Search\n\nPurpose: Semantic search through tree structure using LLM reasoning.\n\nHow it works:\n\nBuild prompt with tree structure + query\nLLM reasons about which categories are relevant\nReturns category paths with relevance scores\nFetches memories from those categories\n\nExample:\n\nQuery: \"What did we decide about the hackathon deadline?\"\n\nKeyword search returns:\n\nprojects/evoclaw (0.8)\ntechnical/deployment (0.4)\n\nLLM search reasons:\n\nprojects/evoclaw/bsc (0.95) — \"BSC integration for hackathon\"\nactive_context/events (0.85) — \"Deadline mentioned here\"\n\nLLM prompt template:\n\nYou are a memory retrieval system. 
Given a memory tree index and a query, \nidentify which categories are relevant.\n\nMemory Tree Index:\n  projects/evoclaw — EvoClaw framework (warm:6, cold:45)\n  projects/evoclaw/bsc — BSC integration (warm:3, cold:12)\n  ...\n\nUser Query: What did we decide about the hackathon deadline?\n\nOutput (JSON):\n[\n  {\"path\": \"projects/evoclaw/bsc\", \"relevance\": 0.95, \"reason\": \"BSC work for hackathon\"},\n  {\"path\": \"active_context/events\", \"relevance\": 0.85, \"reason\": \"deadline tracking\"}\n]\n\n\nUsage:\n\n# Keyword search (fast)\ntree_search.py --query \"BSC integration\" --tree-file memory-tree.json --mode keyword\n\n# LLM search (accurate)\ntree_search.py --query \"what did we decide about hackathon?\" \\\n  --tree-file memory-tree.json --mode llm --llm-endpoint http://localhost:8080/complete\n\n# Generate prompt for external LLM\ntree_search.py --query \"...\" --tree-file memory-tree.json \\\n  --mode llm --llm-prompt-file prompt.txt\n\nMulti-Agent Support\n\nAgent ID scoping — All operations support --agent-id flag.\n\nFile layout:\n\nmemory/\n  default/\n    warm-memory.json\n    memory-tree.json\n    hot-memory-state.json\n    metrics.json\n  agent-2/\n    warm-memory.json\n    memory-tree.json\n    ...\nMEMORY.md              # default agent\nMEMORY-agent-2.md      # agent-2\n\n\nCold storage: Agent-scoped queries via agent_id column\n\nUsage:\n\n# Store for agent-2\nmemory_cli.py store --text \"...\" --category \"...\" --agent-id agent-2\n\n# Retrieve for agent-2\nmemory_cli.py retrieve --query \"...\" --agent-id agent-2\n\n# Consolidate agent-2\nmemory_cli.py consolidate --mode daily --agent-id agent-2\n\nConsolidation Modes\n\nPurpose: Periodic memory maintenance and optimization.\n\nQuick (hourly)\nWarm eviction (score-based)\nArchive expired to cold\nRecalculate all scores\nRebuild MEMORY.md\nDaily\nEverything in Quick\nTree prune (remove dead nodes, 60+ days no activity)\nMonthly\nEverything in Daily\nTree rebuild (LLM-powered 
restructuring, future)\nCold cleanup (delete frozen entries older than retention)\nFull\nEverything in Monthly\nFull recalculation of all scores\nDeep tree analysis\nGenerate consolidation report\n\nUsage:\n\n# Quick consolidation (default)\nmemory_cli.py consolidate\n\n# Daily (run via cron)\nmemory_cli.py consolidate --mode daily\n\n# Monthly (run via cron)\nmemory_cli.py consolidate --mode monthly --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\n\nRecommended schedule:\n\nQuick: Every 2-4 hours (heartbeat)\nDaily: Midnight via cron\nMonthly: 1st of month via cron\nCritical Sync (Cloud-First)\n\nPurpose: Cloud backup of hot state + tree after every conversation.\n\nWhat syncs:\n\nHot memory state (identity, owner profile, active context, lessons)\nTree index (structure + counts)\nTimestamp\n\nRecovery: If device lost, restore from cloud in <2 minutes\n\nUsage:\n\n# Manual critical sync\nmemory_cli.py sync-critical --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\" --agent-id default\n\n# Automatic: Call after every important conversation\n# In agent code:\n#   1. Process conversation\n#   2. Store distilled facts\n#   3. Call sync-critical\n\n\nRetry strategy: Exponential backoff if cloud unreachable (5s, 10s, 20s, 40s)\n\nMetrics & Observability\n\nTracked metrics:\n\n{\n  \"tree_index_size_bytes\": 1842,\n  \"tree_node_count\": 37,\n  \"hot_memory_size_bytes\": 4200,\n  \"warm_memory_count\": 145,\n  \"warm_memory_size_kb\": 38.2,\n  \"retrieval_count\": 234,\n  \"evictions_today\": 12,\n  \"reinforcements_today\": 67,\n  \"consolidation_count\": 8,\n  \"last_consolidation\": 1707350400,\n  \"context_tokens_saved\": 47800,\n  \"timestamp\": \"2026-02-10T14:30:00\"\n}\n\n\nUsage:\n\nmemory_cli.py metrics --agent-id default\n\n\nKey metrics:\n\ncontext_tokens_saved — Estimated tokens saved vs. 
flat MEMORY.md\nretrieval_count — How often memories are accessed\nevictions_today — Memory pressure indicator\nwarm_memory_size_kb — Storage usage\nCommands Reference\nStore\nmemory_cli.py store --text \"Fact text\" --category \"path/to/category\" [--importance 0.8] [--agent-id default]\n\n\nImportance guide:\n\n0.9-1.0 — Critical decisions, credentials, core identity\n0.7-0.8 — Project decisions, architecture, preferences\n0.5-0.6 — General facts, daily events\n0.3-0.4 — Casual mentions, low priority\n\nExample:\n\nmemory_cli.py store \\\n  --text \"Decided to deploy EvoClaw on BSC testnet before mainnet\" \\\n  --category \"projects/evoclaw/deployment\" \\\n  --importance 0.85 \\\n  --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\n# Store with explicit metadata (v2.1.0+)\nmemory_cli.py store \\\n  --text \"Z-Image ComfyUI model for photorealistic images\" \\\n  --category \"tools/image-generation\" \\\n  --importance 0.8 \\\n  --url \"https://docs.comfy.org/tutorials/image/z-image/z-image\" \\\n  --command \"huggingface-cli download Tongyi-MAI/Z-Image\" \\\n  --path \"/home/user/models/\"\n\nValidate (v2.1.0)\nmemory_cli.py validate [--file PATH] [--agent-id default]\n\n\nPurpose: Check daily notes for incomplete information (missing URLs, commands, next steps).\n\nExample:\n\n# Validate today's daily notes\nmemory_cli.py validate\n\n# Validate specific file\nmemory_cli.py validate --file memory/2026-02-13.md\n\n\nOutput:\n\n{\n  \"status\": \"warning\",\n  \"warnings_count\": 2,\n  \"warnings\": [\n    \"Tool 'Z-Image' mentioned without URL/documentation link\",\n    \"Action 'install' mentioned without command example\"\n  ],\n  \"suggestions\": [\n    \"Add URLs for mentioned tools/services\",\n    \"Include command examples for setup/installation steps\",\n    \"Document next steps after decisions\"\n  ]\n}\n\nExtract Metadata (v2.1.0)\nmemory_cli.py extract-metadata --file PATH\n\n\nPurpose: Extract structured metadata (URLs, commands, paths) from a 
file.\n\nExample:\n\nmemory_cli.py extract-metadata --file memory/2026-02-13.md\n\n\nOutput:\n\n{\n  \"file\": \"memory/2026-02-13.md\",\n  \"metadata\": {\n    \"urls\": [\n      \"https://docs.comfy.org/tutorials/image/z-image/z-image\",\n      \"https://github.com/Lightricks/LTX-Video\"\n    ],\n    \"commands\": [\n      \"huggingface-cli download Tongyi-MAI/Z-Image\",\n      \"git clone https://github.com/Lightricks/LTX-Video.git\"\n    ],\n    \"paths\": [\n      \"/home/peter/ai-stack/comfyui/models\",\n      \"./configs/ltx-video-2-config.yaml\"\n    ]\n  },\n  \"summary\": {\n    \"urls_count\": 2,\n    \"commands_count\": 2,\n    \"paths_count\": 2\n  }\n}\n\nSearch by URL (v2.1.0)\nmemory_cli.py search-url --url FRAGMENT [--limit 5] [--agent-id default]\n\n\nPurpose: Search facts by URL fragment.\n\nExample:\n\n# Find all facts with comfy.org URLs\nmemory_cli.py search-url --url \"comfy.org\"\n\n# Find GitHub repos\nmemory_cli.py search-url --url \"github.com\" --limit 10\n\n\nOutput:\n\n{\n  \"query\": \"comfy.org\",\n  \"results_count\": 1,\n  \"results\": [\n    {\n      \"id\": \"abc123\",\n      \"text\": \"Z-Image ComfyUI model for photorealistic images\",\n      \"category\": \"tools/image-generation\",\n      \"metadata\": {\n        \"urls\": [\"https://docs.comfy.org/tutorials/image/z-image/z-image\"],\n        \"commands\": [\"huggingface-cli download Tongyi-MAI/Z-Image\"],\n        \"paths\": []\n      }\n    }\n  ]\n}\n\nRetrieve\nmemory_cli.py retrieve --query \"search query\" [--limit 5] [--llm] [--llm-endpoint URL] [--agent-id default]\n\n\nModes:\n\nDefault: Keyword-based tree + warm + cold search\n--llm: LLM-powered semantic tree search\n\nExample:\n\n# Keyword search\nmemory_cli.py retrieve --query \"BSC deployment decision\" --limit 5\n\n# LLM search (more accurate)\nmemory_cli.py retrieve \\\n  --query \"what did we decide about blockchain integration?\" \\\n  --llm --llm-endpoint http://localhost:8080/complete \\\n  --db-url 
\"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\nDistill\nmemory_cli.py distill --text \"raw conversation\" [--llm] [--llm-endpoint URL]\n\n\nExample:\n\n# Rule-based distillation\nmemory_cli.py distill --text \"User: Let's deploy to testnet first. Agent: Good idea, safer that way.\"\n\n# LLM distillation\nmemory_cli.py distill \\\n  --text \"Long conversation with nuance...\" \\\n  --llm --llm-endpoint http://localhost:8080/complete\n\n\nOutput:\n\n{\n  \"distilled\": {\n    \"fact\": \"Decided to deploy to testnet before mainnet\",\n    \"emotion\": \"cautious\",\n    \"people\": [],\n    \"topics\": [\"deployment\", \"testnet\", \"safety\"],\n    \"actions\": [\"deploy to testnet\"],\n    \"outcome\": \"positive\"\n  },\n  \"mode\": \"rule\",\n  \"original_size\": 87,\n  \"distilled_size\": 156\n}\n\nHot Memory\n# Update hot state\nmemory_cli.py hot --update KEY JSON [--agent-id default]\n\n# Rebuild MEMORY.md\nmemory_cli.py hot --rebuild [--agent-id default]\n\n# Show current hot state\nmemory_cli.py hot [--agent-id default]\n\n\nKeys:\n\nidentity — Agent/owner identity info\nowner_profile — Owner preferences, personality\nlesson — Add critical lesson\nevent — Add event to active context\ntask — Add task to active context\nproject — Add/update project\n\nExamples:\n\n# Update owner profile\nmemory_cli.py hot --update owner_profile '{\"timezone\": \"Australia/Sydney\", \"work_hours\": \"9am-6pm\"}'\n\n# Add lesson\nmemory_cli.py hot --update lesson '{\"text\": \"Always test on testnet first\", \"category\": \"blockchain\", \"importance\": 0.9}'\n\n# Add project\nmemory_cli.py hot --update project '{\"name\": \"EvoClaw\", \"status\": \"Active\", \"description\": \"Self-evolving agent framework\"}'\n\n# Rebuild MEMORY.md\nmemory_cli.py hot --rebuild\n\nTree\n# Show tree\nmemory_cli.py tree --show [--agent-id default]\n\n# Add node\nmemory_cli.py tree --add \"path/to/category\" \"Description\" [--agent-id default]\n\n# Remove node\nmemory_cli.py tree --remove 
\"path/to/category\" [--agent-id default]\n\n# Prune dead nodes\nmemory_cli.py tree --prune [--agent-id default]\n\n\nExamples:\n\n# Add category\nmemory_cli.py tree --add \"projects/evoclaw/bsc\" \"BSC blockchain integration\"\n\n# Remove empty category\nmemory_cli.py tree --remove \"old/unused/path\"\n\n# Prune dead nodes (60+ days no activity)\nmemory_cli.py tree --prune\n\nCold Storage\n# Initialize Turso tables\nmemory_cli.py cold --init --db-url URL --auth-token TOKEN\n\n# Query cold storage\nmemory_cli.py cold --query \"search term\" [--limit 10] [--agent-id default] --db-url URL --auth-token TOKEN\n\n\nExamples:\n\n# Init tables (once)\nmemory_cli.py cold --init --db-url \"https://your-db.turso.io\" --auth-token \"your-token\"\n\n# Query cold archive\nmemory_cli.py cold --query \"blockchain decision\" --limit 10 --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\nConfiguration\n\nFile: config.json (optional, uses defaults if not present)\n\n{\n  \"agent_id\": \"default\",\n  \"hot\": {\n    \"max_bytes\": 5120,\n    \"max_lessons\": 20,\n    \"max_events\": 10,\n    \"max_tasks\": 10\n  },\n  \"warm\": {\n    \"max_kb\": 50,\n    \"retention_days\": 30,\n    \"eviction_threshold\": 0.3\n  },\n  \"cold\": {\n    \"backend\": \"turso\",\n    \"retention_years\": 10\n  },\n  \"scoring\": {\n    \"half_life_days\": 30,\n    \"reinforcement_boost\": 0.1\n  },\n  \"tree\": {\n    \"max_nodes\": 50,\n    \"max_depth\": 4,\n    \"max_size_bytes\": 2048\n  },\n  \"distillation\": {\n    \"aggression\": 0.7,\n    \"max_distilled_bytes\": 100,\n    \"mode\": \"rule\"\n  },\n  \"consolidation\": {\n    \"warm_eviction\": \"hourly\",\n    \"tree_prune\": \"daily\",\n    \"tree_rebuild\": \"monthly\"\n  }\n}\n\nIntegration with OpenClaw Agents\nAfter Conversation\nimport subprocess\nimport json\n\ndef process_conversation(user_message, agent_response, category=\"conversations\"):\n    # 1. 
Distill conversation\n    text = f\"User: {user_message}\\nAgent: {agent_response}\"\n    result = subprocess.run(\n        [\"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"distill\", \"--text\", text],\n        capture_output=True, text=True\n    )\n    distilled = json.loads(result.stdout)\n    \n    # 2. Determine importance\n    importance = 0.7 if \"decision\" in distilled[\"distilled\"][\"outcome\"] else 0.5\n    \n    # 3. Store\n    subprocess.run([\n        \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"store\",\n        \"--text\", distilled[\"distilled\"][\"fact\"],\n        \"--category\", category,\n        \"--importance\", str(importance),\n        \"--db-url\", os.getenv(\"TURSO_URL\"),\n        \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n    ])\n    \n    # 4. Critical sync\n    subprocess.run([\n        \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"sync-critical\",\n        \"--db-url\", os.getenv(\"TURSO_URL\"),\n        \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n    ])\n\nBefore Responding (Retrieval)\ndef get_relevant_context(query):\n    result = subprocess.run(\n        [\n            \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"retrieve\",\n            \"--query\", query,\n            \"--limit\", \"5\",\n            \"--llm\",\n            \"--llm-endpoint\", \"http://localhost:8080/complete\",\n            \"--db-url\", os.getenv(\"TURSO_URL\"),\n            \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n        ],\n        capture_output=True, text=True\n    )\n    \n    memories = json.loads(result.stdout)\n    return \"\\n\".join([f\"- {m['text']}\" for m in memories])\n\nHeartbeat Consolidation\nimport schedule\n\n# Hourly quick consolidation\nschedule.every(2).hours.do(lambda: subprocess.run([\n    \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"consolidate\",\n    \"--mode\", \"quick\",\n    \"--db-url\", os.getenv(\"TURSO_URL\"),\n    
\"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n]))\n\n# Daily tree prune\nschedule.every().day.at(\"00:00\").do(lambda: subprocess.run([\n    \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"consolidate\",\n    \"--mode\", \"daily\",\n    \"--db-url\", os.getenv(\"TURSO_URL\"),\n    \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n]))\n\n# Monthly full consolidation\nschedule.every().month.do(lambda: subprocess.run([\n    \"python3\", \"skills/tiered-memory/scripts/memory_cli.py\", \"consolidate\",\n    \"--mode\", \"monthly\",\n    \"--db-url\", os.getenv(\"TURSO_URL\"),\n    \"--auth-token\", os.getenv(\"TURSO_TOKEN\")\n]))\n\nLLM Integration\nModel Recommendations\n\nFor Distillation & Tree Search:\n\nClaude 3 Haiku (fast, cheap, excellent structure)\nGPT-4o-mini (good balance)\nGemini 1.5 Flash (very fast)\n\nFor Tree Rebuilding:\n\nClaude 3.5 Sonnet (better reasoning)\nGPT-4o (strong planning)\nCost Optimization\nUse cheaper models for frequent operations (distill, search)\nBatch distillation — Queue conversations, distill in batch\nCache tree prompts — Tree structure doesn't change often\nSkip LLM for simple — Use rule-based for short conversations\nExample LLM Endpoint\nfrom flask import Flask, request, jsonify\n\napp = Flask(__name__)\n\n@app.route(\"/complete\", methods=[\"POST\"])\ndef complete():\n    data = request.json\n    prompt = data[\"prompt\"]\n    \n    # Call your LLM (OpenAI, Anthropic, local model, etc.)\n    response = llm_client.complete(prompt)\n    \n    return jsonify({\"text\": response})\n\nif __name__ == \"__main__\":\n    app.run(port=8080)\n\nPerformance Characteristics\n\nContext Size:\n\nHot: ~5KB (always loaded)\nTree: ~2KB (always loaded)\nRetrieved: ~1-3KB per query\nTotal: ~8-15KB (constant, regardless of agent age)\n\nRetrieval Speed:\n\nKeyword: 10-20ms\nLLM tree search: 300-600ms\nCold query: 50-100ms\n\n5-Year Scenario:\n\nHot: Still 5KB (living document)\nWarm: Last 30 days (~50KB)\nCold: ~50MB in Turso 
(compressed distilled facts)\nTree: Still 2KB (different nodes, same size)\nContext per session: Same as day 1\nComparison with Alternatives\nSystem\tMemory Model\tScaling\tAccuracy\tCost\nFlat MEMORY.md\tLinear text\t❌ Months\t⚠️ Degrades\t❌ Linear\nVector RAG\tEmbeddings\t✅ Years\t⚠️ Similarity≠relevance\t⚠️ Moderate\nEvoClaw Tiered\tTree + tiers\t✅ Decades\t✅ Reasoning-based\t✅ Fixed\n\nWhy tree > vectors:\n\nAccuracy: 98%+ vs. 70-80% (PageIndex benchmark)\nExplainable: \"Projects → EvoClaw → BSC\" vs. \"cosine 0.73\"\nMulti-hop: Natural vs. poor\nFalse positives: Low vs. high\nTroubleshooting\nTree size exceeding limit\n# Prune dead nodes\nmemory_cli.py tree --prune\n\n# Check which nodes are largest\nmemory_cli.py tree --show | grep \"Memories:\"\n\n# Manually remove unused categories\nmemory_cli.py tree --remove \"unused/category\"\n\nWarm memory filling up\n# Run consolidation\nmemory_cli.py consolidate --mode daily --db-url \"$TURSO_URL\" --auth-token \"$TURSO_TOKEN\"\n\n# Check stats\nmemory_cli.py metrics\n\n# Raise the eviction threshold (evicts more aggressively, keeps less in warm)\n# Edit config.json: \"eviction_threshold\": 0.4\n\nHot memory exceeding 5KB\n# Hot auto-prunes, but check structure\nmemory_cli.py hot\n\n# Mark stale projects completed so pruning can drop them\nmemory_cli.py hot --update project '{\"name\": \"OldProject\", \"status\": \"Completed\"}'\n\n# Rebuild to force pruning\nmemory_cli.py hot --rebuild\n\nLLM search failing\n# Fallback to keyword search (automatic)\nmemory_cli.py retrieve --query \"...\" --limit 5\n\n# Test the LLM endpoint (the Content-Type header is required for Flask's request.json)\ncurl -X POST http://localhost:8080/complete -H \"Content-Type: application/json\" -d '{\"prompt\": \"test\"}'\n\n# Generate prompt for external testing\ntree_search.py --query \"...\" --tree-file memory/memory-tree.json --mode llm --llm-prompt-file test.txt\n\nMigration from v1.x\n\nBackward compatible: Existing warm-memory.json and memory-tree.json files work as-is.\n\nNew files:\n\nconfig.json (optional, uses defaults)\nhot-memory-state.json (auto-created)\nmetrics.json 
(auto-created)\n\nSteps:\n\nUpdate skill: clawhub update tiered-memory\nRun consolidation to rebuild hot state: memory_cli.py consolidate\nInitialize cold storage (optional): memory_cli.py cold --init --db-url ... --auth-token ...\nConfigure agent to use new commands (see Integration section)\n\nMigration from v2.0 to v2.1\n\nFully backward compatible: Existing memory files work without changes.\n\nWhat's new:\n\n✅ Metadata automatically extracted from existing facts when loaded\n✅ New commands: validate, extract-metadata, search-url\n✅ store command now accepts --url, --command, --path flags\n✅ Distillation preserves URLs and technical details\n✅ No action required - just update and use new features\n\nTesting the upgrade:\n\n# Update skill\nclawhub update tiered-memory\n\n# Test metadata extraction\nmemory_cli.py extract-metadata --file memory/2026-02-13.md\n\n# Validate your recent notes\nmemory_cli.py validate\n\n# Search by URL\nmemory_cli.py search-url --url \"github.com\"\n\nReferences\n\nDesign: /docs/TIERED-MEMORY.md (EvoClaw)\nCloud Sync: /docs/CLOUD-SYNC.md (EvoClaw)\nInspiration: PageIndex (tree-based retrieval)\n\nv2.1.0 — A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. Now with structured metadata to remember HOW, not just WHAT. 🧠🌲🔗"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/bowen31337/tiered-memory",
    "publisherUrl": "https://clawhub.ai/bowen31337/tiered-memory",
    "owner": "bowen31337",
    "version": "2.2.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/tiered-memory",
    "downloadUrl": "https://openagent3.xyz/downloads/tiered-memory",
    "agentUrl": "https://openagent3.xyz/skills/tiered-memory/agent",
    "manifestUrl": "https://openagent3.xyz/skills/tiered-memory/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/tiered-memory/agent.md"
  }
}