# Send Tiered Memory to your agent
This item currently does not return a direct package file, so use the source page and any available docs to guide the install.
## Fast path
- Open the source page via Open source listing.
- If you can obtain the package, extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the source page and extracted files.
## Suggested prompts
### New install

```text
I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required. Then review README.md for any prerequisites, environment setup, or post-install checks.
```
### Upgrade existing

```text
I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need. Then review README.md for any prerequisites, environment setup, or post-install checks.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "tiered-memory",
    "name": "Tiered Memory",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/bowen31337/tiered-memory",
    "canonicalUrl": "https://clawhub.ai/bowen31337/tiered-memory",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/tiered-memory",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=tiered-memory",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "API_ENDPOINTS.md",
      "METRICS_TRACKER.md",
      "README.md",
      "SKILL.md",
      "config.json",
      "scripts/distiller.py"
    ],
    "downloadMode": "manual_only",
    "sourceHealth": {
      "source": "tencent",
      "slug": "tiered-memory",
      "status": "source_issue",
      "reason": "not_found",
      "recommendedAction": "review_source",
      "checkedAt": "2026-05-04T04:01:47.250Z",
      "expiresAt": "2026-05-05T04:01:47.250Z",
      "httpStatus": 404,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=tiered-memory",
      "contentType": "text/plain",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=tiered-memory",
        "contentDisposition": null,
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "tiered-memory"
      },
      "scope": "item",
      "summary": "Known item issue.",
      "detail": "This item's current download entry is known to bounce back to a listing or homepage instead of returning a package file.",
      "primaryActionLabel": "Open source listing",
      "primaryActionHref": "https://clawhub.ai/bowen31337/tiered-memory"
    },
    "validation": {
      "installChecklist": [
        "Open the source listing and confirm there is a real package or setup artifact available.",
        "Review SKILL.md before asking your agent to continue.",
        "Treat this source as manual setup until the upstream download flow is fixed."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/tiered-memory",
    "downloadUrl": "https://openagent3.xyz/downloads/tiered-memory",
    "agentUrl": "https://openagent3.xyz/skills/tiered-memory/agent",
    "manifestUrl": "https://openagent3.xyz/skills/tiered-memory/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/tiered-memory/agent.md"
  }
}
```
## Documentation

### Tiered Memory System v2.2.0

A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. 🧠

EvoClaw-compatible three-tier memory system inspired by human cognition and PageIndex tree retrieval.

### What's New in v2.2.0

🔄 Automatic Daily Note Ingestion

- Consolidation (daily/monthly/full modes) now auto-runs ingest-daily
- Bridges `memory/YYYY-MM-DD.md` files → tiered memory system
- No more manual ingestion required — facts flow automatically
- Fixes the "two disconnected data paths" problem

### What's New in v2.1.0

🎯 Structured Metadata Extraction

- Automatic extraction of URLs, shell commands, and file paths from facts
- Preserved during distillation and consolidation
- Searchable by URL fragment

✅ Memory Completeness Validation

- Check daily notes for missing URLs, commands, and next steps
- Proactive warnings for incomplete information
- Actionable suggestions for improvement

🔍 Enhanced Search

- Search facts by URL fragment
- Get all stored URLs from warm memory
- Metadata-aware fact storage

🛡️ URL Preservation

- URLs explicitly preserved during LLM distillation
- Fallback metadata extraction if the LLM misses them
- Command-line support for adding metadata manually

### Architecture

```text
┌─────────────────────────────────────────────────────┐
│              AGENT CONTEXT (~8-15KB)                │
│                                                     │
│  ┌──────────┐  ┌────────────────────────────────┐  │
│  │  Tree    │  │  Retrieved Memory Nodes         │  │
│  │  Index   │  │  (on-demand, 1-3KB)            │  │
│  │  (~2KB)  │  │                                │  │
│  │          │  │  Fetched per conversation      │  │
│  │  Always  │  │  based on tree reasoning       │  │
│  │  loaded  │  │                                │  │
│  └────┬─────┘  └────────────────────────────────┘  │
│       │                                             │
└───────┼─────────────────────────────────────────────┘
        │
        │ LLM-powered tree search
        │
┌───────▼─────────────────────────────────────────────┐
│              MEMORY TIERS                           │
│                                                     │
│  🔴 HOT (5KB)      🟡 WARM (50KB)     🟢 COLD (∞)  │
│                                                     │
│  Core memory       Scored facts      Full archive  │
│  - Identity        - 30-day         - Turso DB     │
│  - Owner profile   - Decaying       - Queryable    │
│  - Active context  - On-device      - 10-year      │
│  - Lessons (20 max)                                │
│                                                     │
│  Always in         Retrieved via     Retrieved via │
│  context           tree search       tree search   │
└─────────────────────────────────────────────────────┘
```

### From Human Memory

- Consolidation — Short-term → long-term transfer happens during consolidation cycles
- Relevance Decay — Unused memories fade; accessed memories strengthen
- Strategic Forgetting — Not remembering everything is a feature
- Hierarchical Organization — Navigate categories, not scan linearly

### From PageIndex

- Vectorless Retrieval — LLM reasoning instead of embedding similarity
- Tree-Structured Index — O(log n) navigation, not O(n) scan
- Explainable Results — Every retrieval traces a path through categories
- Reasoning-Based Search — "Why relevant?" not "how similar?"

### Cloud-First (EvoClaw)

- Device is replaceable — Soul lives in the cloud (Turso)
- Critical sync — Hot state + tree sync after every conversation
- Disaster recovery — Full restore in <2 minutes
- Multi-device — Same agent across phone/desktop/embedded

### 🔴 Hot Memory (5KB max)

Purpose: Core identity and active context, always in the agent's context window.

Structure:

```json
{
  "identity": {
    "agent_name": "Agent",
    "owner_name": "User",
    "owner_preferred_name": "User",
    "relationship_start": "2026-01-15",
    "trust_level": 0.95
  },
  "owner_profile": {
    "personality": "technical, direct communication",
    "family": ["Sarah (wife)", "Luna (daughter, 3yo)"],
    "topics_loved": ["AI architecture", "blockchain", "system design"],
    "topics_avoid": ["small talk about weather"],
    "timezone": "Australia/Sydney",
    "work_hours": "9am-6pm"
  },
  "active_context": {
    "projects": [
      {
        "name": "EvoClaw",
        "description": "Self-evolving agent framework",
        "status": "Active - BSC integration for hackathon"
      }
    ],
    "events": [
      {"text": "Hackathon deadline Feb 15", "timestamp": 1707350400}
    ],
    "tasks": [
      {"text": "Deploy to BSC testnet", "status": "pending", "timestamp": 1707350400}
    ]
  },
  "critical_lessons": [
    {
      "text": "Always test on testnet before mainnet",
      "category": "blockchain",
      "importance": 0.9,
      "timestamp": 1707350400
    }
  ]
}
```

Auto-pruning:

- Lessons: max 20; removes the lowest-importance lesson when full
- Events: keeps the last 10 only
- Tasks: max 10 pending
- Total size: hard limit of 5KB; progressively prunes if exceeded

Generates: MEMORY.md — auto-rebuilt from the structured hot state
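As a rough illustration, the pruning rules above could be expressed like this (a hypothetical helper, not the skill's actual code; field names follow the hot-state structure shown above):

```python
import json

def prune_hot_state(hot, max_lessons=20, max_events=10, max_tasks=10, max_bytes=5120):
    """Apply the hot-memory limits: cap lessons by importance, keep recent events/tasks."""
    # Lessons: keep the 20 highest-importance entries
    lessons = sorted(hot.get("critical_lessons", []),
                     key=lambda l: l["importance"], reverse=True)
    hot["critical_lessons"] = lessons[:max_lessons]
    ctx = hot.setdefault("active_context", {})
    # Events: keep the last 10 only
    ctx["events"] = ctx.get("events", [])[-max_events:]
    # Tasks: at most 10 pending
    ctx["tasks"] = [t for t in ctx.get("tasks", []) if t.get("status") == "pending"][:max_tasks]
    # Progressive prune: drop lowest-importance lessons until under the byte limit
    while len(json.dumps(hot).encode()) > max_bytes and hot["critical_lessons"]:
        hot["critical_lessons"].pop()
    return hot
```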

### 🟡 Warm Memory (50KB max, 30-day retention)

Purpose: Recent distilled facts with decay scoring.

Entry format:

```json
{
  "id": "abc123def456",
  "text": "Decided to use zero go-ethereum deps for EvoClaw to keep binary small",
  "category": "projects/evoclaw/architecture",
  "importance": 0.8,
  "created_at": 1707350400,
  "access_count": 3,
  "score": 0.742,
  "tier": "warm"
}
```

Scoring:

```text
score = importance × recency_decay(age) × reinforcement(access_count)

recency_decay(age) = exp(-age_days / 30)
reinforcement(access) = 1 + 0.1 × access_count
```

Tier classification:

- score >= 0.7 → Hot (promote to hot state)
- score >= 0.3 → Warm (keep)
- score >= 0.05 → Cold (archive)
- score < 0.05 → Frozen (delete after retention period)

Eviction triggers:

- Age > 30 days AND score < 0.3
- Total warm size > 50KB (evicts lowest-scored first)
- Manual consolidation
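The scoring formula and tier thresholds above are small enough to sketch directly (a minimal illustration, not the skill's implementation):

```python
import math

def memory_score(importance, age_days, access_count, half_life_days=30, boost=0.1):
    """score = importance × recency_decay(age) × reinforcement(access_count)"""
    recency_decay = math.exp(-age_days / half_life_days)
    reinforcement = 1 + boost * access_count
    return importance * recency_decay * reinforcement

def classify_tier(score):
    """Map a score onto the hot/warm/cold/frozen thresholds above."""
    if score >= 0.7:
        return "hot"
    if score >= 0.3:
        return "warm"
    if score >= 0.05:
        return "cold"
    return "frozen"
```

With the defaults, a fact of importance 0.8 that is 10 days old and has been accessed 3 times scores roughly 0.75.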

### 🟢 Cold Memory (Unlimited, Turso)

Purpose: Long-term archive, queryable but never bulk-loaded.

Schema:

```sql
CREATE TABLE cold_memories (
  id TEXT PRIMARY KEY,
  agent_id TEXT NOT NULL,
  text TEXT NOT NULL,
  category TEXT NOT NULL,
  importance REAL DEFAULT 0.5,
  created_at INTEGER NOT NULL,
  access_count INTEGER DEFAULT 0
);

CREATE TABLE critical_state (
  agent_id TEXT PRIMARY KEY,
  data TEXT NOT NULL,  -- {hot_state, tree_nodes, timestamp}
  updated_at INTEGER NOT NULL
);
```

- Retention: 10 years (configurable)
- Cleanup: Monthly consolidation removes frozen entries older than the retention period
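The monthly cleanup amounts to a single DELETE against the schema above. Here is a sketch using the stdlib `sqlite3` module as a stand-in for Turso's SQLite-compatible API, simplified to an age-only cutoff (the real consolidation also considers scores):

```python
import sqlite3
import time

RETENTION_YEARS = 10

def cleanup_frozen(conn, agent_id="default", now=None):
    """Delete cold entries older than the retention period; returns rows removed."""
    now = now or time.time()
    cutoff = now - RETENTION_YEARS * 365 * 24 * 3600
    cur = conn.execute(
        "DELETE FROM cold_memories WHERE agent_id = ? AND created_at < ?",
        (agent_id, cutoff),
    )
    conn.commit()
    return cur.rowcount
```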

### Tree Index

Purpose: Hierarchical category map for O(log n) retrieval.

Constraints:

- Max 50 nodes
- Max depth 4 levels
- Max 2KB serialized
- Max 10 children per node

Example:

```text
Memory Tree Index
==================================================
📂 Root (warm:15, cold:234)
  📁 owner — Owner profile and preferences
     Memories: warm=5, cold=89
  📁 projects — Active projects
     Memories: warm=8, cold=67
    📁 projects/evoclaw — EvoClaw framework
       Memories: warm=6, cold=45
      📁 projects/evoclaw/bsc — BSC integration
         Memories: warm=3, cold=12
  📁 technical — Technical setup and config
     Memories: warm=2, cold=34
  📁 lessons — Learned lessons and rules
     Memories: warm=0, cold=44

Nodes: 7/50
Size: 1842 / 2048 bytes
```

Operations:

- `--add PATH DESC` — Add a category node
- `--remove PATH` — Remove a node (only if it holds no data)
- `--prune` — Remove dead nodes (no activity in 60+ days)
- `--show` — Pretty-print the tree

### Distillation Engine

Purpose: Three-stage compression of conversations.

Pipeline:

```text
Raw conversation (500B)
  ↓ Stage 1→2: Extract structured info
Distilled fact (80B)
  ↓ Stage 2→3: Generate one-line summary
Core summary (20B)
```

### Stage 1→2: Raw → Distilled

Input: Raw conversation text
Output: Structured JSON

```json
{
  "fact": "User decided to use raw JSON-RPC for BSC to avoid go-ethereum dependency",
  "emotion": "determined",
  "people": ["User"],
  "topics": ["blockchain", "architecture", "dependencies"],
  "actions": ["decided to use raw JSON-RPC", "avoid go-ethereum"],
  "outcome": "positive"
}
```

Modes:

- `rule` — Regex/heuristic extraction (fast, no LLM)
- `llm` — LLM-powered extraction (accurate, requires an endpoint)

Usage:

```shell
# Rule-based (default)
distiller.py --text "Had a productive chat about the BSC integration..." --mode rule

# LLM-powered
distiller.py --text "..." --mode llm --llm-endpoint http://localhost:8080/complete

# With core summary
distiller.py --text "..." --mode rule --core-summary
```
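To make the shape of rule mode concrete, here is a toy version (hypothetical keyword lists and regexes, shown only to illustrate the extraction; the real distiller.py will differ):

```python
import re

def rule_distill(text):
    """Very rough heuristic extraction, in the spirit of --mode rule."""
    topics, actions = [], []
    # Topic detection: naive keyword matching
    for kw in ("blockchain", "deployment", "testnet", "architecture", "dependencies"):
        if kw in text.lower():
            topics.append(kw)
    # Action detection: common decision/action verb phrases
    for m in re.findall(r"\b(decided to [^.,]+|avoid [^.,]+|deploy to [^.,]+)", text, re.I):
        actions.append(m.strip())
    outcome = "positive" if re.search(r"\b(good|productive|decided|success)\b", text, re.I) else "neutral"
    fact = text.split(".")[0][:100]  # first sentence, truncated toward the distilled-size target
    return {"fact": fact, "topics": topics, "actions": actions, "outcome": outcome}
```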

### Stage 2→3: Distilled → Core Summary

Purpose: One-line summary for tree index

Example:

```text
Distilled: {
  "fact": "User decided raw JSON-RPC for BSC, no go-ethereum",
  "outcome": "positive"
}

Core summary: "BSC integration: raw JSON-RPC (no deps)"
```

Target: <30 bytes

### LLM-Powered Tree Search

Purpose: Semantic search through tree structure using LLM reasoning.

How it works:

1. Build a prompt from the tree structure plus the query
2. The LLM reasons about which categories are relevant
3. It returns category paths with relevance scores
4. Memories are fetched from those categories

Example:

```text
Query: "What did we decide about the hackathon deadline?"

Keyword search returns:
  projects/evoclaw (0.8)
  technical/deployment (0.4)

LLM search reasons:
  projects/evoclaw/bsc (0.95) — "BSC integration for hackathon"
  active_context/events (0.85) — "Deadline mentioned here"
```

LLM prompt template:

```text
You are a memory retrieval system. Given a memory tree index and a query,
identify which categories are relevant.

Memory Tree Index:
  projects/evoclaw — EvoClaw framework (warm:6, cold:45)
  projects/evoclaw/bsc — BSC integration (warm:3, cold:12)
  ...

User Query: What did we decide about the hackathon deadline?

Output (JSON):
[
  {"path": "projects/evoclaw/bsc", "relevance": 0.95, "reason": "BSC work for hackathon"},
  {"path": "active_context/events", "relevance": 0.85, "reason": "deadline tracking"}
]
```

Usage:

```shell
# Keyword search (fast)
tree_search.py --query "BSC integration" --tree-file memory-tree.json --mode keyword

# LLM search (accurate)
tree_search.py --query "what did we decide about hackathon?" \
  --tree-file memory-tree.json --mode llm --llm-endpoint http://localhost:8080/complete

# Generate a prompt for an external LLM
tree_search.py --query "..." --tree-file memory-tree.json \
  --mode llm --llm-prompt-file prompt.txt
```

### Multi-Agent Support

Agent ID scoping — all operations support an `--agent-id` flag.

File layout:

```text
memory/
  default/
    warm-memory.json
    memory-tree.json
    hot-memory-state.json
    metrics.json
  agent-2/
    warm-memory.json
    memory-tree.json
    ...
MEMORY.md              # default agent
MEMORY-agent-2.md      # agent-2
```

Cold storage: agent-scoped queries via the `agent_id` column

Usage:

```shell
# Store for agent-2
memory_cli.py store --text "..." --category "..." --agent-id agent-2

# Retrieve for agent-2
memory_cli.py retrieve --query "..." --agent-id agent-2

# Consolidate agent-2
memory_cli.py consolidate --mode daily --agent-id agent-2
```

### Consolidation Modes

Purpose: Periodic memory maintenance and optimization.

### Quick (hourly)

- Warm eviction (score-based)
- Archive expired entries to cold
- Recalculate all scores
- Rebuild MEMORY.md

### Daily

- Everything in Quick
- Tree prune (remove dead nodes with no activity in 60+ days)

### Monthly

- Everything in Daily
- Tree rebuild (LLM-powered restructuring, future)
- Cold cleanup (delete frozen entries older than retention)

### Full

- Everything in Monthly
- Full recalculation of all scores
- Deep tree analysis
- Generate a consolidation report

Usage:

```shell
# Quick consolidation (default)
memory_cli.py consolidate

# Daily (run via cron)
memory_cli.py consolidate --mode daily

# Monthly (run via cron)
memory_cli.py consolidate --mode monthly --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"
```

Recommended schedule:

- Quick: every 2-4 hours (heartbeat)
- Daily: midnight via cron
- Monthly: 1st of the month via cron

### Critical Sync (Cloud-First)

Purpose: Cloud backup of hot state + tree after every conversation.

What syncs:

- Hot memory state (identity, owner profile, active context, lessons)
- Tree index (structure + counts)
- Timestamp

Recovery: if the device is lost, restore from the cloud in <2 minutes

Usage:

```shell
# Manual critical sync
memory_cli.py sync-critical --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN" --agent-id default

# Automatic: call after every important conversation
# In agent code:
#   1. Process conversation
#   2. Store distilled facts
#   3. Call sync-critical
```

Retry strategy: exponential backoff if the cloud is unreachable (5s, 10s, 20s, 40s)
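The retry strategy can be sketched as follows (a hypothetical helper; `sync_fn` stands in for whatever performs the actual sync call, and `sleep` is injectable for testing):

```python
import time

def sync_with_backoff(sync_fn, delays=(5, 10, 20, 40), sleep=time.sleep):
    """Call sync_fn; on failure retry after 5s, 10s, 20s, 40s, then give up."""
    for attempt, delay in enumerate((0, *delays)):
        if delay:
            sleep(delay)  # wait before each retry
        try:
            return sync_fn()
        except Exception:
            if attempt == len(delays):  # final retry also failed
                raise
```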

### Metrics & Observability

Tracked metrics:

```json
{
  "tree_index_size_bytes": 1842,
  "tree_node_count": 37,
  "hot_memory_size_bytes": 4200,
  "warm_memory_count": 145,
  "warm_memory_size_kb": 38.2,
  "retrieval_count": 234,
  "evictions_today": 12,
  "reinforcements_today": 67,
  "consolidation_count": 8,
  "last_consolidation": 1707350400,
  "context_tokens_saved": 47800,
  "timestamp": "2026-02-10T14:30:00"
}
```

Usage:

```shell
memory_cli.py metrics --agent-id default
```

Key metrics:

- `context_tokens_saved` — estimated tokens saved vs. a flat MEMORY.md
- `retrieval_count` — how often memories are accessed
- `evictions_today` — memory-pressure indicator
- `warm_memory_size_kb` — storage usage
### Store

```shell
memory_cli.py store --text "Fact text" --category "path/to/category" [--importance 0.8] [--agent-id default]
```

Importance guide:

- 0.9-1.0 — Critical decisions, credentials, core identity
- 0.7-0.8 — Project decisions, architecture, preferences
- 0.5-0.6 — General facts, daily events
- 0.3-0.4 — Casual mentions, low priority

Example:

```shell
memory_cli.py store \
  --text "Decided to deploy EvoClaw on BSC testnet before mainnet" \
  --category "projects/evoclaw/deployment" \
  --importance 0.85 \
  --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

# Store with explicit metadata (v2.1.0+)
memory_cli.py store \
  --text "Z-Image ComfyUI model for photorealistic images" \
  --category "tools/image-generation" \
  --importance 0.8 \
  --url "https://docs.comfy.org/tutorials/image/z-image/z-image" \
  --command "huggingface-cli download Tongyi-MAI/Z-Image" \
  --path "/home/user/models/"
```

### Validate (v2.1.0)

```shell
memory_cli.py validate [--file PATH] [--agent-id default]
```

Purpose: Check daily notes for incomplete information (missing URLs, commands, next steps).

Example:

```shell
# Validate today's daily notes
memory_cli.py validate

# Validate a specific file
memory_cli.py validate --file memory/2026-02-13.md
```

Output:

```json
{
  "status": "warning",
  "warnings_count": 2,
  "warnings": [
    "Tool 'Z-Image' mentioned without URL/documentation link",
    "Action 'install' mentioned without command example"
  ],
  "suggestions": [
    "Add URLs for mentioned tools/services",
    "Include command examples for setup/installation steps",
    "Document next steps after decisions"
  ]
}
```

### Extract Metadata (v2.1.0)

```shell
memory_cli.py extract-metadata --file PATH
```

Purpose: Extract structured metadata (URLs, commands, paths) from a file.

Example:

```shell
memory_cli.py extract-metadata --file memory/2026-02-13.md
```

Output:

```json
{
  "file": "memory/2026-02-13.md",
  "metadata": {
    "urls": [
      "https://docs.comfy.org/tutorials/image/z-image/z-image",
      "https://github.com/Lightricks/LTX-Video"
    ],
    "commands": [
      "huggingface-cli download Tongyi-MAI/Z-Image",
      "git clone https://github.com/Lightricks/LTX-Video.git"
    ],
    "paths": [
      "/home/peter/ai-stack/comfyui/models",
      "./configs/ltx-video-2-config.yaml"
    ]
  },
  "summary": {
    "urls_count": 2,
    "commands_count": 2,
    "paths_count": 2
  }
}
```
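Extraction of this kind is plain pattern matching. A rough sketch (hypothetical regexes, deliberately narrower than a real implementation would need):

```python
import re

URL_RE = re.compile(r"https?://[^\s)\]\"']+")
# Commands: lines starting with a known tool name (optionally prefixed with "$ ")
CMD_RE = re.compile(r"(?:^|\n)\s*(?:\$ )?((?:git|pip|huggingface-cli|curl|docker|npm)\s+[^\n]+)")
# Paths: absolute or ./-relative, not preceded by a word char or slash
PATH_RE = re.compile(r"(?<![\w/])(?:/|\./)[\w./-]+")

def extract_metadata(text):
    """Pull URLs, shell commands, and file paths out of free text."""
    urls = URL_RE.findall(text)
    commands = [c.strip() for c in CMD_RE.findall(text)]
    # Exclude path-like fragments that are actually part of a URL
    paths = [p for p in PATH_RE.findall(text) if not any(p in u for u in urls)]
    return {"urls": urls, "commands": commands, "paths": sorted(set(paths))}
```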

### Search by URL (v2.1.0)

```shell
memory_cli.py search-url --url FRAGMENT [--limit 5] [--agent-id default]
```

Purpose: Search facts by URL fragment.

Example:

```shell
# Find all facts with comfy.org URLs
memory_cli.py search-url --url "comfy.org"

# Find GitHub repos
memory_cli.py search-url --url "github.com" --limit 10
```

Output:

```json
{
  "query": "comfy.org",
  "results_count": 1,
  "results": [
    {
      "id": "abc123",
      "text": "Z-Image ComfyUI model for photorealistic images",
      "category": "tools/image-generation",
      "metadata": {
        "urls": ["https://docs.comfy.org/tutorials/image/z-image/z-image"],
        "commands": ["huggingface-cli download Tongyi-MAI/Z-Image"],
        "paths": []
      }
    }
  ]
}
```

### Retrieve

```shell
memory_cli.py retrieve --query "search query" [--limit 5] [--llm] [--llm-endpoint URL] [--agent-id default]
```

Modes:

- Default: keyword-based tree + warm + cold search
- `--llm`: LLM-powered semantic tree search

Example:

```shell
# Keyword search
memory_cli.py retrieve --query "BSC deployment decision" --limit 5

# LLM search (more accurate)
memory_cli.py retrieve \
  --query "what did we decide about blockchain integration?" \
  --llm --llm-endpoint http://localhost:8080/complete \
  --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"
```

### Distill

```shell
memory_cli.py distill --text "raw conversation" [--llm] [--llm-endpoint URL]
```

Example:

```shell
# Rule-based distillation
memory_cli.py distill --text "User: Let's deploy to testnet first. Agent: Good idea, safer that way."

# LLM distillation
memory_cli.py distill \
  --text "Long conversation with nuance..." \
  --llm --llm-endpoint http://localhost:8080/complete
```

Output:

```json
{
  "distilled": {
    "fact": "Decided to deploy to testnet before mainnet",
    "emotion": "cautious",
    "people": [],
    "topics": ["deployment", "testnet", "safety"],
    "actions": ["deploy to testnet"],
    "outcome": "positive"
  },
  "mode": "rule",
  "original_size": 87,
  "distilled_size": 156
}
```

### Hot Memory

```shell
# Update hot state
memory_cli.py hot --update KEY JSON [--agent-id default]

# Rebuild MEMORY.md
memory_cli.py hot --rebuild [--agent-id default]

# Show current hot state
memory_cli.py hot [--agent-id default]
```

Keys:

- `identity` — Agent/owner identity info
- `owner_profile` — Owner preferences, personality
- `lesson` — Add a critical lesson
- `event` — Add an event to the active context
- `task` — Add a task to the active context
- `project` — Add/update a project

Examples:

```shell
# Update owner profile
memory_cli.py hot --update owner_profile '{"timezone": "Australia/Sydney", "work_hours": "9am-6pm"}'

# Add lesson
memory_cli.py hot --update lesson '{"text": "Always test on testnet first", "category": "blockchain", "importance": 0.9}'

# Add project
memory_cli.py hot --update project '{"name": "EvoClaw", "status": "Active", "description": "Self-evolving agent framework"}'

# Rebuild MEMORY.md
memory_cli.py hot --rebuild
```

### Tree

```shell
# Show tree
memory_cli.py tree --show [--agent-id default]

# Add node
memory_cli.py tree --add "path/to/category" "Description" [--agent-id default]

# Remove node
memory_cli.py tree --remove "path/to/category" [--agent-id default]

# Prune dead nodes
memory_cli.py tree --prune [--agent-id default]
```

Examples:

```shell
# Add category
memory_cli.py tree --add "projects/evoclaw/bsc" "BSC blockchain integration"

# Remove empty category
memory_cli.py tree --remove "old/unused/path"

# Prune dead nodes (60+ days no activity)
memory_cli.py tree --prune
```

### Cold Storage

```shell
# Initialize Turso tables
memory_cli.py cold --init --db-url URL --auth-token TOKEN

# Query cold storage
memory_cli.py cold --query "search term" [--limit 10] [--agent-id default] --db-url URL --auth-token TOKEN
```

Examples:

```shell
# Init tables (once)
memory_cli.py cold --init --db-url "https://your-db.turso.io" --auth-token "your-token"

# Query cold archive
memory_cli.py cold --query "blockchain decision" --limit 10 --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"
```

### Configuration

File: config.json (optional, uses defaults if not present)

```json
{
  "agent_id": "default",
  "hot": {
    "max_bytes": 5120,
    "max_lessons": 20,
    "max_events": 10,
    "max_tasks": 10
  },
  "warm": {
    "max_kb": 50,
    "retention_days": 30,
    "eviction_threshold": 0.3
  },
  "cold": {
    "backend": "turso",
    "retention_years": 10
  },
  "scoring": {
    "half_life_days": 30,
    "reinforcement_boost": 0.1
  },
  "tree": {
    "max_nodes": 50,
    "max_depth": 4,
    "max_size_bytes": 2048
  },
  "distillation": {
    "aggression": 0.7,
    "max_distilled_bytes": 100,
    "mode": "rule"
  },
  "consolidation": {
    "warm_eviction": "hourly",
    "tree_prune": "daily",
    "tree_rebuild": "monthly"
  }
}
```

### After Conversation

```python
import json
import os
import subprocess

def process_conversation(user_message, agent_response, category="conversations"):
    # 1. Distill conversation
    text = f"User: {user_message}\nAgent: {agent_response}"
    result = subprocess.run(
        ["python3", "skills/tiered-memory/scripts/memory_cli.py", "distill", "--text", text],
        capture_output=True, text=True
    )
    distilled = json.loads(result.stdout)

    # 2. Determine importance (bump facts that record a decision)
    importance = 0.7 if "decided" in distilled["distilled"]["fact"].lower() else 0.5

    # 3. Store
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py", "store",
        "--text", distilled["distilled"]["fact"],
        "--category", category,
        "--importance", str(importance),
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])

    # 4. Critical sync
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py", "sync-critical",
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])
```

### Before Responding (Retrieval)

```python
import json
import os
import subprocess

def get_relevant_context(query):
    result = subprocess.run(
        [
            "python3", "skills/tiered-memory/scripts/memory_cli.py", "retrieve",
            "--query", query,
            "--limit", "5",
            "--llm",
            "--llm-endpoint", "http://localhost:8080/complete",
            "--db-url", os.getenv("TURSO_URL"),
            "--auth-token", os.getenv("TURSO_TOKEN")
        ],
        capture_output=True, text=True
    )

    memories = json.loads(result.stdout)
    return "\n".join(f"- {m['text']}" for m in memories)
```

### Heartbeat Consolidation

```python
import os
import subprocess

import schedule

def consolidate(mode):
    subprocess.run([
        "python3", "skills/tiered-memory/scripts/memory_cli.py", "consolidate",
        "--mode", mode,
        "--db-url", os.getenv("TURSO_URL"),
        "--auth-token", os.getenv("TURSO_TOKEN")
    ])

# Quick consolidation every 2 hours
schedule.every(2).hours.do(consolidate, "quick")

# Daily tree prune at midnight
schedule.every().day.at("00:00").do(consolidate, "daily")

# Monthly consolidation: the schedule library has no monthly job type,
# so approximate with every 30 days (or run it from cron instead)
schedule.every(30).days.do(consolidate, "monthly")
```

### Model Recommendations

For distillation & tree search:

- Claude 3 Haiku (fast, cheap, excellent structure)
- GPT-4o-mini (good balance)
- Gemini 1.5 Flash (very fast)

For tree rebuilding:

- Claude 3.5 Sonnet (better reasoning)
- GPT-4o (strong planning)

### Cost Optimization

- Use cheaper models for frequent operations (distill, search)
- Batch distillation — queue conversations, distill in a batch
- Cache tree prompts — the tree structure doesn't change often
- Skip the LLM for simple cases — use rule-based distillation for short conversations

### Example LLM Endpoint

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/complete", methods=["POST"])
def complete():
    data = request.json  # requires Content-Type: application/json
    prompt = data["prompt"]

    # Call your LLM here (OpenAI, Anthropic, local model, etc.)
    # llm_client is a placeholder for your own client object
    response = llm_client.complete(prompt)

    return jsonify({"text": response})

if __name__ == "__main__":
    app.run(port=8080)
```

### Performance Characteristics

Context size:

- Hot: ~5KB (always loaded)
- Tree: ~2KB (always loaded)
- Retrieved: ~1-3KB per query
- Total: ~8-15KB (constant, regardless of agent age)

Retrieval speed:

- Keyword: 10-20ms
- LLM tree search: 300-600ms
- Cold query: 50-100ms

5-year scenario:

- Hot: still 5KB (a living document)
- Warm: last 30 days (~50KB)
- Cold: ~50MB in Turso (compressed distilled facts)
- Tree: still 2KB (different nodes, same size)
- Context per session: same as day 1

### Comparison with Alternatives

| System | Memory Model | Scaling | Accuracy | Cost |
|---|---|---|---|---|
| Flat MEMORY.md | Linear text | ❌ Months | ⚠️ Degrades | ❌ Linear |
| Vector RAG | Embeddings | ✅ Years | ⚠️ Similarity ≠ relevance | ⚠️ Moderate |
| EvoClaw Tiered | Tree + tiers | ✅ Decades | ✅ Reasoning-based | ✅ Fixed |

Why tree > vectors:

- Accuracy: 98%+ vs. 70-80% (PageIndex benchmark)
- Explainable: "Projects → EvoClaw → BSC" vs. "cosine 0.73"
- Multi-hop: natural vs. poor
- False positives: low vs. high

### Tree size exceeding limit

```shell
# Prune dead nodes
memory_cli.py tree --prune

# Check which nodes are largest
memory_cli.py tree --show | grep "Memories:"

# Manually remove unused categories
memory_cli.py tree --remove "unused/category"
```

### Warm memory filling up

```shell
# Run consolidation
memory_cli.py consolidate --mode daily --db-url "$TURSO_URL" --auth-token "$TURSO_TOKEN"

# Check stats
memory_cli.py metrics

# Raise the eviction threshold (keeps less in warm)
# Edit config.json: "eviction_threshold": 0.4
```

### Hot memory exceeding 5KB

```shell
# Hot auto-prunes, but check the structure
memory_cli.py hot

# Remove old projects/tasks manually
memory_cli.py hot --update project '{"name": "OldProject", "status": "Completed"}'

# Rebuild to force pruning
memory_cli.py hot --rebuild
```

### LLM search failing

```shell
# Fall back to keyword search (automatic)
memory_cli.py retrieve --query "..." --limit 5

# Test the LLM endpoint
curl -X POST http://localhost:8080/complete \
  -H "Content-Type: application/json" -d '{"prompt": "test"}'

# Generate a prompt for external testing
tree_search.py --query "..." --tree-file memory/memory-tree.json --mode llm --llm-prompt-file test.txt
```

### Migration from v1.x

Backward compatible: Existing warm-memory.json and memory-tree.json files work as-is.

New files:

- `config.json` (optional, uses defaults)
- `hot-memory-state.json` (auto-created)
- `metrics.json` (auto-created)

Steps:

1. Update the skill: `clawhub update tiered-memory`
2. Run consolidation to rebuild the hot state: `memory_cli.py consolidate`
3. Initialize cold storage (optional): `memory_cli.py cold --init --db-url ... --auth-token ...`
4. Configure your agent to use the new commands (see the Integration sections above)

### Migration from v2.0 to v2.1

Fully backward compatible: Existing memory files work without changes.

What's new:

- ✅ Metadata automatically extracted from existing facts when loaded
- ✅ New commands: `validate`, `extract-metadata`, `search-url`
- ✅ The `store` command now accepts `--url`, `--command`, and `--path` flags
- ✅ Distillation preserves URLs and technical details
- ✅ No action required — just update and use the new features

Testing the upgrade:

```shell
# Update skill
clawhub update tiered-memory

# Test metadata extraction
memory_cli.py extract-metadata --file memory/2026-02-13.md

# Validate your recent notes
memory_cli.py validate

# Search by URL
memory_cli.py search-url --url "github.com"
```

### References

- Design: /docs/TIERED-MEMORY.md (EvoClaw)
- Cloud Sync: /docs/CLOUD-SYNC.md (EvoClaw)
- Inspiration: PageIndex (tree-based retrieval)

v2.2.0 — A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep. Now with structured metadata to remember HOW, not just WHAT. 🧠🌲🔗
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: bowen31337
- Version: 2.2.0
## Source health
- Status: source_issue
- Known item issue.
- This item's current download entry is known to bounce back to a listing or homepage instead of returning a package file.
- Health scope: item
- Reason: not_found
- Checked at: 2026-05-04T04:01:47.250Z
- Expires at: 2026-05-05T04:01:47.250Z
- Recommended action: Open source listing
## Links
- [Detail page](https://openagent3.xyz/skills/tiered-memory)
- [Send to Agent page](https://openagent3.xyz/skills/tiered-memory/agent)
- [JSON manifest](https://openagent3.xyz/skills/tiered-memory/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/tiered-memory/agent.md)
- [Download page](https://openagent3.xyz/downloads/tiered-memory)