Requirements
- Target platform: OpenClaw
- Install method: manual import (extract the archive)
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
The only memory skill that watches on its own. No database. No vectors. No manual saves. Just an LLM observer that compresses your conversations into prioritised notes, consolidates when they grow, and recovers anything missed. Five layers of redundancy, zero maintenance. ~$0.10/month. While other memory skills ask you to remember to remember, this one just pays attention.
- Layer 1: Observer (cron, every 15-30 min) → compresses recent messages → observations.md
- Layer 2: Reflector (auto-triggered when observations > 8000 words) → consolidates, removes superseded info → 40-60% reduction
- Layer 3: Session Recovery (runs on every /new or /reset) → catches any session the Observer missed
- Layer 4: Reactive Watcher (inotify daemon, Linux only) → triggers Observer after 40+ new JSONL writes, 5-min cooldown
- Layer 5: Pre-compaction hook (memoryFlush) → emergency capture before OpenClaw compacts context
- Observer reads recent session transcripts (JSONL), sends them to an LLM, and appends compressed observations to observations.md with priority levels (high, medium, low).
- Reflector kicks in when observations grow too large, consolidating related items and dropping stale low-priority entries.
- Session Recovery runs at session start, checks whether the previous session was captured, and performs an emergency observation if not.
- Reactive Watcher monitors the session directory with inotify so high-activity periods are captured faster than the cron interval allows.
- Pre-compaction hook fires when OpenClaw is about to compact context, ensuring nothing is lost.
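The Observer's first step can be sketched in portable bash. This is an illustrative sketch, not the skill's actual code; it assumes transcripts are JSONL with flat `role` and `content` string fields (the real scripts handle OpenClaw's actual transcript schema).

```shell
# Hypothetical sketch: pull user/assistant turns out of a JSONL transcript.
# Assumes one flat JSON object per line with "role" and "content" strings.
extract_messages() {
  grep -E '"role" *: *"(user|assistant)"' "$1" |
    sed -E 's/.*"role" *: *"([a-z]+)".*"content" *: *"([^"]*)".*/\1: \2/'
}

printf '%s\n' \
  '{"role":"user","content":"deploy finished"}' \
  '{"role":"system","content":"internal"}' \
  '{"role":"assistant","content":"noted"}' > /tmp/demo-session.jsonl

extract_messages /tmp/demo-session.jsonl
```

A real implementation would also respect the lookback window and skip subagent sessions, as described below in the observer pipeline.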
```
clawdhub install total-recall
```
Add to your .env or OpenClaw config:

```
OPENROUTER_API_KEY=sk-or-v1-xxxxx
```
```
bash skills/total-recall/scripts/setup.sh
```

This will:
- Create the memory directory structure (memory/, logs/, backups/)
- On Linux with inotify + systemd: install the reactive watcher service
- Print cron job and agent configuration instructions for you to add manually
Add to your agent's workspace context (e.g., MEMORY.md or system prompt):

> At session startup, read `memory/observations.md` for cross-session context.

Or use OpenClaw's memoryFlush.systemPrompt to inject a startup instruction.
| Platform | Observer + Reflector + Recovery | Reactive Watcher |
|---|---|---|
| Linux (Debian/Ubuntu/etc.) | Full support | With inotify-tools |
| macOS | Full support | Not available (cron-only) |

All core scripts use portable bash. stat, date, and md5 commands are handled cross-platform via _compat.sh.
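The cross-platform handling presumably amounts to GNU-vs-BSD dispatch. The function name and flags below are illustrative, not _compat.sh's actual contents:

```shell
# Illustrative shim: file mtime works with both GNU (Linux) and BSD (macOS) stat.
file_mtime() {
  if stat --version >/dev/null 2>&1; then
    stat -c %Y "$1"    # GNU coreutils
  else
    stat -f %m "$1"    # BSD/macOS
  fi
}

touch /tmp/compat-demo
file_mtime /tmp/compat-demo    # prints a Unix timestamp on either platform
```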
All scripts read from environment variables with sensible defaults:

| Variable | Default | Description |
|---|---|---|
| OPENROUTER_API_KEY | (required) | OpenRouter API key for LLM calls |
| MEMORY_DIR | $OPENCLAW_WORKSPACE/memory | Where observations.md lives |
| SESSIONS_DIR | ~/.openclaw/agents/main/sessions | OpenClaw session transcripts |
| OBSERVER_MODEL | deepseek/deepseek-v3.2 | Primary model for compression |
| OBSERVER_FALLBACK_MODEL | google/gemini-2.5-flash | Fallback if primary fails |
| OBSERVER_LOOKBACK_MIN | 15 | Minutes to look back (daytime) |
| OBSERVER_MORNING_LOOKBACK_MIN | 480 | Minutes to look back (before 8am) |
| OBSERVER_LINE_THRESHOLD | 40 | Lines before reactive trigger (Linux) |
| OBSERVER_COOLDOWN_SECS | 300 | Cooldown between reactive triggers (Linux) |
| REFLECTOR_WORD_THRESHOLD | 8000 | Words before reflector runs |
| OPENCLAW_WORKSPACE | ~/your-workspace | Workspace root |
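In bash, defaults like these are conventionally applied with `${VAR:-default}` expansion. A minimal sketch (variable names from the table; the resolution logic itself is an assumption about the scripts' internals):

```shell
# Resolve config with fall-back defaults, mirroring the documented table.
OPENCLAW_WORKSPACE="${OPENCLAW_WORKSPACE:-$HOME/your-workspace}"
MEMORY_DIR="${MEMORY_DIR:-$OPENCLAW_WORKSPACE/memory}"
OBSERVER_LOOKBACK_MIN="${OBSERVER_LOOKBACK_MIN:-15}"

echo "$MEMORY_DIR"
echo "$OBSERVER_LOOKBACK_MIN"
```

Setting any of these in your environment before the scripts run overrides the default.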
Total Recall uses any OpenAI-compatible chat completion API. Switch providers by setting environment variables:

| Variable | Default | Description |
|---|---|---|
| LLM_BASE_URL | https://openrouter.ai/api/v1 | API endpoint |
| LLM_API_KEY | falls back to OPENROUTER_API_KEY | API key |
| LLM_MODEL | deepseek/deepseek-v3.2 | Model to use |
```bash
# OpenRouter (default)
export OPENROUTER_API_KEY="your-key"

# Ollama (local)
export LLM_BASE_URL="http://localhost:11434/v1"
export LLM_API_KEY="ollama"
export LLM_MODEL="llama3.1:8b"

# Groq
export LLM_BASE_URL="https://api.groq.com/openai/v1"
export LLM_API_KEY="your-groq-key"
export LLM_MODEL="llama-3.3-70b-versatile"
```
```
memory/
  observations.md        # The main observation log (loaded at startup)
  observation-backups/   # Reflector backups (last 10 kept)
  .observer-last-run     # Timestamp of last observer run
  .observer-last-hash    # Dedup hash of last processed messages
logs/
  observer.log
  reflector.log
  session-recovery.log
  observer-watcher.log
```
The setup script creates these OpenClaw cron jobs:

| Job | Schedule | Description |
|---|---|---|
| memory-observer | Every 15 min | Compress recent conversation |
| memory-reflector | Hourly | Consolidate if observations are large |
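For reference, the equivalent schedules in plain crontab syntax would look roughly like this. The script paths are placeholders (the actual entry-point names are not documented here, and the jobs are managed by OpenClaw's cron, not necessarily the system crontab):

```
# memory-observer: every 15 minutes
*/15 * * * * bash ~/your-workspace/skills/total-recall/scripts/observer.sh
# memory-reflector: hourly
0 * * * * bash ~/your-workspace/skills/total-recall/scripts/reflector.sh
```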
The reactive watcher uses inotifywait to detect session activity and trigger the observer faster than cron alone. Requires Linux with inotify-tools installed.

```bash
# Install inotify-tools (Debian/Ubuntu)
sudo apt install inotify-tools

# Check watcher status
systemctl --user status total-recall-watcher

# View logs
journalctl --user -u total-recall-watcher -f
```
Using DeepSeek v3.2 via OpenRouter:
- ~$0.03-0.10/month for typical usage (observer + reflector)
- ~15-30 cron runs/day, each processing a few hundred tokens
1. Finds recently modified session JSONL files
2. Filters out subagent/cron sessions
3. Extracts user + assistant messages from the lookback window
4. Deduplicates using MD5 hash comparison
5. Sends to LLM with the observer prompt (priority-based compression)
6. Appends result to observations.md
7. If observations exceed the word threshold, triggers the reflector
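The dedup step can be sketched as follows. File names and the 40-line window are illustrative, and `md5sum` is used directly here even though the real scripts route md5 through _compat.sh:

```shell
# Skip the LLM call when the recent transcript tail hasn't changed.
observer_should_run() {
  local session="$1" hash_file="$2" new_hash
  new_hash=$(tail -n 40 "$session" | md5sum | cut -d' ' -f1)
  if [ -f "$hash_file" ] && [ "$(cat "$hash_file")" = "$new_hash" ]; then
    return 1   # same messages as last run: skip
  fi
  echo "$new_hash" > "$hash_file"
  return 0
}

echo '{"role":"user","content":"hi"}' > /tmp/tr-session.jsonl
rm -f /tmp/tr-hash
observer_should_run /tmp/tr-session.jsonl /tmp/tr-hash && echo "first run: process"
observer_should_run /tmp/tr-session.jsonl /tmp/tr-hash || echo "second run: skip"
```

Storing only a hash (the `.observer-last-hash` file above) keeps the check cheap: no transcript content needs to be retained between runs.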
1. Backs up current observations
2. Sends entire log to LLM with consolidation instructions
3. Validates output is shorter than input (sanity check)
4. Replaces observations with consolidated version
5. Cleans old backups (keeps last 10)
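The shorter-than-input check and the keep-last-10 rotation are both one-liners in bash. A sketch with assumed file names (the skill's actual validation may be stricter):

```shell
# Accept a consolidation only if it actually shrank the log.
validate_consolidation() {
  # usage: validate_consolidation OLD_FILE NEW_FILE
  [ "$(wc -w < "$2")" -lt "$(wc -w < "$1")" ]
}

printf 'one two three four five six\n' > /tmp/obs-old.md
printf 'one two three\n' > /tmp/obs-new.md
validate_consolidation /tmp/obs-old.md /tmp/obs-new.md && echo "accepted"

# Backup rotation sketch: keep only the 10 newest files in the backup dir.
# (cd /path/to/observation-backups && ls -t | tail -n +11 | xargs -r rm -f --)
```

The word-count guard is a cheap defense against a degenerate LLM response (e.g., the model echoing the log back with additions) overwriting good data.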
1. Runs at every /new or /reset
2. Hashes recent lines of the last session file
3. Compares against stored hash from last observer run
4. If mismatch: runs observer in recovery mode (4-hour lookback)
5. Fallback: raw message extraction if observer fails
1. Uses inotifywait to monitor the session directory
2. Counts JSONL writes to main session files only
3. After 40+ lines: triggers observer (with 5-min cooldown)
4. Resets counter when cron/external observer runs are detected
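Separated from inotify itself, the watcher's counting logic reduces to something like the sketch below (threshold 3 instead of 40 for the demo; the real daemon also enforces the 5-minute cooldown and the counter resets):

```shell
# Count .jsonl writes on stdin; fire a trigger each time the threshold is hit.
count_and_trigger() {
  local threshold="${1:-40}" n=0 f
  while read -r f; do
    case "$f" in *.jsonl) n=$((n + 1)) ;; esac
    if [ "$n" -ge "$threshold" ]; then
      echo "TRIGGER"   # real daemon: run the observer here, then cool down
      n=0
    fi
  done
}

# In production, stdin would come from:
#   inotifywait -m -e modify --format '%f' "$SESSIONS_DIR"
printf '%s\n' main.jsonl notes.txt main.jsonl main.jsonl | count_and_trigger 3
```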
The observer and reflector system prompts are in prompts/:
- prompts/observer-system.txt controls how conversations are compressed
- prompts/reflector-system.txt controls how observations are consolidated

Edit these to match your agent's personality and priorities.
The Dream Cycle is an optional nightly agent that runs after hours to consolidate observations.md. It archives stale items and adds semantic hooks so nothing useful is actually lost. Context stays lean; everything remains findable.
- Classifies every observation by impact (critical / high / medium / low / minimal) and age
- Archives items that have passed their relevance threshold
- Adds a semantic hook for each archived item (specific keywords + archive reference)
- Validates the result and rolls back automatically if something goes wrong
- **Multi-Hook Retrieval**: 4-5 alternative search phrasings per archived item, so searches using different words than the original still find the memory.
- **Confidence Scoring**: every observation gets a confidence score (0.0-1.0) and source type (explicit, implicit, inference, weak, uncertain). High-confidence items are preserved longer; low-confidence items are archived sooner.
- **Memory Type System**: 7 types with per-type TTLs: event (14d), fact (90d), preference (180d), goal (365d), habit (365d), rule (never), context (30d). Embedded as invisible HTML metadata comments in observations.md.
- **Observation Chunking**: clusters of 3+ related observations are compressed into single summary entries. Source observations are archived; a chunk hook replaces them. Achieves up to 75% token reduction.
- **Importance Decay**: per-type daily decay applied to importance scores before each archival decision. Items that decay below the archive threshold are queued for removal. Rates: event (-0.5/day), fact (-0.1/day), preference (-0.02/day), rule/habit/goal (no decay).
- **Pattern Promotion**: scans recent dream logs for recurring themes (3+ occurrences across 3+ separate days). Writes promotion proposals to memory/dream-staging/ for human review. Use staging-review.sh to list, show, approve, or reject proposals. The context type is never promoted automatically.
1. Run `bash skills/total-recall/scripts/setup.sh`: it creates the Dream Cycle directories automatically.
2. Add the nightly cron job:

```
# Dream Cycle: nightly at 3am
0 3 * * * OPENCLAW_WORKSPACE=~/your-workspace bash ~/your-workspace/skills/total-recall/scripts/dream-cycle.sh preflight
```

3. Configure your cron agent using prompts/dream-cycle-prompt.md as the system prompt. Recommended models: Claude Sonnet for the Dreamer (analysis + decisions), DeepSeek v3.2 for the Observer (cheap, fast).
4. Start with READ_ONLY_MODE=true for the first few nights. Check memory/dream-logs/ after each run to verify what it would have archived. Switch to READ_ONLY_MODE=false once satisfied.
| Variable | Default | Description |
|---|---|---|
| DREAM_TOKEN_TARGET | 8000 | Token target for observations.md after consolidation |
| READ_ONLY_MODE | false | Set true for dry-run analysis without writes |
| File | Description |
|---|---|
| scripts/dream-cycle.sh | Shell helper: preflight, archive, update-observations, write-log, write-metrics, validate, rollback |
| prompts/dream-cycle-prompt.md | Agent prompt for the nightly Dream Cycle run |
| dream-cycle/README.md | Dream Cycle quick reference |
| schemas/observation-format.md | Extended observation metadata format |
```
memory/
  archive/
    observations/      # Archived items (one .md file per night)
    chunks/            # Chunked observation groups
  dream-logs/          # Nightly run reports
  dream-staging/       # Pattern promotion proposals awaiting human review
  .dream-backups/      # Pre-run safety backups
research/
  dream-cycle-metrics/
    daily/             # JSON metrics per night
```
**Observer not running?**
- Check logs/observer.log for errors
- Verify OPENROUTER_API_KEY is set and valid
- Confirm cron is active: `crontab -l`

**Observations not being loaded at session start?**
- Ensure your agent's startup instructions include reading memory/observations.md
- Check MEMORY_DIR points to the right location

**Reactive watcher not triggering (Linux)?**
- Run `systemctl --user status total-recall-watcher`
- Check inotify-tools is installed: `which inotifywait`
- View watcher logs: `journalctl --user -u total-recall-watcher -f`

**Dream Cycle archiving too aggressively?**
- Enable READ_ONLY_MODE=true and review dream logs before going live
- Adjust DREAM_TOKEN_TARGET upward to archive less per run

**Dream Cycle not archiving enough?**
- Lower DREAM_TOKEN_TARGET to trigger more aggressive consolidation
This system is inspired by how human memory works during sleep: the hippocampus (observer) captures experiences, and during sleep consolidation (reflector) important memories are strengthened while noise is discarded.

Read more: Your AI Has an Attention Problem

"Get your ass to Mars." Well, get your agent's memory to work.