Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Persistent session memory system that prevents knowledge loss after context compaction. Converts session transcripts to searchable Markdown, builds an auto-u...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Solve the #1 problem with long-running AI agents: knowledge loss after context compaction.
When sessions compact (summarize old messages to free context), specific details are lost: names, decisions, file paths, reasoning. The agent retains a summary but loses the ability to recall "What exactly did Annika say?" or "When did we decide to use v6 format?"
- Layer 1: MEMORY.md – Curated long-term memory (human-edited)
- Layer 2: SESSION-GLOSSAR.md – Auto-generated structured index (people/projects/decisions/timeline)
- Layer 3: memory/sessions/ – Full session transcripts as searchable Markdown

All three layers live under memory/ and are automatically vectorized by OpenClaw's memory search, creating a navigational hierarchy: the glossary finds the right session, and the session provides the details.
python3 scripts/session-to-memory.py

This scans all JSONL session logs in ~/.openclaw/agents/*/sessions/ and converts them to memory/sessions/session-YYYY-MM-DD-HHMM-*.md. It truncates long assistant responses to 2KB, skips system messages, and tracks state to avoid re-processing.

Options:
- --new – Only convert sessions not yet processed (for incremental runs)
- --agent main – Specify agent ID (default: main)
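The conversion step described above can be sketched roughly as follows. This is a minimal illustration, not the script's actual code: the JSONL `role`/`content` field names and the output layout are assumptions about the log format.

```python
import json
from pathlib import Path

MAX_ASSISTANT_BYTES = 2048  # mirrors the documented 2KB truncation


def convert_session(jsonl_path: Path, out_dir: Path) -> Path:
    """Sketch of the JSONL -> Markdown conversion: skip system messages,
    truncate long assistant replies, and emit one .md file per session."""
    sections = []
    for raw in jsonl_path.read_text().splitlines():
        if not raw.strip():
            continue
        msg = json.loads(raw)
        role = msg.get("role", "")
        if role == "system":
            continue  # system prompts add noise, not recall value
        text = str(msg.get("content", ""))
        if role == "assistant" and len(text.encode()) > MAX_ASSISTANT_BYTES:
            # keep the first 2KB so transcripts stay searchable but compact
            text = text.encode()[:MAX_ASSISTANT_BYTES].decode(errors="ignore") + " [truncated]"
        sections.append(f"## {role}\n\n{text}\n")
    out_path = out_dir / f"session-{jsonl_path.stem}.md"
    out_path.write_text("\n".join(sections))
    return out_path
```

The real script additionally tracks processed sessions in a state file so `--new` can skip work already done.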
python3 scripts/build-glossary.py

Scans all session transcripts and builds memory/SESSION-GLOSSAR.md with:
- People – Who was mentioned, in how many sessions, date ranges
- Projects – Which projects were discussed, with relevant topic tags
- Topics – Categorized themes (Email Drafts, Website Build, Security, etc.)
- Timeline – Per-day summary (session count, people, topics)
- Decisions – Extracted decision-like statements with dates

Options:
- --incremental – Only process new sessions (uses cached scan state)
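The "People" pass above can be pictured as a simple tally over the transcript files. A minimal sketch, assuming a known-entity map like the one shown in the customization section (the matching strategy here is plain word-boundary search, not necessarily what the script does):

```python
import re
from pathlib import Path
from collections import defaultdict

# Example entries; the real map lives in scripts/build-glossary.py.
KNOWN_PEOPLE = {"alice": "Alice Smith", "bob": "Bob Jones"}


def tally_people(sessions_dir: Path) -> dict:
    """Count in how many session transcripts each known person appears."""
    mentions = defaultdict(set)
    for md in sorted(sessions_dir.glob("session-*.md")):
        text = md.read_text().lower()
        for key in KNOWN_PEOPLE:
            if re.search(rf"\b{re.escape(key)}\b", text):
                mentions[key].add(md.name)  # a set, so each session counts once
    return {key: len(files) for key, files in mentions.items()}
```

The same scan-and-aggregate shape applies to projects, topics, and the per-day timeline.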
Create two cron jobs (use a cheap model like Gemini Flash):

Job 1: Session sync + glossary rebuild (every 4-6 hours)
Task: Run `python3 scripts/session-to-memory.py --new`, then `python3 scripts/build-glossary.py --incremental`. Report how many new sessions were converted and indexed.

Optional Job 2: Pre-compaction memory flush check
Already built into AGENTS.md by default – just ensure the agent writes to memory/YYYY-MM-DD.md before each compaction.
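If you prefer to wrap the two Job 1 commands in a single script for the cron job to call, a minimal sketch looks like this (the skill directory location is an assumption; adjust to wherever you extracted the package):

```python
import subprocess
import sys


def sync_and_reindex(skill_dir: str) -> None:
    """Run the incremental session sync, then the incremental glossary
    rebuild, stopping with an error if either step fails."""
    for cmd in (
        [sys.executable, "scripts/session-to-memory.py", "--new"],
        [sys.executable, "scripts/build-glossary.py", "--incremental"],
    ):
        # check=True raises CalledProcessError on a non-zero exit code,
        # so a failed sync never silently feeds a stale glossary rebuild
        subprocess.run(cmd, cwd=skill_dir, check=True)
```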
Edit scripts/build-glossary.py to add your own known people and projects:

    KNOWN_PEOPLE = {
        "alice": "Alice Smith – Project Manager",
        "bob": "Bob Jones – CTO",
    }
    KNOWN_PROJECTS = {
        "website-redesign": "Website Redesign – Q1 Initiative",
        "api-migration": "API Migration – v2 to v3",
    }

The glossary also detects topics via regex patterns. Add new patterns to the topic_patterns dict for your domain.
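To make the regex-based topic detection concrete, here is a small sketch of what such patterns might look like. The pattern names and expressions are illustrative examples, not the script's shipped defaults:

```python
import re

# Hypothetical entries in the style of the topic_patterns dict:
# each topic maps to a regex matched against transcript text.
topic_patterns = {
    "Security": re.compile(r"\b(auth|token|vulnerab\w*|CVE-\d{4}-\d+)\b", re.I),
    "Email Drafts": re.compile(r"\b(draft|subject line|follow-up email)\b", re.I),
}


def detect_topics(text: str) -> list:
    """Return every topic whose pattern matches the given transcript text."""
    return [topic for topic, pattern in topic_patterns.items() if pattern.search(text)]
```

Patterns are matched case-insensitively here, so "Draft" and "draft" both tag a session with "Email Drafts".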
Once set up, memory_search("Alice project decision") will find:
- The glossary entry for Alice (which sessions she appears in)
- The actual session transcript where the decision was discussed
- Any MEMORY.md entry about Alice

This gives the agent a navigation layer (the glossary) plus detail access (the transcripts) – much better than either alone.
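The glossary-then-transcript navigation pattern can be sketched as a two-stage lookup. This stands in for OpenClaw's vectorized memory_search with plain substring matching, purely to show the hierarchy; it is not how the real search works:

```python
from pathlib import Path


def two_stage_lookup(memory_dir: Path, name: str) -> list:
    """Stage 1: confirm the name appears in the glossary (the cheap
    navigation layer). Stage 2: return the session transcripts that
    mention the name (the detail layer)."""
    glossary = (memory_dir / "SESSION-GLOSSAR.md").read_text()
    if name.lower() not in glossary.lower():
        return []  # unknown entity: no need to scan transcripts
    return [
        p.name
        for p in sorted((memory_dir / "sessions").glob("session-*.md"))
        if name.lower() in p.read_text().lower()
    ]
```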
memory/
├── MEMORY.md             – Curated (you maintain this)
├── SESSION-GLOSSAR.md    – Auto-generated index
├── YYYY-MM-DD.md         – Daily notes
├── .glossary-state.json  – Glossary builder state
├── .glossary-scans.json  – Cached scan results
└── sessions/
    ├── .state.json       – Converter state
    ├── session-2026-01-15-0830-abc123.md
    ├── session-2026-01-15-1200-def456.md
    └── ...
Cron jobs run in isolated sessions with zero memory context. The optimizer analyzes your cron jobs and suggests memory-enhanced versions:

python3 scripts/cron-optimizer.py

This scans ~/.openclaw/cron/jobs.json, identifies jobs that would benefit from memory context, and generates memory/cron-optimization-report.md with before/after prompts and implementation guidance.

Example optimization:
- Original: "Run daily research scout..."
- Enhanced: "Before starting: Use memory_search to find recent context about research activities. Check memory/SESSION-GLOSSAR.md for relevant people, projects, and recent decisions. Then proceed with the original task using this context. Run daily research scout..."

The script is conservative (it only suggests, never auto-modifies) and skips monitoring jobs that don't need context.
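The optimizer's "suggest, never modify" heuristic can be pictured as a keyword triage over each job's prompt. The keyword lists and preamble wording below are illustrative assumptions, not the script's actual rules:

```python
# Jobs whose prompts suggest they build on prior context get a preamble;
# pure monitoring jobs are skipped entirely.
CONTEXT_KEYWORDS = ("research", "summarize", "report", "plan", "draft")
SKIP_KEYWORDS = ("monitor", "health check", "ping", "uptime")

MEMORY_PREAMBLE = (
    "Before starting: use memory_search to find recent context, and check "
    "memory/SESSION-GLOSSAR.md for relevant people, projects, and decisions. "
    "Then proceed with the original task using this context. "
)


def suggest_enhancement(prompt: str):
    """Return an enhanced prompt, or None if the job should be left alone."""
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in SKIP_KEYWORDS):
        return None  # monitoring jobs don't benefit from memory context
    if any(keyword in lowered for keyword in CONTEXT_KEYWORDS):
        return MEMORY_PREAMBLE + prompt  # original task text is preserved
    return None
```

Keeping the original prompt intact at the end of the enhanced version is what makes the suggestion safe to apply: the job still does exactly what it did before, just with context loaded first.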
- Run the full rebuild (python3 scripts/build-glossary.py without --incremental) occasionally to pick up improvements to entity detection.
- The glossary is most useful when KNOWN_PEOPLE and KNOWN_PROJECTS are populated – spend five minutes adding your key contacts and projects.
- For agents that run 24/7, the cron job keeps everything current automatically.
- Session transcripts can get large (our 297 sessions = 24MB) – this is fine; OpenClaw's vector search handles it efficiently.
- Use the cron optimizer after setting up memory to enhance existing automation.
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.