โ† All skills
Tencent SkillHub · Developer Tools

Faya Session Memory

Persistent session memory system that prevents knowledge loss after context compaction. Converts session transcripts to searchable Markdown, builds an auto-u...




Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md, scripts/build-glossary.py, scripts/cron-optimizer.py, scripts/session-to-memory.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of working through the steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (11 sections)

Session Memory

Solve the #1 problem with long-running AI agents: knowledge loss after context compaction.

The Problem

When sessions compact (summarize old messages to free context), specific details are lost: names, decisions, file paths, reasoning. The agent retains a summary but loses the ability to recall "What exactly did Annika say?" or "When did we decide to use v6 format?"

The Solution: Three-Layer Memory Architecture

Layer 1: MEMORY.md – curated long-term memory (human-edited)
Layer 2: SESSION-GLOSSAR.md – auto-generated structured index (people/projects/decisions/timeline)
Layer 3: memory/sessions/ – full session transcripts as searchable Markdown

All three layers live under memory/ and are automatically vectorized by OpenClaw's memory search, creating a navigational hierarchy: the glossary finds the right session, and the session provides the details.
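The glossary-then-transcript lookup can be sketched as plain file reads. This is a minimal illustration only: the `find_sessions_for` and `recall` helpers, and the assumption that glossary lines reference session filenames directly, are hypothetical; in practice OpenClaw's vector search performs this navigation.

```python
from pathlib import Path

MEMORY = Path("memory")  # assumed project-relative memory/ root

def find_sessions_for(name: str) -> list[str]:
    """Scan the auto-generated glossary for lines mentioning a name and
    collect the session filenames referenced on those lines."""
    hits = []
    for line in (MEMORY / "SESSION-GLOSSAR.md").read_text().splitlines():
        if name.lower() in line.lower():
            # Assumption: glossary lines name session files directly, e.g.
            # "Alice Smith: session-2026-01-15-0830-abc123.md"
            hits += [tok.strip(",") for tok in line.split() if tok.startswith("session-")]
    return hits

def recall(name: str) -> str:
    """Layer 2 (glossary) finds the right session; Layer 3 (transcript)
    supplies the details."""
    parts = []
    for fname in find_sessions_for(name):
        f = MEMORY / "sessions" / fname
        if f.exists():
            parts.append(f.read_text())
    return "\n\n".join(parts)
```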

Step 1: Convert existing sessions to Markdown

python3 scripts/session-to-memory.py

This scans all JSONL session logs in ~/.openclaw/agents/*/sessions/ and converts them to memory/sessions/session-YYYY-MM-DD-HHMM-*.md. It truncates long assistant responses to 2 KB, skips system messages, and tracks state to avoid re-processing.

Options:
  • --new – only convert sessions not yet processed (for incremental runs)
  • --agent main – specify the agent ID (default: main)
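The conversion step can be sketched roughly as follows. The JSONL record shape (`role`/`content` keys) and the exact truncation marker are assumptions for illustration, not the script's actual code.

```python
import json
from pathlib import Path

MAX_ASSISTANT_BYTES = 2048  # long assistant replies are truncated to ~2 KB

def convert_session(jsonl_path: str) -> str:
    """Render one JSONL session log as a Markdown transcript.
    Assumes each line is a JSON object with "role" and "content" keys."""
    out = []
    for line in Path(jsonl_path).read_text().splitlines():
        if not line.strip():
            continue
        msg = json.loads(line)
        role, content = msg.get("role"), msg.get("content", "")
        if role == "system":  # system prompts add noise, skip them
            continue
        if role == "assistant" and len(content.encode()) > MAX_ASSISTANT_BYTES:
            content = content.encode()[:MAX_ASSISTANT_BYTES].decode(errors="ignore") + " [truncated]"
        out.append(f"## {role}\n\n{content}\n")
    return "\n".join(out)
```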

Step 2: Build the glossary

python3 scripts/build-glossary.py

Scans all session transcripts and builds memory/SESSION-GLOSSAR.md with:
  • People – who was mentioned, in how many sessions, with date ranges
  • Projects – which projects were discussed, with relevant topic tags
  • Topics – categorized themes (Email Drafts, Website Build, Security, etc.)
  • Timeline – per-day summary (session count, people, topics)
  • Decisions – extracted decision-like statements with dates

Options:
  • --incremental – only process new sessions (uses cached scan state)
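The People section, for example, can be built by scanning transcripts for known names. This is an illustrative sketch under assumptions: the `KNOWN_PEOPLE` entry, the helper names, and the rendered line format are hypothetical, not the script's actual output.

```python
import re
from collections import defaultdict
from pathlib import Path

KNOWN_PEOPLE = {"alice": "Alice Smith"}  # hypothetical entry; mirror your own contacts

def build_people_index(session_dir: str) -> dict:
    """Map each known person to the session files that mention them.
    Filenames are assumed to embed the date: session-YYYY-MM-DD-HHMM-*.md."""
    index = defaultdict(list)
    for f in sorted(Path(session_dir).glob("session-*.md")):
        text = f.read_text().lower()
        for key, label in KNOWN_PEOPLE.items():
            if re.search(rf"\b{re.escape(key)}\b", text):
                index[label].append(f.name)
    return dict(index)

def render_people_section(index: dict) -> str:
    """Render a People section: mention count plus date range per person."""
    lines = ["## People", ""]
    for label, files in sorted(index.items()):
        dates = [f.split("-", 1)[1][:10] for f in files]  # slice out YYYY-MM-DD
        lines.append(f"- {label}: {len(files)} session(s), {min(dates)} to {max(dates)}")
    return "\n".join(lines)
```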

Step 3: Set up cron jobs for auto-updates

Create two cron jobs (use a cheap model like Gemini Flash):

Job 1: Session sync + glossary rebuild (every 4–6 hours)
Task: Run `python3 scripts/session-to-memory.py --new`, then `python3 scripts/build-glossary.py --incremental`. Report how many new sessions were converted and indexed.

Optional Job 2: Pre-compaction memory flush check
Already built into AGENTS.md by default – just ensure the agent writes to memory/YYYY-MM-DD.md before each compaction.
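The two commands in Job 1 can be wrapped in a small driver so failures surface in the cron run. The `run_pipeline` helper is hypothetical glue, not part of the package; paths are relative to the extracted skill folder.

```python
import subprocess
import sys

# The two commands the periodic job runs, in order.
PIPELINE = [
    [sys.executable, "scripts/session-to-memory.py", "--new"],
    [sys.executable, "scripts/build-glossary.py", "--incremental"],
]

def run_pipeline(commands: list) -> list:
    """Run each step, stop on the first failure, and return captured stdout
    so the cron agent can report how many sessions were converted."""
    reports = []
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        result.check_returncode()  # raise on failure so the cron run is flagged
        reports.append(result.stdout.strip())
    return reports
```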

Customizing Entity Detection

Edit scripts/build-glossary.py to add your own known people and projects:

KNOWN_PEOPLE = {
    "alice": "Alice Smith – Project Manager",
    "bob": "Bob Jones – CTO",
}
KNOWN_PROJECTS = {
    "website-redesign": "Website Redesign – Q1 Initiative",
    "api-migration": "API Migration – v2 to v3",
}

The glossary also detects topics via regex patterns. Add new patterns to the topic_patterns dict for your domain.
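Added topic patterns might look like the sketch below. The real dict's shape in build-glossary.py may differ; these keys, regexes, and the `detect_topics` helper are illustrative assumptions.

```python
import re

# Hypothetical additions to the topic_patterns dict; adapt keys and
# regexes to your own domain vocabulary.
topic_patterns = {
    "Email Drafts": re.compile(r"\b(draft|subject line|reply-all)\b", re.I),
    "Security": re.compile(r"\b(CVE-\d{4}-\d+|credentials?|token rotation)\b", re.I),
    "Infra": re.compile(r"\b(terraform|kubernetes|deploy(ment)?)\b", re.I),
}

def detect_topics(text: str) -> list:
    """Return every topic whose pattern matches the transcript text."""
    return [topic for topic, pat in topic_patterns.items() if pat.search(text)]
```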

How It Works With memory_search

Once set up, memory_search("Alice project decision") will find:
  • the glossary entry for Alice (which sessions she appears in)
  • the actual session transcript where the decision was discussed
  • any MEMORY.md entry about Alice

This gives the agent a navigation layer (glossary) plus detail access (transcripts) – much better than either alone.

File Structure After Setup

memory/
├── MEMORY.md – curated (you maintain this)
├── SESSION-GLOSSAR.md – auto-generated index
├── YYYY-MM-DD.md – daily notes
├── .glossary-state.json – glossary builder state
├── .glossary-scans.json – cached scan results
└── sessions/
    ├── .state.json – converter state
    ├── session-2026-01-15-0830-abc123.md
    ├── session-2026-01-15-1200-def456.md
    └── ...

Cron Memory Optimizer

Cron jobs run in isolated sessions with zero memory context. The optimizer analyzes your cron jobs and suggests memory-enhanced versions:

python3 scripts/cron-optimizer.py

This scans ~/.openclaw/cron/jobs.json, identifies jobs that would benefit from memory context, and generates memory/cron-optimization-report.md with before/after prompts and implementation guidance.

Example optimization:
Original: "Run daily research scout..."
Enhanced: "Before starting: Use memory_search to find recent context about research activities. Check memory/SESSION-GLOSSAR.md for relevant people, projects, and recent decisions. Then proceed with the original task using this context. Run daily research scout..."

The script is conservative (it only suggests, never auto-modifies) and skips monitoring jobs that don't need context.
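The suggest-only behavior can be sketched like this. The job record fields, skip keywords, and preamble wording are assumptions for illustration; the actual jobs.json schema and the script's heuristics may differ.

```python
MEMORY_PREAMBLE = (
    "Before starting: use memory_search to find recent context about this task, "
    "and check memory/SESSION-GLOSSAR.md for relevant people, projects, and "
    "recent decisions. Then proceed with the original task using this context. "
)

# Heuristic: monitoring/health-check jobs don't benefit from conversational context.
SKIP_KEYWORDS = ("monitor", "health", "uptime", "ping")

def suggest_enhancements(jobs: list) -> list:
    """Return suggested before/after prompts; never modifies the jobs themselves."""
    suggestions = []
    for job in jobs:
        prompt = job.get("prompt", "")
        if any(k in prompt.lower() for k in SKIP_KEYWORDS):
            continue  # conservative: leave context-free jobs untouched
        suggestions.append({
            "name": job.get("name", "?"),
            "original": prompt,
            "enhanced": MEMORY_PREAMBLE + prompt,
        })
    return suggestions
```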

Tips

  • Run the full rebuild (python3 scripts/build-glossary.py without --incremental) occasionally to pick up improvements to entity detection.
  • The glossary is most useful when KNOWN_PEOPLE and KNOWN_PROJECTS are populated – spend five minutes adding your key contacts and projects.
  • For agents that run 24/7, the cron job keeps everything current automatically.
  • Session transcripts can get large (our 297 sessions = 24 MB) – this is fine; OpenClaw's vector search handles it efficiently.
  • Use the cron optimizer after setting up memory to enhance existing automation.

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 scripts, 1 doc
  • SKILL.md (primary doc)
  • scripts/build-glossary.py (script)
  • scripts/cron-optimizer.py (script)
  • scripts/session-to-memory.py (script)