
Memento

Local persistent memory for OpenClaw agents. Captures conversations, extracts structured facts via LLM, and auto-recalls relevant knowledge before each turn.



⬇ 0 downloads · ★ 0 stars · Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: CHANGELOG.md, DESIGN.md, MIGRATION-SPEC.md, PHASE2-SPEC.md, README.md, ROADMAP.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than working through the steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 0.6.0

Documentation

Primary doc: SKILL.md (15 sections)

Memento β€” Local Persistent Memory for OpenClaw Agents

Memento gives your agents long-term memory. It captures conversations, extracts structured facts using an LLM, and auto-injects relevant knowledge before each AI turn. All stored data stays on your machine: no cloud sync, no subscriptions. Extraction uses your configured LLM provider; use a local model (Ollama) for fully air-gapped operation.

⚠️ Privacy note: When autoExtract is enabled, conversation segments are sent to your configured LLM provider for fact extraction. If you use a cloud provider (Anthropic, OpenAI, Mistral), that text leaves your machine. For fully local operation, set extractionModel to ollama/<model> and keep Ollama running locally.

What It Does

  • Captures every conversation turn, buffered per session
  • Extracts structured facts (preferences, decisions, people, action items) via a configurable LLM (opt-in; see Privacy section)
  • Recalls relevant facts before each AI turn using FTS5 keyword search plus optional semantic embeddings (BGE-M3)
  • Respects privacy: facts are classified as shared, private, or secret based on content, with hard overrides for sensitive categories (medical, financial, credentials)
  • Cross-agent knowledge: shared facts flow between agents with provenance tags; private/secret facts never cross boundaries
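The visibility classification with hard overrides can be sketched as a pure function. This is a minimal illustration based on the description above; the category names, Fact shape, and function name are assumptions, not Memento's actual implementation:

```typescript
// Hypothetical sketch of visibility classification with hard overrides.
type Visibility = "shared" | "private" | "secret";

interface Fact {
  text: string;
  category: string; // e.g. "preference", "decision", "medical", "financial", "credential"
}

// Hard overrides: sensitive categories are forced to "secret" regardless of
// what the extractor suggested (assumed rule, mirroring the bullet above).
const HARD_OVERRIDES: Record<string, Visibility> = {
  medical: "secret",
  financial: "secret",
  credential: "secret",
};

function classifyVisibility(fact: Fact, suggested: Visibility = "shared"): Visibility {
  // An override always wins over the extractor's suggestion.
  return HARD_OVERRIDES[fact.category] ?? suggested;
}
```

With this shape, a "medical" fact ends up secret even if the extractor proposed sharing it, while a plain preference keeps its suggested level.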

Quick Start

Install the plugin, restart your gateway, and Memento starts capturing automatically. Extraction is off by default; enable it explicitly when ready.

Optional: Semantic Search

Download a local embedding model for richer recall:

  mkdir -p ~/.node-llama-cpp/models
  curl -L -o ~/.node-llama-cpp/models/bge-m3-Q8_0.gguf \
    "https://huggingface.co/gpustack/bge-m3-GGUF/resolve/main/bge-m3-Q8_0.gguf"

Environment Variables

All environment variables are optional; you only need the one matching your chosen LLM provider:

  • ANTHROPIC_API_KEY: using anthropic/* models for extraction
  • OPENAI_API_KEY: using openai/* models for extraction
  • MISTRAL_API_KEY: using mistral/* models for extraction
  • MEMENTO_API_KEY: generic fallback for any provider
  • MEMENTO_WORKSPACE_MAIN: migration only; path to agent workspace for bootstrapping

No API key needed for ollama/* models (local inference).
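The provider-specific keys with MEMENTO_API_KEY as a generic fallback suggest a lookup along these lines. The function and mapping are an illustrative assumption based on the list above, not Memento's code:

```typescript
// Hypothetical sketch: resolve the API key for an extraction model string
// such as "anthropic/claude-sonnet-4-6".
const PROVIDER_ENV: Record<string, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  openai: "OPENAI_API_KEY",
  mistral: "MISTRAL_API_KEY",
};

function resolveApiKey(
  extractionModel: string,
  env: Record<string, string | undefined> = process.env
): string | undefined {
  const provider = extractionModel.split("/")[0];
  if (provider === "ollama") return undefined; // local inference needs no key
  const specific = PROVIDER_ENV[provider];
  // Provider-specific key wins; MEMENTO_API_KEY is the generic fallback.
  return (specific && env[specific]) || env["MEMENTO_API_KEY"];
}
```

So with only MEMENTO_API_KEY set, any cloud provider would still resolve a key, while ollama/* models resolve none.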

Configuration

Add to your openclaw.json under plugins.entries.memento.config:

  {
    "memento": {
      "autoCapture": true,
      "extractionModel": "anthropic/claude-sonnet-4-6",
      "extraction": {
        "autoExtract": true,
        "minTurnsForExtraction": 3
      },
      "recall": {
        "autoRecall": true,
        "maxFacts": 20,
        "crossAgentRecall": true,
        "autoQueryPlanning": false
      }
    }
  }

autoExtract: true is an explicit opt-in (default: false). When enabled, conversation segments are sent to the configured extractionModel for LLM-based fact extraction. Omit it or set it to false to keep everything local.

autoQueryPlanning: true is an explicit opt-in (default: false). When enabled, a fast LLM call runs before each recall search to expand the query with synonyms and identify relevant categories, improving precision at the cost of one extra LLM call per turn.
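One way to picture how a partial user config combines with the documented defaults is a per-section merge. The interface and helper below are a sketch following the snippet above; maxFacts: 20 and minTurnsForExtraction: 3 are taken from the example, not confirmed defaults:

```typescript
// Illustrative config shape and defaults matching the documented behavior
// (autoCapture/autoRecall on by default, autoExtract/autoQueryPlanning off).
interface MementoConfig {
  autoCapture: boolean;
  extractionModel?: string;
  extraction: { autoExtract: boolean; minTurnsForExtraction: number };
  recall: { autoRecall: boolean; maxFacts: number; crossAgentRecall: boolean; autoQueryPlanning: boolean };
}

const DEFAULTS: MementoConfig = {
  autoCapture: true,
  extraction: { autoExtract: false, minTurnsForExtraction: 3 },
  recall: { autoRecall: true, maxFacts: 20, crossAgentRecall: true, autoQueryPlanning: false },
};

// Shallow per-section merge: user values override defaults section by section.
function withDefaults(user: Partial<MementoConfig>): MementoConfig {
  return {
    ...DEFAULTS,
    ...user,
    extraction: { ...DEFAULTS.extraction, ...user.extraction },
    recall: { ...DEFAULTS.recall, ...user.recall },
  };
}
```

An empty config therefore yields capture and recall on with extraction off, which matches the opt-in behavior described above.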

Data Storage

Memento stores all data locally:

  • ~/.engram/conversations.sqlite: main database (conversations, facts, embeddings)
  • ~/.engram/segments/*.jsonl: human-readable conversation backups
  • ~/.engram/migration-config.json: optional; migration workspace paths (only for bootstrapping)

Privacy & Data Flow

  • autoCapture (default: true): ❌ No data leaves the machine. Writes to local SQLite + JSONL only.
  • autoExtract (default: false): ⚠️ Yes, if using a cloud LLM. Sends conversation text to the configured provider; use ollama/* for local.
  • autoRecall (default: true): ❌ No data leaves the machine. Reads from local SQLite only.
  • Secret facts: ❌ Never. Filtered from extraction context and never sent to any LLM.
  • Migration: ❌ No data leaves the machine. Reads local workspace files, writes to local SQLite.
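The "secret facts never sent" guarantee amounts to filtering before any extraction context is assembled. A minimal sketch, assuming a visibility field as described above (the Fact shape and function name are hypothetical):

```typescript
// Hypothetical sketch: strip secret facts before anything is handed to an LLM.
interface StoredFact {
  text: string;
  visibility: "shared" | "private" | "secret";
}

function extractionContext(facts: StoredFact[]): StoredFact[] {
  // Secret facts are excluded unconditionally; they never reach any provider.
  return facts.filter((f) => f.visibility !== "secret");
}
```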

Migration (Bootstrap from Existing Memory Files)

Migration is an optional, one-time process to seed Memento from existing agent memory/markdown files. It is user-initiated only and never runs automatically.

What it reads

Migration reads only the files you explicitly list in the config. It does not scan your filesystem, read arbitrary files, or access anything outside the configured paths.
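The "only configured paths" behavior can be pictured as filtering a known file list against the configured globs. The glob-to-regex helper below is an illustrative assumption with simplified semantics ("*" matches anything except "/"); a real implementation would use a proper glob library:

```typescript
// Hypothetical sketch: select only files matching configured patterns,
// e.g. ["MEMORY.md", "memory/*.md"]. Not Memento's actual matcher.
function globToRegex(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*/g, "[^/]*");             // "*" -> any chars except "/"
  return new RegExp(`^${escaped}$`);
}

function selectMigrationFiles(allFiles: string[], patterns: string[]): string[] {
  const regexes = patterns.map(globToRegex);
  return allFiles.filter((f) => regexes.some((r) => r.test(f)));
}
```

Note that under these semantics "memory/*.md" does not descend into subdirectories, which is consistent with reading only explicitly listed paths.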

Setup

Create ~/.engram/migration-config.json or set MEMENTO_WORKSPACE_MAIN:

  {
    "agents": [
      {
        "agentId": "main",
        "workspace": "/path/to/your-workspace",
        "paths": ["MEMORY.md", "memory/*.md"]
      }
    ]
  }

Always dry-run first to verify exactly which files will be read:

  npx tsx src/extraction/migrate.ts --all --dry-run

The dry-run prints every file path it would read; review this before proceeding. Then run the actual migration:

  npx tsx src/extraction/migrate.ts --all

Security notes

  • Migration only reads files matching the glob patterns you configure
  • Extracted facts inherit visibility classification (shared/private/secret)
  • Secret-classified facts are never sent to cloud LLM providers
  • The migration config file is optional; if absent, migration is completely inert
  • The migration script has no network access beyond the configured extraction LLM

Architecture

  • Capture layer: hooks message:received + message:sent, buffers multi-turn segments
  • Extraction layer: async LLM extraction with deduplication, occurrence tracking, temporal state transitions (previous_value), and knowledge graph relations (including causal edges with causal_weight)
  • Storage layer: SQLite schema v7 (better-sqlite3) with FTS5 full-text search plus optional vector embeddings; knowledge graph (fact_relations with causal_weight), multi-layer clusters, and temporal transition tracking (previous_value)
  • Recall layer: optional LLM query planning pre-pass (autoQueryPlanning), multi-factor scoring (recency × frequency × category weight), 1-hop graph traversal with a 1.5× boost on causal edges, injected via the before_prompt_build hook
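The multi-factor recall scoring can be sketched as a product of the three factors named above. The decay half-life, frequency curve, and weight table are illustrative assumptions; only the recency × frequency × category-weight structure comes from the architecture description:

```typescript
// Hypothetical sketch of recency x frequency x category-weight scoring.
interface ScoredFact {
  lastSeenDaysAgo: number;
  occurrences: number;
  category: string;
}

// Assumed category weights for illustration only.
const CATEGORY_WEIGHT: Record<string, number> = {
  decision: 1.5,
  preference: 1.2,
  person: 1.0,
};

function recallScore(f: ScoredFact): number {
  const recency = Math.exp(-f.lastSeenDaysAgo / 30); // ~30-day decay (assumed)
  const frequency = Math.log1p(f.occurrences);       // diminishing returns
  const weight = CATEGORY_WEIGHT[f.category] ?? 1.0;
  return recency * frequency * weight;
}
```

Under any such scheme, a fact seen today outscores the same fact last seen two months ago, and frequently repeated facts gain ground without dominating linearly.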

Requirements

  • OpenClaw 2026.2.20+
  • Node.js 18+
  • An API key for your preferred LLM provider (for extraction; not needed if extraction is disabled or using Ollama)
  • Optional: GPU for accelerated embedding search (falls back to CPU gracefully)

Install

  # From ClawHub
  clawhub install memento

  # Or for local development
  git clone https://github.com/braibaud/Memento
  cd Memento
  npm install

Note: better-sqlite3 includes native bindings that compile during npm install. This is expected behavior for SQLite access.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package (6 docs):
  • CHANGELOG.md
  • DESIGN.md
  • MIGRATION-SPEC.md
  • PHASE2-SPEC.md
  • README.md
  • ROADMAP.md