
DeepRecall

Recursive memory recall for persistent AI agents using RLM (Recursive Language Models). Implements the Anamnesis Architecture — "The soul stays small, the mind scales forever."

Skill · OpenClaw · ClawHub · Free
0 downloads · 0 stars · 0 installs · Score: 0 · High Signal


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
deep_recall.py, memory_indexer.py, memory_scanner.py, model_pairs.py, provider_bridge.py, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.8

Documentation

ClawHub · Primary doc: SKILL.md (25 sections)

DeepRecall v2 — OpenClaw Skill

Pure-Python recursive memory for persistent AI agents. Implements the Anamnesis Architecture: "The soul stays small, the mind scales forever."

Description

DeepRecall gives AI agents infinite memory by recursively querying their own memory files through a manager→workers→synthesis RLM loop — entirely in Python. No Deno runtime, no fast-rlm subprocess, no vector database. Just markdown files and HTTP calls to any OpenAI-compatible LLM endpoint. When the agent needs to recall something, DeepRecall:

  1. Scans the workspace for memory files (scoped by category)
  2. Indexes file metadata — headers, topics, dates, people
  3. Manager selects the most relevant files from the index
  4. Workers (parallel) extract exact verbatim quotes from each file
  5. Synthesis combines quotes into a cited, grounded answer

Workers are constrained by anti-hallucination prompts to return only verbatim quotes. The synthesis step cites every claim with (filename:line).
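The loop just described can be sketched in plain Python. This is an illustrative toy, not the package's actual code: `ask_llm` stands in for the HTTP call to an OpenAI-compatible endpoint, and the manager's file selection is stubbed rather than parsed from a real reply.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_llm(prompt: str) -> str:
    # Stand-in for an HTTP call to an OpenAI-compatible endpoint.
    return f"[answer to: {prompt[:40]}...]"

def recall_sketch(query: str, index: dict[str, str], max_files: int = 3) -> str:
    # 1. Manager: pick the most relevant files from the metadata index.
    #    (A real manager would parse this reply; here we stub the selection.)
    ask_llm(f"Query: {query}\nFiles: {list(index)}\nPick up to {max_files}.")
    selected = list(index)[:max_files]

    # 2. Workers: in parallel, extract verbatim quotes from each selected file.
    def worker(name: str) -> str:
        return ask_llm(
            f"Return ONLY verbatim quotes from {name} relevant to: {query}\n{index[name]}"
        )
    with ThreadPoolExecutor() as pool:
        quotes = list(pool.map(worker, selected))

    # 3. Synthesis: combine the quotes into one cited answer.
    return ask_llm("Synthesize with (filename:line) citations:\n" + "\n".join(quotes))
```

Swapping `ask_llm` for a real HTTP client (httpx or requests) is the main difference between this sketch and a working loop.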

Installation

```shell
pip install deep-recall
```

Or install from source:

```shell
git clone https://github.com/Stefan27-4/DeepRecall
cd DeepRecall && pip install .
```

Dependencies

  • httpx (preferred) or requests — HTTP client for LLM calls
  • PyYAML — config parsing
  • Python ≥ 3.10
  • An LLM provider configured in OpenClaw

v2 breaking change: Deno and fast-rlm are no longer required. The entire RLM loop runs in-process as pure Python.

Quick Start

```python
from deep_recall import recall

result = recall("What did we decide about the project architecture?")
print(result)
```

recall(query, scope, workspace, verbose, config_overrides) → str

The primary entry point. Runs the full manager→workers→synthesis loop.

```python
from deep_recall import recall

result = recall(
    "Find all mentions of budget discussions",
    scope="memory",       # "memory" | "identity" | "project" | "all"
    verbose=True,         # print progress to stdout
    config_overrides={
        "max_files": 5,   # max files the manager can select
    },
)
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | (required) | What to recall / search for |
| scope | str | "memory" | File scope — see Scopes |
| workspace | Path \| None | auto-detect | Override workspace path |
| verbose | bool | False | Print provider, model, file selection info |
| config_overrides | dict \| None | None | Override max_files and other settings |

Returns: A string containing the recalled information with source citations, or a [DeepRecall] status message if no files/results were found.

recall_quick(query, verbose) → str

Fast, cheap recall scoped to identity files. Best for simple lookups.

```python
from deep_recall import recall_quick

name = recall_quick("What is my human's name?")
```

Equivalent to recall(query, scope="identity", config_overrides={"max_files": 2}).

recall_deep(query, verbose) → str

Thorough recall across all workspace files. Best for cross-referencing.

```python
from deep_recall import recall_deep

summary = recall_deep("Summarize all decisions from March")
```

Equivalent to recall(query, scope="all", config_overrides={"max_files": 5}).

CLI

```shell
python deep_recall.py <query> [scope]

# Examples
python deep_recall.py "What was the first project we worked on?"
python deep_recall.py "Find budget discussions" all
```

Scopes

Scopes control which files DeepRecall searches. Narrower scopes are faster and cheaper.

| Scope | Files Included | Speed | Cost | Use Case |
| --- | --- | --- | --- | --- |
| identity | SOUL.md, IDENTITY.md, MEMORY.md, USER.md, TOOLS.md, HEARTBEAT.md, AGENTS.md | ⚡ Fastest | Cheapest | "What's my name?" |
| memory | Identity files + memory/LONG_TERM.md + memory/*.md daily logs | 🔄 Fast | Low | "What did we do last week?" |
| project | All readable workspace files (skips binaries, node_modules, .git) | 🐢 Slower | Medium | "Find that config change" |
| all | Identity + memory + project (everything) | 🐌 Slowest | Highest | "Search everything" |
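Scope resolution along these lines can be sketched as follows. `files_for_scope` is a hypothetical helper, not part of the package API; the file lists come from the scope descriptions above.

```python
from pathlib import Path

IDENTITY_FILES = ["SOUL.md", "IDENTITY.md", "MEMORY.md", "USER.md",
                  "TOOLS.md", "HEARTBEAT.md", "AGENTS.md"]
SKIP_DIRS = {"node_modules", ".git"}

def files_for_scope(workspace: Path, scope: str) -> list[Path]:
    # identity: the fixed set of orientation files that exist in the workspace.
    identity = [workspace / f for f in IDENTITY_FILES if (workspace / f).exists()]
    # memory: identity files plus everything under memory/ (LONG_TERM.md, daily logs).
    memory = identity + sorted((workspace / "memory").glob("*.md"))
    if scope == "identity":
        return identity
    if scope == "memory":
        return memory
    # project: every readable file, skipping node_modules and .git.
    project = [p for p in workspace.rglob("*")
               if p.is_file() and not SKIP_DIRS & set(p.parts)]
    if scope == "project":
        return project
    return sorted(set(memory) | set(project))  # "all": union of everything
```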

File Categories

DeepRecall classifies discovered files into categories:

  • soul — SOUL.md, IDENTITY.md — who the agent IS (always in context)
  • mind — MEMORY.md, USER.md, TOOLS.md, HEARTBEAT.md, AGENTS.md — compact orientation
  • long-term — memory/LONG_TERM.md — full detailed memories, grows forever
  • daily-log — memory/YYYY-MM-DD.md — raw daily logs
  • workspace — everything else (project files, configs, docs)
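A minimal classifier following these rules might look like this. `categorise` is illustrative only, not the function exported by memory_scanner.py.

```python
import re

def categorise(path: str) -> str:
    # Classify a workspace-relative path into one of DeepRecall's categories.
    name = path.rsplit("/", 1)[-1]
    if name in ("SOUL.md", "IDENTITY.md"):
        return "soul"
    if name in ("MEMORY.md", "USER.md", "TOOLS.md", "HEARTBEAT.md", "AGENTS.md"):
        return "mind"
    if path.endswith("memory/LONG_TERM.md"):
        return "long-term"
    # Daily logs: memory/YYYY-MM-DD.md
    if "memory/" in path and re.fullmatch(r"\d{4}-\d{2}-\d{2}\.md", name):
        return "daily-log"
    return "workspace"
```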

Configuration

DeepRecall reads your existing OpenClaw setup — no additional config files needed.

Provider Resolution

Provider, API key, and model are resolved automatically from:

  1. ~/.openclaw/openclaw.json — primary model setting
  2. ~/.openclaw/agents/main/agent/models.json — provider base URLs
  3. ~/.openclaw/credentials/ — cached tokens (e.g. GitHub Copilot)
  4. Environment variables — fallback (ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, etc.; 18+ providers supported, all optional)

Supported Providers (20+)

Anthropic · OpenAI · Google (Gemini) · GitHub Copilot · OpenRouter · Ollama · DeepSeek · Mistral · Together · Groq · Fireworks · Cohere · Perplexity · SambaNova · Cerebras · xAI · Minimax · Zhipu (GLM) · Moonshot (Kimi) · Qwen

Auto Model Pairing

The manager and synthesis steps use your primary model. Workers use a cheaper sub-agent model automatically:

| Primary Model | Worker Model |
| --- | --- |
| Claude Opus 4 / 4.6 | Claude Sonnet 4 |
| Claude Sonnet 4 / 4.5 | Claude Haiku 3.5 |
| GPT-4o / GPT-4 | GPT-4o-mini |
| Gemini 2.5 Pro | Gemini 2.0 Flash |
| DeepSeek Reasoner | DeepSeek Chat |
| Llama 3.1 70B | Llama 3.1 8B |
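A prefix-based pairing table like the one above might look like this in code. The model identifier strings are illustrative; model_pairs.py's actual mapping may use different names.

```python
# Ordered prefix -> worker pairs; more specific prefixes must come first
# (e.g. "claude-opus-4" before "claude-sonnet-4", "gpt-4o" before "gpt-4").
MODEL_PAIRS = [
    ("claude-opus-4",     "claude-sonnet-4"),
    ("claude-sonnet-4",   "claude-haiku-3.5"),
    ("gpt-4o",            "gpt-4o-mini"),
    ("gpt-4",             "gpt-4o-mini"),
    ("gemini-2.5-pro",    "gemini-2.0-flash"),
    ("deepseek-reasoner", "deepseek-chat"),
    ("llama-3.1-70b",     "llama-3.1-8b"),
]

def worker_model(primary: str) -> str:
    # Return the cheaper worker model paired with a primary model.
    primary = primary.lower()
    for prefix, worker in MODEL_PAIRS:
        if primary.startswith(prefix):
            return worker
    return primary  # no known cheaper pair: reuse the primary model
```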

config_overrides

Pass overrides via the config_overrides parameter:

```python
recall("query", config_overrides={
    "max_files": 5,   # max files manager can select (default: 3)
})
```

Skill Files

| File | Purpose |
| --- | --- |
| deep_recall.py | Public API — recall, recall_quick, recall_deep, RLM loop |
| provider_bridge.py | Resolves LLM provider, API key, base URL from OpenClaw config |
| model_pairs.py | Maps primary models to cheaper worker models |
| memory_scanner.py | Discovers and categorises workspace files by scope |
| memory_indexer.py | Builds a structured Memory Index (topics, people, timeline) |
| __init__.py | Package exports |

Memory Layout

Recommended workspace structure for the Anamnesis Architecture:

```
~/.openclaw/workspace/
├── SOUL.md           # Identity — always in context, never grows
├── IDENTITY.md       # Core agent facts
├── MEMORY.md         # Compact index (~100 lines), auto-loaded each session
├── USER.md           # About the human
├── AGENTS.md         # Agent behavior rules
├── TOOLS.md          # Tool-specific notes
└── memory/
    ├── LONG_TERM.md  # Full memories — grows forever, searched via DeepRecall
    ├── 2026-03-05.md # Daily raw log
    ├── 2026-03-04.md
    └── ...
```
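A short helper to scaffold this layout could look like the following. This is a hypothetical convenience script, not something the package ships.

```python
from pathlib import Path

# Top-level identity/orientation files from the recommended layout.
TOP_LEVEL = ["SOUL.md", "IDENTITY.md", "MEMORY.md", "USER.md", "AGENTS.md", "TOOLS.md"]

def scaffold(workspace: Path) -> None:
    # Create the workspace, empty identity files, and the memory/ tier.
    workspace.mkdir(parents=True, exist_ok=True)
    for name in TOP_LEVEL:
        (workspace / name).touch()
    memory = workspace / "memory"
    memory.mkdir(exist_ok=True)
    (memory / "LONG_TERM.md").touch()
```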

⚠️ Privacy Notice

DeepRecall reads your workspace memory files and sends their contents to your configured LLM provider (Anthropic, OpenAI, Gemini, etc.) to perform recall. This is how it works — there is no local-only mode.

What gets sent:
  • File metadata (names, headings, topics) → to the manager LLM
  • Full file contents of selected files → to worker LLMs
  • This may include personal notes, daily logs, project files

What is NOT sent:
  • API keys and credentials (read locally for auth, never in prompts)
  • Files outside your workspace

Credentials used locally:
  • ~/.openclaw/openclaw.json and ~/.openclaw/credentials/* — to resolve your LLM provider
  • Env vars (ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, etc.; 18+ providers supported, all optional) — as fallback if no OpenClaw config is found

Recommended Memory Architecture

DeepRecall works best with a tiered memory system:

Tier 1: MEMORY.md (The Index)

  • Auto-loaded every session — keep it SMALL (~100 lines)
  • Contains: quick reference facts, active projects, key metrics, a table of contents pointing to LONG_TERM.md sections
  • Think of it as your orientation file — "what do I need to know right now?"
  • If it grows past ~120 lines, move details to LONG_TERM.md

Tier 2: memory/LONG_TERM.md (The Encyclopedia)

  • Never auto-loaded — searched via DeepRecall when needed
  • Contains: full context, decisions with reasoning, timestamps, bugs with fixes, architecture details
  • Grows forever — never delete, only append
  • The devil is in the details — "the diary entry, not the Wikipedia summary"

Tier 3: memory/YYYY-MM-DD.md (Daily Logs)

  • Raw notes of what happened each day
  • Distilled into LONG_TERM.md at end of day

Nightly Sync Routine

At the end of each day (or via cron/heartbeat):

  1. Read today's daily log
  2. Append key events, decisions, lessons, bugs, metrics to LONG_TERM.md
  3. Update the MEMORY.md index table if new topics appeared

⚠️ Setting this up? Ask your human before restructuring existing memory files. Show them this recommendation and let them decide how to organize their agent's memory.

License

MIT — see LICENSE.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
5 scripts · 1 doc
  • SKILL.md — primary doc
  • deep_recall.py — script
  • memory_indexer.py — script
  • memory_scanner.py — script
  • model_pairs.py — script
  • provider_bridge.py — script