# Send Smart Memory to your agent
The item is currently unstable or timing out, so use the source page and any available docs to guide the install.
## Fast path
- Open the source page via the "Review source status" link.
- If you can obtain the package, extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the source page and extracted files.
## Suggested prompts
### New install

```text
I tried to install a skill package from Yavira, but the item is currently unstable or timing out. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.
```
### Upgrade existing

```text
I tried to upgrade a skill package from Yavira, but the item is currently unstable or timing out. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "smart-memory",
    "name": "Smart Memory",
    "source": "tencent",
    "type": "skill",
    "category": "Developer Tools",
    "sourceUrl": "https://clawhub.ai/BluePointDigital/smart-memory",
    "canonicalUrl": "https://clawhub.ai/BluePointDigital/smart-memory",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/smart-memory",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=smart-memory",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      ".gitignore",
      "AGENTS.md",
      "CHANGELOG.md",
      "cognitive_memory_system.py",
      "HOT_MEMORY_EXTENSION.md",
      "hot_memory_manager.py"
    ],
    "downloadMode": "manual_only",
    "sourceHealth": {
      "source": "tencent",
      "slug": "smart-memory",
      "status": "unstable",
      "reason": "timeout",
      "recommendedAction": "retry_later",
      "checkedAt": "2026-05-09T03:04:30.202Z",
      "expiresAt": "2026-05-09T15:04:30.202Z",
      "httpStatus": null,
      "finalUrl": null,
      "contentType": null,
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=smart-memory",
        "error": "Timed out after 5000ms",
        "slug": "smart-memory"
      },
      "scope": "item",
      "summary": "Item is unstable.",
      "detail": "This item is timing out or returning errors right now. Review the source page and try again later.",
      "primaryActionLabel": "Review source status",
      "primaryActionHref": "https://clawhub.ai/BluePointDigital/smart-memory"
    },
    "validation": {
      "installChecklist": [
        "Wait for the source to recover or retry later.",
        "Review SKILL.md only after the download returns a real package.",
        "Treat this source as transient until the upstream errors clear."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/smart-memory",
    "downloadUrl": "https://openagent3.xyz/downloads/smart-memory",
    "agentUrl": "https://openagent3.xyz/skills/smart-memory/agent",
    "manifestUrl": "https://openagent3.xyz/skills/smart-memory/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/smart-memory/agent.md"
  }
}
```
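A consumer of the manifest above can gate its retry behavior on the `sourceHealth` fields. A minimal sketch, assuming only the field names shown in the JSON above (the helper itself is hypothetical, not part of any published client):

```js
// Sketch: decide whether to retry a download based on a sourceHealth
// record like the one in the manifest above. Hypothetical helper.
function shouldRetryLater(sourceHealth, now = new Date()) {
  if (sourceHealth.status !== "unstable") return false;
  if (sourceHealth.recommendedAction !== "retry_later") return false;
  // The health verdict is only valid until expiresAt; after that, re-probe.
  return now < new Date(sourceHealth.expiresAt);
}
```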
## Documentation

### Smart Memory v2 Skill

Smart Memory v2 is a persistent cognitive memory runtime, not a legacy vector-memory CLI.

Core runtime:

- Node adapter: `smart-memory/index.js`
- Local API: `server.py` (FastAPI)
- Orchestrator: `cognitive_memory_system.py`

### Core Capabilities

- Structured long-term memory (episodic, semantic, belief, goal)
- Entity-aware retrieval and reranking
- Hot working memory
- Background cognition (reflection, consolidation, decay, conflict resolution)
- Strict token-bounded prompt composition
- Observability endpoints (`/health`, `/memories`, `/memory/{id}`, `/insights/pending`)

### Native OpenClaw Integration (v2.5)

Use the native OpenClaw skill package:

- Skill entry point: `skills/smart-memory-v25/index.js`
- Optional hook helper: `skills/smart-memory-v25/openclaw-hooks.js`
- Skill descriptor: `skills/smart-memory-v25/SKILL.md`

Primary exports:

- `createSmartMemorySkill(options)`
- `createOpenClawHooks({ skill, agentIdentity, summarizeWithLLM })`

### Tool Interface (for agent tool use)

#### `memory_search`

Purpose: query long-term memory.

Input:

- `query` (string, required)
- `type` (`all` | `semantic` | `episodic` | `belief` | `goal`, default `all`)
- `limit` (number, default 5)
- `min_relevance` (number, default 0.6)

Behavior: checks `/health` first, then retrieves via `/retrieve` and returns formatted memory results.
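The defaults above can be captured by a small input normalizer. This is a hypothetical sketch of how a caller might validate and default the `memory_search` input, not code from the shipped skill:

```js
// Hypothetical normalizer for memory_search input, mirroring the
// documented defaults. Not part of the shipped skill package.
function normalizeSearchInput(input) {
  if (!input || typeof input.query !== "string" || input.query.length === 0) {
    throw new Error("memory_search: 'query' (string) is required");
  }
  const types = ["all", "semantic", "episodic", "belief", "goal"];
  const type = input.type ?? "all";
  if (!types.includes(type)) {
    throw new Error(`memory_search: invalid type '${type}'`);
  }
  return {
    query: input.query,
    type,
    limit: input.limit ?? 5,
    min_relevance: input.min_relevance ?? 0.6,
  };
}
```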

#### `memory_commit`

Purpose: explicitly persist important facts/decisions/beliefs/goals.

Input:

- `content` (string, required)
- `type` (`semantic` | `episodic` | `belief` | `goal`, required)
- `importance` (1-10, default 5)
- `tags` (string array, optional)

Behavior:

- checks `/health` first
- auto-tags if missing (`working_question`, decision heuristics)
- commits are serialized (sequential) to protect local CPU embedding throughput
- if the server is unreachable, the payload is queued to `.memory_retry_queue.json`
- the unreachable response is explicit: `Memory commit failed - server unreachable. Queued for retry.`

#### `memory_insights`

Purpose: surface pending background insights.

Input:

- `limit` (number, default 10)

Behavior: checks `/health` first, calls `/insights/pending`, returns a formatted insight list.

### Reliability Guarantees

- A mandatory health gate runs before each tool call (`GET /health`).
- The retry queue flushes automatically on healthy tool calls and on heartbeat.
- The heartbeat supports automatic retry recovery and background maintenance.
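The health gate described above can be sketched as a wrapper that probes `/health` before running a tool. The injectable `fetchFn` stands in for a real HTTP client so the sketch is self-contained; the wrapper and its names are hypothetical:

```js
// Sketch of the mandatory health gate: probe GET /health and only run
// the tool if the server answers. `fetchFn` is injectable so the gate
// can be exercised without a live server. Hypothetical helper.
async function withHealthGate(baseUrl, fetchFn, tool) {
  let healthy = false;
  try {
    const res = await fetchFn(`${baseUrl}/health`, { method: "GET" });
    healthy = Boolean(res && res.ok);
  } catch {
    healthy = false;
  }
  if (!healthy) {
    return { ok: false, error: "Memory server unreachable - skipped tool call." };
  }
  return { ok: true, result: await tool() };
}
```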

### Session Arc Lifecycle Hooks

The v2.5 skill supports episodic session arc capture:

- checkpoint capture every 20 turns
- session-end capture during teardown/reset

Flow:

1. Extract recent conversation turns (up to 20).
2. Run summarization with the prompt:

   ```text
   Summarize this session arc: What was the goal? What approaches were tried? What decisions were made? What remains open?
   ```

3. Persist the summary through an internal `memory_commit` as:

   - `type: "episodic"`
   - `tags: ["session_arc", "YYYY-MM-DD"]`
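The persistence step can be sketched as a payload builder. The `type` and tag shape follow the docs above; the helper itself, and the default importance of 5, are illustrative assumptions:

```js
// Sketch of building the session-arc commit payload. The type and tag
// shape follow the docs above; the helper itself is hypothetical.
function buildSessionArcPayload(summary, date = new Date()) {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return {
    content: summary,
    type: "episodic",
    importance: 5,
    tags: ["session_arc", day],
  };
}
```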

### Passive Context Injection

Use `inject_active_context` (or `createOpenClawHooks().beforeModelResponse`) before response generation.

This adds the standardized block:

```text
[ACTIVE CONTEXT]
Status: {status}
Active Projects: {active_projects}
Working Questions: {working_questions}
Top of Mind: {top_of_mind}

Pending Insights:
- {insight_1}
- {insight_2}
[/ACTIVE CONTEXT]
```

Add this guidance line to your agent base prompt:

```text
If pending insights appear in your context that relate to the current conversation, surface them naturally to the user. Do not force it - but if there is a genuine connection, seamlessly bring it up.
```
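Rendering the block from retrieved state can be sketched like this. The layout mirrors the template above; the formatter, its name, and the shape of `state` are illustrative assumptions:

```js
// Sketch of rendering the [ACTIVE CONTEXT] block from retrieved state.
// The layout mirrors the template above; the formatter is hypothetical.
function renderActiveContext(state) {
  const lines = [
    "[ACTIVE CONTEXT]",
    `Status: ${state.status}`,
    `Active Projects: ${state.activeProjects.join(", ")}`,
    `Working Questions: ${state.workingQuestions.join(", ")}`,
    `Top of Mind: ${state.topOfMind}`,
    "",
    "Pending Insights:",
    ...state.pendingInsights.map((insight) => `- ${insight}`),
    "[/ACTIVE CONTEXT]",
  ];
  return lines.join("\n");
}
```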

### Minimal OpenClaw Wiring Example

```js
const {
  createSmartMemorySkill,
  createOpenClawHooks,
} = require("./skills/smart-memory-v25");

const memory = createSmartMemorySkill({
  baseUrl: "http://127.0.0.1:8000",
  summarizeSessionArc: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

const hooks = createOpenClawHooks({
  skill: memory.skill,
  agentIdentity: "OpenClaw Agent",
  summarizeWithLLM: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

// Register memory.tools as callable tools:
// - memory_search
// - memory_commit
// - memory_insights
// and call hooks.beforeModelResponse / hooks.onTurn / hooks.onSessionEnd at lifecycle points.
```

### Node Adapter Methods (Base Adapter)

- `start()` / `init()`
- `ingestMessage(interaction)`
- `retrieveContext({ user_message, conversation_history })`
- `getPromptContext(promptComposerRequest)`
- `runBackground(scheduled)`
- `stop()`

### API Endpoints

- `GET /health`
- `POST /ingest`
- `POST /retrieve`
- `POST /compose`
- `POST /run_background`
- `GET /memories`
- `GET /memory/{memory_id}`
- `GET /insights/pending`

### Install (CPU-Only Required)

For Docker, WSL, and laptops without NVIDIA GPUs, use CPU-only PyTorch.

```bash
# from repository root
cd smart-memory

# Create a Python venv
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate

# Install CPU-only PyTorch FIRST
pip install torch --index-url https://download.pytorch.org/whl/cpu

# Then install the remaining dependencies
pip install -r requirements-cognitive.txt

# Finally, install Node dependencies
npm install
```

### PyTorch Policy

- Smart Memory v2 supports CPU-only PyTorch only.
- Do not install GPU/CUDA PyTorch builds for this project.
- Use the bundled installer flow (`npm install` -> `postinstall.js`) so CPU wheels are always used.

### Deprecated

Legacy vector-memory CLI artifacts (`smart_memory.js`, `vector_memory_local.js`, `focus_agent.js`) were removed in v2.
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: BluePointDigital
- Version: 2.5.0
## Source health
- Status: unstable
- Item is unstable.
- This item is timing out or returning errors right now. Review the source page and try again later.
- Health scope: item
- Reason: timeout
- Checked at: 2026-05-09T03:04:30.202Z
- Expires at: 2026-05-09T15:04:30.202Z
- Recommended action: Review source status
## Links
- [Detail page](https://openagent3.xyz/skills/smart-memory)
- [Send to Agent page](https://openagent3.xyz/skills/smart-memory/agent)
- [JSON manifest](https://openagent3.xyz/skills/smart-memory/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/smart-memory/agent.md)
- [Download page](https://openagent3.xyz/downloads/smart-memory)