{
  "schemaVersion": "1.0",
  "item": {
    "slug": "smart-memory",
    "name": "Smart Memory",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/BluePointDigital/smart-memory",
    "canonicalUrl": "https://clawhub.ai/BluePointDigital/smart-memory",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/smart-memory",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=smart-memory",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      ".gitignore",
      "AGENTS.md",
      "CHANGELOG.md",
      "cognitive_memory_system.py",
      "HOT_MEMORY_EXTENSION.md",
      "hot_memory_manager.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
        "contentDisposition": "attachment; filename=\"afrexai-annual-report-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/smart-memory"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/smart-memory",
    "agentPageUrl": "https://openagent3.xyz/skills/smart-memory/agent",
    "manifestUrl": "https://openagent3.xyz/skills/smart-memory/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/smart-memory/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Smart Memory v2 Skill",
        "body": "Smart Memory v2 is a persistent cognitive memory runtime, not a legacy vector-memory CLI.\n\nCore runtime:\n\nNode adapter: smart-memory/index.js\nLocal API: server.py (FastAPI)\nOrchestrator: cognitive_memory_system.py"
      },
      {
        "title": "Core Capabilities",
        "body": "Structured long-term memory (episodic, semantic, belief, goal)\nEntity-aware retrieval and reranking\nHot working memory\nBackground cognition (reflection, consolidation, decay, conflict resolution)\nStrict token-bounded prompt composition\nObservability endpoints (/health, /memories, /memory/{id}, /insights/pending)"
      },
      {
        "title": "Native OpenClaw Integration (v2.5)",
        "body": "Use the native OpenClaw skill package:\n\nskills/smart-memory-v25/index.js\nOptional hook helper: skills/smart-memory-v25/openclaw-hooks.js\nSkill descriptor: skills/smart-memory-v25/SKILL.md\n\nPrimary exports:\n\ncreateSmartMemorySkill(options)\ncreateOpenClawHooks({ skill, agentIdentity, summarizeWithLLM })"
      },
      {
        "title": "Tool Interface (for agent tool use)",
        "body": "memory_search\n\nPurpose: query long-term memory.\nInput:\n\nquery (string, required)\ntype (all|semantic|episodic|belief|goal, default all)\nlimit (number, default 5)\nmin_relevance (number, default 0.6)\n\n\nBehavior: checks /health first, then retrieves via /retrieve and returns formatted memory results.\n\nmemory_commit\n\nPurpose: explicitly persist important facts/decisions/beliefs/goals.\nInput:\n\ncontent (string, required)\ntype (semantic|episodic|belief|goal, required)\nimportance (1-10, default 5)\ntags (string array, optional)\n\n\nBehavior:\n\nchecks /health first\nauto-tags if missing (working_question, decision heuristics)\ncommits are serialized (sequential) to protect local CPU embedding throughput\nif server is unreachable, payload is queued to .memory_retry_queue.json\nunreachable response is explicit:\n\nMemory commit failed - server unreachable. Queued for retry.\n\nmemory_insights\n\nPurpose: surface pending background insights.\nInput:\n\nlimit (number, default 10)\n\n\nBehavior: checks /health first, calls /insights/pending, returns formatted insight list."
      },
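      {
        "title": "Illustrative Tool Input",
        "body": "A minimal sketch of a memory_commit input using the fields documented above (the values are illustrative, not taken from the package docs):\n\n{\n  \"content\": \"Decided to pin CPU-only PyTorch for this project.\",\n  \"type\": \"semantic\",\n  \"importance\": 7,\n  \"tags\": [\"decision\", \"environment\"]\n}\n\nOmitting importance falls back to the default of 5; omitting tags triggers the auto-tagging heuristics described above."
      },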
      {
        "title": "Reliability Guarantees",
        "body": "Mandatory health gate before each tool call (GET /health).\nRetry queue flushes automatically on healthy tool calls and heartbeat.\nHeartbeat supports automatic retry recovery and background maintenance."
      },
      {
        "title": "Session Arc Lifecycle Hooks",
        "body": "The v2.5 skill supports episodic session arc capture:\n\ncheckpoint capture every 20 turns\nsession-end capture during teardown/reset\n\nFlow:\n\nExtract recent conversation turns (up to 20).\nRun summarization with prompt:\n\nSummarize this session arc: What was the goal? What approaches were tried? What decisions were made? What remains open?\n\n\nPersist summary through internal memory_commit as:\n\ntype: \"episodic\"\ntags: [\"session_arc\", \"YYYY-MM-DD\"]"
      },
      {
        "title": "Passive Context Injection",
        "body": "Use inject_active_context (or createOpenClawHooks().beforeModelResponse) before response generation.\n\nThis adds the standardized block:\n\n[ACTIVE CONTEXT]\nStatus: {status}\nActive Projects: {active_projects}\nWorking Questions: {working_questions}\nTop of Mind: {top_of_mind}\n\nPending Insights:\n- {insight_1}\n- {insight_2}\n[/ACTIVE CONTEXT]\n\nAdd this guidance line to your agent base prompt:\n\nIf pending insights appear in your context that relate to the current conversation, surface them naturally to the user. Do not force it - but if there is a genuine connection, seamlessly bring it up."
      },
      {
        "title": "Minimal OpenClaw Wiring Example",
        "body": "const {\n  createSmartMemorySkill,\n  createOpenClawHooks,\n} = require(\"./skills/smart-memory-v25\");\n\nconst memory = createSmartMemorySkill({\n  baseUrl: \"http://127.0.0.1:8000\",\n  summarizeSessionArc: async ({ prompt, conversationText }) => {\n    return openclaw.llm.complete({ system: prompt, user: conversationText });\n  },\n});\n\nconst hooks = createOpenClawHooks({\n  skill: memory.skill,\n  agentIdentity: \"OpenClaw Agent\",\n  summarizeWithLLM: async ({ prompt, conversationText }) => {\n    return openclaw.llm.complete({ system: prompt, user: conversationText });\n  },\n});\n\n// Register memory.tools as callable tools:\n// - memory_search\n// - memory_commit\n// - memory_insights\n// and call hooks.beforeModelResponse / hooks.onTurn / hooks.onSessionEnd at lifecycle points."
      },
      {
        "title": "Node Adapter Methods (Base Adapter)",
        "body": "start() / init()\ningestMessage(interaction)\nretrieveContext({ user_message, conversation_history })\ngetPromptContext(promptComposerRequest)\nrunBackground(scheduled)\nstop()"
      },
      {
        "title": "API Endpoints",
        "body": "GET /health\nPOST /ingest\nPOST /retrieve\nPOST /compose\nPOST /run_background\nGET /memories\nGET /memory/{memory_id}\nGET /insights/pending"
      },
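      {
        "title": "Illustrative API Smoke Test",
        "body": "A quick sketch for probing the local API, assuming the default base URL http://127.0.0.1:8000 from the wiring example (the /retrieve payload shape is inferred from retrieveContext({ user_message, conversation_history }) and is an assumption, not the authoritative schema):\n\n# Confirm the server is up\ncurl http://127.0.0.1:8000/health\n\n# Query long-term memory\ncurl -X POST http://127.0.0.1:8000/retrieve \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"user_message\": \"What are my active projects?\", \"conversation_history\": []}'"
      },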
      {
        "title": "Install (CPU-Only Required)",
        "body": "For Docker, WSL, and laptops without NVIDIA GPUs, use CPU-only PyTorch.\n\n# from repository root\ncd smart-memory\n\n# Create Python venv\npython3 -m venv .venv\nsource .venv/bin/activate  # Windows: .venv\\Scripts\\activate\n\n# Install CPU-only PyTorch FIRST\npip install torch --index-url https://download.pytorch.org/whl/cpu\n\n# Then install remaining dependencies\npip install -r requirements-cognitive.txt\n\n# Finally, install Node dependencies\nnpm install"
      },
      {
        "title": "PyTorch Policy",
        "body": "Smart Memory v2 supports CPU-only PyTorch only.\nDo not install GPU/CUDA PyTorch builds for this project.\nUse the bundled installer flow (npm install -> postinstall.js) so CPU wheels are always used."
      },
      {
        "title": "Deprecated",
        "body": "Legacy vector-memory CLI artifacts (smart_memory.js, vector_memory_local.js, focus_agent.js) are removed in v2."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/BluePointDigital/smart-memory",
    "publisherUrl": "https://clawhub.ai/BluePointDigital/smart-memory",
    "owner": "BluePointDigital",
    "version": "2.5.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/smart-memory",
    "downloadUrl": "https://openagent3.xyz/downloads/smart-memory",
    "agentUrl": "https://openagent3.xyz/skills/smart-memory/agent",
    "manifestUrl": "https://openagent3.xyz/skills/smart-memory/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/smart-memory/agent.md"
  }
}