{
  "schemaVersion": "1.0",
  "item": {
    "slug": "openclaw-memvid-logger",
    "name": "MemSync Dual Memory System",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/stackBlock/openclaw-memvid-logger",
    "canonicalUrl": "https://clawhub.ai/stackBlock/openclaw-memvid-logger",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/openclaw-memvid-logger",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclaw-memvid-logger",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "license.txt",
      "TEMPLATE.md",
      "README-clawhub.md",
      "instructions.md",
      "README.md",
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/openclaw-memvid-logger"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/openclaw-memvid-logger",
    "agentPageUrl": "https://openagent3.xyz/skills/openclaw-memvid-logger/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclaw-memvid-logger/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclaw-memvid-logger/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Unified Conversation Logger v1.2.5",
        "body": "Version: 1.2.5 (Critical Fixes Edition)\nAuthor: stackBlock\nLicense: MIT\nOpenClaw: >= 2026.2.12\n\nA dual-output conversation logger for OpenClaw that captures everything - user messages, assistant responses, sub-agent conversations, tool calls, and system events - to both JSONL (backup) and Memvid (semantic search) formats.\n\nMemvid: A single-file memory layer for AI agents with instant retrieval and long-term memory. Persistent, versioned, and portable memory, without databases.\n\"Replace complex RAG pipelines with a single portable file you own, and give your agent instant retrieval and long-term memory.\""
      },
      {
        "title": "⚠️ Security & Privacy Notice",
        "body": "Before installing, please understand:\n\nThis skill captures everything - by design. It logs all user messages, assistant responses, sub-agent conversations, tool outputs, and system events to local files. This enables powerful long-term memory but requires trust.\n\nWhat you should know:\n\nBroad capture scope: This is intentional - the skill's purpose is complete conversation logging\nSensitive data risk: Tool outputs (commands, API responses, file contents) are logged. Review what tools expose.\nContinuous logging: Once installed, it runs automatically on every assistant response until removed\nOptional cloud mode: API mode with MEMVID_API_KEY sends data to memvid.com (third-party service). Free/local modes keep data on your machine only.\nYour responsibility: Secure the JSONL/.mv2 files, rotate logs regularly, and audit what gets captured.\n\nMitigations available:\n\nUse Free/Sharding mode to keep data local (no API key needed)\nChange default paths to encrypted locations\nReview tools/log.py before installing to understand exactly what gets logged\nFile permissions: restrict access to log files (chmod 600)\n\nThis skill is for users who want complete conversation memory and accept the privacy trade-offs."
      },
      {
        "title": "✨ What Makes This Different",
        "body": "📝 Dual Storage - Every message saved to JSONL + Memvid simultaneously\n🔍 Semantic Search - Ask \"What did the researcher agent find about Tesla?\" not just keyword search\n🤖 Full Context - Captures user input, assistant output, agent chatter, tool results\n💾 Three Modes - API (unlimited), Free (50MB), or Sharding (multi-file)\n🚀 Always On - Hooks into OpenClaw automatically"
      },
      {
        "title": "Critical Fixes",
        "body": "Memvid Tag Format Fixed: Updated to KEY=VALUE format for Memvid 2.0+ compatibility\n\nOld (broken): --tag \"user,telegram\"\nNew (fixed): --tag \"role=user\" --tag \"source=telegram\"\n\n\nEnvironment Variable Documentation: Added /etc/environment instructions (.bashrc doesn't work for background services)\nHook Handler Format: Documented JavaScript (.js) requirement for OpenClaw 2026.2.12+\nComprehensive Troubleshooting: Added detailed troubleshooting section for common setup issues"
      },
      {
        "title": "Compatibility",
        "body": "Verified with OpenClaw 2026.2.12\nVerified with Memvid CLI 2.0+"
      },
      {
        "title": "v1.2.4",
        "body": "Neural Search Default: Updated search guidance to use --mode neural as default for maximum accuracy\nPerformance Documentation: Clarified latency trade-offs (~200ms for neural vs ~8ms for lexical)\nSearch Mode Policy: Recommends neural for semantic understanding, lexical only when speed is critical"
      },
      {
        "title": "v1.2.3",
        "body": "Version Cohesion: All files synchronized to v1.2.3\nDocumentation Consistency: README and SKILL.md now have matching content\nSecurity Improvements: Generic paths (no hardcoded user directories), install script asks permission\nRegistry Compliance: Complete metadata (env vars, credentials, warnings) for ClawHub transparency\nPrivacy Documentation: Comprehensive Security & Privacy Notice explaining data capture scope\nRole Tagging: Distinguishes user, assistant, agent:*, system, and tool messages\nFull Context: Captures sub-agent chatter, tool results, background processes\nThree Storage Modes: API mode (single file), Free mode (50MB), Sharding mode (monthly rotation)\nSemantic Search: Ask \"What did the researcher agent find?\" or \"What did I say about X?\""
      },
      {
        "title": "Option 1: API Mode (Recommended) - Near Limitless Memory",
        "body": "Best for: Heavy users, unified search across everything\nCost: $59-299/month via memvid.com\n\n# 1. Get API key from memvid.com ($59/month for 1GB, $299 for 25GB)\nexport MEMVID_API_KEY=\"your_api_key_here\"\nexport MEMVID_MODE=\"single\"\n\n# 2. Install\nnpm install -g memvid\ngit clone https://github.com/stackBlock/openclaw-memvid-logger.git\ncp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/\n\n# 3. Create unified memory file\nmemvid create ~/memory.mv2\n\n# 4. Start OpenClaw - everything logs to one searchable file\n\nSearch everything at once:\n\nmemvid ask memory.mv2 \"What did we discuss about BadjAI?\"\nmemvid ask memory.mv2 \"What did the researcher agent find about Tesla?\"\nmemvid ask memory.mv2 \"Show me all the Python scripts I asked for\""
      },
      {
        "title": "Option 2: Free Mode (50MB Limit) - Complete Memory in One Place",
        "body": "Best for: Testing, light usage, single searchable file\nCost: FREE\n\n# 1. Install (no API key needed)\nnpm install -g memvid\ngit clone https://github.com/stackBlock/openclaw-memvid-logger.git\ncp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/\nexport MEMVID_MODE=\"single\"\n\n# 2. Create memory file\nmemvid create ~/memory.mv2\n\n# 3. Start OpenClaw\n\n⚠️ Limit: 50MB (~5,000 conversation turns). When you hit it:\n\nArchive and start fresh, OR\nUpgrade to API mode ($59-299/month), OR\nSwitch to Sharding mode"
      },
      {
        "title": "Option 3: Sharding Mode - More Than 50MB, Free Forever",
        "body": "Best for: Long-term use, staying under free tier\nCost: FREE\nTrade-off: Multi-file search\n\n# 1. Install (no API key needed)\nnpm install -g memvid\ngit clone https://github.com/stackBlock/openclaw-memvid-logger.git\ncp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/\nexport MEMVID_MODE=\"monthly\"  # This is the default\n\n# 2. Start OpenClaw - auto-creates monthly files\n\nHow it works:\n\nmemory_2026-02.mv2 (February)\nmemory_2026-03.mv2 (March - auto-created)\nEach file stays under 50MB\n\n⚠️ Sharding Search Differences:\n\nSingle-file search (API/Free modes):\n\n# One search gets everything\nmemvid ask memory.mv2 \"What car did I decide to buy?\"\n# Returns: Results from ALL conversations across ALL time\n\nSharding search (requires multiple queries):\n\n# Must search each month separately\nmemvid ask memory_2026-02.mv2 \"car decision\"  # Recent\nmemvid ask memory_2026-01.mv2 \"car decision\"  # January\n\n# Or use a wrapper script to search all files\nfor file in memory_*.mv2; do\n    echo \"=== $file ===\"\n    memvid ask \"$file\" \"car decision\" 2>/dev/null | head -5\ndone\n\n# You must know which month the conversation happened\n# No cross-month context - \"compare this month to last month\" won't work\n\nWhy sharding is harder:\n\nCan't ask \"what did we discuss in the past 3 months?\" in one query\nNo unified timeline across months\nMust remember which month you talked about what\nNo cross-file semantic comparison"
      },
      {
        "title": "Role Tags (Automatic)",
        "body": "RoleTagExample SearchUser[user]\"What did I say about Mercedes?\"Assistant[assistant]\"What did you recommend?\"Sub-agents[agent:researcher], [agent:coder]\"What did the researcher find?\"System[system]\"When did the cron job run?\"Tools[tool:exec], [tool:browser]\"What commands were run?\""
      },
      {
        "title": "Everything Captured",
        "body": "✅ User messages (what you type)\n✅ Assistant responses (what I say back)\n✅ Sub-agent conversations (researcher, coder, vision, math, etc.)\n✅ Tool executions (bash commands, browser actions, file edits)\n✅ Background processes (cron jobs, heartbeats, scheduled tasks)\n✅ System events (config changes, restarts, errors)"
      },
      {
        "title": "Architecture",
        "body": "┌─────────────────────────────────────────┐\n│           OpenClaw Ecosystem            │\n│  ┌─────────┐  ┌─────────┐  ┌─────────┐ │\n│  │  User   │  │Assistant│  │  Agents │ │\n│  │ Messages│  │Responses│  │Research │ │\n│  └────┬────┘  └────┬────┘  └────┬────┘ │\n│       └─────────────┴─────────────┘     │\n│                     │                   │\n│              ┌──────▼──────┐            │\n│              │  log.py     │            │\n│              │  (this skill)│           │\n│              └──────┬──────┘            │\n└─────────────────────┼───────────────────┘\n                      │\n    ┌─────────────────┼─────────────────┐\n    ↓                 ↓                 ↓\n┌───────┐      ┌─────────────┐    ┌──────────┐\n│ JSONL │      │   Memvid    │    │  Search  │\n│ File  │      │   Files     │    │  Query   │\n└───────┘      └─────────────┘    └──────────┘\n    │                 │\n    ↓                 ↓\n grep/jq       memvid ask/find"
      },
      {
        "title": "Natural Language Search",
        "body": "# What did you say about...?\nmemvid ask memory_2026-02.mv2 \"What was your recommendation about the Mercedes vs Tesla?\"\n\n# What did I ask for...?\nmemvid ask memory_2026-02.mv2 \"What Python scripts did I request last week?\"\n\n# What did agents do...?\nmemvid ask memory_2026-02.mv2 \"What did the researcher agent find about options trading?\"\n\n# System events...?\nmemvid ask memory_2026-02.mv2 \"When did the PowerSchool grades cron job run?\""
      },
      {
        "title": "Keyword Search",
        "body": "# Find specific terms\nmemvid find memory_2026-02.mv2 --query \"Mercedes\"\n\n# With filters\nmemvid find memory_2026-02.mv2 --query \"script\" --tag agent:coder"
      },
      {
        "title": "Temporal Queries",
        "body": "memvid when memory_2026-02.mv2 \"yesterday\"\nmemvid when memory_2026-02.mv2 \"last Tuesday\"\nmemvid when memory_2026-02.mv2 \"3 days ago\""
      },
      {
        "title": "⚡ Search Performance Guide",
        "body": "Memvid has three search modes. This skill uses --mode neural by default for maximum accuracy:"
      },
      {
        "title": "Default: Neural Search (Recommended)",
        "body": "# Always use neural for semantic understanding and context\nmemvid ask memory.mv2 \"What supplements did Dr. Sinclair recommend?\" --mode neural\nmemvid ask memory.mv2 \"What did we discuss about BadjAI?\" --mode neural\nmemvid ask memory.mv2 \"Show me the Python scripts I requested\" --mode neural\n\nSpeed: ~200ms | Best for: Semantic understanding, context, synonyms, conceptual relationships"
      },
      {
        "title": "Alternative Modes (Use When Explicitly Requested)",
        "body": "Mode 1: Lexical Search (Fastest)\n\n# Use only for exact keyword matching when speed is critical\nmemvid find memory.mv2 --mode lex --query \"metformin\"\n\nSpeed: ~8ms | Use when: Exact word matching needed, latency is critical\n\nMode 2: Hybrid Search (Balanced)\n\n# Combines lexical + neural\nmemvid find memory.mv2 --mode hybrid --query \"diabetes medications\"\n\nSpeed: ~300-500ms | Use when: You want both exact matches and semantic similarity"
      },
      {
        "title": "Why Neural as Default?",
        "body": "ModeSpeedAccuracyUse Caseneural~200msHighestDefault - semantic understandinglex~8msKeyword onlySpeed-critical exact matcheshybrid~300-500msHighBalanced approach\n\nThe ~200ms trade-off is worth it: Neural mode understands context, handles paraphrases, and finds conceptually related information that lexical search misses entirely."
      },
      {
        "title": "JSONL Backup",
        "body": "# Quick grep\ngrep \"Mercedes\" conversation_log.jsonl\n\n# Complex queries with jq\njq 'select(.role_tag == \"user\" and .content | contains(\"Python\"))' conversation_log.jsonl\n\n# Time range\njq 'select(.timestamp >= \"2026-02-01\" and .timestamp < \"2026-03-01\")' conversation_log.jsonl"
      },
      {
        "title": "Environment Variables",
        "body": "VariableDefaultModeDescriptionMEMVID_API_KEY(none)APIYour memvid.com API keyMEMVID_MODEmonthlyAllsingle or monthlyJSONL_LOG_PATH~/workspace/conversation_log.jsonlAllBackup log fileMEMVID_PATH~/workspace/memory.mv2AllBase path for memory filesMEMVID_BIN~/.npm-global/bin/memvidAllPath to memvid CLI"
      },
      {
        "title": "OpenClaw Hooks (Advanced)",
        "body": "Add to openclaw.json:\n\n{\n  \"hooks\": {\n    \"internal\": {\n      \"enabled\": true,\n      \"entries\": {\n        \"conversation-logger\": {\n          \"enabled\": true,\n          \"command\": \"python3 ~/.openclaw/workspace/skills/unified-logger/tools/log.py\"\n        }\n      }\n    }\n  }\n}"
      },
      {
        "title": "Mode 1: Single File (API or Free Mode)",
        "body": "memory.mv2\n├── [user] messages\n├── [assistant] responses  \n├── [agent:researcher] findings\n├── [agent:coder] code\n├── [tool:exec] commands\n└── [system] events"
      },
      {
        "title": "Mode 2: Sharding (Monthly Rotation)",
        "body": "memory_2026-01.mv2  (January conversations)\nmemory_2026-02.mv2  (February conversations) ← Current\nmemory_2026-03.mv2  (March, auto-created on March 1)"
      },
      {
        "title": "\"Free tier limit exceeded\" (Free Mode)",
        "body": "# Option 1: Archive and start fresh\nmv memory.mv2 memory_archive.mv2\nmemvid create memory.mv2\n\n# Option 2: Switch to monthly sharding\nexport MEMVID_MODE=\"monthly\"\n\n# Option 3: Get API key\nexport MEMVID_API_KEY=\"your_key\"  # $59-299/month at memvid.com"
      },
      {
        "title": "\"Cannot find memory file\" (Sharding Mode)",
        "body": "Current month's file auto-creates. If missing:\n\nmemvid create memory_$(date +%Y-%m).mv2"
      },
      {
        "title": "Missing agent conversations",
        "body": "Agents log to their own sessions. Ensure skill is installed in main agent workspace and sub-agents inherit it."
      },
      {
        "title": "Search returns wrong speaker",
        "body": "Memvid uses semantic search. Be specific:\n\n❌ \"Mercedes\" → Returns all mentions\n✅ \"What did I say about Mercedes\" → Targets [user] frames\n✅ \"Your recommendation about Mercedes\" → Targets [assistant] frames"
      },
      {
        "title": "Comparing the Three Modes",
        "body": "FeatureAPI ModeFree ModeSharding ModeCost$59-299/moFREEFREECapacity1-25GB+50MBUnlimited (files)Files11Multiple (monthly)Unified Search✅ Yes✅ Yes❌ Per-file onlyCross-Context Search✅ Full history✅ Full history❌ Month isolatedBest ForPower usersTestingLong-term free useComplexitySimpleSimpleMust track files"
      },
      {
        "title": "💸 The Pricing Gap (AKA Why Sharding Exists)",
        "body": "The situation: Memvid's pricing goes from $0 (50MB) straight to $59/month (25GB).\nThe problem: That's like buying a Ferrari when you just need a Honda Civic for your commute.\n\nWhat we're doing about it:\nI reached out. While they consider it, Sharding Mode exists so you don't have to pay Ferrari prices for Honda Civic usage.\n\nYou can help:\nIf you also think $0 → $59 is a bit much, reach out to Memvid at memvid.com and tell them stackBlock sent you. The more voices, the faster we get that $10-20 middle tier for the rest of us.\n\nUntil then: Sharding Mode. Because startups shouldn't have to choose between ramen and memory. 🍜"
      },
      {
        "title": "Future Enhancements",
        "body": "Auto-archive old months to cold storage\n Web UI for browsing conversations\n Cross-file search wrapper script\n Export to other formats (Markdown, PDF)\n Conversation threading visualization"
      },
      {
        "title": "Support",
        "body": "GitHub Issues: github.com/stackBlock/openclaw-memvid-logger\nOpenClaw Discord: discord.com/invite/clawd\nMemvid Support: memvid.com/docs"
      },
      {
        "title": "License",
        "body": "MIT - See LICENSE\n\nAbout Memvid:\n\nMemvid is a single-file memory layer for AI agents with instant retrieval and long-term memory.\nPersistent, versioned, and portable memory, without databases.\nReplace complex RAG pipelines with a single portable file you own, and give your agent\ninstant retrieval and long-term memory."
      }
    ],
    "body": "Unified Conversation Logger v1.2.5\n\nVersion: 1.2.5 (Critical Fixes Edition)\nAuthor: stackBlock\nLicense: MIT\nOpenClaw: >= 2026.2.12\n\nA dual-output conversation logger for OpenClaw that captures everything - user messages, assistant responses, sub-agent conversations, tool calls, and system events - to both JSONL (backup) and Memvid (semantic search) formats.\n\nMemvid: A single-file memory layer for AI agents with instant retrieval and long-term memory. Persistent, versioned, and portable memory, without databases.\n\n\"Replace complex RAG pipelines with a single portable file you own, and give your agent instant retrieval and long-term memory.\"\n\n⚠️ Security & Privacy Notice\n\nBefore installing, please understand:\n\nThis skill captures everything - by design. It logs all user messages, assistant responses, sub-agent conversations, tool outputs, and system events to local files. This enables powerful long-term memory but requires trust.\n\nWhat you should know:\n\nBroad capture scope: This is intentional - the skill's purpose is complete conversation logging\nSensitive data risk: Tool outputs (commands, API responses, file contents) are logged. Review what tools expose.\nContinuous logging: Once installed, it runs automatically on every assistant response until removed\nOptional cloud mode: API mode with MEMVID_API_KEY sends data to memvid.com (third-party service). 
Free/local modes keep data on your machine only.\nYour responsibility: Secure the JSONL/.mv2 files, rotate logs regularly, and audit what gets captured.\n\nMitigations available:\n\nUse Free/Sharding mode to keep data local (no API key needed)\nChange default paths to encrypted locations\nReview tools/log.py before installing to understand exactly what gets logged\nFile permissions: restrict access to log files (chmod 600)\n\nThis skill is for users who want complete conversation memory and accept the privacy trade-offs.\n\n✨ What Makes This Different\n📝 Dual Storage - Every message saved to JSONL + Memvid simultaneously\n🔍 Semantic Search - Ask \"What did the researcher agent find about Tesla?\" not just keyword search\n🤖 Full Context - Captures user input, assistant output, agent chatter, tool results\n💾 Three Modes - API (unlimited), Free (50MB), or Sharding (multi-file)\n🚀 Always On - Hooks into OpenClaw automatically\nWhat's New in v1.2.5\nCritical Fixes\nMemvid Tag Format Fixed: Updated to KEY=VALUE format for Memvid 2.0+ compatibility\nOld (broken): --tag \"user,telegram\"\nNew (fixed): --tag \"role=user\" --tag \"source=telegram\"\nEnvironment Variable Documentation: Added /etc/environment instructions (.bashrc doesn't work for background services)\nHook Handler Format: Documented JavaScript (.js) requirement for OpenClaw 2026.2.12+\nComprehensive Troubleshooting: Added detailed troubleshooting section for common setup issues\nCompatibility\nVerified with OpenClaw 2026.2.12\nVerified with Memvid CLI 2.0+\nPrevious Versions\nv1.2.4\nNeural Search Default: Updated search guidance to use --mode neural as default for maximum accuracy\nPerformance Documentation: Clarified latency trade-offs (~200ms for neural vs ~8ms for lexical)\nSearch Mode Policy: Recommends neural for semantic understanding, lexical only when speed is critical\nv1.2.3\nVersion Cohesion: All files synchronized to v1.2.3\nDocumentation Consistency: README and SKILL.md now have matching 
content\nSecurity Improvements: Generic paths (no hardcoded user directories), install script asks permission\nRegistry Compliance: Complete metadata (env vars, credentials, warnings) for ClawHub transparency\nPrivacy Documentation: Comprehensive Security & Privacy Notice explaining data capture scope\nRole Tagging: Distinguishes user, assistant, agent:*, system, and tool messages\nFull Context: Captures sub-agent chatter, tool results, background processes\nThree Storage Modes: API mode (single file), Free mode (50MB), Sharding mode (monthly rotation)\nSemantic Search: Ask \"What did the researcher agent find?\" or \"What did I say about X?\"\nQuick Install (Choose Your Mode)\nOption 1: API Mode (Recommended) - Near Limitless Memory\n\nBest for: Heavy users, unified search across everything\nCost: $59-299/month via memvid.com\n\n# 1. Get API key from memvid.com ($59/month for 1GB, $299 for 25GB)\nexport MEMVID_API_KEY=\"your_api_key_here\"\nexport MEMVID_MODE=\"single\"\n\n# 2. Install\nnpm install -g memvid\ngit clone https://github.com/stackBlock/openclaw-memvid-logger.git\ncp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/\n\n# 3. Create unified memory file\nmemvid create ~/memory.mv2\n\n# 4. Start OpenClaw - everything logs to one searchable file\n\n\nSearch everything at once:\n\nmemvid ask memory.mv2 \"What did we discuss about BadjAI?\"\nmemvid ask memory.mv2 \"What did the researcher agent find about Tesla?\"\nmemvid ask memory.mv2 \"Show me all the Python scripts I asked for\"\n\nOption 2: Free Mode (50MB Limit) - Complete Memory in One Place\n\nBest for: Testing, light usage, single searchable file\nCost: FREE\n\n# 1. Install (no API key needed)\nnpm install -g memvid\ngit clone https://github.com/stackBlock/openclaw-memvid-logger.git\ncp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/\nexport MEMVID_MODE=\"single\"\n\n# 2. Create memory file\nmemvid create ~/memory.mv2\n\n# 3. 
Start OpenClaw\n\n\n⚠️ Limit: 50MB (~5,000 conversation turns). When you hit it:\n\nArchive and start fresh, OR\nUpgrade to API mode ($59-299/month), OR\nSwitch to Sharding mode\nOption 3: Sharding Mode - More Than 50MB, Free Forever\n\nBest for: Long-term use, staying under free tier\nCost: FREE\nTrade-off: Multi-file search\n\n# 1. Install (no API key needed)\nnpm install -g memvid\ngit clone https://github.com/stackBlock/openclaw-memvid-logger.git\ncp -r openclaw-memvid-logger ~/.openclaw/workspace/skills/\nexport MEMVID_MODE=\"monthly\"  # This is the default\n\n# 2. Start OpenClaw - auto-creates monthly files\n\n\nHow it works:\n\nmemory_2026-02.mv2 (February)\nmemory_2026-03.mv2 (March - auto-created)\nEach file stays under 50MB\n\n⚠️ Sharding Search Differences:\n\nSingle-file search (API/Free modes):\n\n# One search gets everything\nmemvid ask memory.mv2 \"What car did I decide to buy?\"\n# Returns: Results from ALL conversations across ALL time\n\n\nSharding search (requires multiple queries):\n\n# Must search each month separately\nmemvid ask memory_2026-02.mv2 \"car decision\"  # Recent\nmemvid ask memory_2026-01.mv2 \"car decision\"  # January\n\n# Or use a wrapper script to search all files\nfor file in memory_*.mv2; do\n    echo \"=== $file ===\"\n    memvid ask \"$file\" \"car decision\" 2>/dev/null | head -5\ndone\n\n# You must know which month the conversation happened\n# No cross-month context - \"compare this month to last month\" won't work\n\n\nWhy sharding is harder:\n\nCan't ask \"what did we discuss in the past 3 months?\" in one query\nNo unified timeline across months\nMust remember which month you talked about what\nNo cross-file semantic comparison\nWhat Gets Logged\nRole Tags (Automatic)\nRole\tTag\tExample Search\nUser\t[user]\t\"What did I say about Mercedes?\"\nAssistant\t[assistant]\t\"What did you recommend?\"\nSub-agents\t[agent:researcher], [agent:coder]\t\"What did the researcher find?\"\nSystem\t[system]\t\"When did the cron 
job run?\"\nTools\t[tool:exec], [tool:browser]\t\"What commands were run?\"\nEverything Captured\n✅ User messages (what you type)\n✅ Assistant responses (what I say back)\n✅ Sub-agent conversations (researcher, coder, vision, math, etc.)\n✅ Tool executions (bash commands, browser actions, file edits)\n✅ Background processes (cron jobs, heartbeats, scheduled tasks)\n✅ System events (config changes, restarts, errors)\nArchitecture\n┌─────────────────────────────────────────┐\n│           OpenClaw Ecosystem            │\n│  ┌─────────┐  ┌─────────┐  ┌─────────┐ │\n│  │  User   │  │Assistant│  │  Agents │ │\n│  │ Messages│  │Responses│  │Research │ │\n│  └────┬────┘  └────┬────┘  └────┬────┘ │\n│       └─────────────┴─────────────┘     │\n│                     │                   │\n│              ┌──────▼──────┐            │\n│              │  log.py     │            │\n│              │  (this skill)│           │\n│              └──────┬──────┘            │\n└─────────────────────┼───────────────────┘\n                      │\n    ┌─────────────────┼─────────────────┐\n    ↓                 ↓                 ↓\n┌───────┐      ┌─────────────┐    ┌──────────┐\n│ JSONL │      │   Memvid    │    │  Search  │\n│ File  │      │   Files     │    │  Query   │\n└───────┘      └─────────────┘    └──────────┘\n    │                 │\n    ↓                 ↓\n grep/jq       memvid ask/find\n\nUsage Examples\nNatural Language Search\n# What did you say about...?\nmemvid ask memory_2026-02.mv2 \"What was your recommendation about the Mercedes vs Tesla?\"\n\n# What did I ask for...?\nmemvid ask memory_2026-02.mv2 \"What Python scripts did I request last week?\"\n\n# What did agents do...?\nmemvid ask memory_2026-02.mv2 \"What did the researcher agent find about options trading?\"\n\n# System events...?\nmemvid ask memory_2026-02.mv2 \"When did the PowerSchool grades cron job run?\"\n\nKeyword Search\n# Find specific terms\nmemvid find memory_2026-02.mv2 --query \"Mercedes\"\n\n# 
With filters\nmemvid find memory_2026-02.mv2 --query \"script\" --tag agent:coder\n\nTemporal Queries\nmemvid when memory_2026-02.mv2 \"yesterday\"\nmemvid when memory_2026-02.mv2 \"last Tuesday\"\nmemvid when memory_2026-02.mv2 \"3 days ago\"\n\n⚡ Search Performance Guide\n\nMemvid has three search modes. This skill uses --mode neural by default for maximum accuracy:\n\nDefault: Neural Search (Recommended)\n# Always use neural for semantic understanding and context\nmemvid ask memory.mv2 \"What supplements did Dr. Sinclair recommend?\" --mode neural\nmemvid ask memory.mv2 \"What did we discuss about BadjAI?\" --mode neural\nmemvid ask memory.mv2 \"Show me the Python scripts I requested\" --mode neural\n\n\nSpeed: ~200ms | Best for: Semantic understanding, context, synonyms, conceptual relationships\n\nAlternative Modes (Use When Explicitly Requested)\n\nMode 1: Lexical Search (Fastest)\n\n# Use only for exact keyword matching when speed is critical\nmemvid find memory.mv2 --mode lex --query \"metformin\"\n\n\nSpeed: ~8ms | Use when: Exact word matching needed, latency is critical\n\nMode 2: Hybrid Search (Balanced)\n\n# Combines lexical + neural\nmemvid find memory.mv2 --mode hybrid --query \"diabetes medications\"\n\n\nSpeed: ~300-500ms | Use when: You want both exact matches and semantic similarity\n\nWhy Neural as Default?\nMode\tSpeed\tAccuracy\tUse Case\nneural\t~200ms\tHighest\tDefault - semantic understanding\nlex\t~8ms\tKeyword only\tSpeed-critical exact matches\nhybrid\t~300-500ms\tHigh\tBalanced approach\n\nThe ~200ms trade-off is worth it: Neural mode understands context, handles paraphrases, and finds conceptually related information that lexical search misses entirely.\n\nJSONL Backup\n# Quick grep\ngrep \"Mercedes\" conversation_log.jsonl\n\n# Complex queries with jq\njq 'select(.role_tag == \"user\" and .content | contains(\"Python\"))' conversation_log.jsonl\n\n# Time range\njq 'select(.timestamp >= \"2026-02-01\" and .timestamp < \"2026-03-01\")' 
conversation_log.jsonl\n\nConfiguration\nEnvironment Variables\nVariable\tDefault\tMode\tDescription\nMEMVID_API_KEY\t(none)\tAPI\tYour memvid.com API key\nMEMVID_MODE\tmonthly\tAll\tsingle or monthly\nJSONL_LOG_PATH\t~/workspace/conversation_log.jsonl\tAll\tBackup log file\nMEMVID_PATH\t~/workspace/memory.mv2\tAll\tBase path for memory files\nMEMVID_BIN\t~/.npm-global/bin/memvid\tAll\tPath to memvid CLI\n\nOpenClaw Hooks (Advanced)\n\nAdd to openclaw.json:\n\n{\n  \"hooks\": {\n    \"internal\": {\n      \"enabled\": true,\n      \"entries\": {\n        \"conversation-logger\": {\n          \"enabled\": true,\n          \"command\": \"python3 ~/.openclaw/workspace/skills/unified-logger/tools/log.py\"\n        }\n      }\n    }\n  }\n}\n\nMemory File Formats\nMode 1: Single File (API or Free Mode)\nmemory.mv2\n├── [user] messages\n├── [assistant] responses\n├── [agent:researcher] findings\n├── [agent:coder] code\n├── [tool:exec] commands\n└── [system] events\n\nMode 2: Sharding (Monthly Rotation)\nmemory_2026-01.mv2  (January conversations)\nmemory_2026-02.mv2  (February conversations) ← Current\nmemory_2026-03.mv2  (March, auto-created on March 1)\n\nTroubleshooting\n\"Free tier limit exceeded\" (Free Mode)\n# Option 1: Archive and start fresh\nmv memory.mv2 memory_archive.mv2\nmemvid create memory.mv2\n\n# Option 2: Switch to monthly sharding\nexport MEMVID_MODE=\"monthly\"\n\n# Option 3: Get API key\nexport MEMVID_API_KEY=\"your_key\"  # $59-299/month at memvid.com\n\n\"Cannot find memory file\" (Sharding Mode)\n\nThe current month's file is created automatically. If it is missing, create it manually:\n\nmemvid create memory_$(date +%Y-%m).mv2\n\nMissing agent conversations\n\nAgents log to their own sessions. Ensure the skill is installed in the main agent workspace and that sub-agents inherit it.\n\nSearch returns wrong speaker\n\nMemvid uses semantic search. 
Be specific:\n\n❌ \"Mercedes\" → Returns all mentions\n✅ \"What did I say about Mercedes\" → Targets [user] frames\n✅ \"Your recommendation about Mercedes\" → Targets [assistant] frames\n\nComparing the Three Modes\nFeature\tAPI Mode\tFree Mode\tSharding Mode\nCost\t$59-299/mo\tFREE\tFREE\nCapacity\t1-25GB+\t50MB\tUnlimited (files)\nFiles\t1\t1\tMultiple (monthly)\nUnified Search\t✅ Yes\t✅ Yes\t❌ Per-file only\nCross-Context Search\t✅ Full history\t✅ Full history\t❌ Isolated by month\nBest For\tPower users\tTesting\tLong-term free use\nComplexity\tSimple\tSimple\tMust track files\n\n💸 The Pricing Gap (AKA Why Sharding Exists)\n\nThe situation: Memvid's pricing goes from $0 (50MB) straight to $59/month (25GB).\nThe problem: That's like buying a Ferrari when you just need a Honda Civic for your commute.\n\nWhat we're doing about it:\nI reached out to Memvid about a middle tier. While they consider it, Sharding Mode exists so you don't have to pay Ferrari prices for Honda Civic usage.\n\nYou can help:\nIf you also think $0 → $59 is a bit much, reach out to Memvid at memvid.com and tell them stackBlock sent you. The more voices, the faster we get that $10-20 middle tier for the rest of us.\n\nUntil then: Sharding Mode. Because startups shouldn't have to choose between ramen and memory. 🍜\n\nFuture Enhancements\nAuto-archive old months to cold storage\nWeb UI for browsing conversations\nCross-file search wrapper script\nExport to other formats (Markdown, PDF)\nConversation threading visualization\n\nSupport\nGitHub Issues: github.com/stackBlock/openclaw-memvid-logger\nOpenClaw Discord: discord.com/invite/clawd\nMemvid Support: memvid.com/docs\n\nLicense\n\nMIT - See LICENSE\n\nAbout Memvid:\n\nMemvid is a single-file memory layer for AI agents: persistent, versioned, and portable memory without databases. Replace complex RAG pipelines with a single portable file you own, and give your agent instant retrieval and long-term memory."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/stackBlock/openclaw-memvid-logger",
    "publisherUrl": "https://clawhub.ai/stackBlock/openclaw-memvid-logger",
    "owner": "stackBlock",
    "version": "1.2.6",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/openclaw-memvid-logger",
    "downloadUrl": "https://openagent3.xyz/downloads/openclaw-memvid-logger",
    "agentUrl": "https://openagent3.xyz/skills/openclaw-memvid-logger/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclaw-memvid-logger/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclaw-memvid-logger/agent.md"
  }
}