{
  "schemaVersion": "1.0",
  "item": {
    "slug": "ollama-memory-embeddings",
    "name": "Ollama Memory Embeddings",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/vidarbrekke/ollama-memory-embeddings",
    "canonicalUrl": "https://clawhub.ai/vidarbrekke/ollama-memory-embeddings",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/ollama-memory-embeddings",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=ollama-memory-embeddings",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "LICENSE.md",
      "uninstall.sh",
      "install.sh",
      "verify.sh",
      "README.md",
      "watchdog.sh"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/ollama-memory-embeddings"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/ollama-memory-embeddings",
    "agentPageUrl": "https://openagent3.xyz/skills/ollama-memory-embeddings/agent",
    "manifestUrl": "https://openagent3.xyz/skills/ollama-memory-embeddings/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/ollama-memory-embeddings/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Ollama Memory Embeddings",
        "body": "This skill configures OpenClaw memory search to use Ollama as the embeddings\nserver via its OpenAI-compatible /v1/embeddings endpoint.\n\nEmbeddings only. This skill does not affect chat/completions routing —\nit only changes how memory-search embedding vectors are generated."
      },
      {
        "title": "What it does",
        "body": "Installs this skill under ~/.openclaw/skills/ollama-memory-embeddings\nVerifies Ollama is installed and reachable\nLets the user choose an embedding model:\n\nembeddinggemma (default — closest to OpenClaw built-in)\nnomic-embed-text (strong quality, efficient)\nall-minilm (smallest/fastest)\nmxbai-embed-large (highest quality, larger)\n\n\nOptionally imports an existing local embedding GGUF into Ollama via\nollama create (currently detects embeddinggemma, nomic-embed, all-minilm,\nand mxbai-embed GGUFs in known cache directories)\nNormalizes model names (handles :latest tag automatically)\nUpdates agents.defaults.memorySearch in OpenClaw config (surgical — only\ntouches keys this skill owns):\n\nprovider = \"openai\"\nmodel = <selected model>:latest\nremote.baseUrl = \"http://127.0.0.1:11434/v1/\"\nremote.apiKey = \"ollama\" (required by client, ignored by Ollama)\n\n\nPerforms a post-write config sanity check (reads back and validates JSON)\nOptionally restarts the OpenClaw gateway (with detection of available\nrestart methods: openclaw gateway restart, systemd, launchd)\nOptional memory reindex during install (openclaw memory index --force --verbose)\nRuns a two-step verification:\n\nChecks model exists in ollama list\nCalls the embeddings endpoint and validates the response\n\n\nAdds an idempotent drift-enforcement command (enforce.sh)\nAdds optional config drift auto-healing watchdog (watchdog.sh)"
      },
      {
        "title": "Install",
        "body": "bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh\n\nFrom this repository:\n\nbash skills/ollama-memory-embeddings/install.sh"
      },
      {
        "title": "Non-interactive usage",
        "body": "bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \\\n  --non-interactive \\\n  --model embeddinggemma \\\n  --reindex-memory auto\n\nBulletproof setup (install watchdog):\n\nbash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \\\n  --non-interactive \\\n  --model embeddinggemma \\\n  --reindex-memory auto \\\n  --install-watchdog \\\n  --watchdog-interval 60\n\nNote: In non-interactive mode, --import-local-gguf auto is treated as\nno (safe default). Use --import-local-gguf yes to explicitly opt in.\n\nOptions:\n\n--model <id>: one of embeddinggemma, nomic-embed-text, all-minilm, mxbai-embed-large\n--import-local-gguf <auto|yes|no>: default no (safer default; opt in with yes)\n--import-model-name <name>: default embeddinggemma-local\n--restart-gateway <yes|no>: default no (restart only when explicitly requested)\n--skip-restart: deprecated alias for --restart-gateway no\n--openclaw-config <path>: config file path override\n--install-watchdog: install launchd drift auto-heal watchdog (macOS)\n--watchdog-interval <sec>: watchdog interval (default 60)\n--reindex-memory <auto|yes|no>: memory rebuild mode (default auto)\n--dry-run: print planned changes and commands; make no modifications"
      },
      {
        "title": "Verify",
        "body": "~/.openclaw/skills/ollama-memory-embeddings/verify.sh\n\nUse --verbose to dump raw API response on failure:\n\n~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose"
      },
      {
        "title": "Drift enforcement and auto-heal",
        "body": "Manually enforce desired state (safe to run repeatedly):\n\n~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \\\n  --model embeddinggemma \\\n  --openclaw-config ~/.openclaw/openclaw.json\n\nCheck for drift only:\n\n~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \\\n  --check-only \\\n  --model embeddinggemma\n\nRun watchdog once (check + heal):\n\n~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \\\n  --once \\\n  --model embeddinggemma\n\nInstall watchdog via launchd (macOS):\n\n~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \\\n  --install-launchd \\\n  --model embeddinggemma \\\n  --interval-sec 60"
      },
      {
        "title": "GGUF detection scope",
        "body": "The installer searches for embedding GGUFs matching these patterns in known\ncache directories (~/.node-llama-cpp/models, ~/.cache/node-llama-cpp/models,\n~/.cache/openclaw/models):\n\n*embeddinggemma*.gguf\n*nomic-embed*.gguf\n*all-minilm*.gguf\n*mxbai-embed*.gguf\n\nOther embedding GGUFs are not auto-detected. You can always import manually:\n\nollama create my-model -f /path/to/Modelfile"
      },
      {
        "title": "Notes",
        "body": "This does not modify OpenClaw package code. It only updates user config.\nA timestamped backup of config is written before changes.\nIf no local GGUF exists, install proceeds by pulling the selected model from Ollama.\nModel names are normalized with :latest tag for consistent Ollama interaction.\nIf embedding model changes, rebuild/re-embed existing memory vectors to avoid\nretrieval mismatch across incompatible vector spaces.\nWith --reindex-memory auto, installer reindexes only when the effective\nembedding fingerprint changed (provider, model, baseUrl, apiKey presence).\nDrift checks require a non-empty apiKey but do not require a literal \"ollama\" value.\nConfig backups are created only when a write is needed.\nLegacy schema fallback is supported: if agents.defaults.memorySearch is absent,\nthe enforcer reads known legacy paths and mirrors writes to preserve compatibility."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/vidarbrekke/ollama-memory-embeddings",
    "publisherUrl": "https://clawhub.ai/vidarbrekke/ollama-memory-embeddings",
    "owner": "vidarbrekke",
    "version": "1.0.4",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/ollama-memory-embeddings",
    "downloadUrl": "https://openagent3.xyz/downloads/ollama-memory-embeddings",
    "agentUrl": "https://openagent3.xyz/skills/ollama-memory-embeddings/agent",
    "manifestUrl": "https://openagent3.xyz/skills/ollama-memory-embeddings/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/ollama-memory-embeddings/agent.md"
  }
}