โ† All skills
Tencent SkillHub ยท Developer Tools

Ollama Memory Embeddings

Configure OpenClaw memory search to use Ollama as the embeddings server (OpenAI-compatible /v1/embeddings) instead of the built-in node-llama-cpp local GGUF loading. Includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
LICENSE.md, uninstall.sh, install.sh, verify.sh, README.md, watchdog.sh

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.4

Documentation

Primary doc: SKILL.md (8 sections)

Ollama Memory Embeddings

This skill configures OpenClaw memory search to use Ollama as the embeddings server via its OpenAI-compatible /v1/embeddings endpoint. Embeddings only: it does not affect chat/completions routing; it only changes how memory-search embedding vectors are generated.

What it does

  • Installs this skill under ~/.openclaw/skills/ollama-memory-embeddings
  • Verifies Ollama is installed and reachable
  • Lets the user choose an embedding model:
      - embeddinggemma (default; closest to the OpenClaw built-in)
      - nomic-embed-text (strong quality, efficient)
      - all-minilm (smallest/fastest)
      - mxbai-embed-large (highest quality, larger)
  • Optionally imports an existing local embedding GGUF into Ollama via ollama create (currently detects embeddinggemma, nomic-embed, all-minilm, and mxbai-embed GGUFs in known cache directories)
  • Normalizes model names (handles the :latest tag automatically)
  • Updates agents.defaults.memorySearch in the OpenClaw config (surgical: only touches keys this skill owns):
      - provider = "openai"
      - model = <selected model>:latest
      - remote.baseUrl = "http://127.0.0.1:11434/v1/"
      - remote.apiKey = "ollama" (required by the client, ignored by Ollama)
  • Performs a post-write config sanity check (reads the file back and validates the JSON)
  • Optionally restarts the OpenClaw gateway, detecting available restart methods: openclaw gateway restart, systemd, launchd
  • Optionally reindexes memory during install (openclaw memory index --force --verbose)
  • Runs a two-step verification:
      1. Checks the model exists in ollama list
      2. Calls the embeddings endpoint and validates the response
  • Adds an idempotent drift-enforcement command (enforce.sh)
  • Adds an optional config-drift auto-healing watchdog (watchdog.sh)
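Putting the config keys above together, the resulting memorySearch block would look roughly like this. The values are the ones this skill writes; the exact nesting of the surrounding config file is an assumption:

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "model": "embeddinggemma:latest",
        "remote": {
          "baseUrl": "http://127.0.0.1:11434/v1/",
          "apiKey": "ollama"
        }
      }
    }
  }
}
```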

Install

From the installed skill location:

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh
```

From this repository:

```bash
bash skills/ollama-memory-embeddings/install.sh
```

Non-interactive usage

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto
```

Bulletproof setup (install watchdog):

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --install-watchdog \
  --watchdog-interval 60
```

Note: In non-interactive mode, --import-local-gguf auto is treated as no (safe default). Use --import-local-gguf yes to explicitly opt in.

Options:
  • --model <id>: one of embeddinggemma, nomic-embed-text, all-minilm, mxbai-embed-large
  • --import-local-gguf <auto|yes|no>: default no (safer default; opt in with yes)
  • --import-model-name <name>: default embeddinggemma-local
  • --restart-gateway <yes|no>: default no (restart only when explicitly requested)
  • --skip-restart: deprecated alias for --restart-gateway no
  • --openclaw-config <path>: config file path override
  • --install-watchdog: install launchd drift auto-heal watchdog (macOS)
  • --watchdog-interval <sec>: watchdog interval (default 60)
  • --reindex-memory <auto|yes|no>: memory rebuild mode (default auto)
  • --dry-run: print planned changes and commands; make no modifications

Verify

```bash
~/.openclaw/skills/ollama-memory-embeddings/verify.sh
```

Use --verbose to dump the raw API response on failure:

```bash
~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose
```
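The second verification step (call the embeddings endpoint, validate the response) can be sketched in Python. The URL and model name come from this skill's defaults; the response shape assumed here is the standard OpenAI-compatible embeddings format, and the function names are illustrative, not taken from verify.sh:

```python
import json
import urllib.request


def validate_embeddings_response(body):
    """Check the response has the expected OpenAI shape and a non-empty
    numeric vector; return the vector or raise ValueError."""
    data = body.get("data")
    if not isinstance(data, list) or not data:
        raise ValueError("response has no non-empty 'data' array")
    vector = data[0].get("embedding")
    if not isinstance(vector, list) or not vector:
        raise ValueError("first item has no non-empty 'embedding'")
    if not all(isinstance(x, (int, float)) for x in vector):
        raise ValueError("embedding contains non-numeric values")
    return vector


def fetch_embedding(text, model="embeddinggemma:latest",
                    base_url="http://127.0.0.1:11434/v1"):
    """POST to the OpenAI-compatible /v1/embeddings endpoint and return
    the embedding vector for `text`."""
    req = urllib.request.Request(
        f"{base_url}/embeddings",
        data=json.dumps({"model": model, "input": text}).encode(),
        # The key is required by OpenAI-style clients but ignored by Ollama.
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer ollama"},
    )
    with urllib.request.urlopen(req) as resp:
        return validate_embeddings_response(json.load(resp))
```

A response that parses but carries an empty or malformed vector fails loudly instead of silently producing unusable memory indexes, which is the point of the second verification step.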

Drift enforcement and auto-heal

Manually enforce the desired state (safe to run repeatedly):

```bash
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --model embeddinggemma \
  --openclaw-config ~/.openclaw/openclaw.json
```

Check for drift only:

```bash
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --check-only \
  --model embeddinggemma
```

Run the watchdog once (check + heal):

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --once \
  --model embeddinggemma
```

Install the watchdog via launchd (macOS):

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --install-launchd \
  --model embeddinggemma \
  --interval-sec 60
```

GGUF detection scope

The installer searches known cache directories (~/.node-llama-cpp/models, ~/.cache/node-llama-cpp/models, ~/.cache/openclaw/models) for embedding GGUFs matching these patterns:

  • *embeddinggemma*.gguf
  • *nomic-embed*.gguf
  • *all-minilm*.gguf
  • *mxbai-embed*.gguf

Other embedding GGUFs are not auto-detected. You can always import manually:

```bash
ollama create my-model -f /path/to/Modelfile
```
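For the manual import path, a minimal Modelfile only needs a FROM line pointing at the GGUF on disk. The file path and model name below are placeholders:

```
# Modelfile — hypothetical path to your embedding GGUF
FROM /path/to/your-embedding-model.gguf
```

Then register it with Ollama under whatever name you want to select in the installer, e.g. `ollama create my-embed -f ./Modelfile`.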

Notes

  • This does not modify OpenClaw package code. It only updates user config.
  • A timestamped backup of the config is written before changes.
  • If no local GGUF exists, the install proceeds by pulling the selected model from Ollama.
  • Model names are normalized with the :latest tag for consistent Ollama interaction.
  • If the embedding model changes, rebuild/re-embed existing memory vectors to avoid retrieval mismatch across incompatible vector spaces.
  • With --reindex-memory auto, the installer reindexes only when the effective embedding fingerprint changed (provider, model, baseUrl, apiKey presence).
  • Drift checks require a non-empty apiKey but do not require a literal "ollama" value.
  • Config backups are created only when a write is needed.
  • Legacy schema fallback is supported: if agents.defaults.memorySearch is absent, the enforcer reads known legacy paths and mirrors writes to preserve compatibility.
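The "effective embedding fingerprint" reindex trigger described in the notes can be sketched as follows. The field names mirror the config keys this skill writes; the hashing scheme and function names are assumptions, not the installer's actual implementation:

```python
import hashlib
import json


def embedding_fingerprint(cfg):
    """Fingerprint the settings that determine the embedding vector space.
    If any of these change, stored memory vectors may be incompatible and
    should be reindexed. Note: only the *presence* of an apiKey matters,
    never its value."""
    remote = cfg.get("remote", {})
    material = {
        "provider": cfg.get("provider"),
        "model": cfg.get("model"),
        "baseUrl": remote.get("baseUrl"),
        "hasApiKey": bool(remote.get("apiKey")),
    }
    # Canonical JSON so key order never changes the hash.
    return hashlib.sha256(
        json.dumps(material, sort_keys=True).encode()
    ).hexdigest()


def needs_reindex(old_cfg, new_cfg):
    """True when the embedding fingerprint changed between two configs."""
    return embedding_fingerprint(old_cfg) != embedding_fingerprint(new_cfg)
```

Under this scheme, rotating the apiKey value leaves the fingerprint unchanged (so no reindex), while switching models or the baseUrl changes it and triggers a rebuild.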

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
4 scripts, 2 docs
  • LICENSE.md (docs)
  • README.md (docs)
  • install.sh (script)
  • uninstall.sh (script)
  • verify.sh (script)
  • watchdog.sh (script)