Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
AI memory system for coding agents — code index + cognitive facts, persistent across sessions.
Hand the extracted package to your coding agent with a concrete install brief, rather than working through the installation manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
AI memory system for coding agents with BM25 + vector hybrid search. Provides 6 MCP tools for intelligent code/doc search and cognitive fact recording.
- Dual memory: corpus (code index) + cognition (episodic facts)
- Hybrid search: BM25 full-text + vector semantic search with RRF fusion
- Structure-aware chunking: Markdown, Python, Rust, JavaScript, plain text
- MCP Server: 6 tools (recall, learn, read, status, reindex, config)
- CJK optimized: Chinese/Japanese/Korean query detection with dynamic weight tuning
- Built-in ONNX embedding: vector search works out of the box, no Ollama required
- Graceful degradation: works without any embedding service (BM25-only mode)
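The hybrid search above merges the BM25 and vector rankings with Reciprocal Rank Fusion (RRF). A minimal sketch of the standard RRF formula, for illustration only — the `rrf_fuse` helper and `k=60` (the conventional constant) are assumptions, not confirmed index1 internals:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Fuse ranked result lists with Reciprocal Rank Fusion.

    Each ranking is a list of doc ids, best first. A document's fused
    score is the sum of 1 / (k + rank) over every list it appears in.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["chunker.py", "search.py", "index.py"]      # keyword ranking
vector = ["search.py", "embed.py", "chunker.py"]    # semantic ranking
print(rrf_fuse([bm25, vector]))
# -> ['search.py', 'chunker.py', 'embed.py', 'index.py']
```

Documents ranked well by both retrievers (here `search.py` and `chunker.py`) float to the top, which is why RRF is a common default for combining keyword and semantic search.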
```shell
# Recommended
pipx install index1

# Or via pip
pip install index1

# Or via npm (auto-installs the Python package)
npx index1@latest
```

One-click plugin setup:

```shell
index1 setup  # Auto-configure hooks + MCP for Claude Code
```

Verify:

```shell
index1 --version
index1 doctor  # Check environment
```
Create `.mcp.json` in your project root:

```json
{
  "mcpServers": {
    "index1": {
      "type": "stdio",
      "command": "index1",
      "args": ["serve"]
    }
  }
}
```

If `index1` is not in PATH, use the full path from `which index1`.
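If you prefer to generate the file rather than hand-edit it, a small helper can resolve the binary path the same way `which index1` does. The `mcp_config` function below is a hypothetical convenience sketch, not part of index1:

```python
import json
import shutil

def mcp_config(command: str = "index1") -> dict:
    # Resolve the absolute path, mimicking `which index1`;
    # fall back to the bare name if the binary is not on PATH.
    path = shutil.which(command) or command
    return {
        "mcpServers": {
            "index1": {"type": "stdio", "command": path, "args": ["serve"]}
        }
    }

# Write the config into the current project root.
with open(".mcp.json", "w") as f:
    json.dump(mcp_config(), f, indent=2)
```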
Add to your project's `.claude/CLAUDE.md`:

```markdown
## Search Strategy
This project has the index1 MCP Server configured (recall + 5 other tools).
When searching code:
1. Known identifiers (function/class/file names) -> Grep/Glob directly (4ms)
2. Exploratory questions ("how does XX work") -> recall first, then Grep for details
3. CJK query for English code -> must use recall (Grep can't cross languages)
4. High-frequency keywords (50+ expected matches) -> prefer recall (saves 90%+ context)
```

Impact: without these rules, Grep "search" returns 881 lines (35,895 tokens); with them, recall returns 5 summaries (460 tokens, a 97% savings).
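The four routing rules can be sketched as a small decision function. This is purely illustrative: `choose_search_tool`, the Unicode ranges used for CJK detection, and the 50-match threshold mirror the CLAUDE.md guidance above, not index1 internals:

```python
def choose_search_tool(query, known_identifier=False, expected_matches=0):
    """Route a code-search query to 'grep' or 'recall' per the rules above."""
    def has_cjk(text):
        return any(
            "\u4e00" <= ch <= "\u9fff"     # CJK unified ideographs
            or "\u3040" <= ch <= "\u30ff"  # Japanese kana
            or "\uac00" <= ch <= "\ud7af"  # Hangul syllables
            for ch in text
        )
    if has_cjk(query):
        return "recall"   # rule 3: Grep can't match CJK queries to English code
    if known_identifier:
        return "grep"     # rule 1: exact names go straight to Grep/Glob
    if expected_matches >= 50:
        return "recall"   # rule 4: high-frequency keyword, save context
    return "recall"       # rule 2: exploratory question, recall first

print(choose_search_tool("HybridSearcher", known_identifier=True))  # -> grep
```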
```shell
index1 index ./src ./docs    # Index source and docs
index1 status                # Check index stats
index1 search "your query"   # Test search
```
index1 v2 has built-in ONNX embedding (bge-small-en-v1.5). For better multilingual support:

```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama pull nomic-embed-text        # Standard, 270MB
# or
ollama pull bge-m3                  # Best for CJK, 1.2GB
index1 config embed_backend ollama
index1 doctor                       # Verify setup
```

Without Ollama, the built-in ONNX embedding still provides vector search out of the box.
```shell
index1 web               # Start Web UI on port 6888
index1 web --port 8080   # Custom port
```
| Tool | Description |
|------|-------------|
| recall | Unified search: code + cognitive facts, BM25 + vector hybrid |
| learn | Record insights, decisions, lessons learned (auto-classify + dedup) |
| read | Read file content + index metadata |
| status | Index and cognition statistics |
| reindex | Rebuild index for a path or collection |
| config | View or modify configuration |
| Issue | Fix |
|-------|-----|
| Tools not showing | Check `.mcp.json` format and the index1 path |
| AI doesn't use recall | Add search rules to CLAUDE.md |
| `command not found` | Use the full path from `which index1` |
| Chinese search returns 0 results | Install Ollama + the bge-m3 model |
| No vector search | Built-in ONNX should work; run `index1 doctor` |
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.