Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Advanced Qdrant vector database operations for AI agents. Semantic search, contextual document ingestion with chunking, collection management, snapshots, and...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Production-ready Qdrant vector database operations for AI agents. Complete toolkit for semantic search, document ingestion, collection management, backups, and migrations.
```bash
# Set environment variables
export QDRANT_HOST="localhost"
export QDRANT_PORT="6333"
export OPENAI_API_KEY="sk-..."

# List collections
bash manage.sh list

# Create a collection
bash manage.sh create my_collection 1536 cosine

# Ingest a document
bash ingest.sh /path/to/document.txt my_collection paragraph

# Search
bash search.sh "my search query" my_collection 5
```
| Script | Purpose | Key Features |
|---|---|---|
| search.sh | Semantic search | Multi-collection, filters, score thresholds |
| ingest.sh | Document ingestion | Contextual chunking, batch upload, progress |
| manage.sh | Collection management | Create, delete, list, info, optimize |
| backup.sh | Snapshots | Full collection snapshots, restore, list |
| migrate.sh | Migrations | Collection-to-collection, embedding model upgrades |
| Variable | Required | Default | Description |
|---|---|---|---|
| QDRANT_HOST | No | localhost | Qdrant server hostname |
| QDRANT_PORT | No | 6333 | Qdrant server port |
| OPENAI_API_KEY | Yes* | - | OpenAI API key for embeddings |
| QDRANT_API_KEY | No | - | Qdrant API key (if auth enabled) |

*Required for ingest and search operations.
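Before running any script, the defaults from the table above can be applied and the one required variable checked up front. `check_env` below is a hypothetical helper for illustration, not part of the package:

```bash
# Hypothetical helper (not part of the package): apply the documented
# defaults and fail fast when the required API key is missing.
check_env() {
  : "${QDRANT_HOST:=localhost}"   # default from the table above
  : "${QDRANT_PORT:=6333}"        # default from the table above
  if [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "error: OPENAI_API_KEY is required for ingest and search" >&2
    return 1
  fi
  echo "Using Qdrant at ${QDRANT_HOST}:${QDRANT_PORT}"
}
```

Sourcing a check like this at the top of a wrapper script turns a confusing mid-run failure into an immediate, explicit error.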
```bash
bash search.sh <query> <collection> [limit] [filter_json]
```

Examples:

```bash
# Basic search
bash search.sh "machine learning tutorials" my_docs 10

# With metadata filter
bash search.sh "deployment guide" my_docs 5 '{"must": [{"key": "category", "match": {"value": "devops"}}]}'

# Score threshold
bash search.sh "error handling" my_docs 10 "" 0.8
```

Output:

```json
{
  "results": [
    {
      "id": "doc-001",
      "score": 0.92,
      "text": "When handling errors in production...",
      "metadata": {"source": "docs/error-handling.md"}
    }
  ]
}
```
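A search call maps onto Qdrant's `POST /collections/{name}/points/search` endpoint. The sketch below shows the shape of the JSON body such a call sends; `build_search_body` is a hypothetical helper, the two-element vector is a placeholder, and the real script would first embed the query text via the OpenAI API:

```bash
# Hypothetical sketch of the search request body. The vector is a
# placeholder; in practice the query text is embedded first. Field names
# (limit, score_threshold, with_payload) follow Qdrant's search API.
build_search_body() {
  local limit=$1 threshold=$2
  printf '{"vector": [0.1, 0.2], "limit": %s, "score_threshold": %s, "with_payload": true}\n' \
    "$limit" "$threshold"
}
```

Setting `score_threshold` in the body is what the optional fifth argument above (e.g., `0.8`) presumably controls.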
```bash
bash ingest.sh <file_path> <collection> [chunk_strategy] [metadata_json]
```

Chunk Strategies:

| Strategy | Description | Best For |
|---|---|---|
| paragraph | Split by paragraphs (`\n\n`) | Articles, docs |
| sentence | Split by sentences | Short content |
| fixed | Fixed 1000 char chunks | Code, logs |
| semantic | Semantic boundaries | Long documents |

Examples:

```bash
# Ingest with paragraph chunking
bash ingest.sh article.md my_collection paragraph

# With custom metadata
bash ingest.sh api.md my_collection paragraph '{"category": "api", "version": "2.0"}'

# Ingest multiple files
for f in docs/*.md; do
  bash ingest.sh "$f" my_collection paragraph
done
```
```bash
bash manage.sh <command> [args...]
```

Commands:

| Command | Arguments | Description |
|---|---|---|
| list | - | List all collections |
| create | name dim distance | Create new collection |
| delete | name | Delete collection |
| info | name | Get collection info |
| optimize | name | Optimize collection |

Examples:

```bash
bash manage.sh list
bash manage.sh create my_vectors 1536 cosine
bash manage.sh create my_vectors 768 euclid
bash manage.sh info my_vectors
bash manage.sh optimize my_vectors
bash manage.sh delete my_vectors
```
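Collection creation maps onto `PUT /collections/{name}`. Note that Qdrant's API capitalizes distance names (`Cosine`, `Euclid`, `Dot`), so a script accepting lowercase arguments like `cosine` would have to translate them. `build_create_body` is a hypothetical helper showing the request body:

```bash
# Hypothetical helper: the body a create call sends to PUT /collections/{name}.
# Qdrant expects capitalized distance names ("Cosine", "Euclid", "Dot").
build_create_body() {
  local dim=$1 distance=$2
  printf '{"vectors": {"size": %s, "distance": "%s"}}\n' "$dim" "$distance"
}
```

The `size` field must match your embedding model's output dimension (e.g., 1536 for text-embedding-3-small), or later upserts will be rejected.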
```bash
bash backup.sh <command> [args...]
```

Commands:

| Command | Arguments | Description |
|---|---|---|
| snapshot | collection [snapshot_name] | Create snapshot |
| restore | collection snapshot_name | Restore from snapshot |
| list | collection | List snapshots |
| delete | collection snapshot_name | Delete snapshot |

Examples:

```bash
# Create snapshot
bash backup.sh snapshot my_collection
bash backup.sh snapshot my_collection backup_2026_02_10

# List snapshots
bash backup.sh list my_collection

# Restore
bash backup.sh restore my_collection backup_2026_02_10

# Delete old snapshot
bash backup.sh delete my_collection old_backup
```
```bash
bash migrate.sh <source_collection> <target_collection> [options]
```

Migration Types:

- Copy Collection: same embedding model, different name
- Model Upgrade: upgrade to a new embedding model (re-embeds)
- Filter Migration: migrate a subset with a filter

Examples:

```bash
# Simple copy
bash migrate.sh old_collection new_collection

# With model upgrade (re-embeds all content)
bash migrate.sh old_collection new_collection --upgrade-model

# Filtered migration
bash migrate.sh old_collection new_collection --filter '{"category": "public"}'

# Batch size for large collections
bash migrate.sh old_collection new_collection --batch-size 50
```
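The `--batch-size` option controls how many points move per round trip: a collection of N points takes ceil(N / batch_size) read-and-upsert passes. A small sketch of that arithmetic (`num_batches` is a hypothetical helper):

```bash
# Hypothetical sketch of the batching arithmetic: ceil(points / batch_size)
# round trips are needed to move an entire collection.
num_batches() {
  local points=$1 batch=$2
  echo $(( (points + batch - 1) / batch ))
}
```

Larger batches mean fewer round trips but bigger request payloads; for model upgrades, batch size also bounds how many texts are re-embedded per call.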
The ingest script provides intelligent chunking to preserve context:
- **paragraph**: splits on double newlines, preserves paragraph structure, and adds a two-sentence overlap between chunks. Best for articles, documentation, and blogs.
- **sentence**: splits on sentence boundaries with minimal overlap. Best for short content, tweets, and quotes.
- **fixed**: fixed 1000-character chunks with a 200-character overlap. Best for code files, logs, and unstructured text.
- **semantic**: uses paragraph and header detection to preserve document structure. Best for long documents with headers.
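The fixed strategy above can be sketched in a few lines of shell. `chunk_fixed` is a hypothetical illustration of 1000-character windows with a 200-character overlap, not the package's actual implementation:

```bash
# Hypothetical sketch of the "fixed" strategy: emit windows of `size` chars
# that overlap by `overlap` chars, so each window starts (size - overlap)
# characters after the previous one.
chunk_fixed() {
  local text=$1 size=${2:-1000} overlap=${3:-200}
  local step=$((size - overlap)) start=0 len=${#text}
  while [ "$start" -lt "$len" ]; do
    printf '%s\n' "${text:start:size}"   # substring expansion clips the tail
    start=$((start + step))
  done
}
```

The overlap means every chunk repeats the end of its predecessor, so a sentence cut at a boundary still appears intact in at least one chunk.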
All scripts use the Qdrant REST API:

```
GET    /collections                       # List collections
PUT    /collections/{name}                # Create collection
DELETE /collections/{name}                # Delete collection
GET    /collections/{name}                # Collection info
POST   /collections/{name}/points/search  # Search
PUT    /collections/{name}/points         # Upsert points
POST   /collections/{name}/snapshots      # Create snapshot
GET    /collections/{name}/snapshots      # List snapshots
```

Full docs: https://qdrant.tech/documentation/
- Batch uploads: ingest.sh automatically batches uploads (default 100)
- Optimize after bulk insert: `bash manage.sh optimize my_collection`
- Use filters: narrow search scope with metadata filters
- Set score thresholds: filter out low-quality matches
- Index metadata: add payload indexes for faster filtering
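The "Index metadata" tip corresponds to Qdrant's payload index endpoint, `PUT /collections/{name}/index`. A hypothetical sketch of the request body it expects (`build_index_body` is an illustrative helper, not part of the package):

```bash
# Hypothetical helper: body for PUT /collections/{name}/index, which creates
# a payload index on one metadata field (e.g., "category" as a keyword).
build_index_body() {
  printf '{"field_name": "%s", "field_schema": "%s"}\n' "$1" "$2"
}
```

Indexing fields you filter on often (like `category` in the search example above) lets Qdrant evaluate the filter without scanning every point's payload.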
**Connection refused**
- Check Qdrant is running: `curl http://$QDRANT_HOST:$QDRANT_PORT/healthz`
- Verify the host/port environment variables

**Collection not found**
- List collections: `bash manage.sh list`
- Check the collection name spelling

**No search results**
- Verify documents were ingested: `bash manage.sh info my_collection`
- Check that vector dimensions match (e.g., 1536 for text-embedding-3-small)
- Try lowering the score threshold

**Embedding errors**
- Verify OPENAI_API_KEY is set
- Check the API key has quota available
- Verify network access to the OpenAI API

**Snapshot failures**
- Check available disk space
- Verify Qdrant has snapshot permissions
- For large collections, try during low-traffic periods
- Qdrant server v1.0+
- curl, python3, bash
- OpenAI API key (for embeddings)
- Network access to Qdrant and OpenAI
- Qdrant Docs: https://qdrant.tech/documentation/
- OpenAI Embeddings: https://platform.openai.com/docs/guides/embeddings
- Vector Search Guide: https://qdrant.tech/documentation/concepts/search/