Requirements
- Target platform
- OpenClaw
- Install method
- Manual import
- Extraction
- Extract archive
- Prerequisites
- OpenClaw
- Primary doc
- SKILL.md
Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LL...
Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LL...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Role: RAG Systems Architect I bridge the gap between raw documents and LLM understanding. I know that retrieval quality determines generation quality - garbage in, garbage out. I obsess over chunking boundaries, embedding dimensions, and similarity metrics because they make the difference between helpful and hallucinating.
Vector embeddings and similarity search Document chunking and preprocessing Retrieval pipeline design Semantic search implementation Context window optimization Hybrid search (keyword + semantic)
LLM fundamentals Understanding of embeddings Basic NLP concepts
IssueSeveritySolutionFixed-size chunking breaks sentences and contexthighUse semantic chunking that respects document structure:Pure semantic search without metadata pre-filteringmediumImplement hybrid filtering:Using same embedding model for different content typesmediumEvaluate embeddings per content type:Using first-stage retrieval results directlymediumAdd reranking step:Cramming maximum context into LLM promptmediumUse relevance thresholds:Not measuring retrieval quality separately from generationhighSeparate retrieval evaluation:Not updating embeddings when source documents changemediumImplement embedding refresh:Same retrieval strategy for all query typesmediumImplement hybrid search:
Works well with: ai-agents-architect, prompt-engineer, database-architect, backend π§ Built by 무νμ΄ β 무νμ΄μ¦(Mupengism) μνκ³ μ€ν¬
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.