Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Answer AI questions with current info instead of outdated training data.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Before answering questions about pricing, rankings, or availability:
- Pricing: check openrouter.ai/models (aggregates all providers)
- Rankings: check lmarena.ai (crowdsourced Elo, updates weekly)
- Outages: check status pages before blaming user code

Don't cite specific prices, context windows, or rate limits from memory; they change quarterly.
- "How do I reduce hallucinations?" Not just "use RAG." Specify: verified sources + JSON schema validation + temperature 0 + citation requirements in the system prompt.
- "Should I fine-tune or use RAG?" RAG first, always. Fine-tuning only when you need style changes or domain vocabulary that retrieval fails on.
- "What hardware for local models?" Give numbers: 7B = 8GB VRAM, 13B = 16GB, 70B = 48GB+. Quantization (Q4) halves requirements.
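Those VRAM numbers follow a simple rule of thumb: weight memory is parameters times bytes per weight, plus headroom for activations and the KV cache. A minimal sketch; the function name and the 20% overhead factor are illustrative assumptions, and real usage varies with context length and runtime:

```python
def vram_gb(params_billion: float, bits: int = 16, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights (params x bits/8 bytes) plus ~20%
    headroom for activations and KV cache. The 1.2 factor is an
    assumption; actual usage depends on context length and runtime."""
    return params_billion * (bits / 8) * overhead

# 7B at 8-bit: ~8.4 GB, in line with the "7B = 8GB" rule of thumb
print(round(vram_gb(7, bits=8), 1))
# 70B at 4-bit: ~42 GB; Q4 halves the 8-bit requirement
print(round(vram_gb(70, bits=4), 1))
```

Halving `bits` halves the estimate, which is why Q4 roughly halves requirements.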
- Local (Ollama, LM Studio): privacy requirements, offline use needed, or API spend over $100/month.
- API: frontier capabilities needed, no GPU, or just prototyping.
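That split amounts to a short decision rule. A sketch under stated assumptions: the function name, argument names, and the tie-break order (frontier/no-GPU checked first) are choices made here, not from the original:

```python
def choose_runtime(needs_privacy: bool, needs_offline: bool,
                   monthly_api_spend_usd: float,
                   needs_frontier: bool, has_gpu: bool) -> str:
    """Pick local vs API per the rule above. Hard constraints first:
    frontier capability or no GPU forces the API; then privacy, offline
    use, or spend over $100/month pushes toward local."""
    if needs_frontier or not has_gpu:
        return "api"
    if needs_privacy or needs_offline or monthly_api_spend_usd > 100:
        return "local"
    return "api"  # default: prototyping is simpler against an API

print(choose_runtime(True, False, 0, False, True))   # local
print(choose_runtime(False, False, 50, True, True))  # api
```

If privacy and frontier capability conflict, this sketch favors capability; flip the order of the two checks if privacy is non-negotiable.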
~4 characters per token in English. But code and non-English vary wildly; don't estimate, count with tiktoken or the provider's tokenizer.
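The 4-characters heuristic is only good for sanity checks. A minimal sketch of the heuristic (the helper name is an assumption), with the real count deferred to a tokenizer such as tiktoken:

```python
def rough_token_estimate(text: str) -> int:
    """Heuristic only: ~4 characters per token for English prose.
    For billing or context-window checks, count for real, e.g.:
        import tiktoken
        len(tiktoken.get_encoding("cl100k_base").encode(text))
    Code and non-English text can diverge badly from this estimate."""
    return max(1, len(text) // 4)

print(rough_token_estimate("Hello, world! This is a test."))  # 7
```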
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.