Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Run and interact with AI agents.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
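The listing's extraction step ("Extract archive") can also be done by hand before the handoff. A minimal sketch, assuming a .zip download named skill-package.zip; adjust names and paths to your actual download:

```bash
# Assumed archive name and destination; adjust to your actual download
unzip skill-package.zip -d ./skill-package

# Confirm the primary doc is present before handing off to the agent
ls ./skill-package/SKILL.md
```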
Use when an alternative AI agent is better suited to a task: for example, working with sensitive data, handling simple tasks with a cheap local agent, or accessing specialist models with unique capabilities.
Use this skill to execute ramalama tasks in a consistent, low-risk workflow. Prefer local discovery (--help, local config files, existing project scripts) before making assumptions about flags or runtime defaults.

Prefer ramalama when tasks need:
- flexible model sourcing (hf://, oci://, rlcr://, url://); see the pull sketches below
- containerized local inference with runtime/network/device controls
- RAG data packaging and serving
- benchmark/perplexity evaluation
- model conversion and registry push/pull flows
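As an illustration of the sourcing schemes, a few hedged pull sketches; the hf:// reference is reused from the serving example further down, while the oci:// and url:// references are placeholders, not verified artifacts:

```bash
# Hugging Face source (reference reused from the serving example below)
ramalama pull hf://unsloth/gemma-3-270m-it-GGUF

# OCI registry source (placeholder image reference)
ramalama pull oci://quay.io/example/model:latest

# Plain URL source (placeholder URL)
ramalama pull url://example.com/models/model.gguf
```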
Run these checks before first invocation in a session:

```bash
ramalama version
podman info >/dev/null 2>&1 || docker info >/dev/null 2>&1
ramalama run --help
```

If serving on the default port, verify availability:

```bash
lsof -i :8080
```
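A small preflight helper that wraps these checks into one script; this is a sketch assuming these commands are sufficient for your environment:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Fail fast if ramalama itself is missing
ramalama version

# Prefer podman, fall back to docker; abort if neither engine responds
if podman info >/dev/null 2>&1; then
  engine=podman
elif docker info >/dev/null 2>&1; then
  engine=docker
else
  echo "no container engine available" >&2
  exit 1
fi
echo "using engine: $engine"

# Warn if the default serve port is already taken
if lsof -i :8080 >/dev/null 2>&1; then
  echo "port 8080 is in use; pass -p <port> to ramalama serve" >&2
fi
```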
- One-shot inference: `ramalama run <model> "<prompt>"`
- Interactive chat loop: `ramalama run <model>`
- Serve an OpenAI-compatible endpoint: `ramalama serve <model>`
- Query an existing endpoint: `ramalama chat --url <url> "<prompt>"`
- Build a knowledge bundle from files/URLs: `ramalama rag <paths...> <destination>`
- Evaluate model performance/quality: `ramalama bench <model>` and `ramalama perplexity <model>`
- Inspect/source lifecycle operations: `inspect`, `pull`, `push`, `convert`, `list`, `rm`
Start with top-level discovery:

```bash
ramalama --help
ramalama version
```

Apply global options before the subcommand when needed:

```bash
ramalama [--debug|--quiet] [--dryrun] [--engine podman|docker] [--nocontainer] [--runtime llama.cpp|vllm|mlx] [--store <path>] <subcommand> ...
```

Use command-level help before invoking unknown flags:

```bash
ramalama <subcommand> --help
```
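One concrete instance of the global-option ordering, using --dryrun from the list above to preview a run without executing it; the model and prompt are illustrative:

```bash
# Global options go before the subcommand; --dryrun previews instead of running
ramalama --dryrun --engine podman run granite3.3:2b "test prompt"
```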
One-shot inference:

```bash
ramalama run granite3.3:2b "Summarize this in 3 bullets: <text>"
```
Serve an OpenAI-compatible endpoint, then query it:

```bash
ramalama serve -d granite3.3:2b
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"granite3.3:2b","messages":[{"role":"user","content":"Hello"}]}'
```
Serve a model sourced directly from Hugging Face:

```bash
ramalama serve hf://unsloth/gemma-3-270m-it-GGUF
```
Build a RAG bundle and use it at run time:

```bash
ramalama rag ./docs my-rag
ramalama run --rag my-rag granite3.3:2b "What are the auth requirements?"
```
Evaluate model performance:

```bash
ramalama bench granite3.3:2b
ramalama benchmarks list
```
For agent automation, prefer explicit and deterministic flags:

```bash
ramalama --engine podman run -c 4096 --pull missing granite3.3:2b "<prompt>"
```

Recommended defaults:
- set `--engine` explicitly when the environment is mixed
- start with a smaller `-c`/`--ctx-size` on constrained hosts
- use `--pull missing` for faster repeat runs
- use one-shot, non-interactive invocation for scripts; see the wrapper sketch below
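A sketch of a non-interactive wrapper that applies these defaults; the engine, model, and context size are assumptions to adjust for your host:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Deterministic defaults for scripted, one-shot use (override via environment)
engine="${ENGINE:-podman}"
model="${MODEL:-granite3.3:2b}"
ctx="${CTX:-4096}"

# Require a prompt argument so the CLI never drops into interactive mode
[ $# -ge 1 ] || { echo "usage: $0 <prompt>" >&2; exit 2; }

ramalama --engine "$engine" run -c "$ctx" --pull missing "$model" "$1"
```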
- Docker socket unavailable: verify Docker is running, or use `--engine podman`
- Podman socket unavailable: check `podman machine list` and start the machine if needed
- Timed out during startup: inspect container logs (`podman logs <container>`), reduce context (`-c 4096`), and retry
- Memory allocation failure: use a smaller model and/or a lower context size
- Port conflict on 8080: choose an alternate port via `-p <port>`; see the port-probe sketch below
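For the port-conflict case, a hedged sketch that probes for a free port before serving; the fallback range is arbitrary and the model is illustrative:

```bash
# Try the default port first, then fall back through a small arbitrary range
for port in 8080 8081 8082; do
  if ! lsof -i ":$port" >/dev/null 2>&1; then
    ramalama serve -p "$port" -d granite3.3:2b
    echo "serving on port $port"
    break
  fi
done
```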
- `serve` exposes an OpenAI-compatible endpoint for external clients.
- Prefer JSON output flags where available (`list --json`, `inspect --json`) for robust parsing in automation; see the jq sketch below.
- Use `ramalama chat --url <endpoint>` when the model is already served elsewhere.
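A parsing sketch assuming jq is installed; the `name` field is an assumption about the JSON shape, so inspect the raw output first:

```bash
# Inspect the raw JSON shape before relying on specific fields
ramalama list --json | jq '.'

# Example extraction; "name" is an assumed key, verify it in the output above
ramalama list --json | jq -r '.[].name'
```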
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.