โ† All skills
Tencent SkillHub ยท Developer Tools

RamaLama CLI

Run and interact with AI agents.


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
references/models.md, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (13 sections)

RamaLama CLI

Use when an alternative AI agent is better suited to a task: for example, working with sensitive data, solving simple tasks with a cheap local agent, or accessing specialist models with unique capabilities.

Overview

Use this skill to execute ramalama tasks in a consistent, low-risk workflow. Prefer local discovery (--help, local config files, existing project scripts) before making assumptions about flags or runtime defaults. Prefer ramalama when tasks need:
  • flexible model sourcing (hf://, oci://, rlcr://, url://)
  • containerized local inference with runtime/network/device controls
  • RAG data packaging and serving
  • benchmark/perplexity evaluation
  • model conversion and registry push/pull flows

Preflight

Run these checks before the first invocation in a session:
  • ramalama version
  • podman info >/dev/null 2>&1 || docker info >/dev/null 2>&1
  • ramalama run --help
If serving on the default port, verify it is available first: lsof -i :8080
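
The checks above can be combined into a small gate script, as sketched below; this assumes a POSIX shell and that lsof is installed, neither of which is a package requirement.

#!/bin/sh
# Preflight gate for a ramalama session (sketch).
ramalama version || exit 1
# At least one container engine must be reachable.
podman info >/dev/null 2>&1 || docker info >/dev/null 2>&1 || {
  echo "no container engine reachable" >&2
  exit 1
}
# Warn if something already listens on the default serve port.
if lsof -i :8080 >/dev/null 2>&1; then
  echo "port 8080 busy; pass -p <port> to ramalama serve" >&2
fi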

Decision Matrix

  • One-shot inference: ramalama run <model> "<prompt>"
  • Interactive chat loop: ramalama run <model>
  • Serve OpenAI-compatible endpoint: ramalama serve <model>
  • Query an existing endpoint: ramalama chat --url <url> "<prompt>" (sketch below)
  • Build knowledge bundle from files/URLs: ramalama rag <paths...> <destination>
  • Evaluate model performance/quality: ramalama bench <model> and ramalama perplexity <model>
  • Inspect/source lifecycle operations: inspect, pull, push, convert, list, rm
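
Of these, only chat --url lacks a worked example below; a minimal sketch, assuming a server is already listening on localhost:8080 (for instance from ramalama serve -d). The exact URL shape --url expects (with or without a /v1 suffix) can vary by version, so confirm with ramalama chat --help.

# Query an endpoint served elsewhere (sketch; URL may need a /v1 suffix)
ramalama chat --url http://localhost:8080 "What model are you running?"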

Usage

Start with top-level discovery:
  • ramalama --help
  • ramalama version
Apply global options before the subcommand when needed:
ramalama [--debug|--quiet] [--dryrun] [--engine podman|docker] [--nocontainer] [--runtime llama.cpp|vllm|mlx] [--store <path>] <subcommand> ...
Use command-level help before invoking unknown flags: ramalama <subcommand> --help
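
The --dryrun global option listed above makes option placement cheap to verify; a sketch (the exact output format depends on your engine and version):

# Print the container command that would run, without executing it
ramalama --dryrun --engine podman serve granite3.3:2b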

1) One-shot run

ramalama run granite3.3:2b "Summarize this in 3 bullets: <text>"

2) Detached service + API call

ramalama serve -d granite3.3:2b
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"granite3.3:2b","messages":[{"role":"user","content":"Hello"}]}'
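
For scripted use, the reply text can be extracted from the response; a sketch assuming jq is available (not a package dependency) and that the endpoint follows the OpenAI chat-completions schema, as the Notes section states:

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"granite3.3:2b","messages":[{"role":"user","content":"Hello"}]}' \
  | jq -r '.choices[0].message.content'   # first choice's reply text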

3) Direct Hugging Face source

ramalama serve hf://unsloth/gemma-3-270m-it-GGUF

4) RAG package then query

ramalama rag ./docs my-rag
ramalama run --rag my-rag granite3.3:2b "What are the auth requirements?"

5) Benchmark and list benchmark history

ramalama bench granite3.3:2b
ramalama benchmarks list

Reliability Defaults

For agent automation, prefer explicit and deterministic flags:
ramalama --engine podman run -c 4096 --pull missing granite3.3:2b "<prompt>"
Recommended defaults:
  • set --engine explicitly when the environment is mixed
  • start with a smaller -c/--ctx-size on constrained hosts
  • use --pull missing for faster repeat runs
  • use one-shot, non-interactive invocation for scripts
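
A minimal wrapper applying these defaults might look like the sketch below; MODEL and PROMPT are placeholders, and the flags are the ones recommended above.

#!/bin/sh
# One-shot, non-interactive run with explicit, deterministic flags (sketch).
MODEL="granite3.3:2b"
PROMPT="Summarize this changelog in 3 bullets."
ramalama --engine podman run -c 4096 --pull missing "$MODEL" "$PROMPT"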

Troubleshooting

  • Docker socket unavailable: verify Docker is running, or use --engine podman
  • Podman socket unavailable: check podman machine list and start the machine if needed
  • Timed out during startup: inspect container logs (podman logs <container>), reduce context (-c 4096), and retry
  • Memory allocation failure: use a smaller model and/or a lower context size
  • Port conflict on 8080: choose an alternate port via -p <port>
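
For the port-conflict case specifically, a quick recovery sequence (a sketch; 8081 is an arbitrary free port):

lsof -i :8080                             # identify what holds the default port
ramalama serve -d -p 8081 granite3.3:2b   # retry on an alternate port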

Notes

  • serve exposes an OpenAI-compatible endpoint for external clients.
  • Prefer JSON output flags where available (list --json, inspect --json) for robust parsing in automation.
  • Use ramalama chat --url <endpoint> when the model is already served elsewhere.
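
In automation, the JSON flags pair naturally with jq (an assumption, not a dependency); a sketch, noting that flag placement and field names may vary by version:

ramalama list --json | jq .                     # locally available models, machine-readable
ramalama inspect --json granite3.3:2b | jq .    # model metadata, machine-readable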

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package: 2 docs
  • SKILL.md (primary doc)
  • references/models.md (reference doc)