{
  "schemaVersion": "1.0",
  "item": {
    "slug": "ramalama-cli",
    "name": "RamaLama CLI",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/ieaves/ramalama-cli",
    "canonicalUrl": "https://clawhub.ai/ieaves/ramalama-cli",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/ramalama-cli",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=ramalama-cli",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "references/models.md",
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
        "contentDisposition": "attachment; filename=\"afrexai-annual-report-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/ramalama-cli"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/ramalama-cli",
    "agentPageUrl": "https://openagent3.xyz/skills/ramalama-cli/agent",
    "manifestUrl": "https://openagent3.xyz/skills/ramalama-cli/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/ramalama-cli/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Ramalama CLI",
        "body": "Use when an alternative AI agent is better suited to a task. For example, working with sensitive data or solving simple tasks with a cheap and local agent, or accessing specialist models with unique capabilities."
      },
      {
        "title": "Overview",
        "body": "Use this skill to execute ramalama tasks in a consistent, low-risk workflow.\nPrefer local discovery (--help, local config files, existing project scripts) before making assumptions about flags or runtime defaults.\n\nPrefer ramalama when tasks need:\n\nflexible model sourcing (hf://, oci://, rlcr://, url://)\ncontainerized local inference with runtime/network/device controls\nRAG data packaging and serving\nbenchmark/perplexity evaluation\nmodel conversion and registry push/pull flows"
      },
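      {
        "title": "Example: model sourcing schemes",
        "body": "A minimal sketch of the source schemes listed above; the hf:// path reuses the recipe later in this doc, and the oci:// reference is illustrative, not a real image:\n\nramalama pull hf://unsloth/gemma-3-270m-it-GGUF\nramalama pull oci://quay.io/example/granite3.3:2b\n\nCheck ramalama pull --help for the exact url:// and rlcr:// forms on your version."
      },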
      {
        "title": "Preflight",
        "body": "Run these checks before first invocation in a session:\n\nramalama version\npodman info >/dev/null 2>&1 || docker info >/dev/null 2>&1\nramalama run --help\n\nIf serving on default port, verify availability:\n\nlsof -i :8080"
      },
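      {
        "title": "Example: scripted preflight",
        "body": "A minimal sketch combining the checks above into a single guard for scripts (the exit codes and warning message are assumptions to adjust):\n\nramalama version || exit 1\npodman info >/dev/null 2>&1 || docker info >/dev/null 2>&1 || exit 1\nlsof -i :8080 >/dev/null 2>&1 && echo \"warning: port 8080 already in use\""
      },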
      {
        "title": "Decision Matrix",
        "body": "One-shot inference: ramalama run <model> \"<prompt>\"\nInteractive chat loop: ramalama run <model>\nServe OpenAI-compatible endpoint: ramalama serve <model>\nQuery an existing endpoint: ramalama chat --url <url> \"<prompt>\"\nBuild knowledge bundle from files/URLs: ramalama rag <paths...> <destination>\nEvaluate model performance/quality: ramalama bench <model> and ramalama perplexity <model>\nInspect/source lifecycle operations: inspect, pull, push, convert, list, rm"
      },
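      {
        "title": "Example: lifecycle operations",
        "body": "A short sketch of the lifecycle row above, using the model tag from the recipes below:\n\nramalama pull granite3.3:2b\nramalama inspect granite3.3:2b\nramalama list\nramalama rm granite3.3:2b"
      },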
      {
        "title": "Usage",
        "body": "Start with top-level discovery:\n\nramalama --help\nramalama version\n\nApply global options before the subcommand when needed:\n\nramalama [--debug|--quiet] [--dryrun] [--engine podman|docker] [--nocontainer] [--runtime llama.cpp|vllm|mlx] [--store <path>] <subcommand> ...\n\nUse command-level help before invoking unknown flags:\n\nramalama <subcommand> --help"
      },
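      {
        "title": "Example: global options",
        "body": "A sketch applying global options before the subcommand, per the synopsis above; --dryrun is used here on the assumption that it previews the invocation without executing it:\n\nramalama --dryrun --engine podman serve granite3.3:2b\nramalama --quiet list"
      },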
      {
        "title": "1) One-shot run",
        "body": "ramalama run granite3.3:2b \"Summarize this in 3 bullets: <text>\""
      },
      {
        "title": "2) Detached service + API call",
        "body": "ramalama serve -d granite3.3:2b\ncurl http://localhost:8080/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"granite3.3:2b\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}'"
      },
      {
        "title": "3) Direct Hugging Face source",
        "body": "ramalama serve hf://unsloth/gemma-3-270m-it-GGUF"
      },
      {
        "title": "4) RAG package then query",
        "body": "ramalama rag ./docs my-rag\nramalama run --rag my-rag granite3.3:2b \"What are the auth requirements?\""
      },
      {
        "title": "5) Benchmark and list benchmark history",
        "body": "ramalama bench granite3.3:2b\nramalama benchmarks list"
      },
      {
        "title": "Reliability Defaults",
        "body": "For agent automation, prefer explicit and deterministic flags:\n\nramalama --engine podman run -c 4096 --pull missing granite3.3:2b \"<prompt>\"\n\nRecommended defaults:\n\nset --engine explicitly when environment is mixed\nstart with smaller -c/--ctx-size on constrained hosts\nuse --pull missing for faster repeat runs\nuse one-shot non-interactive invocation for scripts"
      },
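      {
        "title": "Example: reliability wrapper",
        "body": "A minimal wrapper sketch around the deterministic invocation above; the function name is arbitrary, and the engine, context size, and model are assumptions to adjust per host:\n\nrun_local() {\n  ramalama --engine podman run -c 4096 --pull missing granite3.3:2b \"$1\"\n}\n\nrun_local \"Summarize this in 3 bullets: <text>\""
      },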
      {
        "title": "Troubleshooting",
        "body": "Docker socket unavailable:\n\nverify Docker is running, or use --engine podman\n\n\nPodman socket unavailable:\n\ncheck podman machine list and start machine if needed\n\n\ntimed out during startup:\n\ninspect container logs: podman logs <container>\nreduce context (-c 4096) and retry\n\n\nmemory allocation failure:\n\nuse a smaller model and/or lower context size\n\n\nport conflict on 8080:\n\nchoose alternate port via -p <port>"
      },
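      {
        "title": "Example: recovering from a port conflict",
        "body": "A sketch of the port-conflict fix above; 8081 is an arbitrary free port:\n\nlsof -i :8080\nramalama serve -d -p 8081 granite3.3:2b\ncurl http://localhost:8081/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"granite3.3:2b\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}'"
      },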
      {
        "title": "Notes",
        "body": "serve exposes an OpenAI-compatible endpoint for external clients.\nPrefer JSON output flags where available (list --json, inspect --json) for robust parsing in automation.\nUse ramalama chat --url <endpoint> when the model is already served elsewhere."
      }
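,
      {
        "title": "Example: JSON output in automation",
        "body": "A sketch of the JSON flags noted above, piped through jq (assumed to be installed); output shapes can vary by version, so start with an identity filter:\n\nramalama list --json | jq '.'\nramalama inspect --json granite3.3:2b | jq '.'\nramalama chat --url http://localhost:8080 \"<prompt>\""
      }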
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/ieaves/ramalama-cli",
    "publisherUrl": "https://clawhub.ai/ieaves/ramalama-cli",
    "owner": "ieaves",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/ramalama-cli",
    "downloadUrl": "https://openagent3.xyz/downloads/ramalama-cli",
    "agentUrl": "https://openagent3.xyz/skills/ramalama-cli/agent",
    "manifestUrl": "https://openagent3.xyz/skills/ramalama-cli/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/ramalama-cli/agent.md"
  }
}