{
  "schemaVersion": "1.0",
  "item": {
    "slug": "grago",
    "name": "Grago",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/solsuk/grago",
    "canonicalUrl": "https://clawhub.ai/solsuk/grago",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/grago",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=grago",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SECURITY.md",
      "SKILL.md",
      "grago.sh",
      "install.sh"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/grago"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/grago",
    "agentPageUrl": "https://openagent3.xyz/skills/grago/agent",
    "manifestUrl": "https://openagent3.xyz/skills/grago/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/grago/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Grago",
        "body": "Delegate research and data-fetch tasks to a free local LLM. Save tokens. Use your machine.\n\nGrago bridges the gap between your OpenClaw agent and local LLMs (Ollama, llama.cpp, etc.) that can't use tools natively. It runs shell scripts to fetch live data from the web, APIs, and local files — then pipes the results into your local model with a focused prompt.\n\nYour cloud model stays sharp. Your local machine does the grunt work. Your token bill drops."
      },
      {
        "title": "⚠️ Security Model",
        "body": "Grago executes shell commands. This is intentional — it's the only way to give tool-less local LLMs access to external data.\n\nSafe for: Trusted, single-user environments (your own Mac Mini, VPS, workstation)\nNOT safe for: Multi-tenant systems, public APIs, untrusted agents\n\nIf your OpenClaw agent is compromised via prompt injection, Grago can execute arbitrary commands. This is the trade-off for free local compute. Read SECURITY.md in the repo for full details."
      },
      {
        "title": "When to Use This Skill",
        "body": "Use Grago when:\n\nYou need live data fetched (web pages, APIs, RSS feeds, logs)\nThe task is research-heavy and doesn't need your primary model\nYou want to keep data on your own machine (privacy)\nYou want to save tokens by offloading analysis to a local LLM"
      },
      {
        "title": "How It Works",
        "body": "Fetch — Shell scripts pull live data (curl, jq, grep, etc.)\nAnalyze — Results are piped to your local Ollama model with a prompt\nReturn — Structured analysis comes back to your OpenClaw agent"
      },
      {
        "title": "Usage",
        "body": "# Fetch a URL and analyze locally\ngrago fetch \"https://example.com\" \\\n  --analyze \"Summarize the key points\" \\\n  --model gemma2\n\n# Multi-source research from a YAML config\ngrago research \\\n  --sources sources.yaml \\\n  --prompt \"What are the main themes across these sources?\"\n\n# Pipe any shell command into your local model\ngrago pipe \\\n  --fetch \"curl -s https://api.example.com/data\" \\\n  --transform \"jq .results\" \\\n  --analyze \"Identify trends and flag outliers\""
      },
      {
        "title": "Configuration",
        "body": "Config file: ~/.grago/config.yaml\n\ndefault_model: gemma2        # Your preferred Ollama model\ntimeout: 30                  # Seconds per fetch\nmax_input_chars: 16000       # Input truncation limit\noutput_format: markdown      # markdown | json | text"
      },
      {
        "title": "Requirements",
        "body": "Ollama installed and running locally (install.sh handles this)\nAt least one model pulled in Ollama (gemma2, mistral, llama3, etc.)\nbash, curl, jq"
      },
      {
        "title": "Installation",
        "body": "git clone https://github.com/solsuk/grago.git\ncd grago && ./install.sh"
      },
      {
        "title": "Notes for the Agent",
        "body": "Prefer pipe mode over fetch --analyze for reliability (avoids Ollama TTY spinner issues)\nDefault model is whatever is set in ~/.grago/config.yaml; override per-call with --model\nInput is truncated to max_input_chars before being sent to the local model\nLocal model responses can be slow (5–30s depending on hardware and model size) — this is expected\nGrago is for research and fetch delegation — not for tasks requiring your primary model's reasoning"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/solsuk/grago",
    "publisherUrl": "https://clawhub.ai/solsuk/grago",
    "owner": "solsuk",
    "version": "1.0.1",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/grago",
    "downloadUrl": "https://openagent3.xyz/downloads/grago",
    "agentUrl": "https://openagent3.xyz/skills/grago/agent",
    "manifestUrl": "https://openagent3.xyz/skills/grago/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/grago/agent.md"
  }
}