{
  "schemaVersion": "1.0",
  "item": {
    "slug": "parallel-ai-search",
    "name": "Parallel AI search",
    "source": "tencent",
    "type": "skill",
    "category": "Development tools",
    "sourceUrl": "https://clawhub.ai/tristanmanchester/parallel-ai-search",
    "canonicalUrl": "https://clawhub.ai/tristanmanchester/parallel-ai-search",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/parallel-ai-search",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=parallel-ai-search",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "references/command-templates.md",
      "references/troubleshooting.md",
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "slug": "parallel-ai-search",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-06T21:52:05.334Z",
      "expiresAt": "2026-05-13T21:52:05.334Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=parallel-ai-search",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=parallel-ai-search",
        "contentDisposition": "attachment; filename=\"parallel-ai-search-1.0.3.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "parallel-ai-search"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/parallel-ai-search"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/parallel-ai-search",
    "agentPageUrl": "https://openagent3.xyz/skills/parallel-ai-search/agent",
    "manifestUrl": "https://openagent3.xyz/skills/parallel-ai-search/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/parallel-ai-search/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Parallel AI Search (CLI Master)",
        "body": "This is a single “master” skill that replaces the earlier Node-script-based version of parallel-ai-search.\n\nIt routes to the right parallel-cli capability for the task:\n\nSearch: quick web lookup with citations (parallel-cli search)\nExtract: turn URLs (including PDFs and JS-heavy pages) into clean, LLM-ready text (parallel-cli extract)\nDeep research: multi-source reports with processor tiers (parallel-cli research ...)\nEnrich: add web-sourced columns to CSV/JSON (parallel-cli enrich ...)\nFindAll: discover entities from the web with optional enrichments (parallel-cli findall ...)\nMonitor: track web changes on a cadence, optionally via webhook (parallel-cli monitor ...)"
      },
      {
        "title": "Routing rules (pick ONE)",
        "body": "Choose the smallest / cheapest action that solves the user’s request:\n\nExtract — if the user gives one or more URLs or says “read/summarise this page”, “extract”, “quote”, “pull the content”, “what does this page say”.\nDeep research — ONLY if the user explicitly asks for deep, exhaustive, comprehensive, thorough investigation, or a multi-source “report”.\nEnrich — if the user provides a list/table (CSV/JSON/inline objects) and wants new columns like CEO, revenue, funding, contact info, etc.\nFindAll — if the user wants you to discover many entities (companies/people/venues/etc.) that match criteria.\nMonitor — if the user wants ongoing tracking (“alert me”, “track changes”, “monitor this weekly”) rather than a one-off answer.\nSearch — default for everything else that needs current web info or citations.\n\nOptional manual prefixes if the user invoked this skill directly:\n\nsearch: ...\nextract: ...\nresearch: ...\nenrich: ...\nfindall: ...\nmonitor: ...\n\nIf a prefix is present, honour it."
      },
      {
        "title": "Setup and authentication (only when needed)",
        "body": "Before running any Parallel command, ensure auth works:\n\nparallel-cli auth\n\nIf parallel-cli is missing, install it:\n\ncurl -fsSL https://parallel.ai/install.sh | bash\n\nIf you cannot use the install script, use pipx:\n\npipx install \"parallel-web-tools[cli]\"\npipx ensurepath\n\nThen authenticate (choose one):\n\n# Interactive OAuth (opens browser)\nparallel-cli login\n\n# Headless / SSH / CI\nparallel-cli login --device\n\n# Or environment variable\nexport PARALLEL_API_KEY=\"your_api_key\""
      },
      {
        "title": "Output & citation rules",
        "body": "Always cite web-sourced facts with inline markdown links: [Source Title](https://...).\nEnd with a Sources list whenever you used Search/Extract/Research output.\nPrefer official/primary sources when available.\nFor long outputs, save to files in /tmp/ and summarise in-chat."
      },
      {
        "title": "Search (default web lookup)",
        "body": "Use Search for fast, cost-effective answers with citations."
      },
      {
        "title": "Command template",
        "body": "parallel-cli search \"$OBJECTIVE\" \\\n  --mode agentic \\\n  --max-results 10 \\\n  --json\n\nAdd any of these only when relevant:\n\n--after-date YYYY-MM-DD (freshness constraint)\n--include-domains a.com b.org (restrict sources)\n--exclude-domains spam.com (block sources)\none or more -q \"keyword query\" flags (extra keyword probes)\n-o \"/tmp/$SLUG.search.json\" (save full JSON to a file)"
      },
      {
        "title": "Parse + respond",
        "body": "From the JSON results, extract title, url, and any publish_date / excerpt fields.\nAnswer the user’s question, and cite each claim inline."
      },
      {
        "title": "Extract (read one or more URLs)",
        "body": "Use Extract when you need the actual contents of specific URLs (webpages, PDFs, JS-heavy sites)."
      },
      {
        "title": "Command template",
        "body": "parallel-cli extract \"$URL\" --json\n\nAdd when relevant:\n\n--objective \"Focus area\" (e.g., pricing, API usage, constraints)\n--full-content (only if the user needs the whole page)\n--no-excerpts (if you only want full content)\n-o \"/tmp/$SLUG.extract.json\" (save full JSON to a file)"
      },
      {
        "title": "Respond",
        "body": "If the user asked for a summary, summarise with citations to the extracted URL.\nIf the user asked for the verbatim text, provide the extracted markdown only if it is reasonably sized; otherwise provide the key sections + offer to read more from the saved output."
      },
      {
        "title": "Deep research (only when explicitly requested)",
        "body": "Deep research is slower and may cost more than Search. Use it only when the user explicitly wants depth."
      },
      {
        "title": "Step 1 — start (always async)",
        "body": "parallel-cli research run \"$QUESTION\" --processor pro-fast --no-wait --json\n\nParse run_id (and any monitoring URL) from JSON and tell the user the run started."
      },
      {
        "title": "Step 2 — poll (bounded timeout)",
        "body": "Choose a short slug filename (lowercase-hyphen), then:\n\nparallel-cli research poll \"$RUN_ID\" -o \"/tmp/$SLUG\" --timeout 540\n\nShare the executive summary printed by the poll command.\nMention the output files:\n\n/tmp/$SLUG.md\n/tmp/$SLUG.json\n\nIf polling times out, re-run the same poll command — the run continues server-side."
      },
      {
        "title": "Enrich (CSV/JSON or inline data)",
        "body": "Use Enrich to add web-sourced columns to structured data."
      },
      {
        "title": "Step 1 — (optional) suggest columns",
        "body": "parallel-cli enrich suggest \"$INTENT\" --json\n\nUse this when the user knows the goal but not the exact output schema."
      },
      {
        "title": "Step 2 — run (always async for large jobs)",
        "body": "For CSV:\n\nparallel-cli enrich run \\\n  --source-type csv \\\n  --source \"input.csv\" \\\n  --target \"/tmp/enriched.csv\" \\\n  --source-columns '[{\"name\":\"company\",\"description\":\"Company name\"}]' \\\n  --intent \"$INTENT\" \\\n  --no-wait --json\n\nFor inline JSON rows:\n\nparallel-cli enrich run \\\n  --data '[{\"company\":\"Google\"},{\"company\":\"Apple\"}]' \\\n  --target \"/tmp/enriched.csv\" \\\n  --intent \"$INTENT\" \\\n  --no-wait --json\n\nParse taskgroup_id from JSON."
      },
      {
        "title": "Step 3 — poll",
        "body": "parallel-cli enrich poll \"$TASKGROUP_ID\" --timeout 540 --json\n\nAfter completion:\n\nTell the user the output file path (the --target you chose).\nPreview a few rows (using file read tools if available) and report row counts.\n\nIf poll times out, re-run it — the job continues server-side."
      },
      {
        "title": "FindAll (entity discovery)",
        "body": "Use FindAll when the user wants you to discover a set of entities (e.g., “AI startups in healthcare”, “roofing companies in Charlotte”, “YC devtools companies”)."
      },
      {
        "title": "Step 1 — run",
        "body": "parallel-cli findall run \"$OBJECTIVE\" --generator core --match-limit 25 --no-wait --json\n\nUseful options:\n\n--dry-run --json to preview schema before spending money\n--exclude '[{\"name\":\"Example Corp\",\"url\":\"example.com\"}]' to avoid known entities\n--generator preview|base|core|pro (core default; pro for hardest queries)\n\nParse run_id from JSON."
      },
      {
        "title": "Step 2 — poll + fetch results",
        "body": "parallel-cli findall poll \"$RUN_ID\" --json\nparallel-cli findall result \"$RUN_ID\" --json\n\nRespond with:\n\ntotal entities found\na clean list/table of the best matches (name + URL + key attributes)\nany caveats about ambiguous matches"
      },
      {
        "title": "Monitor (web change tracking)",
        "body": "Use Monitor when the user wants ongoing tracking.\n\nCreate:\n\nparallel-cli monitor create \"$OBJECTIVE\" --cadence daily --json\n\nOptional:\n\n--cadence hourly|daily|weekly|every_two_weeks\n--webhook https://example.com/hook (deliver events externally)\n--output-schema '<JSON schema string>' (structured events)\n\nManage:\n\nparallel-cli monitor list --json\nparallel-cli monitor get \"$MONITOR_ID\" --json\nparallel-cli monitor update \"$MONITOR_ID\" --cadence weekly --json\nparallel-cli monitor delete \"$MONITOR_ID\"\nparallel-cli monitor events \"$MONITOR_ID\" --json\nparallel-cli monitor simulate \"$MONITOR_ID\" --json\n\nRespond with the monitor id and how to retrieve events (or confirm webhook delivery)."
      },
      {
        "title": "Reference material",
        "body": "Copy/paste command templates and patterns: references/command-templates.md\nTroubleshooting common failures: references/troubleshooting.md"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/tristanmanchester/parallel-ai-search",
    "publisherUrl": "https://clawhub.ai/tristanmanchester/parallel-ai-search",
    "owner": "tristanmanchester",
    "version": "1.0.3",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/parallel-ai-search",
    "downloadUrl": "https://openagent3.xyz/downloads/parallel-ai-search",
    "agentUrl": "https://openagent3.xyz/skills/parallel-ai-search/agent",
    "manifestUrl": "https://openagent3.xyz/skills/parallel-ai-search/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/parallel-ai-search/agent.md"
  }
}