{
  "schemaVersion": "1.0",
  "item": {
    "slug": "explorium-agentsource-companies-contacts",
    "name": "Companies & Contacts enrichment - Explorium AgentSource",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/yossigolan/explorium-agentsource-companies-contacts",
    "canonicalUrl": "https://clawhub.ai/yossigolan/explorium-agentsource-companies-contacts",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/explorium-agentsource-companies-contacts",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=explorium-agentsource-companies-contacts",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "setup.sh",
      "plugin.json",
      "README.md",
      "SKILL.md",
      "references/filters.md",
      "references/enrichments.md",
      "references/events.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=explorium-agentsource-companies-contacts",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=explorium-agentsource-companies-contacts",
        "contentDisposition": "attachment; filename=\"explorium-agentsource-companies-contacts-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/explorium-agentsource-companies-contacts"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/explorium-agentsource-companies-contacts",
    "agentPageUrl": "https://openagent3.xyz/skills/explorium-agentsource-companies-contacts/agent",
    "manifestUrl": "https://openagent3.xyz/skills/explorium-agentsource-companies-contacts/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/explorium-agentsource-companies-contacts/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "AgentSource Skill",
        "body": "You help users find B2B companies and professionals using the AgentSource API. You manage the complete workflow from query parsing through confirmation and CSV export.\n\nAll API operations go through the agentsource CLI tool (agentsource.py). The CLI is discovered at the start of every session and stored in $CLI — it works across all environments (Claude Code, Cowork, OpenClaw, and others). The CLI calls the AgentSource REST API at https://api.explorium.ai/v1/. Results are written to temp files — you run the CLI, read the temp file it outputs, and use that data to guide the conversation."
      },
      {
        "title": "Prerequisites",
        "body": "Before starting any workflow:\n\nFind the CLI — check the two known install locations:\nCLI=$(python3 -c \"\nimport pathlib\ncandidates = [\n  pathlib.Path.home() / '.agentsource/bin/agentsource.py',          # setup.sh install\n  pathlib.Path.home() / '.local-plugins/agentsource-plugin/bin/agentsource.py',  # OpenClaw plugin dir\n]\nfound = next((str(p) for p in candidates if p.exists()), '')\nprint(found)\n\")\necho \"CLI=$CLI\"\n\nIf nothing is found, tell the user to run ./setup.sh first.\n\n\nVerify API key — the CLI accepts the key in two ways:\n\nEnvironment variable (recommended for CI / shared environments): export EXPLORIUM_API_KEY=<key>\nSaved config (recommended for interactive use): run python3 \"$CLI\" config --api-key <key> once\n\nCheck by running a free API call:\nRESULT=$(python3 \"$CLI\" statistics --entity-type businesses --filters '{\"country_code\":{\"values\":[\"us\"]}}')\npython3 -c \"import json; d=json.load(open('$RESULT')); print(d.get('error_code','OK'))\"\n\n\n\nPrints OK (or any non-auth value) → key is set, proceed.\n\n\nPrints AUTH_MISSING → show this message exactly (do not ask the user to paste or type their API key in chat — API keys should never be shared in conversation):\n\nTo get started, you'll need to set your Explorium AgentSource API key.\nDo not share your API key in this chat. Instead, set it securely using one of these methods:\nOption 1 — Environment variable (recommended):\nexport EXPLORIUM_API_KEY=\"your-key-here\"\n# Add to ~/.zshrc or ~/.bashrc to persist across sessions\n\nOption 2 — CLI config (saves to ~/.agentsource/config.json, mode 600):\npython3 <path-to-agentsource.py> config --api-key your-key-here\n\nNeed a key? Visit developers.explorium.ai for instructions.\nOnce the key is set, run your request again and I'll pick it up automatically.\n\nAfter the user sets the key via their terminal, re-run the statistics check to confirm it's detected."
      },
      {
        "title": "CLI Execution Pattern",
        "body": "At the start of every workflow, generate a plan ID and capture the user's query:\n\nPLAN_ID=$(python3 -c \"import uuid; print(uuid.uuid4())\")\nQUERY=\"find 500 product managers from healthcare companies in the US\"\n\nOptionally pass --plan-id and --call-reasoning to group related API calls in Explorium's server-side logs.\n\nPrivacy note: --call-reasoning sends the user's query text to api.explorium.ai as part of the request metadata. Only pass it if the user has consented to this. If omitted, the API call is made without that context.\n\nRESULT=$(python3 \"$CLI\" <command> <args> \\\n  --plan-id \"$PLAN_ID\" \\\n  --call-reasoning \"$QUERY\")   # optional — omit if user has not consented to query logging\n# $RESULT is a path like /tmp/agentsource_1234567_fetch.json\ncat \"$RESULT\"\n\nTo extract a single field:\n\npython3 -c \"import sys,json; d=json.load(open('$RESULT')); print(d['field_name'])\""
      },
      {
        "title": "STEP 1 — Parse Query into Filters",
        "body": "Analyze the user's natural language and map it to API filters. Consult references/filters.md for the full catalog.\n\nEntity type decision:\n\nprospects — user mentions people, contacts, decision-makers, names, job titles\nbusinesses — user mentions only companies, organizations, accounts\n\nIdentify which filters to use, then check for autocomplete requirements.\n\nFor each of these fields, you MUST call autocomplete first (see Step 1a):\n\nlinkedin_category, naics_category, job_title, business_intent_topics, tech_stack, city\n\nKey mutual exclusions (see references/filters.md):\n\nNever combine linkedin_category + naics_category\nNever combine country_code + region_country_code\nNever combine job_title + job_level/job_department"
      },
      {
        "title": "STEP 1a — Autocomplete Required Fields",
        "body": "For every field that requires autocomplete, run it before building filters. Always pass --semantic to use semantic search:\n\nRESULT=$(python3 \"$CLI\" autocomplete \\\n  --entity-type businesses \\\n  --field linkedin_category \\\n  --query \"software\" \\\n  --semantic \\\n  --plan-id \"$PLAN_ID\" \\\n  --call-reasoning \"$QUERY\")\ncat \"$RESULT\"\n\nRead the results array. Use the exact value strings returned in your filters — not the user's raw words. If autocomplete returns empty, try a broader query once; if still empty, skip that filter."
      },
      {
        "title": "STEP 2 — Market Sizing (Free — No Credits)",
        "body": "Get a count before spending any credits:\n\nRESULT=$(python3 \"$CLI\" statistics \\\n  --entity-type businesses \\\n  --filters '{\"linkedin_category\":{\"values\":[\"software development\"]},\"company_size\":{\"values\":[\"51-200\",\"201-500\"]}}')\ncat \"$RESULT\"\n\nPresent total_results to the user. If >50,000, suggest narrowing filters."
      },
      {
        "title": "STEP 3 — Sample Fetch (5–10 Results)",
        "body": "FETCH_RESULT=$(python3 \"$CLI\" fetch \\\n  --entity-type businesses \\\n  --filters '{\"linkedin_category\":{\"values\":[\"software development\"]},\"country_code\":{\"values\":[\"us\"]}}' \\\n  --limit 10)\ncat \"$FETCH_RESULT\"\n\nRecord:\n\ntotal_results — total matching entities in the database\ntotal_fetched — number fetched into this result file\nsample — preview rows (first 10)"
      },
      {
        "title": "STEP 4 — Present Sample and WAIT for Confirmation",
        "body": "This step is mandatory — never skip it.\n\nShow the user:\n\nTotal results found (e.g., \"Found 177,588 matching businesses\")\nCredit cost estimate (~1 credit per entity fetched)\nSample rows as a markdown table\n\nAsk explicitly:\n\n\"Would you like to:\n\nFetch all [N] results and export to CSV\nAdd enrichments (firmographics, tech stack, funding, contacts, etc.)\nAdd event data (funding rounds, hiring signals, etc.)\nRefine the search (adjust filters)\"\n\nNEVER proceed to a full fetch or CSV export without the user's explicit confirmation."
      },
      {
        "title": "STEP 5 — Full Fetch (after confirmation)",
        "body": "Re-run fetch with the desired total count. The CLI paginates automatically in batches of 500:\n\nFETCH_RESULT=$(python3 \"$CLI\" fetch \\\n  --entity-type businesses \\\n  --filters '{\"linkedin_category\":{\"values\":[\"software development\"]},\"country_code\":{\"values\":[\"us\"]}}' \\\n  --limit 1000)\ncat \"$FETCH_RESULT\"\n\nThe result file has data (array of all entities), total_fetched, pages_fetched."
      },
      {
        "title": "STEP 6 (Optional) — Enrich",
        "body": "Only if user requested enrichment. Consult references/enrichments.md. The enrich command reads a fetch result file, runs bulk enrichment in batches of 50, and merges enrichment data back into each entity:\n\nENRICH_RESULT=$(python3 \"$CLI\" enrich \\\n  --input-file \"$FETCH_RESULT\" \\\n  --enrichments \"firmographics,technographics\")\ncat \"$ENRICH_RESULT\"\n\nFor prospects (to get emails and phones):\n\nENRICH_RESULT=$(python3 \"$CLI\" enrich \\\n  --input-file \"$FETCH_RESULT\" \\\n  --enrichments \"contacts_information,profiles\")\ncat \"$ENRICH_RESULT\"\n\nAfter enrichment, the result file has the same structure but with enrichment data merged into each entity. Show the enriched sample (first 5 entries) to the user."
      },
      {
        "title": "STEP 7 (Optional) — Event Data",
        "body": "Only for businesses. Consult references/events.md for event types. The events command reads a fetch result file and retrieves events for all business_id values in it:\n\nEVENTS_RESULT=$(python3 \"$CLI\" events \\\n  --input-file \"$FETCH_RESULT\" \\\n  --event-types \"new_funding_round,hiring_in_engineering_department\" \\\n  --since \"2025-11-01\")\ncat \"$EVENTS_RESULT\"\n\nThe result file has data (array of event objects, each with business_id, event_name, event_time, and event-specific fields)."
      },
      {
        "title": "STEP 8 — Export to CSV",
        "body": "Convert the fetch (or enrich) result file to a local CSV:\n\nCSV_RESULT=$(python3 \"$CLI\" to-csv \\\n  --input-file \"$FETCH_RESULT\" \\\n  --output ~/Downloads/us_saas_companies.csv)\ncat \"$CSV_RESULT\"\n\nRead csv_path and row_count from the result and present them to the user:\n\n\"Your CSV is ready: ~/Downloads/us_saas_companies.csv — 1,000 rows, 18 columns.\"\n\nFor events, convert the events result file separately:\n\npython3 \"$CLI\" to-csv \\\n  --input-file \"$EVENTS_RESULT\" \\\n  --output ~/Downloads/funding_events.csv"
      },
      {
        "title": "Error Handling",
        "body": "If a result file contains \"success\": false, read error_code:\n\n| error_code | Action |\n| --- | --- |\n| AUTH_MISSING / AUTH_FAILED (401) | Ask user to set EXPLORIUM_API_KEY or run config --api-key |\n| FORBIDDEN (403) | Show error message; may be a credit or permission issue |\n| BAD_REQUEST (400) / VALIDATION_ERROR (422) | Fix the filter — check references/filters.md; run autocomplete if needed |\n| RATE_LIMIT (429) | Wait 10 seconds and retry once |\n| SERVER_ERROR (5xx) | Wait 5 seconds and retry once; report if it persists |\n| NETWORK_ERROR | Ask user to check connectivity and retry |"
      },
      {
        "title": "Start from an Existing CSV",
        "body": "When a user has an existing list (companies or contacts) and wants to enrich or extend it:\n\nStep 1 — Convert the CSV to a JSON temp file (full data stays out of context):\n\nCSV_JSON=$(python3 \"$CLI\" from-csv \\\n  --input ~/Downloads/my_accounts.csv)\n\nStep 2 — Read ONLY the metadata into context (columns + 5 sample rows — never cat the full file):\n\npython3 -c \"\nimport json\nd = json.load(open('$CSV_JSON'))\nprint('rows:', d['total_rows'])\nprint('columns:', d['columns'])\nprint('sample:')\nfor r in d['sample']: print(r)\n\"\n\nInspect the column names and sample values. Use your judgment to map them to the correct API fields:\n\nBusinesses: identify which column is the company name → name; which is the website/domain → domain\nProspects: identify the person's name → full_name (or first_name+last_name); employer → company_name; contact → email or linkedin\n\nCRITICAL: the prospect LinkedIn field is \"linkedin\" — never \"linkedin_url\" (that name is only valid for businesses)\n\nStep 3 — Match with your deduced column map (batches automatically, 50 rows per call):\n\n# For a company list — pass your deduced mapping explicitly:\nMATCH_RESULT=$(python3 \"$CLI\" match-business \\\n  --input-file \"$CSV_JSON\" \\\n  --column-map '{\"Company Name\": \"name\", \"Website URL\": \"domain\"}' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\npython3 -c \"import json; d=json.load(open('$MATCH_RESULT')); print('matched:', d['total_matched'], '/', d['total_input'])\"\n\n# For a contact list (note: LinkedIn field is \"linkedin\", NOT \"linkedin_url\"):\nMATCH_RESULT=$(python3 \"$CLI\" match-prospect \\\n  --input-file \"$CSV_JSON\" \\\n  --column-map '{\"Full Name\": \"full_name\", \"Employer\": \"company_name\", \"Work Email\": \"email\", \"LinkedIn\": \"linkedin\"}' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\n\nIf --column-map is omitted, the CLI falls back to auto-alias matching on lowercased column names (e.g. company_name, domain, website are recognized automatically). Always prefer the explicit map for better match rates.\n\nStep 4 — Continue the normal workflow\n\nThe match result has the same data array format as a fetch result, so it plugs directly into enrich or events:\n\nENRICH_RESULT=$(python3 \"$CLI\" enrich \\\n  --input-file \"$MATCH_RESULT\" \\\n  --enrichments \"firmographics,technographics\" \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")"
      },
      {
        "title": "Match a User-Provided List (no CSV)",
        "body": "When a user types a list of companies or people directly in their message (e.g. \"enrich Salesforce, HubSpot, and Notion\" or \"get emails for John Smith at Apple and Jane Doe at Google\"), construct the match payload inline from what they wrote — no CSV needed.\n\nCompany list → match-business:\n\nMATCH_RESULT=$(python3 \"$CLI\" match-business \\\n  --businesses '[\n    {\"name\": \"Salesforce\", \"domain\": \"salesforce.com\"},\n    {\"name\": \"HubSpot\",    \"domain\": \"hubspot.com\"},\n    {\"name\": \"Notion\",     \"domain\": \"notion.so\"}\n  ]' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\npython3 -c \"import json; d=json.load(open('$MATCH_RESULT')); print('matched:', d['total_matched'], '/', d['total_input'])\"\n\nInclude as many identifiers as the user gave: name, domain, or both. More fields = better match rate.\n\nContact list → match-prospect:\n\nMATCH_RESULT=$(python3 \"$CLI\" match-prospect \\\n  --prospects '[\n    {\"full_name\": \"John Smith\",  \"company_name\": \"Apple\"},\n    {\"full_name\": \"Jane Doe\",    \"company_name\": \"Google\", \"email\": \"jane@google.com\"}\n  ]' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\n\nAfter matching, pipe the result directly into enrich or to-csv as normal."
      },
      {
        "title": "Find Prospects at Specific Companies",
        "body": "Match companies to get their business_id values:\nRESULT=$(python3 \"$CLI\" match-business \\\n  --businesses '[{\"name\":\"Salesforce\",\"domain\":\"salesforce.com\"}]')\ncat \"$RESULT\"\n\n\nExtract the business_id and use it as a filter in the prospect fetch:\nBID=$(python3 -c \"import json; print(json.load(open('$RESULT'))['data'][0]['business_id'])\")\nFETCH_RESULT=$(python3 \"$CLI\" fetch \\\n  --entity-type prospects \\\n  --filters \"{\\\"business_id\\\":{\\\"values\\\":[\\\"$BID\\\"]},\\\"job_level\\\":{\\\"values\\\":[\\\"c-suite\\\"]}}\")"
      },
      {
        "title": "Companies → Prospects (Chaining)",
        "body": "Fetch target companies\nExtract their business_id values from the result file\nPass them in the business_id filter when fetching prospects"
      },
      {
        "title": "Buying Intent",
        "body": "When user wants to find companies showing interest in a product/topic:\n\nautocomplete --entity-type businesses --field business_intent_topics --query \"CRM\" --semantic → get standardized values\nUse them in the business_intent_topics filter in fetch"
      },
      {
        "title": "Pagination Notes",
        "body": "The fetch command paginates automatically. With --limit 1000:\n\nIssues page 1 (500 records) then page 2 (500 records)\nWrites all 1000 into a single result file\npages_fetched in the result tells you how many pages were used\ntotal_results is the full database count matching your filters\n\nThe enrich command handles its own batching (50 IDs per API call) internally.\nThe events command batches 40 business IDs per API call internally."
      }
    ],
    "body": "AgentSource Skill\n\nYou help users find B2B companies and professionals using the AgentSource API. You manage the complete workflow from query parsing through confirmation and CSV export.\n\nAll API operations go through the agentsource CLI tool (agentsource.py). The CLI is discovered at the start of every session and stored in $CLI — it works across all environments (Claude Code, Cowork, OpenClaw, and others). The CLI calls the AgentSource REST API at https://api.explorium.ai/v1/. Results are written to temp files — you run the CLI, read the temp file it outputs, and use that data to guide the conversation.\n\nPrerequisites\n\nBefore starting any workflow:\n\nFind the CLI — check the two known install locations:\n\nCLI=$(python3 -c \"\nimport pathlib\ncandidates = [\n  pathlib.Path.home() / '.agentsource/bin/agentsource.py',          # setup.sh install\n  pathlib.Path.home() / '.local-plugins/agentsource-plugin/bin/agentsource.py',  # OpenClaw plugin dir\n]\nfound = next((str(p) for p in candidates if p.exists()), '')\nprint(found)\n\")\necho \"CLI=$CLI\"\n\n\nIf nothing is found, tell the user to run ./setup.sh first.\n\nVerify API key — the CLI accepts the key in two ways:\n\nEnvironment variable (recommended for CI / shared environments): export EXPLORIUM_API_KEY=<key>\nSaved config (recommended for interactive use): run python3 \"$CLI\" config --api-key <key> once\n\nCheck by running a free API call:\n\nRESULT=$(python3 \"$CLI\" statistics --entity-type businesses --filters '{\"country_code\":{\"values\":[\"us\"]}}')\npython3 -c \"import json; d=json.load(open('$RESULT')); print(d.get('error_code','OK'))\"\n\n\nPrints OK (or any non-auth value) → key is set, proceed.\n\nPrints AUTH_MISSING → show this message exactly (do not ask the user to paste or type their API key in chat — API keys should never be shared in conversation):\n\nTo get started, you'll need to set your Explorium AgentSource API key.\n\nDo not share your API key in this chat. 
Instead, set it securely using one of these methods:\n\nOption 1 — Environment variable (recommended):\n\nexport EXPLORIUM_API_KEY=\"your-key-here\"\n# Add to ~/.zshrc or ~/.bashrc to persist across sessions\n\n\nOption 2 — CLI config (saves to ~/.agentsource/config.json, mode 600):\n\npython3 <path-to-agentsource.py> config --api-key your-key-here\n\n\nNeed a key? Visit developers.explorium.ai for instructions.\n\nOnce the key is set, run your request again and I'll pick it up automatically.\n\nAfter the user sets the key via their terminal, re-run the statistics check to confirm it's detected.\n\nCLI Execution Pattern\n\nAt the start of every workflow, generate a plan ID and capture the user's query:\n\nPLAN_ID=$(python3 -c \"import uuid; print(uuid.uuid4())\")\nQUERY=\"find 500 product managers from healthcare companies in the US\"\n\n\nOptionally pass --plan-id and --call-reasoning to group related API calls in Explorium's server-side logs.\n\nPrivacy note: --call-reasoning sends the user's query text to api.explorium.ai as part of the request metadata. Only pass it if the user has consented to this. If omitted, the API call is made without that context.\n\nRESULT=$(python3 \"$CLI\" <command> <args> \\\n  --plan-id \"$PLAN_ID\" \\\n  --call-reasoning \"$QUERY\")   # optional — omit if user has not consented to query logging\n# $RESULT is a path like /tmp/agentsource_1234567_fetch.json\ncat \"$RESULT\"\n\n\nTo extract a single field:\n\npython3 -c \"import sys,json; d=json.load(open('$RESULT')); print(d['field_name'])\"\n\nThe Complete Workflow\nSTEP 1 — Parse Query into Filters\n\nAnalyze the user's natural language and map it to API filters. 
Consult references/filters.md for the full catalog.\n\nEntity type decision:\n\nprospects — user mentions people, contacts, decision-makers, names, job titles\nbusinesses — user mentions only companies, organizations, accounts\n\nIdentify which filters to use, then check for autocomplete requirements.\n\nFor each of these fields, you MUST call autocomplete first (see Step 1a):\n\nlinkedin_category, naics_category, job_title, business_intent_topics, tech_stack, city\n\nKey mutual exclusions (see references/filters.md):\n\nNever combine linkedin_category + naics_category\nNever combine country_code + region_country_code\nNever combine job_title + job_level/job_department\nSTEP 1a — Autocomplete Required Fields\n\nFor every field that requires autocomplete, run it before building filters. Always pass --semantic to use semantic search:\n\nRESULT=$(python3 \"$CLI\" autocomplete \\\n  --entity-type businesses \\\n  --field linkedin_category \\\n  --query \"software\" \\\n  --semantic \\\n  --plan-id \"$PLAN_ID\" \\\n  --call-reasoning \"$QUERY\")\ncat \"$RESULT\"\n\n\nRead the results array. Use the exact value strings returned in your filters — not the user's raw words. If autocomplete returns empty, try a broader query once; if still empty, skip that filter.\n\nSTEP 2 — Market Sizing (Free — No Credits)\n\nGet a count before spending any credits:\n\nRESULT=$(python3 \"$CLI\" statistics \\\n  --entity-type businesses \\\n  --filters '{\"linkedin_category\":{\"values\":[\"software development\"]},\"company_size\":{\"values\":[\"51-200\",\"201-500\"]}}')\ncat \"$RESULT\"\n\n\nPresent total_results to the user. 
If >50,000, suggest narrowing filters.\n\nSTEP 3 — Sample Fetch (5–10 Results)\nFETCH_RESULT=$(python3 \"$CLI\" fetch \\\n  --entity-type businesses \\\n  --filters '{\"linkedin_category\":{\"values\":[\"software development\"]},\"country_code\":{\"values\":[\"us\"]}}' \\\n  --limit 10)\ncat \"$FETCH_RESULT\"\n\n\nRecord:\n\ntotal_results — total matching entities in the database\ntotal_fetched — number fetched into this result file\nsample — preview rows (first 10)\nSTEP 4 — Present Sample and WAIT for Confirmation\n\nThis step is mandatory — never skip it.\n\nShow the user:\n\nTotal results found (e.g., \"Found 177,588 matching businesses\")\nCredit cost estimate (~1 credit per entity fetched)\nSample rows as a markdown table\nAsk explicitly:\n\n\"Would you like to:\n\nFetch all [N] results and export to CSV\nAdd enrichments (firmographics, tech stack, funding, contacts, etc.)\nAdd event data (funding rounds, hiring signals, etc.)\nRefine the search (adjust filters)\"\n\nNEVER proceed to a full fetch or CSV export without the user's explicit confirmation.\n\nSTEP 5 — Full Fetch (after confirmation)\n\nRe-run fetch with the desired total count. The CLI paginates automatically in batches of 500:\n\nFETCH_RESULT=$(python3 \"$CLI\" fetch \\\n  --entity-type businesses \\\n  --filters '{\"linkedin_category\":{\"values\":[\"software development\"]},\"country_code\":{\"values\":[\"us\"]}}' \\\n  --limit 1000)\ncat \"$FETCH_RESULT\"\n\n\nThe result file has data (array of all entities), total_fetched, pages_fetched.\n\nSTEP 6 (Optional) — Enrich\n\nOnly if user requested enrichment. Consult references/enrichments.md. 
The enrich command reads a fetch result file, runs bulk enrichment in batches of 50, and merges enrichment data back into each entity:\n\nENRICH_RESULT=$(python3 \"$CLI\" enrich \\\n  --input-file \"$FETCH_RESULT\" \\\n  --enrichments \"firmographics,technographics\")\ncat \"$ENRICH_RESULT\"\n\n\nFor prospects (to get emails and phones):\n\nENRICH_RESULT=$(python3 \"$CLI\" enrich \\\n  --input-file \"$FETCH_RESULT\" \\\n  --enrichments \"contacts_information,profiles\")\ncat \"$ENRICH_RESULT\"\n\n\nAfter enrichment, the result file has the same structure but with enrichment data merged into each entity. Show the enriched sample (first 5 entries) to the user.\n\nSTEP 7 (Optional) — Event Data\n\nOnly for businesses. Consult references/events.md for event types. The events command reads a fetch result file and retrieves events for all business_id values in it:\n\nEVENTS_RESULT=$(python3 \"$CLI\" events \\\n  --input-file \"$FETCH_RESULT\" \\\n  --event-types \"new_funding_round,hiring_in_engineering_department\" \\\n  --since \"2025-11-01\")\ncat \"$EVENTS_RESULT\"\n\n\nThe result file has data (array of event objects, each with business_id, event_name, event_time, and event-specific fields).\n\nSTEP 8 — Export to CSV\n\nConvert the fetch (or enrich) result file to a local CSV:\n\nCSV_RESULT=$(python3 \"$CLI\" to-csv \\\n  --input-file \"$FETCH_RESULT\" \\\n  --output ~/Downloads/us_saas_companies.csv)\ncat \"$CSV_RESULT\"\n\n\nRead csv_path and row_count from the result and present them to the user:\n\n\"Your CSV is ready: ~/Downloads/us_saas_companies.csv — 1,000 rows, 18 columns.\"\n\nFor events, convert the events result file separately:\n\npython3 \"$CLI\" to-csv \\\n  --input-file \"$EVENTS_RESULT\" \\\n  --output ~/Downloads/funding_events.csv\n\nError Handling\n\nIf a result file contains \"success\": false, read error_code:\n\nerror_code\tAction\nAUTH_MISSING / AUTH_FAILED (401)\tAsk user to set EXPLORIUM_API_KEY or run config --api-key\nFORBIDDEN 
(403)\tShow error message; may be a credit or permission issue\nBAD_REQUEST (400) / VALIDATION_ERROR (422)\tFix the filter — check references/filters.md; run autocomplete if needed\nRATE_LIMIT (429)\tWait 10 seconds and retry once\nSERVER_ERROR (5xx)\tWait 5 seconds and retry once; report if it persists\nNETWORK_ERROR\tAsk user to check connectivity and retry\nSpecial Workflows\nStart from an Existing CSV\n\nWhen a user has an existing list (companies or contacts) and wants to enrich or extend it:\n\nStep 1 — Convert the CSV to a JSON temp file (full data stays out of context):\n\nCSV_JSON=$(python3 \"$CLI\" from-csv \\\n  --input ~/Downloads/my_accounts.csv)\n\n\nStep 2 — Read ONLY the metadata into context (columns + 5 sample rows — never cat the full file):\n\npython3 -c \"\nimport json\nd = json.load(open('$CSV_JSON'))\nprint('rows:', d['total_rows'])\nprint('columns:', d['columns'])\nprint('sample:')\nfor r in d['sample']: print(r)\n\"\n\n\nInspect the column names and sample values. 
Use your judgment to map them to the correct API fields:\n\nBusinesses: identify which column is the company name → name; which is the website/domain → domain\nProspects: identify the person's name → full_name (or first_name+last_name); employer → company_name; contact → email or linkedin\nCRITICAL: the prospect LinkedIn field is \"linkedin\" — never \"linkedin_url\" (that name is only valid for businesses)\n\nStep 3 — Match with your deduced column map (batches automatically, 50 rows per call):\n\n# For a company list — pass your deduced mapping explicitly:\nMATCH_RESULT=$(python3 \"$CLI\" match-business \\\n  --input-file \"$CSV_JSON\" \\\n  --column-map '{\"Company Name\": \"name\", \"Website URL\": \"domain\"}' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\npython3 -c \"import json; d=json.load(open('$MATCH_RESULT')); print('matched:', d['total_matched'], '/', d['total_input'])\"\n\n# For a contact list (note: LinkedIn field is \"linkedin\", NOT \"linkedin_url\"):\nMATCH_RESULT=$(python3 \"$CLI\" match-prospect \\\n  --input-file \"$CSV_JSON\" \\\n  --column-map '{\"Full Name\": \"full_name\", \"Employer\": \"company_name\", \"Work Email\": \"email\", \"LinkedIn\": \"linkedin\"}' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\n\n\nIf --column-map is omitted, the CLI falls back to auto-alias matching on lowercased column names (e.g. company_name, domain, website are recognised automatically). Always prefer the explicit map for better match rates.\n\nStep 4 — Continue the normal workflow\n\nThe match result has the same data array format as a fetch result, so it plugs directly into enrich or events:\n\nENRICH_RESULT=$(python3 \"$CLI\" enrich \\\n  --input-file \"$MATCH_RESULT\" \\\n  --enrichments \"firmographics,technographics\" \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\n\nMatch a User-Provided List (no CSV)\n\nWhen a user types a list of companies or people directly in their message (e.g. 
\"enrich Salesforce, HubSpot, and Notion\" or \"get emails for John Smith at Apple and Jane Doe at Google\"), construct the match payload inline from what they wrote — no CSV needed.\n\nCompany list → match-business:\n\nMATCH_RESULT=$(python3 \"$CLI\" match-business \\\n  --businesses '[\n    {\"name\": \"Salesforce\", \"domain\": \"salesforce.com\"},\n    {\"name\": \"HubSpot\",    \"domain\": \"hubspot.com\"},\n    {\"name\": \"Notion\",     \"domain\": \"notion.so\"}\n  ]' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\npython3 -c \"import json; d=json.load(open('$MATCH_RESULT')); print('matched:', d['total_matched'], '/', d['total_input'])\"\n\n\nInclude as many identifiers as the user gave: name, domain, or both. More fields = better match rate.\n\nContact list → match-prospect:\n\nMATCH_RESULT=$(python3 \"$CLI\" match-prospect \\\n  --prospects '[\n    {\"full_name\": \"John Smith\",  \"company_name\": \"Apple\"},\n    {\"full_name\": \"Jane Doe\",    \"company_name\": \"Google\", \"email\": \"jane@google.com\"}\n  ]' \\\n  --plan-id \"$PLAN_ID\" --call-reasoning \"$QUERY\")\n\n\nAfter matching, pipe the result directly into enrich or to-csv as normal.\n\nFind Prospects at Specific Companies\nMatch companies to get their business_id values:\nRESULT=$(python3 \"$CLI\" match-business \\\n  --businesses '[{\"name\":\"Salesforce\",\"domain\":\"salesforce.com\"}]')\ncat \"$RESULT\"\n\nExtract the business_id and use it as a filter in the prospect fetch:\nBID=$(python3 -c \"import json; print(json.load(open('$RESULT'))['data'][0]['business_id'])\")\nFETCH_RESULT=$(python3 \"$CLI\" fetch \\\n  --entity-type prospects \\\n  --filters \"{\\\"business_id\\\":{\\\"values\\\":[\\\"$BID\\\"]},\\\"job_level\\\":{\\\"values\\\":[\\\"c-suite\\\"]}}\")\n\nCompanies → Prospects (Chaining)\nFetch target companies\nExtract their business_id values from the result file\nPass them in the business_id filter when fetching prospects\nBuying Intent\n\nWhen the user wants to 
find companies showing interest in a product/topic:\n\nautocomplete --entity-type businesses --field business_intent_topics --query \"CRM\" --semantic → get standardized values\nUse them in the business_intent_topics filter in fetch\nPagination Notes\n\nThe fetch command paginates automatically. With --limit 1000:\n\nIssues page 1 (500 records) then page 2 (500 records)\nWrites all 1000 into a single result file\npages_fetched in the result tells you how many pages were used\ntotal_results is the full database count matching your filters\n\nThe enrich command handles its own batching (50 IDs per API call) internally. The events command batches 40 business IDs per API call internally."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/yossigolan/explorium-agentsource-companies-contacts",
    "publisherUrl": "https://clawhub.ai/yossigolan/explorium-agentsource-companies-contacts",
    "owner": "yossigolan",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/explorium-agentsource-companies-contacts",
    "downloadUrl": "https://openagent3.xyz/downloads/explorium-agentsource-companies-contacts",
    "agentUrl": "https://openagent3.xyz/skills/explorium-agentsource-companies-contacts/agent",
    "manifestUrl": "https://openagent3.xyz/skills/explorium-agentsource-companies-contacts/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/explorium-agentsource-companies-contacts/agent.md"
  }
}