{
  "schemaVersion": "1.0",
  "item": {
    "slug": "instagram-scraper",
    "name": "Instagram Scraper",
    "source": "tencent",
    "type": "skill",
    "category": "Communication & Collaboration",
    "sourceUrl": "https://clawhub.ai/ArulmozhiV/instagram-scraper",
    "canonicalUrl": "https://clawhub.ai/ArulmozhiV/instagram-scraper",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/instagram-scraper",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=instagram-scraper",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=instagram-scraper",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=instagram-scraper",
        "contentDisposition": "attachment; filename=\"instagram-scraper-1.0.7.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/instagram-scraper"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/instagram-scraper",
    "agentPageUrl": "https://openagent3.xyz/skills/instagram-scraper/agent",
    "manifestUrl": "https://openagent3.xyz/skills/instagram-scraper/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/instagram-scraper/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Instagram Profile Scraper",
        "body": "A browser-based Instagram profile discovery and scraping tool.\n\nPart of ScrapeClaw — a suite of production-ready, agentic social media scrapers for Instagram, YouTube, X/Twitter, and Facebook built with Python & Playwright, no API keys required.\n\n---\nname: instagram-scraper\ndescription: Discover and scrape Instagram profiles from your browser.\nemoji: 📸\nversion: 1.0.6\nauthor: influenza\ntags:\n  - instagram\n  - scraping\n  - social-media\n  - influencer-discovery\nmetadata:\n  clawdbot:\n    requires:\n      bins:\n        - python3\n        - chromium\n\n    config:\n      stateDirs:\n        - data/output\n        - data/queue\n        - thumbnails\n      outputFormats:\n        - json\n        - csv\n---"
      },
      {
        "title": "Overview",
        "body": "This skill provides a two-phase Instagram scraping system:\n\n1. Profile Discovery\n2. Browser Scraping"
      },
      {
        "title": "Features",
        "body": "🔍 - Discover Instagram profiles by location and category\n🌐 - Full browser simulation for accurate scraping\n🛡️ - Browser fingerprinting, human behavior simulation, and stealth scripts\n📊 - Profile info, stats, images, and engagement data\n💾 - JSON/CSV export with downloaded thumbnails\n🔄 - Resume interrupted scraping sessions\n⚡ - Auto-skip private accounts, low followers, empty profiles\n🌍 - Built-in residential proxy support with 4 providers\n\nGetting Google API Credentials (Optional)\n\n1. Go to Google Cloud Console\n2. Create a new project or select an existing one\n3. Enable \"Custom Search API\"\n4. Create API credentials → API Key\n5. Go to Programmable Search Engine\n6. Create a search engine with instagram.com as the site to search\n7. Copy the Search Engine ID"
      },
      {
        "title": "Agent Tool Interface",
        "body": "For OpenClaw agent integration, the skill provides JSON output:\n\n# Discover profiles (returns JSON)\ndiscover --location \"Miami\" --category \"fitness\" --output json\n\n# Scrape single profile (returns JSON)\nscrape --username influencer123 --output json"
      },
      {
        "title": "Profile Data Structure",
        "body": "{\n  \"username\": \"example_user\",\n  \"full_name\": \"Example User\",\n  \"bio\": \"Fashion blogger | NYC\",\n  \"followers\": 125000,\n  \"following\": 1500,\n  \"posts_count\": 450,\n  \"is_verified\": false,\n  \"is_private\": false,\n  \"influencer_tier\": \"mid\",\n  \"category\": \"fashion\",\n  \"location\": \"New York\",\n  \"profile_pic_local\": \"thumbnails/example_user/profile_abc123.jpg\",\n  \"content_thumbnails\": [\n    \"thumbnails/example_user/content_1_def456.jpg\",\n    \"thumbnails/example_user/content_2_ghi789.jpg\"\n  ],\n  \"post_engagement\": [\n    {\"post_url\": \"https://instagram.com/p/ABC123/\", \"likes\": 5420, \"comments\": 89}\n  ],\n  \"scrape_timestamp\": \"2025-02-09T14:30:00\"\n}"
      },
      {
        "title": "Influencer Tiers",
        "body": "Tier\tFollower Range\nnano\t< 1,000\nmicro\t1,000 - 10,000\nmid\t10,000 - 100,000\nmacro\t100,000 - 1M\nmega\t> 1,000,000"
      },
      {
        "title": "File Outputs",
        "body": "Queue files: data/queue/{location}_{category}_{timestamp}.json\nScraped data: data/output/{username}.json\nThumbnails: thumbnails/{username}/profile_*.jpg, thumbnails/{username}/content_*.jpg\nExport files: data/export_{timestamp}.json, data/export_{timestamp}.csv"
      },
      {
        "title": "Configuration",
        "body": "Edit config/scraper_config.json:\n\n{\n  \"proxy\": {\n    \"enabled\": false,\n    \"provider\": \"brightdata\",\n    \"country\": \"\",\n    \"sticky\": true,\n    \"sticky_ttl_minutes\": 10\n  },\n  \"google_search\": {\n    \"enabled\": true,\n    \"api_key\": \"\",\n    \"search_engine_id\": \"\",\n    \"queries_per_location\": 3\n  },\n  \"scraper\": {\n    \"headless\": false,\n    \"min_followers\": 1000,\n    \"download_thumbnails\": true,\n    \"max_thumbnails\": 6\n  },\n  \"cities\": [\"New York\", \"Los Angeles\", \"Miami\", \"Chicago\"],\n  \"categories\": [\"fashion\", \"beauty\", \"fitness\", \"food\", \"travel\", \"tech\"]\n}"
      },
      {
        "title": "Filters Applied",
        "body": "The scraper automatically filters out:\n\n❌ Private accounts\n❌ Accounts with < 1,000 followers (configurable)\n❌ Accounts with no posts\n❌ Non-existent/removed accounts\n❌ Already scraped accounts (deduplication)"
      },
      {
        "title": "Login Issues",
        "body": "Ensure credentials are correct\nHandle verification codes when prompted\nWait if rate limited (the script will auto-retry)"
      },
      {
        "title": "No Profiles Discovered",
        "body": "Check Google API key and quota\nVerify Search Engine ID is configured for instagram.com\nTry different location/category combinations"
      },
      {
        "title": "Rate Limiting",
        "body": "Reduce scraping speed (increase delays in config)\nRun during off-peak hours\nUse a residential proxy (see below)"
      },
      {
        "title": "Why Use a Residential Proxy?",
        "body": "Running a scraper at scale without a residential proxy will get your IP blocked fast. Here's why proxies are essential for long-running scrapes:\n\nAdvantage\tDescription\nAvoid IP Bans\tResidential IPs look like real household users, not data-center bots. Instagram is far less likely to flag them.\nAutomatic IP Rotation\tEach request (or session) gets a fresh IP, so rate-limits never stack up on one address.\nGeo-Targeting\tRoute traffic through a specific country/city so scraped content matches the target audience's locale.\nSticky Sessions\tKeep the same IP for a configurable window (e.g. 10 min) — critical for maintaining a consistent browsing session.\nHigher Success Rate\tRotating residential IPs deliver 95%+ success rates compared to ~30% with data-center proxies on Instagram.\nLong-Running Scrapes\tScrape thousands of profiles over hours or days without interruption.\nConcurrent Scraping\tRun multiple browser instances across different IPs simultaneously."
      },
      {
        "title": "Recommended Proxy Providers",
        "body": "We have affiliate partnerships with top residential proxy providers. Using these links supports continued development of this skill:\n\nProvider\tBest For\tSign Up\nBright Data\tWorld's largest network, 72M+ IPs, enterprise-grade\t👉 Get Bright Data\nIProyal\tPay-as-you-go, 195+ countries, no traffic expiry\t👉 Get IProyal\nStorm Proxies\tFast & reliable, developer-friendly API, competitive pricing\t👉 Get Storm Proxies\nNetNut\tISP-grade network, 52M+ IPs, direct connectivity\t👉 Get NetNut"
      },
      {
        "title": "Setup Steps",
        "body": "1. Get Your Proxy Credentials\n\nSign up with any provider above, then grab:\n\nUsername (from your provider dashboard)\nPassword (from your provider dashboard)\nHost and Port are pre-configured per provider (or use custom)\n\n2. Configure via Environment Variables\n\nexport PROXY_ENABLED=true\nexport PROXY_PROVIDER=brightdata    # brightdata | iproyal | stormproxies | netnut | custom\nexport PROXY_USERNAME=your_user\nexport PROXY_PASSWORD=your_pass\nexport PROXY_COUNTRY=us             # optional: two-letter country code\nexport PROXY_STICKY=true            # optional: keep same IP per session\n\n3. Provider-Specific Host/Port Defaults\n\nThese are auto-configured when you set the provider name:\n\nProvider\tHost\tPort\nBright Data\tbrd.superproxy.io\t22225\nIProyal\tproxy.iproyal.com\t12321\nStorm Proxies\trotating.stormproxies.com\t9999\nNetNut\tgw-resi.netnut.io\t5959\n\nOverride with PROXY_HOST / PROXY_PORT env vars if your plan uses a different gateway.\n\n4. Custom Proxy Provider\n\nFor any other proxy service, set provider to custom and supply host/port manually:\n\n{\n  \"proxy\": {\n    \"enabled\": true,\n    \"provider\": \"custom\",\n    \"host\": \"your.proxy.host\",\n    \"port\": 8080,\n    \"username\": \"user\",\n    \"password\": \"pass\"\n  }\n}"
      },
      {
        "title": "Running the Scraper with Proxy",
        "body": "Once configured, the scraper picks up the proxy automatically — no extra flags needed:\n\n# Discover and scrape as usual — proxy is applied automatically\npython main.py discover --location \"Miami\" --category \"fitness\"\npython main.py scrape --username influencer123\n\n# The log will confirm proxy is active:\n# INFO - Proxy enabled: <ProxyManager provider=brightdata enabled host=brd.superproxy.io:22225>\n# INFO - Browser using proxy: brightdata → brd.superproxy.io:22225"
      },
      {
        "title": "Using the Proxy Manager Programmatically",
        "body": "from proxy_manager import ProxyManager\n\n# From config (auto-reads config/scraper_config.json)\npm = ProxyManager.from_config()\n\n# From environment variables\npm = ProxyManager.from_env()\n\n# Manual construction\npm = ProxyManager(\n    provider=\"brightdata\",\n    username=\"your_user\",\n    password=\"your_pass\",\n    country=\"us\",\n    sticky=True\n)\n\n# For Playwright browser context\nproxy = pm.get_playwright_proxy()\n# → {\"server\": \"http://brd.superproxy.io:22225\", \"username\": \"user-country-us-session-abc123\", \"password\": \"pass\"}\n\n# For requests / aiohttp\nproxies = pm.get_requests_proxy()\n# → {\"http\": \"http://user:pass@host:port\", \"https\": \"http://user:pass@host:port\"}\n\n# Force new IP (rotates session ID)\npm.rotate_session()\n\n# Debug info\nprint(pm.info())"
      },
      {
        "title": "Best Practices for Long-Running Scrapes",
        "body": "Use sticky sessions — Instagram requires consistent IPs during a browsing session. Set \"sticky\": true.\nTarget the right country — Set \"country\": \"us\" (or your target region) so Instagram serves content in the expected locale.\nCombine with existing anti-detection — This scraper already has fingerprinting, stealth scripts, and human behavior simulation. The proxy is the final layer.\nRotate sessions between batches — Call pm.rotate_session() between large batches of profiles to get a fresh IP.\nUse delays — Even with proxies, respect delay_between_profiles in config to avoid aggressive patterns.\nMonitor your proxy dashboard — All providers have dashboards showing bandwidth usage and success rates."
      }
    ],
    "body": "Instagram Profile Scraper\n\nA browser-based Instagram profile discovery and scraping tool.\n\nPart of ScrapeClaw — a suite of production-ready, agentic social media scrapers for Instagram, YouTube, X/Twitter, and Facebook built with Python & Playwright, no API keys required.\n\n---\nname: instagram-scraper\ndescription: Discover and scrape Instagram profiles from your browser.\nemoji: 📸\nversion: 1.0.6\nauthor: influenza\ntags:\n  - instagram\n  - scraping\n  - social-media\n  - influencer-discovery\nmetadata:\n  clawdbot:\n    requires:\n      bins:\n        - python3\n        - chromium\n\n    config:\n      stateDirs:\n        - data/output\n        - data/queue\n        - thumbnails\n      outputFormats:\n        - json\n        - csv\n---\n\nOverview\n\nThis skill provides a two-phase Instagram scraping system:\n\nProfile Discovery\nBrowser Scraping\nFeatures\n🔍 - Discover Instagram profiles by location and category\n🌐 - Full browser simulation for accurate scraping\n🛡️ - Browser fingerprinting, human behavior simulation, and stealth scripts\n📊 - Profile info, stats, images, and engagement data\n💾 - JSON/CSV export with downloaded thumbnails\n🔄 - Resume interrupted scraping sessions\n⚡ - Auto-skip private accounts, low followers, empty profiles\n🌍 - Built-in residential proxy support with 4 providers\nGetting Google API Credentials (Optional)\nGo to Google Cloud Console\nCreate a new project or select existing\nEnable \"Custom Search API\"\nCreate API credentials → API Key\nGo to Programmable Search Engine\nCreate a search engine with instagram.com as the site to search\nCopy the Search Engine ID\nUsage\nAgent Tool Interface\n\nFor OpenClaw agent integration, the skill provides JSON output:\n\n# Discover profiles (returns JSON)\ndiscover --location \"Miami\" --category \"fitness\" --output json\n\n# Scrape single profile (returns JSON)\nscrape --username influencer123 --output json\n\nOutput Data\nProfile Data Structure\n{\n  \"username\": 
\"example_user\",\n  \"full_name\": \"Example User\",\n  \"bio\": \"Fashion blogger | NYC\",\n  \"followers\": 125000,\n  \"following\": 1500,\n  \"posts_count\": 450,\n  \"is_verified\": false,\n  \"is_private\": false,\n  \"influencer_tier\": \"mid\",\n  \"category\": \"fashion\",\n  \"location\": \"New York\",\n  \"profile_pic_local\": \"thumbnails/example_user/profile_abc123.jpg\",\n  \"content_thumbnails\": [\n    \"thumbnails/example_user/content_1_def456.jpg\",\n    \"thumbnails/example_user/content_2_ghi789.jpg\"\n  ],\n  \"post_engagement\": [\n    {\"post_url\": \"https://instagram.com/p/ABC123/\", \"likes\": 5420, \"comments\": 89}\n  ],\n  \"scrape_timestamp\": \"2025-02-09T14:30:00\"\n}\n\nInfluencer Tiers\nTier\tFollower Range\nnano\t< 1,000\nmicro\t1,000 - 10,000\nmid\t10,000 - 100,000\nmacro\t100,000 - 1M\nmega\t> 1,000,000\nFile Outputs\nQueue files: data/queue/{location}_{category}_{timestamp}.json\nScraped data: data/output/{username}.json\nThumbnails: thumbnails/{username}/profile_*.jpg, thumbnails/{username}/content_*.jpg\nExport files: data/export_{timestamp}.json, data/export_{timestamp}.csv\nConfiguration\n\nEdit config/scraper_config.json:\n\n{\n  \"proxy\": {\n    \"enabled\": false,\n    \"provider\": \"brightdata\",\n    \"country\": \"\",\n    \"sticky\": true,\n    \"sticky_ttl_minutes\": 10\n  },\n  \"google_search\": {\n    \"enabled\": true,\n    \"api_key\": \"\",\n    \"search_engine_id\": \"\",\n    \"queries_per_location\": 3\n  },\n  \"scraper\": {\n    \"headless\": false,\n    \"min_followers\": 1000,\n    \"download_thumbnails\": true,\n    \"max_thumbnails\": 6\n  },\n  \"cities\": [\"New York\", \"Los Angeles\", \"Miami\", \"Chicago\"],\n  \"categories\": [\"fashion\", \"beauty\", \"fitness\", \"food\", \"travel\", \"tech\"]\n}\n\nFilters Applied\n\nThe scraper automatically filters out:\n\n❌ Private accounts\n❌ Accounts with < 1,000 followers (configurable)\n❌ Accounts with no posts\n❌ Non-existent/removed accounts\n❌ 
Already scraped accounts (deduplication)\nTroubleshooting\nLogin Issues\nEnsure credentials are correct\nHandle verification codes when prompted\nWait if rate limited (the script will auto-retry)\nNo Profiles Discovered\nCheck Google API key and quota\nVerify Search Engine ID is configured for instagram.com\nTry different location/category combinations\nRate Limiting\nReduce scraping speed (increase delays in config)\nRun during off-peak hours\nUse a residential proxy (see below)\n🌐 Residential Proxy Support\nWhy Use a Residential Proxy?\n\nRunning a scraper at scale without a residential proxy will get your IP blocked fast. Here's why proxies are essential for long-running scrapes:\n\nAdvantage\tDescription\nAvoid IP Bans\tResidential IPs look like real household users, not data-center bots. Instagram is far less likely to flag them.\nAutomatic IP Rotation\tEach request (or session) gets a fresh IP, so rate-limits never stack up on one address.\nGeo-Targeting\tRoute traffic through a specific country/city so scraped content matches the target audience's locale.\nSticky Sessions\tKeep the same IP for a configurable window (e.g. 10 min) — critical for maintaining a consistent browsing session.\nHigher Success Rate\tRotating residential IPs deliver 95%+ success rates compared to ~30% with data-center proxies on Instagram.\nLong-Running Scrapes\tScrape thousands of profiles over hours or days without interruption.\nConcurrent Scraping\tRun multiple browser instances across different IPs simultaneously.\nRecommended Proxy Providers\n\nWe have affiliate partnerships with top residential proxy providers. 
Using these links supports continued development of this skill:\n\nProvider\tBest For\tSign Up\nBright Data\tWorld's largest network, 72M+ IPs, enterprise-grade\t👉 Get Bright Data\nIProyal\tPay-as-you-go, 195+ countries, no traffic expiry\t👉 Get IProyal\nStorm Proxies\tFast & reliable, developer-friendly API, competitive pricing\t👉 Get Storm Proxies\nNetNut\tISP-grade network, 52M+ IPs, direct connectivity\t👉 Get NetNut\nSetup Steps\n1. Get Your Proxy Credentials\n\nSign up with any provider above, then grab:\n\nUsername (from your provider dashboard)\nPassword (from your provider dashboard)\nHost and Port are pre-configured per provider (or use custom)\n2. Configure via Environment Variables\nexport PROXY_ENABLED=true\nexport PROXY_PROVIDER=brightdata    # brightdata | iproyal | stormproxies | netnut | custom\nexport PROXY_USERNAME=your_user\nexport PROXY_PASSWORD=your_pass\nexport PROXY_COUNTRY=us             # optional: two-letter country code\nexport PROXY_STICKY=true            # optional: keep same IP per session\n\n3. Provider-Specific Host/Port Defaults\n\nThese are auto-configured when you set the provider name:\n\nProvider\tHost\tPort\nBright Data\tbrd.superproxy.io\t22225\nIProyal\tproxy.iproyal.com\t12321\nStorm Proxies\trotating.stormproxies.com\t9999\nNetNut\tgw-resi.netnut.io\t5959\n\nOverride with PROXY_HOST / PROXY_PORT env vars if your plan uses a different gateway.\n\n4. 
Custom Proxy Provider\n\nFor any other proxy service, set provider to custom and supply host/port manually:\n\n{\n  \"proxy\": {\n    \"enabled\": true,\n    \"provider\": \"custom\",\n    \"host\": \"your.proxy.host\",\n    \"port\": 8080,\n    \"username\": \"user\",\n    \"password\": \"pass\"\n  }\n}\n\nRunning the Scraper with Proxy\n\nOnce configured, the scraper picks up the proxy automatically — no extra flags needed:\n\n# Discover and scrape as usual — proxy is applied automatically\npython main.py discover --location \"Miami\" --category \"fitness\"\npython main.py scrape --username influencer123\n\n# The log will confirm proxy is active:\n# INFO - Proxy enabled: <ProxyManager provider=brightdata enabled host=brd.superproxy.io:22225>\n# INFO - Browser using proxy: brightdata → brd.superproxy.io:22225\n\nUsing the Proxy Manager Programmatically\nfrom proxy_manager import ProxyManager\n\n# From config (auto-reads config/scraper_config.json)\npm = ProxyManager.from_config()\n\n# From environment variables\npm = ProxyManager.from_env()\n\n# Manual construction\npm = ProxyManager(\n    provider=\"brightdata\",\n    username=\"your_user\",\n    password=\"your_pass\",\n    country=\"us\",\n    sticky=True\n)\n\n# For Playwright browser context\nproxy = pm.get_playwright_proxy()\n# → {\"server\": \"http://brd.superproxy.io:22225\", \"username\": \"user-country-us-session-abc123\", \"password\": \"pass\"}\n\n# For requests / aiohttp\nproxies = pm.get_requests_proxy()\n# → {\"http\": \"http://user:pass@host:port\", \"https\": \"http://user:pass@host:port\"}\n\n# Force new IP (rotates session ID)\npm.rotate_session()\n\n# Debug info\nprint(pm.info())\n\nBest Practices for Long-Running Scrapes\nUse sticky sessions — Instagram requires consistent IPs during a browsing session. 
Set \"sticky\": true.\nTarget the right country — Set \"country\": \"us\" (or your target region) so Instagram serves content in the expected locale.\nCombine with existing anti-detection — This scraper already has fingerprinting, stealth scripts, and human behavior simulation. The proxy is the final layer.\nRotate sessions between batches — Call pm.rotate_session() between large batches of profiles to get a fresh IP.\nUse delays — Even with proxies, respect delay_between_profiles in config to avoid aggressive patterns.\nMonitor your proxy dashboard — All providers have dashboards showing bandwidth usage and success rates."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/ArulmozhiV/instagram-scraper",
    "publisherUrl": "https://clawhub.ai/ArulmozhiV/instagram-scraper",
    "owner": "ArulmozhiV",
    "version": "1.0.7",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/instagram-scraper",
    "downloadUrl": "https://openagent3.xyz/downloads/instagram-scraper",
    "agentUrl": "https://openagent3.xyz/skills/instagram-scraper/agent",
    "manifestUrl": "https://openagent3.xyz/skills/instagram-scraper/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/instagram-scraper/agent.md"
  }
}