{
  "schemaVersion": "1.0",
  "item": {
    "slug": "social-data",
    "name": "Macrocosmos",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/Arrmlet/social-data",
    "canonicalUrl": "https://clawhub.ai/Arrmlet/social-data",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/social-data",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=social-data",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/social-data"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/social-data",
    "agentPageUrl": "https://openagent3.xyz/skills/social-data/agent",
    "manifestUrl": "https://openagent3.xyz/skills/social-data/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/social-data/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Macrocosmos SN13 API - Social Media Data Skill",
        "body": "Fetch real-time social media data from X (Twitter) and Reddit by keyword, username, date range, and filters with engagement metrics via Macrocosmos SN13 API on Bittensor."
      },
      {
        "title": "Metadata",
        "body": "name: macrocosmos-social-data\nversion: 1.0.1\nhomepage: https://github.com/macrocosm-os/macrocosmos-mcp\nsource: https://github.com/macrocosm-os/macrocosmos-mcp\npypi: https://pypi.org/project/macrocosmos-mcp\nsubnet: Bittensor SN13 (Data Universe)\nauthor: Macrocosmos AI\nlicense: MIT"
      },
      {
        "title": "Required Environment Variables",
        "body": "VariableRequiredTypeDescriptionMC_APIYessecretMacrocosmos API key. Required for all API requests. Get your free key at https://app.macrocosmos.ai/account?tab=api-keys\n\nSetup: The MC_API key must be set as an environment variable. It is passed as a Bearer token in the Authorization header for REST calls, or provided directly to the Python SDK client."
      },
      {
        "title": "API Endpoint",
        "body": "POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"
      },
      {
        "title": "Headers",
        "body": "Content-Type: application/json\nAuthorization: Bearer <YOUR_MC_API_KEY>"
      },
      {
        "title": "Request Format",
        "body": "{\n  \"source\": \"X\",\n  \"usernames\": [\"@elonmusk\"],\n  \"keywords\": [\"AI\", \"bittensor\"],\n  \"start_date\": \"2026-01-01\",\n  \"end_date\": \"2026-02-10\",\n  \"limit\": 10,\n  \"keyword_mode\": \"any\"\n}"
      },
      {
        "title": "Parameters",
        "body": "ParameterTypeRequiredDescriptionsourcestringYes\"X\" or \"REDDIT\" (case-sensitive)usernamesarrayNoUp to 5 usernames. @ optional. X only (not available for Reddit)keywordsarrayNoUp to 5 keywords/hashtags. For Reddit: use subreddit format \"r/subreddit\"start_datestringNoYYYY-MM-DD or ISO format. Defaults to 24h agoend_datestringNoYYYY-MM-DD or ISO format. Defaults to nowlimitintNo1-1000 results. Default: 10keyword_modestringNo\"any\" (default) matches ANY keyword, \"all\" requires ALL keywords"
      },
      {
        "title": "Response Format",
        "body": "{\n  \"data\": [\n    {\n      \"datetime\": \"2026-02-10T17:30:58Z\",\n      \"source\": \"x\",\n      \"text\": \"Tweet content here\",\n      \"uri\": \"https://x.com/username/status/123456\",\n      \"user\": {\n        \"username\": \"example_user\",\n        \"display_name\": \"Example User\",\n        \"followers_count\": 1500,\n        \"following_count\": 300,\n        \"user_description\": \"Bio text\",\n        \"user_blue_verified\": true,\n        \"profile_image_url\": \"https://pbs.twimg.com/...\"\n      },\n      \"tweet\": {\n        \"id\": \"123456\",\n        \"like_count\": 42,\n        \"retweet_count\": 10,\n        \"reply_count\": 5,\n        \"quote_count\": 2,\n        \"view_count\": 5000,\n        \"bookmark_count\": 3,\n        \"hashtags\": [\"#AI\", \"#bittensor\"],\n        \"language\": \"en\",\n        \"is_reply\": false,\n        \"is_quote\": false,\n        \"conversation_id\": \"123456\"\n      }\n    }\n  ]\n}"
      },
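      {
        "title": "Example: Ranking Results by Engagement",
        "body": "Given a response shaped as above, results can be reordered by engagement before further processing. A minimal sketch (the scoring weights are an arbitrary illustration, not an API feature):\n\ndef rank_by_engagement(data):\n    # Sort tweets by a simple engagement score, highest first\n    def score(item):\n        t = item.get(\"tweet\", {})\n        return t.get(\"like_count\", 0) + t.get(\"retweet_count\", 0) * 2\n    return sorted(data.get(\"data\", []), key=score, reverse=True)"
      },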
      {
        "title": "1. Keyword Search on X",
        "body": "curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer YOUR_API_KEY\" \\\n  -d '{\n    \"source\": \"X\",\n    \"keywords\": [\"bittensor\"],\n    \"start_date\": \"2026-01-01\",\n    \"limit\": 10\n  }'"
      },
      {
        "title": "2. Fetch Tweets from a Specific User",
        "body": "curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer YOUR_API_KEY\" \\\n  -d '{\n    \"source\": \"X\",\n    \"usernames\": [\"@MacrocosmosAI\"],\n    \"start_date\": \"2026-01-01\",\n    \"limit\": 10\n  }'"
      },
      {
        "title": "3. Multi-Keyword AND Search",
        "body": "curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer YOUR_API_KEY\" \\\n  -d '{\n    \"source\": \"X\",\n    \"keywords\": [\"chutes\", \"bittensor\"],\n    \"keyword_mode\": \"all\",\n    \"start_date\": \"2026-01-01\",\n    \"limit\": 20\n  }'"
      },
      {
        "title": "4. Reddit Search",
        "body": "curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer YOUR_API_KEY\" \\\n  -d '{\n    \"source\": \"REDDIT\",\n    \"keywords\": [\"r/MachineLearning\", \"transformers\"],\n    \"start_date\": \"2026-02-01\",\n    \"limit\": 50\n  }'"
      },
      {
        "title": "5. User + Keyword Filter",
        "body": "curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer YOUR_API_KEY\" \\\n  -d '{\n    \"source\": \"X\",\n    \"usernames\": [\"@opentensor\"],\n    \"keywords\": [\"subnet\"],\n    \"start_date\": \"2026-01-01\",\n    \"limit\": 20\n  }'"
      },
      {
        "title": "Using the macrocosmos SDK",
        "body": "import asyncio\nimport macrocosmos as mc\n\nasync def search_tweets():\n    client = mc.AsyncSn13Client(api_key=\"YOUR_API_KEY\")\n\n    response = await client.sn13.OnDemandData(\n        source=\"X\",\n        keywords=[\"bittensor\"],\n        usernames=[],\n        start_date=\"2026-01-01\",\n        end_date=None,\n        limit=10,\n        keyword_mode=\"any\",\n    )\n\n    if hasattr(response, \"model_dump\"):\n        data = response.model_dump()\n\n    for tweet in data[\"data\"]:\n        print(f\"@{tweet['user']['username']}: {tweet['text'][:100]}\")\n        print(f\"  Likes: {tweet['tweet']['like_count']} | Views: {tweet['tweet']['view_count']}\")\n\nasyncio.run(search_tweets())"
      },
      {
        "title": "Using requests (REST)",
        "body": "import requests\n\nurl = \"https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData\"\nheaders = {\n    \"Content-Type\": \"application/json\",\n    \"Authorization\": \"Bearer YOUR_API_KEY\"\n}\npayload = {\n    \"source\": \"X\",\n    \"keywords\": [\"bittensor\"],\n    \"start_date\": \"2026-01-01\",\n    \"limit\": 10\n}\n\nresponse = requests.post(url, json=payload, headers=headers)\ndata = response.json()\n\nfor tweet in data[\"data\"]:\n    print(f\"@{tweet['user']['username']}: {tweet['text'][:100]}\")"
      },
      {
        "title": "What works reliably",
        "body": "High-volume keyword searches: Popular terms like \"bittensor\", \"AI\", \"iran\", \"lfg\" return fast\nWider date ranges: Setting start_date further back (e.g., weeks/months) improves results\nkeyword_mode: \"all\": Great for finding intersection of two topics (e.g., \"chutes\" AND \"bittensor\")"
      },
      {
        "title": "What can be flaky",
        "body": "Username-only queries: Can timeout (DEADLINE_EXCEEDED). Adding start_date far back helps\nNiche/low-volume keywords: Very specific terms may timeout if miners don't have data indexed\nNo start_date: Defaults to last 24h which can miss data; set explicitly for best results"
      },
      {
        "title": "Best practices for LLM agents",
        "body": "Always set start_date — don't rely on the 24h default. Use at least 7 days back for user queries\nPrefer keywords over usernames — keyword searches are more reliable\nFor username queries, always include start_date set weeks/months back\nUse keyword_mode: \"all\" when combining a topic with a subtopic (e.g., \"bittensor\" + \"chutes\")\nHandle timeouts gracefully — if a query times out, retry with broader date range or switch to keyword search\nParse engagement metrics — view_count, like_count, retweet_count help rank relevance\nCheck is_reply and is_quote — filter for original tweets vs replies depending on use case"
      },
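      {
        "title": "Example: Timeout-Tolerant Queries",
        "body": "The practices above can be sketched as a small helper. This is an illustration only, not part of the upstream package; fetch_with_fallback, the retry count, and the fallback start_date are hypothetical choices.\n\nimport time\nimport requests\n\nURL = \"https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData\"\n\ndef fetch_with_fallback(payload, api_key, retries=2):\n    headers = {\n        \"Content-Type\": \"application/json\",\n        \"Authorization\": f\"Bearer {api_key}\",\n    }\n    for attempt in range(retries + 1):\n        try:\n            resp = requests.post(URL, json=payload, headers=headers, timeout=60)\n            resp.raise_for_status()\n            return resp.json()\n        except requests.RequestException:\n            if attempt == retries:\n                raise\n            # Broaden the date window before retrying, per the flakiness notes\n            payload = {**payload, \"start_date\": \"2026-01-01\"}\n            time.sleep(2 ** attempt)"
      },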
      {
        "title": "Gravity API (Large-Scale Collection)",
        "body": "For datasets larger than 1000 results, use the Gravity endpoints:"
      },
      {
        "title": "Create Task",
        "body": "POST /gravity.v1.GravityService/CreateGravityTask\n\n{\n  \"gravity_tasks\": [\n    {\"platform\": \"x\", \"topic\": \"#bittensor\", \"keyword\": \"dTAO\"}\n  ],\n  \"name\": \"Bittensor dTAO Collection\"\n}\n\nNote: X topics MUST start with # or $. Reddit topics use subreddit format."
      },
      {
        "title": "Check Status",
        "body": "POST /gravity.v1.GravityService/GetGravityTasks\n\n{\n  \"gravity_task_id\": \"multicrawler-xxxx-xxxx\",\n  \"include_crawlers\": true\n}"
      },
      {
        "title": "Build Dataset",
        "body": "POST /gravity.v1.GravityService/BuildDataset\n\n{\n  \"crawler_id\": \"crawler-0-multicrawler-xxxx\",\n  \"max_rows\": 10000\n}\n\nWarning: Building stops the crawler permanently."
      },
      {
        "title": "Get Dataset Download",
        "body": "POST /gravity.v1.GravityService/GetDataset\n\n{\n  \"dataset_id\": \"dataset-xxxx-xxxx\"\n}\n\nReturns Parquet file download URLs when complete."
      },
      {
        "title": "Workflow Summary",
        "body": "Quick Query (< 1000 results):\n  OnDemandData → instant results\n\nLarge Collection (7-day crawl):\n  CreateGravityTask → GetGravityTasks (monitor) → BuildDataset → GetDataset (download)"
      },
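      {
        "title": "Example: Gravity Workflow Script",
        "body": "The large-collection workflow above can be sketched as one script. This is an illustration only: the request payloads follow the examples in this document, but the response schema is not documented here and must be verified against the live API.\n\nimport requests\n\nBASE = \"https://constellation.api.cloud.macrocosmos.ai\"\nHEADERS = {\n    \"Content-Type\": \"application/json\",\n    \"Authorization\": \"Bearer YOUR_API_KEY\",\n}\n\ndef call(path, payload):\n    resp = requests.post(BASE + path, json=payload, headers=HEADERS, timeout=60)\n    resp.raise_for_status()\n    return resp.json()\n\n# 1. Start the crawl (X topics must start with # or $)\ntask = call(\"/gravity.v1.GravityService/CreateGravityTask\", {\n    \"gravity_tasks\": [{\"platform\": \"x\", \"topic\": \"#bittensor\"}],\n    \"name\": \"Example Collection\",\n})\n\n# 2. Monitor, 3. build, and 4. fetch follow the same call() pattern with the\n#    GetGravityTasks, BuildDataset, and GetDataset endpoints shown above."
      },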
      {
        "title": "Error Reference",
        "body": "ErrorCauseFix401 UnauthorizedMissing or invalid API keyCheck Authorization: Bearer header500 Internal Server ErrorServer-side issue (often auth via gRPC)Verify API key, retryDEADLINE_EXCEEDEDQuery timeout — miners can't fulfill requestUse broader date range, switch to keyword searchEmpty data arrayNo matching resultsBroaden search terms or date range"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Arrmlet/social-data",
    "publisherUrl": "https://clawhub.ai/Arrmlet/social-data",
    "owner": "Arrmlet",
    "version": "1.0.4",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/social-data",
    "downloadUrl": "https://openagent3.xyz/downloads/social-data",
    "agentUrl": "https://openagent3.xyz/skills/social-data/agent",
    "manifestUrl": "https://openagent3.xyz/skills/social-data/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/social-data/agent.md"
  }
}