# Send Macrocosmos to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of walking through the setup manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "social-data",
    "name": "Macrocosmos",
    "source": "tencent",
    "type": "skill",
    "category": "Developer Tools",
    "sourceUrl": "https://clawhub.ai/Arrmlet/social-data",
    "canonicalUrl": "https://clawhub.ai/Arrmlet/social-data",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/social-data",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=social-data",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "social-data",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-05T06:06:08.914Z",
      "expiresAt": "2026-05-12T06:06:08.914Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=social-data",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=social-data",
        "contentDisposition": "attachment; filename=\"social-data-1.0.4.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "social-data"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/social-data"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/social-data",
    "downloadUrl": "https://openagent3.xyz/downloads/social-data",
    "agentUrl": "https://openagent3.xyz/skills/social-data/agent",
    "manifestUrl": "https://openagent3.xyz/skills/social-data/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/social-data/agent.md"
  }
}
```
## Documentation

### Macrocosmos SN13 API - Social Media Data Skill

Fetch real-time social media data from X (Twitter) and Reddit, filtered by keyword, username, and date range and enriched with engagement metrics, via the Macrocosmos SN13 API on Bittensor.

### Metadata

- name: macrocosmos-social-data
- version: 1.0.1
- homepage: https://github.com/macrocosm-os/macrocosmos-mcp
- source: https://github.com/macrocosm-os/macrocosmos-mcp
- pypi: https://pypi.org/project/macrocosmos-mcp
- subnet: Bittensor SN13 (Data Universe)
- author: Macrocosmos AI
- license: MIT

### Required Environment Variables

| Variable | Required | Type | Description |
| --- | --- | --- | --- |
| MC_API | Yes | secret | Macrocosmos API key. Required for all API requests. Get your free key at https://app.macrocosmos.ai/account?tab=api-keys |

Setup: The MC_API key must be set as an environment variable. It is passed as a Bearer token in the Authorization header for REST calls, or provided directly to the Python SDK client.
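
As a minimal sketch of the REST side of that setup, the key can be read from the environment and turned into request headers (the helper name `auth_headers` is illustrative, not part of the API):

```python
import os

def auth_headers() -> dict:
    """Build REST headers from the MC_API environment variable.

    Fails early if the key is missing, since every API request needs it.
    """
    api_key = os.environ.get("MC_API")
    if not api_key:
        raise RuntimeError("MC_API environment variable is not set")
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
```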

### API Endpoint

```text
POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData
```

### Headers

```text
Content-Type: application/json
Authorization: Bearer <YOUR_MC_API_KEY>
```

### Request Format

```json
{
  "source": "X",
  "usernames": ["@elonmusk"],
  "keywords": ["AI", "bittensor"],
  "start_date": "2026-01-01",
  "end_date": "2026-02-10",
  "limit": 10,
  "keyword_mode": "any"
}
```

### Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| source | string | Yes | "X" or "REDDIT" (case-sensitive) |
| usernames | array | No | Up to 5 usernames. @ optional. X only (not available for Reddit) |
| keywords | array | No | Up to 5 keywords/hashtags. For Reddit: use subreddit format "r/subreddit" |
| start_date | string | No | YYYY-MM-DD or ISO format. Defaults to 24h ago |
| end_date | string | No | YYYY-MM-DD or ISO format. Defaults to now |
| limit | int | No | 1-1000 results. Default: 10 |
| keyword_mode | string | No | "any" (default) matches ANY keyword, "all" requires ALL keywords |
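
These limits can be enforced client-side before a request goes out. A sketch under the rules stated above (the function name `build_request` is illustrative):

```python
def build_request(source, usernames=None, keywords=None,
                  start_date=None, end_date=None, limit=10, keyword_mode="any"):
    """Assemble an OnDemandData payload, enforcing the documented limits."""
    if source not in ("X", "REDDIT"):
        raise ValueError('source must be "X" or "REDDIT" (case-sensitive)')
    if usernames and len(usernames) > 5:
        raise ValueError("at most 5 usernames are allowed")
    if keywords and len(keywords) > 5:
        raise ValueError("at most 5 keywords are allowed")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    payload = {"source": source, "limit": limit, "keyword_mode": keyword_mode}
    # Optional fields are omitted rather than sent as null.
    if usernames:
        payload["usernames"] = usernames
    if keywords:
        payload["keywords"] = keywords
    if start_date:
        payload["start_date"] = start_date
    if end_date:
        payload["end_date"] = end_date
    return payload
```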

### Response Format

```json
{
  "data": [
    {
      "datetime": "2026-02-10T17:30:58Z",
      "source": "x",
      "text": "Tweet content here",
      "uri": "https://x.com/username/status/123456",
      "user": {
        "username": "example_user",
        "display_name": "Example User",
        "followers_count": 1500,
        "following_count": 300,
        "user_description": "Bio text",
        "user_blue_verified": true,
        "profile_image_url": "https://pbs.twimg.com/..."
      },
      "tweet": {
        "id": "123456",
        "like_count": 42,
        "retweet_count": 10,
        "reply_count": 5,
        "quote_count": 2,
        "view_count": 5000,
        "bookmark_count": 3,
        "hashtags": ["#AI", "#bittensor"],
        "language": "en",
        "is_reply": false,
        "is_quote": false,
        "conversation_id": "123456"
      }
    }
  ]
}
```
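
Because each result carries engagement counts, responses can be re-ranked client-side. A hedged sketch using the field names from the response example above (the weighting is arbitrary and the function name is illustrative):

```python
def rank_by_engagement(data: dict) -> list:
    """Sort posts by a naive engagement score: likes + 2x retweets + replies."""
    def score(post: dict) -> int:
        t = post.get("tweet", {})
        return (t.get("like_count", 0)
                + 2 * t.get("retweet_count", 0)
                + t.get("reply_count", 0))
    return sorted(data.get("data", []), key=score, reverse=True)
```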

### 1. Keyword Search on X

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```

### 2. Fetch Tweets from a Specific User

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@MacrocosmosAI"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```

### 3. Multi-Keyword AND Search

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["chutes", "bittensor"],
    "keyword_mode": "all",
    "start_date": "2026-01-01",
    "limit": 20
  }'
```

### 4. Reddit Search

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "REDDIT",
    "keywords": ["r/MachineLearning", "transformers"],
    "start_date": "2026-02-01",
    "limit": 50
  }'
```

### 5. User + Keyword Filter

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@opentensor"],
    "keywords": ["subnet"],
    "start_date": "2026-01-01",
    "limit": 20
  }'
```

### Using the macrocosmos SDK

```python
import asyncio
import macrocosmos as mc

async def search_tweets():
    client = mc.AsyncSn13Client(api_key="YOUR_API_KEY")

    response = await client.sn13.OnDemandData(
        source="X",
        keywords=["bittensor"],
        usernames=[],
        start_date="2026-01-01",
        end_date=None,
        limit=10,
        keyword_mode="any",
    )

    # Normalize the response to a plain dict; newer SDK versions return a
    # Pydantic model exposing model_dump(), older ones may return a dict.
    data = response.model_dump() if hasattr(response, "model_dump") else response

    for tweet in data["data"]:
        print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
        print(f"  Likes: {tweet['tweet']['like_count']} | Views: {tweet['tweet']['view_count']}")

asyncio.run(search_tweets())
```

### Using requests (REST)

```python
import requests

url = "https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
}
payload = {
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10
}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()  # surface auth/server errors instead of failing on .json()
data = response.json()

for tweet in data["data"]:
    print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
```

### What works reliably

- High-volume keyword searches: popular terms like "bittensor", "AI", "iran", "lfg" return quickly
- Wider date ranges: setting start_date further back (e.g., weeks or months) improves results
- keyword_mode: "all": great for finding the intersection of two topics (e.g., "chutes" AND "bittensor")

### What can be flaky

- Username-only queries: can time out (DEADLINE_EXCEEDED); setting start_date far back helps
- Niche/low-volume keywords: very specific terms may time out if miners have not indexed the data
- No start_date: defaults to the last 24h, which can miss data; set it explicitly for best results

### Best practices for LLM agents

- Always set start_date: don't rely on the 24h default; use at least 7 days back for user queries
- Prefer keywords over usernames: keyword searches are more reliable
- For username queries, always include a start_date set weeks or months back
- Use keyword_mode: "all" when combining a topic with a subtopic (e.g., "bittensor" + "chutes")
- Handle timeouts gracefully: if a query times out, retry with a broader date range or switch to keyword search
- Parse engagement metrics: view_count, like_count, and retweet_count help rank relevance
- Check is_reply and is_quote: filter for original tweets vs. replies depending on the use case
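
The timeout advice above can be sketched as a small helper that pushes start_date further back before a retry (a pure function with an illustrative name; pair it with the retry logic of your HTTP client of choice):

```python
from datetime import date, timedelta

def widen_date_range(payload: dict, days_back: int = 30) -> dict:
    """Return a copy of the request payload with start_date pushed back,
    the usual first fix for a DEADLINE_EXCEEDED timeout."""
    widened = dict(payload)  # leave the caller's payload untouched
    widened["start_date"] = (date.today() - timedelta(days=days_back)).isoformat()
    return widened
```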

### Gravity API (Large-Scale Collection)

For datasets larger than 1000 results, use the Gravity endpoints:

### Create Task

```text
POST /gravity.v1.GravityService/CreateGravityTask
```

```json
{
  "gravity_tasks": [
    {"platform": "x", "topic": "#bittensor", "keyword": "dTAO"}
  ],
  "name": "Bittensor dTAO Collection"
}
```

Note: X topics MUST start with # or $. Reddit topics use subreddit format.
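
That topic rule can be checked before submission. A minimal sketch under the rules stated in the note above (the function name is illustrative):

```python
def validate_gravity_task(task: dict) -> dict:
    """Enforce the documented topic-format rules for a Gravity task."""
    topic = task.get("topic", "")
    if task.get("platform") == "x" and not topic.startswith(("#", "$")):
        raise ValueError("X topics must start with # or $")
    if task.get("platform") == "reddit" and not topic.startswith("r/"):
        raise ValueError("Reddit topics use subreddit format, e.g. r/MachineLearning")
    return task
```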

### Check Status

```text
POST /gravity.v1.GravityService/GetGravityTasks
```

```json
{
  "gravity_task_id": "multicrawler-xxxx-xxxx",
  "include_crawlers": true
}
```

### Build Dataset

```text
POST /gravity.v1.GravityService/BuildDataset
```

```json
{
  "crawler_id": "crawler-0-multicrawler-xxxx",
  "max_rows": 10000
}
```

Warning: Building stops the crawler permanently.

### Get Dataset Download

```text
POST /gravity.v1.GravityService/GetDataset
```

```json
{
  "dataset_id": "dataset-xxxx-xxxx"
}
```

Returns Parquet file download URLs when complete.

### Workflow Summary

```text
Quick Query (< 1000 results):
  OnDemandData → instant results

Large Collection (7-day crawl):
  CreateGravityTask → GetGravityTasks (monitor) → BuildDataset → GetDataset (download)
```

### Error Reference

| Error | Cause | Fix |
| --- | --- | --- |
| 401 Unauthorized | Missing or invalid API key | Check the Authorization: Bearer header |
| 500 Internal Server Error | Server-side issue (often auth via gRPC) | Verify the API key, then retry |
| DEADLINE_EXCEEDED | Query timeout; miners can't fulfill the request | Use a broader date range, or switch to keyword search |
| Empty data array | No matching results | Broaden search terms or the date range |
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: Arrmlet
- Version: 1.0.4
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-05-05T06:06:08.914Z
- Expires at: 2026-05-12T06:06:08.914Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/social-data)
- [Send to Agent page](https://openagent3.xyz/skills/social-data/agent)
- [JSON manifest](https://openagent3.xyz/skills/social-data/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/social-data/agent.md)
- [Download page](https://openagent3.xyz/downloads/social-data)