Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Fetch real-time social media data from X (Twitter) and Reddit by keyword, username, date range, and filters with engagement metrics via Macrocosmos SN13 API.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Fetch real-time social media data from X (Twitter) and Reddit by keyword, username, date range, and filters with engagement metrics via Macrocosmos SN13 API on Bittensor.
- name: macrocosmos-social-data
- version: 1.0.1
- homepage: https://github.com/macrocosm-os/macrocosmos-mcp
- source: https://github.com/macrocosm-os/macrocosmos-mcp
- pypi: https://pypi.org/project/macrocosmos-mcp
- subnet: Bittensor SN13 (Data Universe)
- author: Macrocosmos AI
- license: MIT
| Variable | Required | Type | Description |
|---|---|---|---|
| MC_API | Yes | secret | Macrocosmos API key. Required for all API requests. Get your free key at https://app.macrocosmos.ai/account?tab=api-keys |

Setup: The MC_API key must be set as an environment variable. It is passed as a Bearer token in the Authorization header for REST calls, or provided directly to the Python SDK client.
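The setup note above can be wired up in a few lines. A minimal sketch of reading MC_API from the environment and building REST headers (the helper name `auth_headers` is ours, not part of the SDK):

```python
import os


def auth_headers():
    """Build REST headers with the MC_API key from the environment."""
    api_key = os.environ.get("MC_API")
    if not api_key:
        raise RuntimeError(
            "MC_API is not set; create a key at "
            "https://app.macrocosmos.ai/account?tab=api-keys"
        )
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
```

Failing fast here surfaces a missing key before any request is sent, rather than as a 401 later.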
```http
POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData
Content-Type: application/json
Authorization: Bearer <YOUR_MC_API_KEY>
```

```json
{
  "source": "X",
  "usernames": ["@elonmusk"],
  "keywords": ["AI", "bittensor"],
  "start_date": "2026-01-01",
  "end_date": "2026-02-10",
  "limit": 10,
  "keyword_mode": "any"
}
```
| Parameter | Type | Required | Description |
|---|---|---|---|
| source | string | Yes | "X" or "REDDIT" (case-sensitive) |
| usernames | array | No | Up to 5 usernames. @ optional. X only (not available for Reddit) |
| keywords | array | No | Up to 5 keywords/hashtags. For Reddit, use subreddit format "r/subreddit" |
| start_date | string | No | YYYY-MM-DD or ISO format. Defaults to 24h ago |
| end_date | string | No | YYYY-MM-DD or ISO format. Defaults to now |
| limit | int | No | 1-1000 results. Default: 10 |
| keyword_mode | string | No | "any" (default) matches ANY keyword; "all" requires ALL keywords |
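The documented constraints can be checked client-side to avoid round-trips on malformed requests. A sketch under the limits in the table above (`build_payload` is a hypothetical helper, not an SDK function):

```python
def build_payload(source, keywords=None, usernames=None,
                  start_date=None, end_date=None,
                  limit=10, keyword_mode="any"):
    """Validate OnDemandData parameters against the documented limits."""
    if source not in ("X", "REDDIT"):
        raise ValueError('source must be "X" or "REDDIT" (case-sensitive)')
    if usernames and source == "REDDIT":
        raise ValueError("usernames are supported for X only")
    if keywords and len(keywords) > 5:
        raise ValueError("at most 5 keywords/hashtags")
    if usernames and len(usernames) > 5:
        raise ValueError("at most 5 usernames")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    if keyword_mode not in ("any", "all"):
        raise ValueError('keyword_mode must be "any" or "all"')
    payload = {"source": source, "limit": limit, "keyword_mode": keyword_mode}
    # Only include optional fields that were actually provided.
    if keywords:
        payload["keywords"] = keywords
    if usernames:
        payload["usernames"] = usernames
    if start_date:
        payload["start_date"] = start_date
    if end_date:
        payload["end_date"] = end_date
    return payload
```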
```json
{
  "data": [
    {
      "datetime": "2026-02-10T17:30:58Z",
      "source": "x",
      "text": "Tweet content here",
      "uri": "https://x.com/username/status/123456",
      "user": {
        "username": "example_user",
        "display_name": "Example User",
        "followers_count": 1500,
        "following_count": 300,
        "user_description": "Bio text",
        "user_blue_verified": true,
        "profile_image_url": "https://pbs.twimg.com/..."
      },
      "tweet": {
        "id": "123456",
        "like_count": 42,
        "retweet_count": 10,
        "reply_count": 5,
        "quote_count": 2,
        "view_count": 5000,
        "bookmark_count": 3,
        "hashtags": ["#AI", "#bittensor"],
        "language": "en",
        "is_reply": false,
        "is_quote": false,
        "conversation_id": "123456"
      }
    }
  ]
}
```
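The per-tweet metrics in the response make simple relevance ranking straightforward. A sketch that scores items from the `tweet` object above (the weighting is our choice, not anything the API prescribes):

```python
def rank_by_engagement(response):
    """Sort results by a simple engagement score built from the metrics."""
    def score(item):
        t = item.get("tweet", {})
        # Weight retweets more heavily than likes/replies/quotes.
        return (t.get("like_count", 0)
                + 2 * t.get("retweet_count", 0)
                + t.get("reply_count", 0)
                + t.get("quote_count", 0))
    return sorted(response.get("data", []), key=score, reverse=True)
```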
Keyword search:

```shell
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```
Username search:

```shell
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@MacrocosmosAI"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```
Intersection of keywords (keyword_mode "all"):

```shell
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["chutes", "bittensor"],
    "keyword_mode": "all",
    "start_date": "2026-01-01",
    "limit": 20
  }'
```
Reddit search by subreddit and keyword:

```shell
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "REDDIT",
    "keywords": ["r/MachineLearning", "transformers"],
    "start_date": "2026-02-01",
    "limit": 50
  }'
```
Username plus keyword:

```shell
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@opentensor"],
    "keywords": ["subnet"],
    "start_date": "2026-01-01",
    "limit": 20
  }'
```
```python
import asyncio

import macrocosmos as mc


async def search_tweets():
    client = mc.AsyncSn13Client(api_key="YOUR_API_KEY")
    response = await client.sn13.OnDemandData(
        source="X",
        keywords=["bittensor"],
        usernames=[],
        start_date="2026-01-01",
        end_date=None,
        limit=10,
        keyword_mode="any",
    )
    if hasattr(response, "model_dump"):
        data = response.model_dump()
        for tweet in data["data"]:
            print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
            print(f"  Likes: {tweet['tweet']['like_count']} | Views: {tweet['tweet']['view_count']}")


asyncio.run(search_tweets())
```
```python
import requests

url = "https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",
}
payload = {
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10,
}

response = requests.post(url, json=payload, headers=headers)
data = response.json()
for tweet in data["data"]:
    print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
```
What works well:
- High-volume keyword searches: popular terms like "bittensor", "AI", "iran", "lfg" return fast
- Wider date ranges: setting start_date further back (e.g., weeks/months) improves results
- keyword_mode "all": great for finding the intersection of two topics (e.g., "chutes" AND "bittensor")
Known limitations:
- Username-only queries: can time out (DEADLINE_EXCEEDED). Setting start_date far back helps
- Niche/low-volume keywords: very specific terms may time out if miners don't have the data indexed
- No start_date: defaults to the last 24h, which can miss data; set it explicitly for best results
- Always set start_date; don't rely on the 24h default. Use at least 7 days back for user queries
- Prefer keywords over usernames; keyword searches are more reliable
- For username queries, always include a start_date set weeks/months back
- Use keyword_mode "all" when combining a topic with a subtopic (e.g., "bittensor" + "chutes")
- Handle timeouts gracefully; if a query times out, retry with a broader date range or switch to a keyword search
- Parse engagement metrics; view_count, like_count, and retweet_count help rank relevance
- Check is_reply and is_quote to filter original tweets vs. replies depending on the use case
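The "handle timeouts gracefully" advice can be captured in a small retry wrapper. A sketch that widens the start_date window on each attempt (the helper name and window sizes are our choice):

```python
from datetime import date, timedelta


def query_with_fallback(fetch, window_days=(7, 30, 90)):
    """Call fetch(start_date) with progressively earlier start dates.

    fetch should raise TimeoutError on DEADLINE_EXCEEDED; any other
    exception propagates immediately.
    """
    last_err = None
    for days in window_days:
        start = (date.today() - timedelta(days=days)).isoformat()
        try:
            return fetch(start)
        except TimeoutError as err:
            last_err = err  # widen the window and retry
    raise last_err
```

`fetch` would wrap whichever client you use (curl via subprocess, requests, or the SDK); only the retry policy lives here.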
For datasets larger than 1000 results, use the Gravity endpoints:
```http
POST /gravity.v1.GravityService/CreateGravityTask
```

```json
{
  "gravity_tasks": [
    {"platform": "x", "topic": "#bittensor", "keyword": "dTAO"}
  ],
  "name": "Bittensor dTAO Collection"
}
```

Note: X topics MUST start with # or $. Reddit topics use subreddit format.
```http
POST /gravity.v1.GravityService/GetGravityTasks
```

```json
{
  "gravity_task_id": "multicrawler-xxxx-xxxx",
  "include_crawlers": true
}
```
```http
POST /gravity.v1.GravityService/BuildDataset
```

```json
{
  "crawler_id": "crawler-0-multicrawler-xxxx",
  "max_rows": 10000
}
```

Warning: Building stops the crawler permanently.
```http
POST /gravity.v1.GravityService/GetDataset
```

```json
{
  "dataset_id": "dataset-xxxx-xxxx"
}
```

Returns Parquet file download URLs when complete.
- Quick query (< 1000 results): OnDemandData → instant results
- Large collection (7-day crawl): CreateGravityTask → GetGravityTasks (monitor) → BuildDataset → GetDataset (download)
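For the large-collection path, the monitor and download steps are just poll loops. A generic sketch (the `check` callable would wrap GetGravityTasks or GetDataset; the helper itself is ours, not part of the API):

```python
import time


def poll_until(check, interval_s=5.0, timeout_s=600.0):
    """Poll check() until it returns a truthy value (e.g. a ready dataset)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval_s)
    raise TimeoutError("task did not complete within the timeout")
```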
| Error | Cause | Fix |
|---|---|---|
| 401 Unauthorized | Missing or invalid API key | Check the Authorization: Bearer header |
| 500 Internal Server Error | Server-side issue (often auth via gRPC) | Verify the API key, then retry |
| DEADLINE_EXCEEDED | Query timeout; miners can't fulfill the request | Use a broader date range or switch to a keyword search |
| Empty data array | No matching results | Broaden search terms or date range |
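The troubleshooting table maps directly to code. A sketch of a diagnosis helper covering the documented failure modes (the function is hypothetical, for illustration only):

```python
def diagnose(status_code, body_text=""):
    """Suggest the documented fix for a failed OnDemandData call."""
    if status_code == 401:
        return "check the Authorization: Bearer header and the MC_API key"
    if status_code == 500:
        return "verify the API key, then retry"
    if "DEADLINE_EXCEEDED" in body_text:
        return "use a broader date range or switch to a keyword search"
    return "broaden search terms or the date range"
```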