โ† All skills
Tencent SkillHub · Developer Tools

Macrocosmos

Fetch real-time social media data from X (Twitter) and Reddit by keyword, username, date range, and filters with engagement metrics via Macrocosmos SN13 API.



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.4

Documentation

Primary doc: SKILL.md (25 sections)

Macrocosmos SN13 API - Social Media Data Skill

Fetch real-time social media data from X (Twitter) and Reddit by keyword, username, date range, and filters with engagement metrics via Macrocosmos SN13 API on Bittensor.

Metadata

```yaml
name: macrocosmos-social-data
version: 1.0.1
homepage: https://github.com/macrocosm-os/macrocosmos-mcp
source: https://github.com/macrocosm-os/macrocosmos-mcp
pypi: https://pypi.org/project/macrocosmos-mcp
subnet: Bittensor SN13 (Data Universe)
author: Macrocosmos AI
license: MIT
```

Required Environment Variables

| Variable | Required | Type | Description |
| --- | --- | --- | --- |
| MC_API | Yes | secret | Macrocosmos API key. Required for all API requests. Get your free key at https://app.macrocosmos.ai/account?tab=api-keys |

Setup: The MC_API key must be set as an environment variable. It is passed as a Bearer token in the Authorization header for REST calls, or provided directly to the Python SDK client.
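As a minimal sketch of the setup note above, the REST headers can be built from the environment. `auth_headers` is a hypothetical helper, not part of the SDK; only the MC_API variable and the documented endpoint come from this doc.

```python
import os

# Documented REST endpoint for on-demand queries.
API_URL = "https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"


def auth_headers() -> dict:
    """Build REST headers from the MC_API environment variable.

    auth_headers is illustrative only, not part of the macrocosmos SDK.
    """
    api_key = os.environ.get("MC_API")
    if not api_key:
        raise RuntimeError("MC_API is not set; export your Macrocosmos API key first")
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
```

Failing fast on a missing key gives a clearer error than the 401 the API would otherwise return.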

API Endpoint

POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData

Headers

```
Content-Type: application/json
Authorization: Bearer <YOUR_MC_API_KEY>
```

Request Format

```json
{
  "source": "X",
  "usernames": ["@elonmusk"],
  "keywords": ["AI", "bittensor"],
  "start_date": "2026-01-01",
  "end_date": "2026-02-10",
  "limit": 10,
  "keyword_mode": "any"
}
```

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| source | string | Yes | "X" or "REDDIT" (case-sensitive) |
| usernames | array | No | Up to 5 usernames. @ optional. X only (not available for Reddit) |
| keywords | array | No | Up to 5 keywords/hashtags. For Reddit: use subreddit format "r/subreddit" |
| start_date | string | No | YYYY-MM-DD or ISO format. Defaults to 24h ago |
| end_date | string | No | YYYY-MM-DD or ISO format. Defaults to now |
| limit | int | No | 1-1000 results. Default: 10 |
| keyword_mode | string | No | "any" (default) matches ANY keyword, "all" requires ALL keywords |
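These constraints can be checked client-side before a request is sent. `validate_payload` is a hypothetical helper that mirrors the documented limits; the server enforces its own rules regardless.

```python
def validate_payload(payload: dict) -> dict:
    """Check an OnDemandData payload against the documented parameter limits.

    validate_payload is illustrative only, not part of the API or SDK.
    """
    # source is case-sensitive: exactly "X" or "REDDIT".
    if payload.get("source") not in ("X", "REDDIT"):
        raise ValueError('source must be exactly "X" or "REDDIT" (case-sensitive)')
    # usernames are X-only.
    if payload["source"] == "REDDIT" and payload.get("usernames"):
        raise ValueError("usernames are X-only and not available for Reddit")
    # At most 5 usernames and 5 keywords.
    for field in ("usernames", "keywords"):
        if len(payload.get(field) or []) > 5:
            raise ValueError(f"{field} accepts at most 5 entries")
    # limit must fall in 1-1000 (default 10).
    if not 1 <= payload.get("limit", 10) <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    if payload.get("keyword_mode", "any") not in ("any", "all"):
        raise ValueError('keyword_mode must be "any" or "all"')
    return payload
```

Catching these mistakes locally avoids burning a round trip on a request the API will reject or silently mishandle.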

Response Format

```json
{
  "data": [
    {
      "datetime": "2026-02-10T17:30:58Z",
      "source": "x",
      "text": "Tweet content here",
      "uri": "https://x.com/username/status/123456",
      "user": {
        "username": "example_user",
        "display_name": "Example User",
        "followers_count": 1500,
        "following_count": 300,
        "user_description": "Bio text",
        "user_blue_verified": true,
        "profile_image_url": "https://pbs.twimg.com/..."
      },
      "tweet": {
        "id": "123456",
        "like_count": 42,
        "retweet_count": 10,
        "reply_count": 5,
        "quote_count": 2,
        "view_count": 5000,
        "bookmark_count": 3,
        "hashtags": ["#AI", "#bittensor"],
        "language": "en",
        "is_reply": false,
        "is_quote": false,
        "conversation_id": "123456"
      }
    }
  ]
}
```
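One way to use the engagement fields in the response is to rank items before handing them to an agent. This is a sketch: the scoring weights are arbitrary illustration, not part of the API.

```python
def rank_by_engagement(items: list) -> list:
    """Sort response items by a simple score: likes + 2*retweets + replies.

    The weighting is an arbitrary example; tune it for your use case.
    """
    def score(item: dict) -> int:
        t = item.get("tweet") or {}
        return (
            t.get("like_count", 0)
            + 2 * t.get("retweet_count", 0)
            + t.get("reply_count", 0)
        )

    return sorted(items, key=score, reverse=True)
```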

1. Keyword Search on X

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```

2. Fetch Tweets from a Specific User

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@MacrocosmosAI"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```

3. Multi-Keyword AND Search

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["chutes", "bittensor"],
    "keyword_mode": "all",
    "start_date": "2026-01-01",
    "limit": 20
  }'
```

4. Reddit Search

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "REDDIT",
    "keywords": ["r/MachineLearning", "transformers"],
    "start_date": "2026-02-01",
    "limit": 50
  }'
```

5. User + Keyword Filter

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@opentensor"],
    "keywords": ["subnet"],
    "start_date": "2026-01-01",
    "limit": 20
  }'
```

Using the macrocosmos SDK

```python
import asyncio

import macrocosmos as mc


async def search_tweets():
    client = mc.AsyncSn13Client(api_key="YOUR_API_KEY")
    response = await client.sn13.OnDemandData(
        source="X",
        keywords=["bittensor"],
        usernames=[],
        start_date="2026-01-01",
        end_date=None,
        limit=10,
        keyword_mode="any",
    )
    if hasattr(response, "model_dump"):
        data = response.model_dump()
        for tweet in data["data"]:
            print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
            print(f"  Likes: {tweet['tweet']['like_count']} | Views: {tweet['tweet']['view_count']}")


asyncio.run(search_tweets())
```

Using requests (REST)

```python
import requests

url = "https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",
}
payload = {
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10,
}

response = requests.post(url, json=payload, headers=headers)
data = response.json()
for tweet in data["data"]:
    print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
```

What works reliably

  • High-volume keyword searches: popular terms like "bittensor", "AI", "iran", "lfg" return fast
  • Wider date ranges: setting start_date further back (e.g., weeks or months) improves results
  • keyword_mode: "all": great for finding the intersection of two topics (e.g., "chutes" AND "bittensor")

What can be flaky

  • Username-only queries: can time out (DEADLINE_EXCEEDED). Setting start_date further back helps
  • Niche or low-volume keywords: very specific terms may time out if miners don't have the data indexed
  • No start_date: defaults to the last 24 hours, which can miss data; set it explicitly for best results

Best practices for LLM agents

  • Always set start_date; don't rely on the 24h default. Use at least 7 days back for user queries
  • Prefer keywords over usernames; keyword searches are more reliable
  • For username queries, always include a start_date set weeks or months back
  • Use keyword_mode: "all" when combining a topic with a subtopic (e.g., "bittensor" + "chutes")
  • Handle timeouts gracefully: if a query times out, retry with a broader date range or switch to a keyword search
  • Parse engagement metrics: view_count, like_count, and retweet_count help rank relevance
  • Check is_reply and is_quote: filter for original tweets vs. replies depending on the use case

Gravity API (Large-Scale Collection)

For datasets larger than 1000 results, use the Gravity endpoints:

Create Task

```
POST /gravity.v1.GravityService/CreateGravityTask

{
  "gravity_tasks": [
    {"platform": "x", "topic": "#bittensor", "keyword": "dTAO"}
  ],
  "name": "Bittensor dTAO Collection"
}
```

Note: X topics MUST start with # or $. Reddit topics use subreddit format.

Check Status

```
POST /gravity.v1.GravityService/GetGravityTasks

{
  "gravity_task_id": "multicrawler-xxxx-xxxx",
  "include_crawlers": true
}
```

Build Dataset

```
POST /gravity.v1.GravityService/BuildDataset

{
  "crawler_id": "crawler-0-multicrawler-xxxx",
  "max_rows": 10000
}
```

Warning: Building stops the crawler permanently.

Get Dataset Download

```
POST /gravity.v1.GravityService/GetDataset

{
  "dataset_id": "dataset-xxxx-xxxx"
}
```

Returns Parquet file download URLs when complete.

Workflow Summary

  • Quick query (< 1000 results): OnDemandData → instant results
  • Large collection (7-day crawl): CreateGravityTask → GetGravityTasks (monitor) → BuildDataset → GetDataset (download)
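The monitoring step of the large-collection workflow can be sketched as a poll loop. Assumptions to note: `call` is any caller-supplied function that POSTs a JSON body to the given Gravity path and returns the parsed response, and the `crawlers` field in that response is inferred from the `include_crawlers` flag; verify the real response shape against the live API.

```python
import time


def wait_for_crawlers(call, task_id: str, poll_seconds: float = 5.0, max_polls: int = 60):
    """Poll GetGravityTasks until crawler IDs appear, then return them.

    `call(path, body)` is an injected transport function (hypothetical);
    the `crawlers` response field is an assumption, not confirmed by this doc.
    """
    for attempt in range(max_polls):
        status = call(
            "/gravity.v1.GravityService/GetGravityTasks",
            {"gravity_task_id": task_id, "include_crawlers": True},
        )
        crawlers = status.get("crawlers") or []
        if crawlers:
            return crawlers
        if attempt < max_polls - 1:
            time.sleep(poll_seconds)
    raise TimeoutError(f"no crawlers reported for {task_id}")
```

Once crawler IDs are in hand, BuildDataset and GetDataset follow; remember that building stops the crawler permanently.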

Error Reference

| Error | Cause | Fix |
| --- | --- | --- |
| 401 Unauthorized | Missing or invalid API key | Check the Authorization: Bearer header |
| 500 Internal Server Error | Server-side issue (often auth via gRPC) | Verify the API key, then retry |
| DEADLINE_EXCEEDED | Query timeout; miners can't fulfill the request | Use a broader date range or switch to a keyword search |
| Empty data array | No matching results | Broaden the search terms or date range |
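For agent-side handling, the error reference can be folded into a small triage helper. `triage` is hypothetical, not part of the API; it only restates the fixes from the table above.

```python
def triage(status_code: int, body) -> str:
    """Map a response (status code plus parsed or raw body) to a suggested fix."""
    if status_code == 401:
        return "check the Authorization: Bearer header"
    if status_code == 500:
        return "verify the API key, then retry"
    # Timeout errors surface as DEADLINE_EXCEEDED text in the body.
    if "DEADLINE_EXCEEDED" in str(body):
        return "use a broader date range or switch to a keyword search"
    # A successful call with no results still warrants broadening the query.
    if isinstance(body, dict) and body.get("data") == []:
        return "broaden the search terms or date range"
    return "ok"
```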

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 Docs
  • SKILL.md Primary doc