Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
General-purpose X/Twitter research agent. Searches X for real-time perspectives, dev discussions, product feedback, cultural takes, breaking news, and expert...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
General-purpose agentic research over X/Twitter. Decompose any research question into targeted searches, iteratively refine, follow threads, deep-dive linked content, and synthesize into a sourced briefing. For twitterapi.io API details (endpoints, operators, response format): read references/x-api.md.
All commands run from this skill directory:

```shell
cd ~/clawd/skills/x-research
source ~/.config/env/global.env  # needs TWITTERAPI_IO_KEY
```
```shell
bun run x-search.ts search "<query>" [options]
```

Options:
- `--sort likes|impressions|retweets|recent`: sort order (default: likes)
- `--since 1h|3h|12h|1d|7d`: time filter (default: last 7 days); also accepts minutes (30m) or ISO timestamps
- `--min-likes N`: filter by minimum likes
- `--min-impressions N`: filter by minimum impressions
- `--pages N`: pages to fetch, 1-25 (default: 5, ~20 tweets/page)
- `--limit N`: max results to display (default: 15)
- `--quick`: quick mode: 1 page, max 10 results, auto noise filter (`-is:retweet -is:reply`), 1hr cache, cost summary
- `--from <username>`: shorthand for `from:username` in the query
- `--quality`: filter low-engagement tweets (≥10 likes, post-hoc)
- `--no-replies`: exclude replies
- `--save`: save results to `~/clawd/drafts/x-research-{slug}-{date}.md`
- `--json`: raw JSON output
- `--markdown`: markdown output for research docs

Auto-adds `-is:retweet` unless the query already includes it. All searches display estimated API cost.

Note: twitterapi.io search covers the full archive (not limited to 7 days). Time filtering uses the `since:` operator in the query.

Examples:

```shell
bun run x-search.ts search "BNKR" --sort likes --limit 10
bun run x-search.ts search "from:frankdegods" --sort recent
bun run x-search.ts search "(opus 4.6 OR claude) trading" --pages 2 --save
bun run x-search.ts search "$BNKR (revenue OR fees)" --min-likes 5
bun run x-search.ts search "BNKR" --quick
bun run x-search.ts search "BNKR" --from voidcider --quick
bun run x-search.ts search "AI agents" --quality --quick
```
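The auto-added `-is:retweet` behavior can be sketched as a small pure function. This is a hypothetical helper, not the actual `x-search.ts` implementation; the operator itself comes from the skill docs.

```typescript
// Append the noise-filter operator unless the query already includes it,
// mirroring the "auto-adds -is:retweet" rule described above.
function withNoiseFilter(query: string): string {
  return query.includes("-is:retweet") ? query : `${query} -is:retweet`;
}
```

Because the check is idempotent, explicitly writing `-is:retweet` (or negating it) in a query is never clobbered.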
```shell
bun run x-search.ts profile <username> [--count N] [--replies] [--json]
```

Fetches recent tweets from a specific user (excludes replies by default).
```shell
bun run x-search.ts thread <tweet_id> [--pages N]
```

Fetches the full conversation thread by root tweet ID.
```shell
bun run x-search.ts tweet <tweet_id> [--json]
```

Fetches a single tweet by ID.
```shell
bun run x-search.ts watchlist                    # show all
bun run x-search.ts watchlist add <user> [note]  # add account
bun run x-search.ts watchlist remove <user>      # remove account
bun run x-search.ts watchlist check              # check recent from all
```

Watchlist stored in `data/watchlist.json`. Use for heartbeat integration: check whether key accounts posted anything important.
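The add/remove subcommands can be thought of as pure operations on the JSON file's contents. The field names below are an assumption about the shape of `data/watchlist.json`, not its documented schema.

```typescript
// Assumed watchlist shape: username -> optional note + timestamp.
type Watchlist = Record<string, { note?: string; addedAt: string }>;

// Add an account, normalizing the username to lowercase.
function addAccount(list: Watchlist, user: string, note?: string): Watchlist {
  return { ...list, [user.toLowerCase()]: { note, addedAt: new Date().toISOString() } };
}

// Remove an account by splitting it out of the record.
function removeAccount(list: Watchlist, user: string): Watchlist {
  const { [user.toLowerCase()]: _removed, ...rest } = list;
  return rest;
}
```

Keeping these operations pure makes the JSON file the single source of truth: the CLI only needs to read, transform, and rewrite it.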
```shell
bun run x-search.ts cache clear  # clear all cached results
```

15-minute TTL avoids re-fetching identical queries.
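The freshness rule behind the cache is simple: reuse a cached entry only if it is younger than 15 minutes. `lib/cache.ts` is file-based; this sketch shows only the TTL check, with hypothetical names.

```typescript
// 15-minute TTL, matching the cache behavior described above.
const TTL_MS = 15 * 60 * 1000;

// A cached entry is fresh if less than TTL_MS has elapsed since it was written.
function isFresh(cachedAtMs: number, nowMs: number = Date.now()): boolean {
  return nowMs - cachedAtMs < TTL_MS;
}
```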
When doing deep research (not just a quick search), follow this loop:
Turn the research question into 3-5 keyword queries using X search operators:
- Core query: direct keywords for the topic
- Expert voices: `from:` specific known experts
- Pain points: keywords like `(broken OR bug OR issue OR migration)`
- Positive signal: keywords like `(shipped OR love OR fast OR benchmark)`
- Links: `url:github.com` or `url:` specific domains
- Noise reduction: `-is:retweet` (auto-added); add `-is:reply` if needed
- Crypto spam: add `-airdrop -giveaway -whitelist` if crypto topics flood results
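The decomposition above can be sketched as a query-variant builder. The function and parameter names are hypothetical; only the operators come from the skill docs.

```typescript
// Build query variants for one research topic, one per recipe above.
function decompose(topic: string, experts: string[] = []): string[] {
  const queries = [
    topic,                                             // core query
    `${topic} (broken OR bug OR issue OR migration)`,  // pain points
    `${topic} (shipped OR love OR fast OR benchmark)`, // positive signal
    `${topic} url:github.com`,                         // linked resources
  ];
  for (const expert of experts) {
    queries.push(`from:${expert} ${topic}`);           // expert voices
  }
  return queries;
}
```

Each variant then goes through the CLI separately, so noisy recipes can be dropped or tightened without touching the others.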
Run each query via the CLI. After each, assess:
- Signal or noise? Adjust operators.
- Key voices worth searching `from:` specifically?
- Threads worth following via the thread command?
- Linked resources worth deep-diving with web_fetch?
When a tweet has high engagement or is a thread starter:

```shell
bun run x-search.ts thread <tweet_id>
```
When tweets link to GitHub repos, blog posts, or docs, fetch them with web_fetch. Prioritize links that:
- multiple tweets reference
- come from high-engagement tweets
- point to technical resources directly relevant to the question
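One way to implement that prioritization is to score each URL by how many tweets reference it, weighted by engagement. The `Tweet` shape and the weighting here are illustrative simplifications, not the real API response or a documented ranking.

```typescript
// Minimal tweet shape for link ranking (a simplification).
interface Tweet { likes: number; urls: string[] }

// Rank URLs: +1 per referencing tweet, plus a small engagement bonus.
function rankLinks(tweets: Tweet[]): string[] {
  const score = new Map<string, number>();
  for (const t of tweets) {
    for (const url of t.urls) {
      score.set(url, (score.get(url) ?? 0) + 1 + t.likes / 100);
    }
  }
  return [...score.entries()].sort((a, b) => b[1] - a[1]).map(([url]) => url);
}
```

The top few ranked URLs are the natural candidates for a web_fetch deep dive.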
Use the `--save` flag, or save manually to `~/clawd/drafts/x-research-{topic-slug}-{YYYY-MM-DD}.md`.
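For manual saves, the filename convention can be sketched as slug-plus-ISO-date. The helper name is hypothetical; the path template comes from the docs above.

```typescript
// Build the draft path: lowercase slug from the topic, plus YYYY-MM-DD.
function draftPath(topic: string, date: Date = new Date()): string {
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `~/clawd/drafts/x-research-${slug}-${date.toISOString().slice(0, 10)}.md`;
}
```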
- Too much noise? Add `-is:reply`, use `--sort likes`, narrow keywords.
- Too few results? Broaden with OR, remove restrictive operators.
- Crypto spam? Add `-$ -airdrop -giveaway -whitelist`.
- Expert takes only? Use `from:` or `--min-likes 50`.
- Substance over hot takes? Search with `has:links`.
On heartbeat, the agent can run `watchlist check` to see whether key accounts posted anything notable. Flag to Frank only if it is genuinely interesting or actionable; don't report routine tweets.
```
skills/x-research/
├── SKILL.md           (this file)
├── x-search.ts        (CLI entry point)
├── lib/
│   ├── api.ts         (twitterapi.io wrapper: search, thread, profile, tweet)
│   ├── cache.ts       (file-based cache, 15min TTL)
│   └── format.ts      (Telegram + markdown formatters)
├── data/
│   ├── watchlist.json (accounts to monitor)
│   └── cache/         (auto-managed)
└── references/
    └── x-api.md       (twitterapi.io endpoint reference)
```