Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Build a deduplicated digest from X (Twitter) For You and Following timelines using bird. Outputs a payload for upstream delivery.
Instead of installing manually, hand the extracted package to your coding agent with a concrete brief.

Install:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
This skill uses bird to read X/Twitter timelines and build a high-signal digest.

Sources:
- For You timeline
- Following timeline

What it does:
- Fetch recent tweets
- Filter incrementally (avoid reprocessing)
- Deduplicate (ID + near-duplicate text)
- Rank and trim
- Generate a Chinese digest
- Output a structured payload

Delivery (Telegram, email, etc.) is NOT handled here. Upstream OpenClaw workflows decide how to notify users.
All config is read from `skills.entries["x-timeline-digest"].config`.
| Name | Type | Default | Description |
|---|---|---|---|
| `intervalHours` | number | 6 | Interval window in hours |
| `fetchLimitForYou` | number | 100 | Tweets fetched from For You |
| `fetchLimitFollowing` | number | 60 | Tweets fetched from Following |
| `maxItemsPerDigest` | number | 25 | Max tweets in one digest |
| `similarityThreshold` | number | 0.9 | Near-duplicate similarity threshold |
| `statePath` | string | `~/.openclaw/state/x-timeline-digest.json` | State file path |
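Put together, the defaults above correspond to a config block like the following. This is a sketch: the surrounding file layout is inferred from the `skills.entries["x-timeline-digest"].config` key path, not confirmed by the package docs.

```json
{
  "skills": {
    "entries": {
      "x-timeline-digest": {
        "config": {
          "intervalHours": 6,
          "fetchLimitForYou": 100,
          "fetchLimitFollowing": 60,
          "maxItemsPerDigest": 25,
          "similarityThreshold": 0.9,
          "statePath": "~/.openclaw/state/x-timeline-digest.json"
        }
      }
    }
  }
}
```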
- bird must be installed and available in PATH
- bird must already be authenticated (cookie login)
- Read-only usage
Run the digest generator to get a clean, deduplicated JSON payload:

```shell
node skills/x-timeline-digest/digest.js
```
To generate the "Smart Brief" (categorized, summarized, denoised):

1. Run the script: `node skills/x-timeline-digest/digest.js > digest.json`
2. Read the prompt template: `skills/x-timeline-digest/PROMPT.md`
3. Send the prompt to your LLM, injecting the content of `digest.json` where `{{JSON_DATA}}` appears.

Note: the script automatically applies heuristic filtering (removes "gm", ads, short spam) before outputting JSON.
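The injection step above can be sketched as follows. `buildPrompt` is an illustrative helper, not part of the package; only the `{{JSON_DATA}}` placeholder and the file paths come from the docs.

```javascript
// Illustrative helper (not part of the package): splice the digest JSON
// into the prompt template at the {{JSON_DATA}} placeholder.
function buildPrompt(template, digestJson) {
  // String patterns replace only the first match, which is what we want
  // for a single placeholder.
  return template.replace('{{JSON_DATA}}', digestJson);
}

// Usage with the paths from the steps above:
//   const fs = require('fs');
//   const prompt = buildPrompt(
//     fs.readFileSync('skills/x-timeline-digest/PROMPT.md', 'utf8'),
//     fs.readFileSync('digest.json', 'utf8'));
```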
State is persisted to `statePath`:

```json
{
  "lastRunAt": "2026-02-01T00:00:00+08:00",
  "sentTweetIds": {
    "123456789": "2026-02-01T00:00:00+08:00"
  }
}
```
- Tweets already in `sentTweetIds` must not be included again.
- After a successful run:
  - Update `lastRunAt`
  - Add pushed tweet IDs to `sentTweetIds`
  - Keep IDs for at least 30 days
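The state rules above can be sketched as two helpers. `filterNew` and `recordRun` are illustrative names, not functions exported by the package; the field names and the 30-day retention come from the docs.

```javascript
// Minimum retention for seen tweet IDs (30 days, per the skill docs).
const RETENTION_MS = 30 * 24 * 60 * 60 * 1000;

// Drop tweets whose IDs are already recorded in state.sentTweetIds.
function filterNew(tweets, state) {
  return tweets.filter(t => !(t.id in state.sentTweetIds));
}

// After a successful run: update lastRunAt, record pushed IDs,
// and prune IDs older than the retention window.
function recordRun(state, pushedTweets, now = new Date()) {
  const iso = now.toISOString();
  state.lastRunAt = iso;
  for (const t of pushedTweets) state.sentTweetIds[t.id] = iso;
  for (const [id, seenAt] of Object.entries(state.sentTweetIds)) {
    if (now - new Date(seenAt) > RETENTION_MS) delete state.sentTweetIds[id];
  }
  return state;
}
```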
Pipeline:
1. Fetch from For You and Following
2. Incremental filter using `lastRunAt`
3. Hard deduplication by tweet id
4. Near-duplicate merge using text similarity
5. Rank and trim to `maxItemsPerDigest`
6. Generate a categorized Chinese digest (via PROMPT.md + LLM)

Digest format:
- Categories: 🤖 AI & Tech, 💰 Crypto & Markets, 💡 Insights, 🗂️ Other
- Language: Simplified Chinese
- Format: Author: Summary
- Denoising: remove ads and low-value content
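The near-duplicate merge step (step 4) could look like the sketch below. The actual metric used by digest.js is not documented here, so Jaccard token overlap is an assumption; the 0.9 default mirrors `similarityThreshold`.

```javascript
// Jaccard similarity over lowercase whitespace tokens (assumed metric;
// digest.js may use something else, e.g. shingles or edit distance).
function similarity(a, b) {
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  let inter = 0;
  for (const w of ta) if (tb.has(w)) inter++;
  const union = ta.size + tb.size - inter;
  return union === 0 ? 1 : inter / union;
}

// Keep the first tweet of each near-duplicate cluster.
function dedupe(tweets, threshold = 0.9) {
  const kept = [];
  for (const t of tweets) {
    if (!kept.some(k => similarity(k.text, t.text) >= threshold)) kept.push(t);
  }
  return kept;
}
```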
The skill returns one JSON object:

```json
{
  "window": {
    "start": "2026-02-01T00:00:00+08:00",
    "end": "2026-02-01T06:00:00+08:00",
    "intervalHours": 6
  },
  "counts": {
    "forYouFetched": 100,
    "followingFetched": 60,
    "afterIncremental": 34,
    "afterDedup": 26,
    "final": 20
  },
  "digestText": "中文摘要内容",
  "items": [
    {
      "id": "123456",
      "author": "@handle",
      "createdAt": "2026-02-01T02:15:00+08:00",
      "text": "tweet text",
      "url": "https://x.com/handle/status/123456",
      "sources": ["following"]
    }
  ]
}
```
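As an illustration of consuming this payload upstream, a workflow might reduce it to a one-line summary. The `summarize` helper is hypothetical; only the field names come from the schema above.

```javascript
// Hypothetical upstream consumer: one-line run summary from the payload.
function summarize(payload) {
  const { counts, window: w } = payload;
  const fetched = counts.forYouFetched + counts.followingFetched;
  return `${w.start} → ${w.end}: ${counts.final}/${fetched} tweets kept`;
}
```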