Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
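Since the install method is a manual import of an extracted archive, the procedure looks roughly like the sketch below. The archive filename and the ~/.claude/skills destination are assumptions (the destination is inferred from the skill-root lookup in Step 1 further down), not an official procedure:

```bash
# Extract the archive (filename is hypothetical) into a staging directory.
unzip last30days.zip -d last30days-skill

# Copy the skill into one of the directories the runner searches
# for scripts/last30days.py (see the skill-root lookup in Step 1).
mkdir -p "$HOME/.claude/skills"
cp -r last30days-skill "$HOME/.claude/skills/last30days"

# Sanity check: the main script should now resolve.
test -f "$HOME/.claude/skills/last30days/scripts/last30days.py" && echo "installed"
```

If the archive already contains a top-level folder, copy that folder's contents instead, so that scripts/ ends up directly under last30days.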
Research a topic from the last 30 days. Also triggered by 'last30'. Sources: Reddit, X, YouTube, web. Become an expert and write copy-paste-ready prompts.
Use the source page and any available docs to guide the install because the item is currently unstable or timing out.
I tried to install a skill package from Yavira, but the item is currently unstable or timing out. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required. Then review README.md for any prerequisites, environment setup, or post-install checks.
I tried to upgrade a skill package from Yavira, but the item is currently unstable or timing out. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need. Then review README.md for any prerequisites, environment setup, or post-install checks.
Research ANY topic across Reddit, X, YouTube, and the web. Surface what people are actually discussing, recommending, and debating right now.
Step 1: Run the research script (FOREGROUND - do NOT background this)

CRITICAL: Run this command in the FOREGROUND with a 5-minute timeout. Do NOT use run_in_background. The full output contains Reddit, X, AND YouTube data that you need to read completely.

```bash
# Find skill root - works in repo checkout, Claude Code, or Codex install
for dir in \
    "." \
    "${CLAUDE_PLUGIN_ROOT:-}" \
    "$HOME/.claude/skills/last30days" \
    "$HOME/.agents/skills/last30days" \
    "$HOME/.codex/skills/last30days"; do
  [ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done

if [ -z "${SKILL_ROOT:-}" ]; then
  echo "ERROR: Could not find scripts/last30days.py" >&2
  exit 1
fi

python3 "${SKILL_ROOT}/scripts/last30days.py" "$ARGUMENTS" --emit=compact
```

Use a timeout of 300000 (5 minutes) on the Bash call. The script typically takes 1-3 minutes. The script will automatically:
- Detect available API keys
- Run Reddit/X/YouTube searches
- Output ALL results, including YouTube transcripts

Read the ENTIRE output. It contains THREE data sections in this order: Reddit items, X items, and YouTube items. If you miss the YouTube section, you will produce incomplete stats.

YouTube items in the output look like `**{video_id}** (score:N) {channel_name} [N views, N likes]`, followed by a title, URL, and an optional transcript snippet. Count them and include them in your synthesis and stats block; a quick way to verify the count is sketched below.
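As a sanity check on that count, here is a minimal sketch. It assumes the script's full output was captured to output.txt (a hypothetical filename) and keys on the `[N views` suffix that, per the format above, only YouTube items carry:

```bash
# Count lines shaped like: **{video_id}** (score:N) {channel} [N views, N likes]
# output.txt is a hypothetical capture of the script's stdout.
grep -cE '^\*\*[A-Za-z0-9_-]+\*\* \(score:[0-9]+\) .* \[[0-9,]+ views' output.txt
```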
After the script finishes, do WebSearch to supplement with blogs, tutorials, and news. This applies in ALL modes (in web-only mode, WebSearch provides all of the data). Choose search queries based on QUERY_TYPE:

If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- Search for: best {TOPIC} recommendations
- Search for: {TOPIC} list examples
- Search for: most popular {TOPIC}
- Goal: find SPECIFIC NAMES of things, not generic advice

If NEWS ("what's happening with X", "X news"):
- Search for: {TOPIC} news 2026
- Search for: {TOPIC} announcement update
- Goal: find current events and recent developments

If PROMPTING ("X prompts", "prompting for X"):
- Search for: {TOPIC} prompts examples 2026
- Search for: {TOPIC} techniques tips
- Goal: find prompting techniques and examples to create copy-paste prompts

If GENERAL (default):
- Search for: {TOPIC} 2026
- Search for: {TOPIC} discussion
- Goal: find what people are actually saying

For ALL query types:
- USE THE USER'S EXACT TERMINOLOGY - don't substitute or add tech names based on your knowledge
- EXCLUDE reddit.com, x.com, twitter.com (covered by script)
- INCLUDE: blogs, tutorials, docs, news, GitHub repos
- DO NOT output a "Sources:" list - this is noise; stats are shown at the end

Options (passed through from the user's command):
- --days=N: look back N days instead of 30 (e.g., --days=7 for a weekly roundup)
- --quick: faster, fewer sources (8-12 each)
- (default): balanced (20-30 each)
- --deep: comprehensive (50-70 Reddit, 40-60 X)

Two example invocations with these options follow below.
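For concreteness, two invocations with these options passed through. This is a sketch: it reuses the SKILL_ROOT resolved in Step 1, the topic string is hypothetical, and treating these flag combinations as valid together is an assumption:

```bash
# Weekly roundup with fewer sources per platform (topic is hypothetical).
python3 "${SKILL_ROOT}/scripts/last30days.py" "claude code skills" --emit=compact --days=7 --quick

# Comprehensive sweep over the default 30-day window.
python3 "${SKILL_ROOT}/scripts/last30days.py" "claude code skills" --emit=compact --deep
```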
After all searches complete, internally synthesize (don't display stats yet). The Judge Agent must:
- Weight Reddit/X sources HIGHER (they have engagement signals: upvotes, likes)
- Weight YouTube sources HIGH (they have views, likes, and transcript content)
- Weight WebSearch sources LOWER (no engagement data)
- Identify patterns that appear across ALL sources (strongest signals)
- Note any contradictions between sources
- Extract the top 3-5 actionable insights

Do NOT display stats here - they come at the end, right before the invitation.
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge. Read the research output carefully. Pay attention to:
- Exact product/tool names mentioned (e.g., if research mentions "ClawdBot" or "@clawdbot", that is a DIFFERENT product than "Claude Code" - don't conflate them)
- Specific quotes and insights from the sources - use THESE, not generic knowledge
- What the sources actually say, not what you assume the topic is about

ANTI-PATTERN TO AVOID: If the user asks about "clawdbot skills" and the research returns ClawdBot content (a self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
CRITICAL: Extract SPECIFIC NAMES, not generic patterns. When the user asks "best X" or "top X", they want a LIST of specific things:
- Scan the research for specific product names, tool names, project names, skill names, etc.
- Count how many times each is mentioned (a tally sketch follows below)
- Note which sources recommend each (Reddit thread, X post, blog)
- List them by popularity/mention count

BAD synthesis for "best Claude Code skills": "Skills are powerful. Keep them under 500 lines. Use progressive disclosure."

GOOD synthesis for "best Claude Code skills": "Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
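Once you have candidate names, a quick tally over the captured output can back those mention counts. A sketch using the same hypothetical output.txt and the example skill names from the GOOD synthesis above:

```bash
# Case-insensitive tally of candidate names; the names and the capture
# file are illustrative, not part of the skill itself.
grep -oiE '/commit|remotion|git-worktree|/pr' output.txt | sort | uniq -c | sort -rn
```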
Identify from the ACTUAL RESEARCH OUTPUT:
- PROMPT FORMAT - does the research recommend JSON, structured params, natural language, or keywords?
- The top 3-5 patterns/techniques that appeared across multiple sources
- Specific keywords, structures, or approaches mentioned BY THE SOURCES
- Common pitfalls mentioned BY THE SOURCES
After showing the stats summary with your invitation, STOP and wait for the user to respond.
Read their response and match the intent:
- If they ask a QUESTION about the topic → answer from your research (no new searches, no prompt)
- If they ask to GO DEEPER on a subtopic → elaborate using your research findings
- If they describe something they want to CREATE → write ONE perfect prompt (see below)
- If they ask for a PROMPT explicitly → write ONE perfect prompt (see below)

Only write a prompt when the user wants one. Don't force a prompt on someone who asked "what could happen next with Iran."
When the user wants a prompt, write a single, highly-tailored prompt using your research expertise.
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT. ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
- FORMAT MATCHES RESEARCH - if research said JSON/structured/etc., the prompt IS that format
- Directly addresses what the user said they want to create
- Uses specific patterns/keywords discovered in research
- Ready to paste with zero edits (or minimal [PLACEHOLDERS] clearly marked)
- Appropriate length and style for TARGET_TOOL
Here's your prompt for {TARGET_TOOL}:

---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS]
---

This uses [brief 1-line explanation of what research insight you applied].
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
After delivering a prompt, offer to write more: "Want another prompt? Just tell me what you're creating next."
For the rest of this conversation, remember:
- TOPIC: {topic}
- TARGET_TOOL: {tool}
- KEY PATTERNS: {list the top 3-5 patterns you learned}
- RESEARCH FINDINGS: the key facts and insights from the research

CRITICAL: After research is complete, you are now an EXPERT on this topic. When the user asks follow-up questions:
- DO NOT run new WebSearches - you already have the research
- Answer from what you learned - cite the Reddit threads, X posts, and web sources
- If they ask a question - answer it from your research findings
- If they ask for a prompt - write one using your expertise

Only do new research if the user explicitly asks about a DIFFERENT topic.
After delivering a prompt, end with:

---
Expert in: {TOPIC} for {TARGET_TOOL}
Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} YouTube videos ({sum} views) + {n} web pages

Want another prompt? Just tell me what you're creating next.
What this skill does:
- Sends search queries to OpenAI's Responses API (api.openai.com) for Reddit discovery
- Sends search queries to Twitter's GraphQL API (via browser cookie auth) or xAI's API (api.x.ai) for X search
- Runs yt-dlp locally for YouTube search and transcript extraction (no API key, public data)
- Optionally sends search queries to the Brave Search API, Parallel AI API, or OpenRouter API for web search
- Fetches public Reddit thread data from reddit.com for engagement metrics
- Stores research findings in a local SQLite database (watchlist mode only; an inspection sketch follows below)

What this skill does NOT do:
- Does not post, like, or modify content on any platform
- Does not access your Reddit, X, or YouTube accounts
- Does not share API keys between providers (the OpenAI key only goes to api.openai.com, etc.)
- Does not log, cache, or write API keys to output files
- Does not send data to any endpoint not listed above
- Cannot be invoked autonomously by the agent (disable-model-invocation: true)

Bundled scripts: scripts/last30days.py (main research engine), scripts/lib/ (search, enrichment, rendering modules), scripts/lib/vendor/bird-search/ (vendored X search client, MIT licensed). Review the scripts before first use to verify their behavior.
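If you want to see what watchlist mode has stored, the sqlite3 CLI can inspect the database without modifying it. A minimal sketch: the database path is a guess, and no table names are assumed (the schema is dumped rather than queried):

```bash
# Hypothetical path; check the skill's scripts for the real location.
DB="$HOME/.claude/skills/last30days/last30days.db"

# List tables, then dump the schema, without assuming any table names.
sqlite3 "$DB" ".tables"
sqlite3 "$DB" ".schema"
```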
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.