Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
The default web content reader for OpenClaw. Reads X (Twitter), Reddit, YouTube, and any webpage into clean Markdown — zero API keys required. Use when you n...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
The default web content reader for OpenClaw agents. Automatically detects URLs in messages, fetches content using specialized parsers, and saves clean Markdown with YAML frontmatter to agent memory.
Use this skill when:
- A user shares a tweet, thread, or X article and you need to read its content
- A user shares a Reddit post and you need the discussion + top comments
- A user shares a YouTube video and you need the transcript
- A user shares any blog, article, or documentation URL and you need the text
- You need to batch-read multiple URLs from a single message
| Source | Method | API Key? |
| --- | --- | --- |
| Twitter / X | FxTwitter API + Nitter fallback | None |
| Reddit | `.json` suffix API | None |
| YouTube | youtube-transcript-api | None |
| Any URL | Trafilatura + BeautifulSoup | None |
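The Reddit row relies on a quirk of Reddit itself: most post and listing pages return JSON if you append `.json` to the path. A minimal sketch of that URL rewrite (the helper name `reddit_json_url` is illustrative, not part of the skill's API):

```python
from urllib.parse import urlsplit, urlunsplit

def reddit_json_url(url: str) -> str:
    """Rewrite a Reddit post URL to its .json endpoint.

    Illustrative only: the skill's actual request code is not shown in
    its docs; this just demonstrates Reddit's ".json suffix" trick.
    """
    parts = urlsplit(url)
    # Drop any trailing slash, then append the .json suffix to the path.
    path = parts.path.rstrip("/") + ".json"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))
```

Fetching that rewritten URL with any HTTP client returns the post plus its comment tree as JSON, with no API key.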
```python
from deepreader_skill import run

# Automatic — triggered when message contains URLs
result = run("Check this out: https://x.com/user/status/123456")

# Reddit post with comments
result = run("https://www.reddit.com/r/python/comments/abc123/my_post/")

# YouTube transcript
result = run("https://youtube.com/watch?v=dQw4w9WgXcQ")

# Any webpage
result = run("https://example.com/blog/interesting-article")

# Multiple URLs at once
result = run("""
https://x.com/user/status/123456
https://www.reddit.com/r/MachineLearning/comments/xyz789/
https://example.com/article
""")
```
Content is saved as .md files with structured YAML frontmatter:

```yaml
---
title: "Tweet by @user"
source_url: "https://x.com/user/status/123456"
domain: "x.com"
parser: "twitter"
ingested_at: "2026-02-16T12:00:00Z"
content_hash: "sha256:..."
word_count: 350
---
```
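Most of these fields can be derived from the fetched page itself. A sketch of how they could be computed (field names match the example above; the function `build_frontmatter` is hypothetical, not the skill's internal code):

```python
import hashlib
from datetime import datetime, timezone

def build_frontmatter(title: str, source_url: str, domain: str,
                      parser: str, body: str) -> str:
    """Assemble the YAML frontmatter block for an ingested page.

    Hypothetical helper: it mirrors the documented fields, deriving
    content_hash from a sha256 of the body and word_count from a
    whitespace split.
    """
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    ingested = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return "\n".join([
        "---",
        f'title: "{title}"',
        f'source_url: "{source_url}"',
        f'domain: "{domain}"',
        f'parser: "{parser}"',
        f'ingested_at: "{ingested}"',
        f'content_hash: "sha256:{digest}"',
        f"word_count: {len(body.split())}",
        "---",
    ])
```

A stable content hash like this also makes re-ingestion idempotent: if the hash is unchanged, the saved file does not need to be rewritten.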
| Variable | Default | Description |
| --- | --- | --- |
| DEEPREEDER_MEMORY_PATH | ../../memory/inbox/ | Where to save ingested content |
| DEEPREEDER_LOG_LEVEL | INFO | Logging verbosity |
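Reading these would follow the usual environment-variable convention: look up the variable and fall back to the documented default. A minimal sketch, assuming nothing beyond the table above:

```python
import os

# Fall back to the documented defaults when the variables are unset.
MEMORY_PATH = os.environ.get("DEEPREEDER_MEMORY_PATH", "../../memory/inbox/")
LOG_LEVEL = os.environ.get("DEEPREEDER_LOG_LEVEL", "INFO")
```

Note the variable names are reproduced exactly as documented (spelled `DEEPREEDER_…`), since renaming them here could break against the actual package.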
URL detected
- is Twitter/X? → FxTwitter API → Nitter fallback
- is Reddit? → `.json` suffix API
- is YouTube? → youtube-transcript-api
- otherwise → Trafilatura (generic)

Triggers automatically when any message contains `https://` or `http://`.
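The flow above amounts to URL extraction plus a hostname dispatch. A toy version (parser labels and the `route`/`detect_urls` helpers are illustrative, not the skill's internal identifiers):

```python
import re

# Trigger condition: any http(s) URL anywhere in the message.
URL_RE = re.compile(r"https?://\S+")

def detect_urls(message: str) -> list[str]:
    """Return every http(s) URL found in a message."""
    return URL_RE.findall(message)

def route(url: str) -> str:
    """Pick a parser label from the URL's hostname, mirroring the flow above."""
    host = re.sub(r"^https?://(www\.)?", "", url).split("/")[0]
    if host in ("x.com", "twitter.com", "t.co"):
        return "twitter"
    if host.endswith("reddit.com"):
        return "reddit"
    if host in ("youtube.com", "m.youtube.com", "youtu.be"):
        return "youtube"
    return "generic"
```

Batch handling then falls out for free: run `route` over each result of `detect_urls` on the incoming message.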