Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Extract and summarize YouTube video transcripts into concise overviews with main points, arguments, and conclusions using video captions.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
tldw ("too long; didn't watch"): extract and summarize YouTube video transcripts quickly and efficiently.
The tldw skill takes a YouTube URL, extracts the video transcript, and provides a comprehensive summary of the content. This allows you to quickly understand video content without watching the entire video.
This skill solves the problem of information overload from video content. Instead of spending 10-60 minutes watching a video, you can get the key points, main arguments, and conclusions in a concise summary within seconds.
Use this skill when:
- A user provides a YouTube video URL and asks for a summary
- You need to quickly understand video content without watching it
- You want to analyze or reference specific video content
- You need to extract information from educational, news, or documentary videos
1. Extraction: uses yt-dlp to download video transcripts (captions/subtitles)
2. Cleaning: applies deduplication to remove artifacts from auto-generated captions
3. Processing: analyzes the cleaned transcript directly in the main agent session
4. Summary: returns a structured summary with main points, key arguments, and conclusions
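The cleaning step can be sketched as a consecutive-duplicate pass. Note this is an illustrative sketch, not the actual logic in extract_transcript.py, which may differ; auto-generated (rolling) captions often repeat each line as the caption window scrolls.

```python
def dedupe_caption_lines(raw):
    """Collapse consecutive duplicate lines, a common artifact of
    YouTube's auto-generated (rolling) captions."""
    cleaned = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue  # drop blank lines
        if cleaned and line == cleaned[-1]:
            continue  # skip a consecutive duplicate
        cleaned.append(line)
    return "\n".join(cleaned)

raw = "hello world\nhello world\nthis is a demo\nthis is a demo\nhello world"
print(dedupe_caption_lines(raw))
```

Because only *consecutive* duplicates are removed, a legitimate refrain that reappears later in the transcript survives (see the deduplication caveats under limitations).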
- Caching: downloaded transcripts are cached locally to avoid re-downloading
- Deduplication: removes duplicate lines common in auto-generated captions
- Multi-format support: works with VTT, SRT, and JSON caption formats
- Cookie support: can access age-restricted content with a cookie file
- Comprehensive summaries: provides thesis, key examples, comparisons, and conclusions
- Fast processing: typical videos are summarized in seconds
This skill is based on the tldw project by stong. Full attribution and licensing details are available in ATTRIBUTION.md.
- Python: 3.8 or higher
- Disk space: ~60MB for the virtual environment and dependencies, plus additional space for the transcript cache
The skill uses a Python virtual environment with the following dependencies:
- yt-dlp: video transcript downloader (installed via pip)
- Python standard library: json, re, argparse (built-in)
All dependencies are installed in the local virtual environment at venv/.
- Cookie file: for accessing age-restricted or members-only content
- Format: Netscape cookie format (can be exported from a browser)
- Place it in the skill directory and reference it with the --cookies flag
tldw/
├── SKILL.md                       # This documentation
├── ATTRIBUTION.md                 # Credit to original project
├── LICENSE                        # AGPL-3.0 license
├── scripts/
│   └── extract_transcript.py      # Main extraction script
├── cache/                         # Cached transcripts (auto-created)
└── venv/                          # Python virtual environment
    ├── bin/
    │   └── yt-dlp                 # Video transcript downloader
    └── lib/                       # Python packages
Follow these steps to set up the tldw skill:
1. Navigate to the skill directory: cd tldw/
2. Create a Python virtual environment: python3 -m venv venv
3. Install dependencies: venv/bin/pip install yt-dlp webvtt-py
4. Verify the installation: venv/bin/yt-dlp --version
The cache directory is created automatically on first use. The skill is now ready to use.
When a user provides a YouTube URL and requests a summary, use the following workflow:
1. Extract the transcript using the extraction script
2. Parse the JSON output to get the cleaned transcript
3. Summarize the transcript directly (do not use sub-agents for large transcripts)
4. Return a structured summary to the user
cd tldw/ && \
  venv/bin/python scripts/extract_transcript.py \
    --json --cache-dir cache "YOUTUBE_URL"
The script returns JSON with the following structure:

{
  "transcript": "Full cleaned transcript text...",
  "video_id": "video_id_here",
  "title": "Video Title",
  "description": "Video description...",
  "duration": 1234,
  "uploader": "Channel Name",
  "upload_date": "20260101",
  "view_count": 12345,
  "webpage_url": "https://www.youtube.com/watch?v=..."
}

Extract the transcript field and process it directly to create a comprehensive summary.
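Parsing that output is a one-liner with the standard library. The helper below is an illustrative sketch (the function name and the split into transcript plus metadata are my own, not part of the skill); it assumes only the documented JSON structure above.

```python
import json

# Sample output matching the documented structure (values are placeholders).
sample = json.dumps({
    "transcript": "Full cleaned transcript text...",
    "video_id": "video_id_here",
    "title": "Video Title",
    "duration": 1234,
    "uploader": "Channel Name",
})

def extract_transcript_field(raw_json):
    """Return the transcript text plus the remaining metadata as a dict."""
    data = json.loads(raw_json)
    transcript = data.pop("transcript")
    return transcript, data

transcript, meta = extract_transcript_field(sample)
print(meta["title"], len(transcript))
```

Keeping the metadata (title, uploader, duration) alongside the transcript is useful for framing the summary you return to the user.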
- --json: output in JSON format (recommended for parsing)
- --cache-dir <path>: specify the cache directory (default: cache/)
- --cookies <file>: path to a Netscape-format cookie file for age-restricted content
# 1. Extract the transcript
cd tldw/ && \
  venv/bin/python scripts/extract_transcript.py \
    --json --cache-dir cache "https://www.youtube.com/watch?v=VIDEO_ID"
# 2. Parse the JSON output and extract the transcript field
# 3. Summarize the transcript directly (include main points, key arguments, conclusions)
# 4. Return the formatted summary to the user
For age-restricted or members-only videos, export cookies from your browser:
1. Install a browser extension such as "Get cookies.txt LOCALLY"
2. Navigate to YouTube while logged in
3. Export cookies in Netscape format
4. Save the file to the tldw directory (e.g., youtube_cookies.txt)
5. Use it with: --cookies youtube_cookies.txt
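A quick sanity check before passing the file to --cookies can catch a bad export (e.g., JSON instead of Netscape format). This standalone helper is my own illustration, not part of the skill; it assumes only the Netscape convention that comment lines start with "#" and data lines have seven tab-separated fields.

```python
def looks_like_netscape_cookies(text):
    """Heuristic: comment/blank lines aside, every data line in a
    Netscape cookie file has exactly 7 tab-separated fields."""
    for line in text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue  # skip comments and blank lines
        if len(line.split("\t")) != 7:
            return False
    return True

sample = (
    "# Netscape HTTP Cookie File\n"
    ".youtube.com\tTRUE\t/\tTRUE\t0\tCONSENT\tYES+1\n"
)
print(looks_like_netscape_cookies(sample))
```

If the check fails, re-export with a Netscape-format exporter rather than a JSON one; yt-dlp will reject files in other formats.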
Error message: "No subtitles/captions found"
What it means: The video has no auto-generated or manual captions available.
Solution: Inform the user that the video cannot be transcribed because it lacks captions.
Error message: "ERROR: unable to download video data"
What it means: The URL is malformed, the video doesn't exist, or it's private or deleted.
Solution: Verify the URL is correct and check whether the video is publicly accessible.
Error message: "Sign in to confirm your age" or similar authentication errors
What it means: The video requires age verification or a YouTube login.
Solution: Use the --cookies flag with exported browser cookies (see "Accessing Age-Restricted Content" above).
Error messages: "Unable to download", "Connection timeout", extraction failures
What it means: Network issues, YouTube blocking the request, or an outdated yt-dlp that is incompatible with current YouTube.
Solution:
1. Update yt-dlp: cd tldw/ && venv/bin/pip install --upgrade yt-dlp
2. Retry the extraction
3. If it still fails, check your internet connection or wait and try again later
YouTube frequently changes its API, so keeping yt-dlp updated is essential.
Symptoms: Permission errors, disk-full errors
What it means: The cache directory has permission problems, or there is insufficient disk space.
Solution: Check available disk space with df -h and verify write permissions on the cache/ directory.
Note: Transcripts over 50,000 characters may take longer to process. Best practice: Process large transcripts directly in the main agent session. Do not delegate to sub-agents, as they have been found unreliable with large payloads.
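If a transcript must be split at all (for example, it exceeds the model's context window), splitting at line boundaries avoids cutting mid-sentence. This chunker is an illustrative helper under that assumption, not part of the skill; the skill's own guidance remains to process the transcript directly in the main session.

```python
def chunk_transcript(text, max_chars=50_000):
    """Split a long transcript at line boundaries so each chunk
    stays at or under max_chars; lines themselves are kept intact."""
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if current and size + len(line) > max_chars:
            chunks.append("".join(current))  # flush the full chunk
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))  # flush the remainder
    return chunks
```

Summaries of the chunks can then be merged into one overview; a single line longer than max_chars would still produce an oversized chunk, so the limit is best-effort.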
To see the full error output (not just the last 100 lines):
cd tldw/ && \
  venv/bin/python scripts/extract_transcript.py \
    --json --cache-dir cache "YOUTUBE_URL"
To inspect cached transcripts: ls -lh tldw/cache/
- The skill only works with videos that have captions/subtitles available
- It cannot transcribe videos with only audio (no built-in speech-to-text capability)
- Auto-generated captions may contain errors, typos, or timing artifacts
- Deduplication helps clean up auto-generated caption issues but isn't perfect
- Depends on the caption languages YouTube provides
- The script extracts whatever captions are available (auto-generated or manual)
- Non-English captions work, but summarization quality depends on the language model's capabilities
- English captions typically give the best results
- Very long videos (2+ hours) may produce extremely large transcripts (100k+ characters)
- Processing time increases proportionally with transcript length
- There is no hard limit, but practical constraints on context window and processing time apply
- Currently supports YouTube URLs only
- Does not work with other video platforms (Vimeo, Dailymotion, TikTok, etc.)
- While yt-dlp supports many platforms, this script is optimized specifically for YouTube
- Cannot access truly private videos (not shared publicly)
- Members-only or channel-membership content requires cookies from an authenticated session
- Live streams may not have captions available until after the stream has ended
- Some geographic restrictions cannot be bypassed even with cookies
- The deduplication logic removes consecutive duplicate lines
- It may occasionally remove legitimate repeated phrases or refrains
- It is designed primarily for auto-generated caption artifacts, not all repetition scenarios
- Manual captions typically don't need deduplication
- This skill extracts text transcripts only, not audio files
- For audio extraction or processing, use other tools (e.g., yt-dlp directly with audio flags)
- The focus is text-based content analysis, not media files