Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Archive YouTube playlists into markdown notes with metadata, transcripts, AI summaries, and tags. Use when a user asks to import/sync YouTube playlists, arch...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Use this skill to import YouTube playlists into markdown files and optionally enrich the notes with transcripts, summaries, and tags.
- Python 3.7+
- yt-dlp (`pip install yt-dlp` or `brew install yt-dlp`)
- A browser signed into YouTube (for private playlists like Liked/Watch Later)
- macOS: the terminal needs Full Disk Access to read browser cookies
- Windows: browser cookie extraction can be flaky; a cookies_file export is the safer path
- Linux: works on desktop installs; headless servers need a cookies_file
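A quick pre-flight check for the prerequisites above can be sketched as follows; this is an illustrative snippet, not part of the skill's scripts:

```python
import shutil
import sys

# Sketch: confirm the interpreter version and that yt-dlp is on PATH
# before running the skill's import/enrich scripts.
print("python:", sys.version.split()[0])  # skill requires 3.7+
if shutil.which("yt-dlp"):
    print("yt-dlp: ok")
else:
    print("yt-dlp: missing (install with 'pip install yt-dlp' or 'brew install yt-dlp')")
```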
If no config exists at <output>/.config.json, ask these questions before running scripts.
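The config check can be sketched like this; `needs_setup` and the exact path layout are illustrative assumptions, not the skill's API:

```python
from pathlib import Path

def needs_setup(output_dir: str) -> bool:
    # The skill stores its settings at <output>/.config.json; if that file
    # is absent, the setup questions should be asked before running scripts.
    return not (Path(output_dir) / ".config.json").exists()
```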
- Where should archived notes be stored? Default: `./YouTube-Archive`
- Which playlists should be archived? Accept playlist IDs or URLs. Default: LL (Liked Videos), WL (Watch Later)
- Which browser is signed into YouTube for cookie auth? Default: chrome
Ask only if the user wants summaries/tags.
- Generate AI summaries? (yes/no)
- Summary provider? (openai, gemini, anthropic, openrouter, ollama, none)
- Summary model name?
- API key env var name?
- Enable auto-tagging? (yes/no)
- Tagging provider/model/env var?
- Keep default tags or define custom vocabulary?
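The answers above end up in `<output>/.config.json`. A hypothetical example is shown below; the field names are illustrative guesses, and the authoritative schema is whatever `--init` writes:

```json
{
  "output": "./YouTube-Archive",
  "playlists": ["LL", "WL"],
  "browser": "chrome",
  "summary": { "provider": "openai", "model": "...", "api_key_env": "OPENAI_API_KEY" },
  "tags": { "enabled": true }
}
```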
1. Run init: `python3 <skill>/scripts/yt-import.py --output <output-dir> --init`
2. Edit `<output-dir>/.config.json` from the user's answers.
3. Verify auth with a dry run: `python3 <skill>/scripts/yt-import.py --output <output-dir> --dry-run`
4. Run the real import.
5. Run enrichment (optional): `python3 <skill>/scripts/yt-enrich.py --output <output-dir> --limit 10`
Use this for an immediate manual sync:
- `python3 <skill>/scripts/yt-import.py --output <output-dir>`
- `python3 <skill>/scripts/yt-enrich.py --output <output-dir> --limit 10`

Useful import flags: `--dry-run`, `--playlist <ID>` (repeatable), `--no-summary`, `--no-tags`, `--cookies <path/to/cookies.txt>`, `--browser <name>`
Useful enrich flags: `--dry-run`, `--limit <N>`, `--strict-config`
- Import skips already archived videos by video_id.
- Filenames include the video ID: `Title [video_id].md`.
- Enrichment skips notes whose frontmatter has `enriched: true`.
- A lockfile prevents concurrent runs: `<output-dir>/.yt-archiver.lock`.
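The skip-by-ID behavior above can be sketched roughly as follows; the regex and function name are assumptions for illustration, not the scripts' actual code:

```python
import re

def archived_ids(filenames):
    # Filenames end in "Title [video_id].md"; collect the IDs already on disk
    # so a new import run can skip videos it has archived before.
    ids = set()
    for name in filenames:
        m = re.search(r"\[([^\[\]]+)\]\.md$", name)
        if m:
            ids.add(m.group(1))
    return ids

existing = archived_ids(["Intro to Rust [dQw4w9WgXcQ].md", "notes.txt"])
print("dQw4w9WgXcQ" in existing)  # prints True: already archived, so import skips it
```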
Offer cron only after one successful manual run. Example schedule (daily 11:00):
- Import new videos
- Enrich a bounded batch

Example task text: "Run yt-import.py for <output-dir>, then run yt-enrich.py --limit 10 for the same output." Keep it single-agent by default; do not assume multi-agent routing.
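One way to wire that schedule is to generate a small wrapper for cron to call; a sketch, assuming a POSIX system, with the skill location and wrapper path as placeholder assumptions:

```python
from pathlib import Path

# Crontab entry (daily at 11:00) would point at this wrapper:
#   0 11 * * * /tmp/yt-daily-sync.sh
wrapper = """#!/bin/sh
SKILL="$HOME/.openclaw/skills/yt-archiver"   # assumed skill location
OUT="$HOME/YouTube-Archive"
python3 "$SKILL/scripts/yt-import.py" --output "$OUT"
python3 "$SKILL/scripts/yt-enrich.py" --output "$OUT" --limit 10
"""
path = Path("/tmp/yt-daily-sync.sh")
path.write_text(wrapper)
path.chmod(0o755)  # make it executable for cron
```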
Read these references when needed:
- Provider setup, model suggestions, cost: `references/providers.md`
- Common failures and fixes: `references/troubleshooting.md`
- Default summary prompt template: `references/default-summary-prompt.md`