Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Convert Bilibili (B站) videos into a searchable text knowledge base. Supports single videos and batch processing of entire UP主 channels. Uses local whisper.cpp for transcription when B站 AI subtitles are unavailable.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
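For the extraction step itself, a minimal sketch assuming the package arrives as a .zip file; the archive name and destination folder below are placeholders, not the actual package name:

```bash
# Assumed archive name and destination; substitute whatever you downloaded
unzip bilibili-kb-skill.zip -d ~/skills/bilibili-kb
# Confirm the primary doc is present before briefing the agent
ls ~/skills/bilibili-kb/SKILL.md
```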
Convert B站 videos (single or entire channels) into cleaned, structured text knowledge bases.
Agent orchestrates, scripts execute. The agent's job is to decide WHAT to do and kick off the right script. All mechanical, repetitive work (downloading, transcribing, cleaning) is handled by shell scripts with built-in parallelism. The agent NEVER loops through videos one by one — it runs ONE command and the script handles concurrency internally.
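To make the division of labor concrete, here is a rough sketch of the fan-out pattern such a batch script can use internally. It is illustrative only, not the actual contents of batch_channel.sh; URLS_FILE and CONCURRENCY are placeholders:

```bash
#!/usr/bin/env bash
# Illustrative only: fan out one worker per video URL with bounded parallelism.
URLS_FILE=/tmp/urls.txt   # one video URL per line
CONCURRENCY=30            # tuned per stage, see the concurrency table below
xargs -P "$CONCURRENCY" -I {} bash scripts/transcribe.sh {} ./kb/output zh < "$URLS_FILE"
```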
```
kb/UP主名_UID/
├── BV号_视频标题.txt         # Cleaned transcript (user-facing)
├── BV号_视频标题.meta.json   # Video metadata
├── index.md                  # Summary index
└── .raw/                     # Hidden: whisper transcripts (if any)
    └── BV号_视频标题.txt
```

Key decisions:
- File names include the title for readability (BV1xxx_标题.txt)
- Folder includes the UP主 name (UP主名_UID/)
- Raw transcripts hidden in .raw/
- No _clean suffix — clean files are the main files
- Per-video .meta.json with title, uploader, duration, etc. (see the sketch below)
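A hedged sketch of what a per-video .meta.json might contain; the field names are assumptions based on the list above, not the scripts' documented schema:

```bash
# Hypothetical .meta.json contents (field names assumed, not verified against the scripts)
cat kb/UP主名_UID/BV1xxx_标题.meta.json
# {
#   "bvid": "BV1xxx",
#   "title": "视频标题",
#   "uploader": "UP主名",
#   "duration": 1830
# }
```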
```bash
# 30-50 concurrent is fine — B站 CDN handles it
scripts/batch_channel.sh "https://space.bilibili.com/UID/" ./kb/output zh 0 30
```
```bash
# Metal GPU can only handle 1-4 parallel whisper instances
# More = slower total (GPU saturation)
scripts/batch_channel.sh "https://space.bilibili.com/UID/" ./kb/output zh 0 2 --whisper-only
```
```bash
# Clean whisper transcripts (AI subtitles skip automatically)
scripts/batch_clean.sh ./kb/UP主名_UID/
scripts/generate_index.sh ./kb/UP主名_UID/
```
Critical: different stages need different concurrency!

| Stage | Bottleneck | Recommended | Why |
| --- | --- | --- | --- |
| AI subtitle download | Network | 30-50 | B站 CDN handles high parallelism |
| Whisper transcribe | Metal GPU | 1-4 | GPU saturates; more instances are slower |
| Transcript cleaning | API rate limit | ALL (0) | Network I/O only |
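To confirm the whisper stage is actually staying within its budget, one way (assuming the whisper.cpp binary is named whisper-cli, per the environment table below) is to count running processes:

```bash
# Count running whisper.cpp processes; should stay at or below the concurrency you chose
pgrep -f whisper-cli | wc -l
```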
```bash
scripts/transcribe.sh "https://www.bilibili.com/video/BV..." ./output zh
```
AI subtitles are clean enough — skipped by default.

| Source | Cleaning needed? |
| --- | --- |
| B站 AI subtitles | No — directly usable |
| whisper fallback | Yes — goes through cleaning |

Cleaning uses opencode/minimax-m2.5-free:
- Fix homophones and garbled words
- Add punctuation
- Output MUST be Simplified Chinese
- Keep uncertain proper nouns unchanged
- Never substitute one real term for another

Chunk size: 80 lines. Retry: 3 attempts with 3s delay.
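If you need to point the cleaning stage at a different model, the CLEAN_MODEL variable from the environment table below can be overridden per run; the model name here is a placeholder, not a recommendation:

```bash
# Override the cleaning model for one run (CLEAN_MODEL is documented below; the value is a placeholder)
CLEAN_MODEL=opencode/some-other-model scripts/batch_clean.sh ./kb/UP主名_UID/
```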
Use nohup to avoid session compaction killing processes:

```bash
nohup bash scripts/batch_clean.sh ./kb/UP主名_UID/ 0 80 > /tmp/clean.log 2>&1 &
```

batch_clean.sh is resumable — safe to re-run after interruption.
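Since the job runs detached, a couple of standard commands (nothing specific to these scripts) are enough to watch it:

```bash
# Follow cleaning progress
tail -f /tmp/clean.log
# Check whether the batch is still running
pgrep -fl batch_clean.sh
```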
Script auto-detects large channels (>800 videos) and fetches in chunks to avoid timeout.

```bash
# Auto-chunked, just re-run to resume
nohup bash scripts/batch_channel.sh "https://space.bilibili.com/UID/" ./kb/output > /tmp/batch.log 2>&1 &
```

If it still fails, manually fetch the URL list:

```bash
for i in $(seq 1 500 2000); do
  yt-dlp --flat-playlist --playlist-start $i --playlist-end $((i+499)) \
    --print url "https://space.bilibili.com/UID/" >> /tmp/urls.txt
done
cat /tmp/urls.txt | xargs -P 20 -I {} bash scripts/transcribe.sh {} ./kb/OUTPUT zh
```
Keep the system cool — avoid fan spin!

| Stage | Risk | Mitigation |
| --- | --- | --- |
| Whisper (GPU) | HIGH | Keep concurrency ≤2, monitor temps |
| AI subtitle download | Low | Can run 30-50 concurrent |
| Cleaning (API) | None | Pure network I/O, no local load |

If fans start spinning:
1. Stop whisper processes immediately
2. Wait for cooldown
3. Resume with lower concurrency (1-2)

```bash
# Check GPU temp (if using CUDA)
nvidia-smi
# Check Mac CPU/GPU temp (1-second sample)
sudo powermetrics -i 1000 -n 1 | grep -E "CPU|GPU"
```
- Required: yt-dlp, ffmpeg, whisper.cpp (+ model), opencode CLI
- Optional: browser cookies for member-only content (--cookies-from-browser chrome)
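One possible way to satisfy these on macOS with Homebrew; the formula names and the model URL are assumptions to adapt, not requirements of the skill itself:

```bash
# Assumed Homebrew formula names; install however fits your platform
brew install yt-dlp ffmpeg whisper-cpp
# Fetch the small whisper model to the default path the scripts expect (see WHISPER_MODEL below)
mkdir -p ~/.whisper-cpp
curl -L -o ~/.whisper-cpp/ggml-small.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.bin
```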
| Variable | Default | Description |
| --- | --- | --- |
| WHISPER_CLI | whisper-cli | Path to whisper.cpp |
| WHISPER_MODEL | ~/.whisper-cpp/ggml-small.bin | Whisper model |
| OPENCODE_BIN | ~/.opencode/bin/opencode | opencode CLI |
| CLEAN_MODEL | opencode/minimax-m2.5-free | Cleaning model |
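These can be overridden inline for a single invocation; the paths below are hypothetical examples, not defaults:

```bash
# Hypothetical override paths; point these at your own whisper.cpp build and model
WHISPER_CLI=/opt/whisper.cpp/build/bin/whisper-cli \
WHISPER_MODEL=~/.whisper-cpp/ggml-medium.bin \
scripts/transcribe.sh "https://www.bilibili.com/video/BV..." ./output zh
```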
- China users: use hf-mirror.com for the whisper model
- Long videos (1h+): auto-segmented into 10-min chunks
- Resumable: all batch scripts skip already-processed files
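For the China-mirror note, one way to apply it (assuming the model comes from the ggerganov/whisper.cpp repository on Hugging Face, as in the setup sketch above) is to swap the host:

```bash
# Same model file, fetched through hf-mirror.com instead of huggingface.co
curl -L -o ~/.whisper-cpp/ggml-small.bin \
  https://hf-mirror.com/ggerganov/whisper.cpp/resolve/main/ggml-small.bin
```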