Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Free local speech-to-text transcription using OpenAI Whisper. Transcribe audio files (mp3, wav, m4a, ogg, etc.) to text without API costs. Use when: (1) User...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Free, local speech-to-text using OpenAI Whisper.
Install dependencies (one-time setup):
- pip install openai-whisper torch

Optional: install ffmpeg for broader format support:
- macOS: brew install ffmpeg
- Ubuntu: sudo apt install ffmpeg
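Before running anything, you can sanity-check the environment. A minimal sketch, assuming the pip packages above are installed; the ffmpeg and GPU checks are optional:

```python
# Environment check for the whisper-stt skill (assumes the pip install above).
import shutil

import torch
import whisper  # provided by the openai-whisper package

print("whisper loaded from:", whisper.__file__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
# The MPS check only applies on Apple Silicon with a recent PyTorch build.
print("MPS available:", hasattr(torch.backends, "mps") and torch.backends.mps.is_available())
print("ffmpeg on PATH:", shutil.which("ffmpeg") is not None)
```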
python ~/.openclaw/skills/whisper-stt/scripts/transcribe.py <audio_file>
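The transcribe.py script itself is not reproduced here; as a rough sketch under that caveat, a wrapper like it typically calls the openai-whisper Python API along these lines (the file name and model choice are placeholders):

```python
import whisper

# Load a model by name; see the model table further down for the trade-offs.
model = whisper.load_model("base")

# language=None lets Whisper auto-detect; pass e.g. "zh" to force Chinese.
result = model.transcribe("audio.mp3", language=None)

print(result["text"])  # plain-text transcription
for seg in result["segments"]:  # per-segment timing, used for srt/vtt output
    print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text'].strip()}")
```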
| Option | Description |
|---|---|
| --model | Model size: tiny, base, small, medium, large, large-v3-turbo (default: base) |
| --language, -l | Language code: zh, en, ja, etc. (auto-detect if not specified) |
| --output, -o | Output format: json, txt, srt, vtt (default: json) |
- Chinese audio to text: python ~/.openclaw/skills/whisper-stt/scripts/transcribe.py recording.m4a --language zh --output txt
- Generate subtitles (SRT): python ~/.openclaw/skills/whisper-stt/scripts/transcribe.py video.mp4 --output srt > subtitles.srt
- Use a faster model: python ~/.openclaw/skills/whisper-stt/scripts/transcribe.py audio.mp3 --model tiny --output txt
- High accuracy (slower): python ~/.openclaw/skills/whisper-stt/scripts/transcribe.py audio.mp3 --model large-v3 --output txt
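To apply the same CLI to a whole folder of recordings, a small driver loop works. A sketch assuming the install path from the examples above, that the script prints its result to stdout (as the SRT example suggests), and a hypothetical audio/ input directory:

```python
import pathlib
import subprocess

SCRIPT = pathlib.Path.home() / ".openclaw/skills/whisper-stt/scripts/transcribe.py"
AUDIO_DIR = pathlib.Path("audio")  # hypothetical input folder

for audio in sorted(AUDIO_DIR.glob("*.m4a")):
    out_path = audio.with_suffix(".txt")
    # Same invocation as the examples above, with stdout captured next to the source file.
    with out_path.open("w") as out:
        subprocess.run(
            ["python", str(SCRIPT), str(audio), "--output", "txt"],
            stdout=out,
            check=True,
        )
    print("wrote", out_path)
```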
| Model | Speed | Accuracy | VRAM/RAM | Best For |
|---|---|---|---|---|
| tiny | ~32x | Basic | ~1GB | Quick tests, low resource |
| base | ~16x | Good | ~1GB | Balanced speed/accuracy |
| small | ~6x | Better | ~2GB | Better accuracy |
| medium | ~2x | Very Good | ~5GB | High accuracy |
| large | 1x | Excellent | ~10GB | Best quality |
| large-v3-turbo | ~8x | Excellent | ~6GB | Fast + accurate (recommended) |
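If you want the model choice to follow the hardware automatically, here is a rough heuristic based on the memory column above; the thresholds mirror the table, not measurements of this skill:

```python
import torch

def pick_model() -> str:
    """Pick a Whisper model size from available memory, per the table above."""
    if torch.cuda.is_available():
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        if vram_gb >= 6:
            return "large-v3-turbo"  # recommended: fast and accurate
        if vram_gb >= 5:
            return "medium"
        if vram_gb >= 2:
            return "small"
        return "base"
    # Apple Silicon uses unified memory; assume a mid-size model is a safe default.
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return "small"
    return "base"  # CPU-only: favor speed over accuracy

print(pick_model())
```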
"ModuleNotFoundError: No module named 'whisper'" โ Run: pip install openai-whisper torch "ffmpeg not found" โ Install ffmpeg or convert audio to WAV format first Slow transcription โ Use smaller model (tiny/base) or ensure GPU is available (Apple Silicon MPS, NVIDIA CUDA) Poor accuracy on Chinese โ Use --language zh explicitly and consider larger model (medium/large)
- json: Full result with segments, timestamps, and metadata
- txt: Plain text transcription only
- srt: SubRip subtitle format with timing
- vtt: WebVTT subtitle format for web players
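For reference, srt output is just the per-segment timestamps rendered as SubRip blocks. A minimal sketch of that conversion from a whisper result dict, such as the one in the API sketch earlier; this is not the skill's actual implementation:

```python
def to_srt(result: dict) -> str:
    """Render whisper's segment list as SubRip (SRT) text."""

    def stamp(seconds: float) -> str:
        # SRT timestamps look like HH:MM:SS,mmm
        ms = int(round(seconds * 1000))
        hours, ms = divmod(ms, 3_600_000)
        minutes, ms = divmod(ms, 60_000)
        secs, ms = divmod(ms, 1_000)
        return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

    blocks = []
    for i, seg in enumerate(result["segments"], start=1):
        blocks.append(
            f"{i}\n{stamp(seg['start'])} --> {stamp(seg['end'])}\n{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)
```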
Powered by OpenAI Whisper, an open-source speech recognition model.