Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
macOS CLI for transcribing audio and video files using local Whisper models or Whisnap Cloud.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Use whisnap for transcribing audio/video files from the terminal. Requires the Whisnap macOS app with at least one model downloaded.

Setup (once)
- Open the Whisnap app → Settings → Advanced → Enable CLI (creates the /usr/local/bin/whisnap symlink)
- Download at least one Whisper model in the app

Common commands
- Transcribe audio: whisnap recording.wav
- Transcribe video: whisnap meeting.mp4
- Cloud transcription: whisnap recording.wav --cloud
- JSON output with timestamps: whisnap lecture.m4a --json
- Specific model: whisnap interview.wav -m small-q5_1
- Cloud + JSON: whisnap recording.wav --cloud --json
- List downloaded models: whisnap --list-models
- Verbose diagnostics: whisnap recording.wav -v

Supported formats
- Audio: WAV, MP3, FLAC, M4A, OGG
- Video: MP4, MOV, MKV, WebM

Flags
- -c, --cloud: use Whisnap Cloud instead of a local model (requires sign-in)
- -m, --model <ID>: override the model (e.g., small-q5_1); defaults to the app's selected model
- -j, --json: structured JSON output with text, segments, timestamps, and model info
- -v, --verbose: print progress and diagnostics to stderr
- --list-models: list available models and exit

JSON output format

```json
{
  "text": "transcribed text",
  "segments": [{ "start_ms": 0, "end_ms": 1000, "text": "segment" }],
  "model": "small-q5_1",
  "backend": "whisper",
  "processing_time_ms": 5000
}
```

Notes
- The CLI reuses models and settings from the Whisnap app (~/Library/Application Support/com.whisnap.desktop/).
- Cloud mode requires authentication: sign in via the app first.
- For scripting, use --json and pipe stdout; diagnostics go to stderr.
- Exit code 0 = success, 1 = error.
- Only Whisper models are supported in CLI mode (not Parakeet).
- Confirm the file path exists before transcribing: the CLI validates but does not search.
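The scripting notes above (use --json, read stdout, diagnostics on stderr, exit code 0/1) can be sketched in Python. This is a minimal illustration, not part of the CLI package: the `transcribe` and `segments_to_lines` helper names are invented here, and the sample payload just mirrors the JSON schema documented above.

```python
import json
import subprocess


def transcribe(path: str, cloud: bool = False) -> dict:
    """Run whisnap with --json and return the parsed payload.

    Only stdout is parsed (diagnostics go to stderr); check=True raises
    CalledProcessError on the documented nonzero (1) exit code.
    """
    cmd = ["whisnap", path, "--json"]
    if cloud:
        cmd.append("--cloud")
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)


def segments_to_lines(payload: dict) -> list[str]:
    """Format each segment as '[start-end] text' with times in seconds."""
    lines = []
    for seg in payload["segments"]:
        start = seg["start_ms"] / 1000
        end = seg["end_ms"] / 1000
        lines.append(f"[{start:.1f}-{end:.1f}] {seg['text']}")
    return lines


# Sample payload matching the documented schema (illustrative values only).
sample = {
    "text": "transcribed text",
    "segments": [{"start_ms": 0, "end_ms": 1000, "text": "segment"}],
    "model": "small-q5_1",
    "backend": "whisper",
    "processing_time_ms": 5000,
}
print(segments_to_lines(sample))  # -> ['[0.0-1.0] segment']
```

In a real run you would call `transcribe("recording.wav")` and feed its return value to `segments_to_lines`; the sample dict stands in so the formatting logic can be shown without the app installed.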
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.