## Requirements

- Target platform: OpenClaw
- Install method: manual import
- Extraction: extract the archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Install and use the speechall CLI tool for speech-to-text transcription. Use when the user wants to: (1) transcribe audio or video files to text, (2) install speechall on macOS or Linux, (3) list available STT models and their capabilities, (4) use speaker diarization, subtitles, or other transcription features from the terminal. Triggers on mentions of speechall, audio transcription CLI, or speech-to-text from the command line.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

Install brief:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade brief:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
CLI for speech-to-text transcription via the Speechall API. Supports multiple providers (OpenAI, Deepgram, AssemblyAI, Google, Gemini, Groq, ElevenLabs, Cloudflare, and more).
## Installation

With Homebrew:

```shell
brew install Speechall/tap/speechall
```

Without Homebrew: download the binary for your platform from https://github.com/Speechall/speechall-cli/releases and place it on your PATH.

Verify the install:

```shell
speechall --version
```
## API key

An API key is required. Provide it via environment variable (preferred) or flag:

```shell
export SPEECHALL_API_KEY="your-key-here"
# or
speechall --api-key "your-key-here" audio.wav
```

The user can create an API key at https://speechall.com/console/api-keys.
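For scripts that wrap the CLI, it can help to fail fast when the key is missing rather than let a request error out mid-run. A minimal sketch: only the `SPEECHALL_API_KEY` variable name comes from the docs above; the helper name and error message are illustrative.

```shell
# Succeed only when the API key environment variable is set and non-empty.
require_speechall_key() {
  if [ -z "${SPEECHALL_API_KEY:-}" ]; then
    echo "error: SPEECHALL_API_KEY is not set" >&2
    return 1
  fi
}
```

A wrapper script might then run `require_speechall_key && speechall audio.wav > transcript.txt`.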
## transcribe

Transcribe an audio or video file. This is the default subcommand: `speechall audio.wav` is equivalent to `speechall transcribe audio.wav`.

```shell
speechall <file> [options]
```

Options:

| Flag | Description | Default |
| --- | --- | --- |
| `--model <provider.model>` | STT model identifier | `openai.gpt-4o-mini-transcribe` |
| `--language <code>` | Language code (e.g. `en`, `tr`, `de`) | API default (auto-detect) |
| `--output-format <format>` | Output format (`text`, `json`, `verbose_json`, `srt`, `vtt`) | API default |
| `--diarization` | Enable speaker diarization | off |
| `--speakers-expected <n>` | Expected number of speakers (use with `--diarization`) | — |
| `--no-punctuation` | Disable automatic punctuation | — |
| `--temperature <0.0-1.0>` | Model temperature | — |
| `--initial-prompt <text>` | Text prompt to guide model style | — |
| `--custom-vocabulary <term>` | Terms to boost recognition (repeatable) | — |
| `--ruleset-id <uuid>` | Replacement ruleset UUID | — |
| `--api-key <key>` | API key (overrides `SPEECHALL_API_KEY` env var) | — |

Examples:

```shell
# Basic transcription
speechall interview.mp3

# Specific model and language
speechall call.wav --model deepgram.nova-2 --language en

# Speaker diarization with SRT output
speechall meeting.wav --diarization --speakers-expected 3 --output-format srt

# Custom vocabulary for domain-specific terms
speechall medical.wav --custom-vocabulary "myocardial" --custom-vocabulary "infarction"

# Transcribe a video file (macOS extracts audio automatically)
speechall presentation.mp4
```
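The single-file examples above extend naturally to a batch run. A sketch, assuming a `recordings/` directory of `.mp3` files (both the directory name and the loop are illustrative, not part of the CLI):

```shell
# Transcribe every .mp3 in a directory, writing one .txt transcript per input.
for f in recordings/*.mp3; do
  [ -e "$f" ] || continue   # skip when the glob matches nothing
  speechall "$f" > "${f%.mp3}.txt"
done
```

Each transcript lands next to its source file, e.g. `recordings/call.mp3` produces `recordings/call.txt`.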
## models

List available speech-to-text models. Outputs JSON to stdout. Filters combine with AND logic.

```shell
speechall models [options]
```

Filter flags:

| Flag | Description |
| --- | --- |
| `--provider <name>` | Filter by provider (e.g. `openai`, `deepgram`) |
| `--language <code>` | Filter by supported language (`tr` matches `tr`, `tr-TR`, `tr-CY`) |
| `--diarization` | Only models supporting speaker diarization |
| `--srt` | Only models supporting SRT output |
| `--vtt` | Only models supporting VTT output |
| `--punctuation` | Only models supporting automatic punctuation |
| `--streamable` | Only models supporting real-time streaming |
| `--vocabulary` | Only models supporting custom vocabulary |

Examples:

```shell
# List all available models
speechall models

# Models from a specific provider
speechall models --provider deepgram

# Models that support Turkish and diarization
speechall models --language tr --diarization

# Pipe to jq for specific fields
speechall models --provider openai | jq '.[].identifier'
```
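Because the listing is JSON, it composes with the transcribe subcommand. A sketch that picks the first diarization-capable model for a language and transcribes with it; it assumes `jq` is on PATH, that `identifier` is the field shown in the jq example above, and the helper itself is hypothetical:

```shell
# Pick the first diarization-capable model for a language, then use it.
transcribe_with_best_model() {
  model=$(speechall models --language "$1" --diarization | jq -r '.[0].identifier')
  speechall "$2" --model "$model" --diarization
}
```

For example, `transcribe_with_best_model tr meeting.wav` resolves a Turkish diarization model before transcribing.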
## Notes

- On macOS, video files (`.mp4`, `.mov`, etc.) are automatically converted to audio before upload. On Linux, pass audio files directly (`.wav`, `.mp3`, `.m4a`, `.flac`, etc.).
- Output goes to stdout. Redirect to save: `speechall audio.wav > transcript.txt`
- Errors go to stderr, so piping stdout is safe.
- Run `speechall --help`, `speechall transcribe --help`, or `speechall models --help` to see all valid enum values for model identifiers, language codes, and output formats.
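The stdout/stderr split means the transcript and any diagnostics can be captured independently. A sketch relying only on standard shell redirection; the helper name and derived file names are illustrative:

```shell
# Write the transcript and any diagnostics to separate files,
# using plain stdout/stderr redirection.
save_transcript() {
  speechall "$1" > "${1%.*}.txt" 2> "${1%.*}.log"
}
```

For example, `save_transcript interview.mp3` would write `interview.txt` and `interview.log` alongside the input.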