Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Local STT and TTS on macOS using native Apple capabilities. Speech-to-text via yap (Apple Speech.framework), text-to-speech via say + ffmpeg. Fully offline, no API keys required. Includes voice quality detection and smart voice selection.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Fully local speech-to-text (STT) and text-to-speech (TTS) on macOS. No API keys, no network, no cloud. All processing happens on-device.
- macOS (Apple Silicon recommended; Intel works too)
- yap CLI in PATH: install via brew install finnvoor/tools/yap
- ffmpeg in PATH (optional, needed for ogg/opus output): brew install ffmpeg
- say and osascript are macOS built-ins
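Before first use, the agent can verify the tools above are on PATH. This is an illustrative pre-flight sketch; the hasCommand helper is not part of the skill's scripts:

```javascript
import { execSync } from "node:child_process";

// Returns true if `name` resolves to an executable on PATH.
// Hypothetical helper for a pre-flight check; not part of the skill.
function hasCommand(name) {
  try {
    execSync(`command -v ${name}`, { stdio: "ignore", shell: "/bin/sh" });
    return true;
  } catch {
    return false;
  }
}

for (const tool of ["yap", "ffmpeg", "say"]) {
  console.log(`${tool}: ${hasCommand(tool) ? "found" : "missing"}`);
}
```

If yap is missing, STT will not work at all; if only ffmpeg is missing, TTS still works but falls back to aiff output.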
Transcribe an audio file to text using Apple's on-device speech recognition.

node {baseDir}/scripts/stt.mjs <audio_file> [locale]

- audio_file: path to audio (ogg, m4a, mp3, wav, etc.)
- locale: optional, e.g. zh_CN, en_US, ja_JP. If omitted, uses the system default.

Outputs transcribed text to stdout.
Use node {baseDir}/scripts/stt.mjs --locales to list all supported locales. Key locales: en_US, en_GB, zh_CN, zh_TW, zh_HK, ja_JP, ko_KR, fr_FR, de_DE, es_ES, pt_BR, ru_RU, vi_VN, th_TH.
If the user's recent messages are in Chinese, use zh_CN. If in English, use en_US. If mixed or unclear, try without a locale (system default).
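The heuristic above could be sketched as a small helper. This is illustrative only; the skill leaves the locale decision to the agent, and pickLocale is not part of its scripts:

```javascript
// Illustrative locale heuristic for stt.mjs: Chinese text -> zh_CN,
// English text -> en_US, mixed or unclear -> undefined (system default).
function pickLocale(text) {
  const hasHan = /\p{Script=Han}/u.test(text);   // CJK ideographs
  const hasLatin = /[A-Za-z]/.test(text);
  if (hasHan && !hasLatin) return "zh_CN";
  if (hasLatin && !hasHan) return "en_US";
  return undefined; // omit the locale argument and let the system decide
}
```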
Convert text to an audio file using macOS native TTS.

node {baseDir}/scripts/tts.mjs "<text>" [voice_name] [output_path]

- text: the text to speak
- voice_name: optional, e.g. Yue (Premium), Tingting, Ava (Premium). If omitted, auto-selects the best available voice for the text language.
- output_path: optional; defaults to a timestamped file in ~/.openclaw/media/outbound/

Outputs the generated audio file path to stdout. If ffmpeg is available, output is ogg/opus (ideal for messaging platforms); otherwise aiff.
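The default output path behavior can be mirrored like this. A minimal sketch of the documented behavior (timestamped file, ogg with ffmpeg, aiff without); the names here are illustrative, not the script's actual code:

```javascript
import { join } from "node:path";
import { homedir } from "node:os";

// Derive a default output path per the docs: a timestamped file under
// ~/.openclaw/media/outbound/, ogg when ffmpeg is present, aiff otherwise.
// Hypothetical helper for illustration only.
function defaultOutputPath(hasFfmpeg, now = new Date()) {
  const stamp = now.toISOString().replace(/[:.]/g, "-");
  const ext = hasFfmpeg ? "ogg" : "aiff";
  return join(homedir(), ".openclaw", "media", "outbound", `tts-${stamp}.${ext}`);
}
```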
After generating the audio file, send it using the message tool: message action=send media=<path_from_tts.mjs> asVoice=true
List available voices, check readiness, or find the best voice for a language:

node {baseDir}/scripts/voices.mjs list [locale]    # List voices, optionally filtered by locale
node {baseDir}/scripts/voices.mjs check "<name>"   # Check if a specific voice is downloaded and ready
node {baseDir}/scripts/voices.mjs best <locale>    # Get the highest-quality voice for a locale
- 1 = compact (low quality, always available)
- 2 = enhanced (mid quality, may need download)
- 3 = premium (highest quality, must be downloaded from System Settings)
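The "best voice for a locale" lookup can be sketched with these tiers. This is an illustrative version, not the actual voices.mjs implementation:

```javascript
// Pick the highest-quality voice for a locale using the tiers above
// (1 = compact, 2 = enhanced, 3 = premium). Illustrative only.
function bestVoice(voices, locale) {
  return (
    voices
      .filter((v) => v.locale === locale)
      .sort((a, b) => b.quality - a.quality)[0] ?? null
  );
}
```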
Tell the user: "Voice X is not downloaded. Go to System Settings > Accessibility > Spoken Content > System Voice > Manage Voices to download it."
The say command silently falls back to a default voice if the requested voice is not available (exit code 0, no error). Always use voices.mjs check before calling tts.mjs with a specific voice name. Premium voices (e.g. Yue (Premium), Ava (Premium)) sound significantly better but must be manually downloaded by the user. Siri voices are not accessible via the speech synthesis API.
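Because say exits 0 even when it falls back, the check-before-use rule can be enforced with a small guard. A sketch under the assumption that the agent already has the downloaded-voice names (e.g. from voices.mjs list); resolveVoice is hypothetical:

```javascript
// Guard against say's silent voice fallback: only pass a requested voice
// through if it appears among the downloaded voices. Illustrative helper.
function resolveVoice(requested, downloadedNames) {
  // null means "omit the voice argument" so tts.mjs auto-selects instead
  // of say silently substituting a default voice.
  return downloadedNames.includes(requested) ? requested : null;
}
```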