Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Local text-to-speech using Qwen3-TTS-12Hz-1.7B-CustomVoice. Use when generating audio from text, creating voice messages, or when TTS is requested. Supports 10 languages including Italian, 9 premium speaker voices, and instruction-based voice control (emotion, tone, style). Alternative to cloud-based TTS services like ElevenLabs. Runs entirely offline after initial model download.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Local text-to-speech using the Qwen3-TTS-12Hz-1.7B-CustomVoice model from Hugging Face.
Generate speech from text:

    scripts/tts.py "Ciao, come va?" -l Italian -o output.wav

With voice instruction (emotion/style):

    scripts/tts.py "Sono felice!" -i "Parla con entusiasmo" -l Italian -o happy.wav

Different speaker:

    scripts/tts.py "Hello world" -s Ryan -l English -o hello.wav
First-time setup (one-time):

    cd skills/public/qwen-tts
    bash scripts/setup.sh

This creates a local virtual environment and installs the qwen-tts package (~500MB). Note: the first synthesis automatically downloads the ~1.7GB model from Hugging Face.
scripts/tts.py [options] "Text to speak"
- -o, --output PATH - Output file path (default: qwen_output.wav)
- -s, --speaker NAME - Speaker voice (default: Vivian)
- -l, --language LANG - Language (default: Auto)
- -i, --instruct TEXT - Voice instruction (emotion, style, tone)
- --list-speakers - Show available speakers
- --model NAME - Model name (default: CustomVoice 1.7B)
Basic Italian speech:

    scripts/tts.py "Benvenuto nel futuro del text-to-speech" -l Italian -o welcome.wav

With emotion/instruction:

    scripts/tts.py "Sono molto felice di vederti!" -i "Parla con entusiasmo e gioia" -l Italian -o happy.wav

Different speaker:

    scripts/tts.py "Hello, nice to meet you" -s Ryan -l English -o ryan.wav

List available speakers:

    scripts/tts.py --list-speakers
The CustomVoice model includes 9 premium voices:

    Speaker    Language            Description
    Vivian     Chinese             Bright, slightly edgy young female
    Serena     Chinese             Warm, gentle young female
    Uncle_Fu   Chinese             Seasoned male, low mellow timbre
    Dylan      Chinese (Beijing)   Youthful Beijing male, clear
    Eric       Chinese (Sichuan)   Lively Chengdu male, husky
    Ryan       English             Dynamic male, rhythmic
    Aiden      English             Sunny American male
    Ono_Anna   Japanese            Playful female, light nimble
    Sohee      Korean              Warm female, rich emotion

Recommendation: Use each speaker's native language for best quality, though all speakers support all 10 languages (Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, Italian).
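For a quick side-by-side comparison of voices, a small helper can build one tts.py invocation per speaker in its native language. This is a sketch: the mapping below is transcribed from the table above, and mapping the Beijing and Sichuan dialect variants to the plain "Chinese" language flag is an assumption; the helper name and output paths are illustrative.

```python
# Speaker -> native language, per the voice table above.
# Assumption: dialect variants (Beijing, Sichuan) use the base "Chinese" flag.
NATIVE_LANG = {
    "Vivian": "Chinese", "Serena": "Chinese", "Uncle_Fu": "Chinese",
    "Dylan": "Chinese", "Eric": "Chinese",
    "Ryan": "English", "Aiden": "English",
    "Ono_Anna": "Japanese", "Sohee": "Korean",
}

def sample_cmd(speaker: str, text: str = "Hello") -> list:
    """Build a tts.py command line using the speaker's native language."""
    lang = NATIVE_LANG[speaker]
    return ["scripts/tts.py", text, "-s", speaker, "-l", lang,
            "-o", f"/tmp/sample_{speaker}.wav"]
```

Iterating over `NATIVE_LANG` and running each command yields one sample file per voice.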
Use -i, --instruct to control emotion, tone, and style.

Italian examples:
- "Parla con entusiasmo" (speak with enthusiasm)
- "Tono serio e professionale" (serious, professional tone)
- "Voce calma e rilassante" (calm, relaxing voice)
- "Leggi come un narratore" (read like a narrator)

English examples:
- "Speak with excitement"
- "Very happy and energetic"
- "Calm and soothing voice"
- "Read like a narrator"
The script outputs the audio file path to stdout (last line), making it compatible with OpenClaw's TTS workflow:

    # OpenClaw captures the output path
    cd skills/public/qwen-tts
    OUTPUT=$(scripts/tts.py "Ciao" -s Vivian -l Italian -o /tmp/audio.wav 2>/dev/null)
    # OUTPUT = /tmp/audio.wav
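The same convention works from Python. A minimal sketch, assuming tts.py prints the output path as its last stdout line as described above (the helper names are illustrative, not part of the skill):

```python
import subprocess

def last_path_line(stdout: str) -> str:
    """Return the last non-empty stdout line, where tts.py prints the path."""
    lines = [ln.strip() for ln in stdout.splitlines() if ln.strip()]
    return lines[-1] if lines else ""

def synthesize(text: str, speaker: str = "Vivian", language: str = "Auto",
               out_path: str = "/tmp/audio.wav") -> str:
    """Run the skill's tts.py and return the generated audio file path."""
    proc = subprocess.run(
        ["scripts/tts.py", text, "-s", speaker, "-l", language, "-o", out_path],
        capture_output=True, text=True, check=True,
    )
    return last_path_line(proc.stdout)
```

Taking only the last non-empty line keeps the wrapper robust if the script logs progress messages to stdout before the path.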
- GPU (CUDA): ~1-3 seconds for short phrases
- CPU: ~10-30 seconds for short phrases
- Model size: ~1.7GB (auto-downloads on first run)
- Venv size: ~500MB (installed dependencies)
Setup fails:

    # Ensure Python 3.10-3.12 is available
    python3.12 --version
    # Re-run setup
    cd skills/public/qwen-tts
    rm -rf venv
    bash scripts/setup.sh

Model download slow/fails:

    # Use mirror (China mainland)
    export HF_ENDPOINT=https://hf-mirror.com
    scripts/tts.py "Test" -o test.wav

Out of memory (GPU): The model automatically falls back to CPU if GPU memory is insufficient.

Audio quality issues:
- Try a different speaker: --list-speakers
- Add an instruction: -i "Speak clearly and slowly"
- Check the language matches the text: -l Italian for Italian text
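The interpreter range from the setup check above can also be verified programmatically before re-running setup. A minimal sketch (the function name is illustrative):

```python
import sys

def python_supported(version=sys.version_info) -> bool:
    """True when the interpreter is in the 3.10-3.12 range setup expects."""
    return (3, 10) <= (version[0], version[1]) <= (3, 12)
```

Calling `python_supported()` with no arguments checks the currently running interpreter.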
- Model: Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice
- Source: Hugging Face (https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice)
- License: Check the model card for current license terms
- Sample Rate: 16kHz
- Output Format: WAV (uncompressed)
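Generated files can be sanity-checked against the 16kHz WAV spec above with the standard-library wave module. A sketch; the demo file written here is a stand-in for real tts.py output, and its path is illustrative:

```python
import wave

def wav_info(path: str):
    """Return (sample_rate_hz, channels, duration_s) for a WAV file."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        return rate, w.getnchannels(), w.getnframes() / rate

# Demo: write 0.1 s of 16-bit mono silence at 16 kHz, then inspect it
# the same way you would inspect a file produced by tts.py.
with wave.open("/tmp/demo16k.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit PCM
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 1600)   # 1600 frames = 0.1 s

info = wav_info("/tmp/demo16k.wav")
```

A quick `assert wav_info(out_path)[0] == 16000` after synthesis catches accidental resampling in downstream pipelines.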