Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate speech from text using Kyutai Pocket TTS - lightweight, CPU-friendly, streaming TTS with voice cloning. English only. ~6x real-time on M4 MacBook Air.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Lightweight CPU-friendly text-to-speech with voice cloning. No GPU required.
Use this skill when you need:
- Generating speech from text on CPU without GPU
- Voice cloning from audio samples
- Streaming audio generation (low latency)
- Local TTS without API dependencies
- Real-time speech synthesis (~6x faster than real-time)
Key features:
- 100M parameters - Small, efficient model
- CPU-optimized - No GPU needed, uses only 2 cores
- ~6x real-time - Fast generation on modern CPUs
- ~200ms latency - To first audio chunk (streaming)
- Voice cloning - From 3-10s audio samples
- 24kHz mono WAV - High-quality output
- English only - More languages planned
```bash
pip install pocket-tts
# or
uv add pocket-tts
```
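As a quick sanity check after installing, you can load the model from Python; this uses only the API shown in the Python API section below, and the first load may need to download model weights:

```python
from pocket_tts import TTSModel

# Loading the model confirms the package and its weights are available.
model = TTSModel.load_model()
print("pocket-tts ready, sample rate:", model.sample_rate)
```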
```bash
# Basic generation (default voice)
pocket-tts generate --text "Hello world"

# Custom voice (local file, URL, or safetensors)
pocket-tts generate --voice ./my_voice.wav
pocket-tts generate --voice "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
pocket-tts generate --voice ./voice.safetensors

# Quality tuning
pocket-tts generate --temperature 0.7 --lsd-decode-steps 3
```

See docs/generate.md for full CLI reference.
```bash
# Start FastAPI server with web UI
pocket-tts serve

# Custom host/port
pocket-tts serve --host localhost --port 8080
```

See docs/serve.md for server options.
Convert audio files to .safetensors for faster loading:

```bash
# Single file
pocket-tts export-voice voice.mp3 voice.safetensors

# Batch conversion
pocket-tts export-voice voices/ embeddings/ --truncate
```

See docs/export_voice.md for export options.
```python
from pocket_tts import TTSModel
import scipy.io.wavfile

# Load model
model = TTSModel.load_model()

# Get voice state
voice = model.get_state_for_audio_prompt(
    "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
)

# Generate audio
audio = model.generate_audio(voice, "Hello world!")

# Save
scipy.io.wavfile.write("output.wav", model.sample_rate, audio.numpy())
```
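To hear the result immediately instead of writing a file, one option is the sounddevice package (a separate dependency, not part of pocket-tts; install with `pip install sounddevice`). Continuing from the example above:

```python
import sounddevice as sd

# Play the generated audio through the default output device.
sd.play(audio.numpy(), samplerate=model.sample_rate)
sd.wait()  # block until playback finishes
```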
```python
model = TTSModel.load_model(
    config="b6369a24",     # Model variant
    temp=0.7,              # Temperature (0.5-1.0)
    lsd_decode_steps=1,    # Generation steps (1-5)
    eos_threshold=-4.0,    # End-of-sequence threshold
)
```
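These parameters trade speed against quality. A minimal sketch of two configurations, assuming the keyword arguments above can be passed independently of `config`; the values are illustrative and stay within the documented ranges:

```python
# Faster: single decode step, lower temperature
fast_model = TTSModel.load_model(temp=0.5, lsd_decode_steps=1)

# Higher quality: more decode steps (slower generation)
quality_model = TTSModel.load_model(temp=0.7, lsd_decode_steps=5)
```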
```python
# From audio file/URL
voice = model.get_state_for_audio_prompt("./voice.wav")
voice = model.get_state_for_audio_prompt("hf://kyutai/tts-voices/alba-mackenna/casual.wav")

# From safetensors (fast loading)
voice = model.get_state_for_audio_prompt("./voice.safetensors")
```
```python
# Stream audio chunks
for chunk in model.generate_audio_stream(voice, "Long text..."):
    # Process/save/play each chunk as generated
    print(f"Chunk: {chunk.shape[0]} samples")
```
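If you want to keep the streamed audio rather than just inspect it, one approach is to buffer the chunks and write a single WAV once the stream ends. A minimal sketch, assuming each chunk converts to a NumPy array the same way the non-streaming output does:

```python
import numpy as np
import scipy.io.wavfile
from pocket_tts import TTSModel

model = TTSModel.load_model()
voice = model.get_state_for_audio_prompt("./voice.safetensors")

chunks = []
for chunk in model.generate_audio_stream(voice, "A longer passage of text..."):
    chunks.append(np.asarray(chunk))  # assumed: chunks convert like the non-streaming output

# Concatenate all chunks and write one WAV at the model's sample rate.
scipy.io.wavfile.write("streamed.wav", model.sample_rate, np.concatenate(chunks))
```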
```python
# Preload multiple voices
voices = {
    "casual": model.get_state_for_audio_prompt("hf://kyutai/tts-voices/alba-mackenna/casual.wav"),
    "announcer": model.get_state_for_audio_prompt("./announcer.safetensors"),
}

# Use different voices
audio1 = model.generate_audio(voices["casual"], "Hey there!")
audio2 = model.generate_audio(voices["announcer"], "Breaking news!")
```

See docs/python-api.md for complete API reference.
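Because each call returns a complete audio array, a short dialogue can be assembled by generating one line per voice and concatenating the results. A minimal sketch reusing the `model` and `voices` dict from the example above, assuming the outputs convert to NumPy arrays as in the earlier save example; the 0.3 s silence gap is an arbitrary choice:

```python
import numpy as np
import scipy.io.wavfile

lines = [("casual", "Hey there!"), ("announcer", "Breaking news!")]
gap = np.zeros(int(0.3 * model.sample_rate), dtype=np.float32)  # 0.3 s pause between lines

parts = []
for voice_name, text in lines:
    parts.append(np.asarray(model.generate_audio(voices[voice_name], text), dtype=np.float32))
    parts.append(gap)

scipy.io.wavfile.write("dialogue.wav", model.sample_rate, np.concatenate(parts))
```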
Pre-made voices from hf://kyutai/tts-voices/:
- alba-mackenna/casual.wav (default, female)
- jessica-jian/casual.wav (female)
- voice-donations/Selfie.wav (male, marius)
- voice-donations/Butter.wav (male, javert)
- ears/p010/freeform_speech_01.wav (male, jean)
- vctk/p244_023.wav (female, fantine)
- vctk/p262_023.wav (female, eponine)
- vctk/p303_023.wav (female, azelma)

Or clone any voice from your own audio samples.
For best voice cloning results:
- Clean audio - Remove background noise (use Adobe Podcast Enhance)
- Length - 3-10 seconds of speech is ideal
- Quality - Input quality affects output quality
- Format - WAV, MP3, or any common audio format supported
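If a reference recording is longer than the recommended 3-10 seconds, you can trim it before cloning. A minimal sketch using pydub, a separate dependency (`pip install pydub`, and it needs ffmpeg available for MP3 input); the file names and the 8-second window are just examples:

```python
from pydub import AudioSegment

# Load a longer recording and keep the first 8 seconds (pydub slices in milliseconds).
clip = AudioSegment.from_file("raw_recording.mp3")
clip[0:8000].export("voice_clip.wav", format="wav")
```

The resulting voice_clip.wav can then be passed to --voice or get_state_for_audio_prompt.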
Performance notes:
- CPU-only - GPU provides no speedup (model too small, batch size 1)
- 2 cores - Uses only 2 CPU cores efficiently
- Streaming - Low latency (<200ms to first chunk)
- Safetensors - Pre-process voices to .safetensors for instant loading
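To check the real-time factor on your own hardware, time a generation and compare it to the duration of the audio produced. A minimal sketch using only the documented API and the standard library; the exact ratio will vary by CPU:

```python
import time
from pocket_tts import TTSModel

model = TTSModel.load_model()
voice = model.get_state_for_audio_prompt("hf://kyutai/tts-voices/alba-mackenna/casual.wav")

start = time.perf_counter()
audio = model.generate_audio(voice, "The quick brown fox jumps over the lazy dog.")
elapsed = time.perf_counter() - start

# Real-time factor: seconds of audio produced per wall-clock second.
audio_seconds = audio.shape[0] / model.sample_rate
print(f"{audio_seconds:.2f}s of audio in {elapsed:.2f}s (~{audio_seconds / elapsed:.1f}x real-time)")
```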
All commands output WAV files:
- Sample rate: 24 kHz
- Channels: Mono
- Bit depth: 16-bit PCM
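You can confirm these properties on a generated file with scipy, which is already used in the examples above. Files produced by the CLI should match these values; a file written with the scipy call in the Python example will have whatever dtype the array had unless you convert it first:

```python
import scipy.io.wavfile

rate, data = scipy.io.wavfile.read("output.wav")
print("sample rate:", rate)                                  # expected: 24000
print("channels:", 1 if data.ndim == 1 else data.shape[1])   # expected: 1 (mono)
print("dtype:", data.dtype)                                  # int16 for CLI-generated files
```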
Links:
- GitHub
- Tech Report
- Paper (arXiv)
- HuggingFace Model
- Voice Repository
- Live Demo