Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate high-quality English speech offline on CPU using 8 built-in voices or custom voice cloning with Kyutai's Pocket TTS model.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Fully local, offline text-to-speech using Kyutai's Pocket TTS model. Generate high-quality audio from text without any API calls or internet connection. Features 8 built-in voices, voice cloning support, and runs entirely on CPU.
- Fully local: no API calls, runs completely offline
- CPU-only: no GPU required, works on any computer
- Fast generation: ~2-6x real-time on CPU
- 8 built-in voices: alba, marius, javert, jean, fantine, cosette, eponine, azelma
- Voice cloning: clone any voice from a WAV sample
- Low latency: ~200ms to first audio chunk
- Simple Python API: easy integration into any project
```bash
# 1. Accept the model license on Hugging Face
#    https://huggingface.co/kyutai/pocket-tts

# 2. Install the package
pip install pocket-tts

# Or use uv for automatic dependency management
uvx pocket-tts generate "Hello world"
```
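To confirm the install worked before generating anything, you can check that the module is importable. A minimal sketch (`is_installed` is a hypothetical helper; `pocket_tts` is the module name used in the Python API example further down):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the module can be found, without importing it."""
    return importlib.util.find_spec(module_name) is not None

# The pip package is `pocket-tts`, but the importable module is `pocket_tts`
print("pocket-tts installed:", is_installed("pocket_tts"))
```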
```bash
# Basic usage
pocket-tts "Hello, I am your AI assistant"

# With specific voice
pocket-tts "Hello" --voice alba --output hello.wav

# With custom voice file (voice cloning)
pocket-tts "Hello" --voice-file myvoice.wav --output output.wav

# Adjust speed
pocket-tts "Hello" --speed 1.2

# Start local server
pocket-tts --serve

# List available voices
pocket-tts --list-voices
```
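If you drive the CLI from a script, the invocations above can be assembled programmatically. A sketch under stated assumptions: `build_tts_command` is a hypothetical helper, it emits only the flags documented above, and it assumes `--voice` and `--voice-file` are alternatives rather than combinable:

```python
def build_tts_command(text, voice="alba", output="output.wav",
                      speed=1.0, voice_file=None):
    """Assemble a pocket-tts argument list from the documented flags."""
    cmd = ["pocket-tts", text, "--output", output, "--speed", str(speed)]
    if voice_file:
        # Custom voice cloning replaces the preset voice
        cmd += ["--voice-file", voice_file]
    else:
        cmd += ["--voice", voice]
    return cmd

# Pass the result to subprocess.run(...) to invoke the CLI
print(build_tts_command("Hello", voice="marius", output="hi.wav"))
```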
```python
from pocket_tts import TTSModel
import scipy.io.wavfile

# Load model
tts_model = TTSModel.load_model()

# Get voice state
voice_state = tts_model.get_state_for_audio_prompt(
    "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
)

# Generate audio
audio = tts_model.generate_audio(voice_state, "Hello world!")

# Save to WAV
scipy.io.wavfile.write("output.wav", tts_model.sample_rate, audio.numpy())

# Check sample rate
print(f"Sample rate: {tts_model.sample_rate} Hz")
```
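The generated tensor holds float samples, and some audio tools are happier with 16-bit integer PCM than with float WAVs. A hedged sketch of the conversion, assuming samples lie in [-1, 1] (verify this against the model docs; `float_to_pcm16` is a hypothetical helper, not part of the pocket-tts API):

```python
import numpy as np

def float_to_pcm16(audio: np.ndarray) -> np.ndarray:
    """Convert float samples in [-1, 1] to 16-bit signed PCM."""
    clipped = np.clip(audio, -1.0, 1.0)
    return (clipped * 32767.0).astype(np.int16)
```

You would then write `float_to_pcm16(audio.numpy())` instead of the raw float array in the `scipy.io.wavfile.write` call above.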
| Voice | Description |
| --- | --- |
| alba | Casual female voice |
| marius | Male voice |
| javert | Clear male voice |
| jean | Natural male voice |
| fantine | Female voice |
| cosette | Female voice |
| eponine | Female voice |
| azelma | Female voice |

Or use `--voice-file /path/to/wav.wav` for custom voice cloning.
| Option | Description | Default |
| --- | --- | --- |
| `text` | Text to convert | Required |
| `-o, --output` | Output WAV file | output.wav |
| `-v, --voice` | Voice preset | alba |
| `-s, --speed` | Speech speed (0.5-2.0) | 1.0 |
| `--voice-file` | Custom WAV for cloning | None |
| `--serve` | Start HTTP server | False |
| `--list-voices` | List all voices | False |
- Python 3.10-3.14
- PyTorch 2.5+ (CPU version works)
- Works on 2 CPU cores
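Before installing, you can gate on the supported interpreter range. A small sketch (the 3.10-3.14 range is taken from the list above; `python_supported` is a hypothetical helper):

```python
import sys

# Supported range per the requirements above
MIN_VERSION, MAX_VERSION = (3, 10), (3, 14)

def python_supported(version=None) -> bool:
    """Check a (major, minor) tuple against the supported range."""
    if version is None:
        version = sys.version_info[:2]
    return MIN_VERSION <= version <= MAX_VERSION

print("This Python is supported:", python_supported())
```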
- Model is gated: accept the license on Hugging Face first
- English language only (v1)
- First run downloads the model (~100M parameters)
- Audio is returned as a 1D torch tensor (PCM data)
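Because the output is a flat 1D buffer of samples, clip duration follows directly from the sample count. A tiny sketch (`audio_duration_seconds` is a hypothetical helper; in practice you would pass `tts_model.sample_rate` from the Python API above):

```python
def audio_duration_seconds(samples, sample_rate: int) -> float:
    """Duration in seconds of a 1D sample buffer at the given rate."""
    return len(samples) / sample_rate

# 48000 samples at a hypothetical 24 kHz rate -> 2.0 seconds
print(audio_duration_seconds([0.0] * 48000, 24000))
```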
Links: Demo, GitHub, Hugging Face, Paper