
Pocket TTS Complete Documentation

Generate speech from text using Kyutai Pocket TTS - lightweight, CPU-friendly, streaming TTS with voice cloning. English only. ~6x real-time on M4 MacBook Air.


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: README.md, SKILL.md, docs/export_voice.md, docs/generate.md, docs/python-api.md, docs/serve.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 0.1.0

Documentation

Primary doc: SKILL.md (17 sections)

Pocket TTS

Lightweight CPU-friendly text-to-speech with voice cloning. No GPU required.

When to Use

  • Generating speech from text on CPU without a GPU
  • Voice cloning from audio samples
  • Streaming audio generation (low latency)
  • Local TTS without API dependencies
  • Real-time speech synthesis (~6x faster than real-time)

Key Features

  • 100M parameters - Small, efficient model
  • CPU-optimized - No GPU needed, uses only 2 cores
  • ~6x real-time - Fast generation on modern CPUs
  • ~200ms latency - To first audio chunk (streaming)
  • Voice cloning - From 3-10s audio samples
  • 24kHz mono WAV - High-quality output
  • English only - More languages planned

Installation

  pip install pocket-tts

  # or
  uv add pocket-tts
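
To confirm the install, a minimal check using only the Python standard library prints the installed package version and the location of the pocket-tts CLI entry point:

  import importlib.metadata
  import shutil

  # Version of the installed pocket-tts distribution
  print(importlib.metadata.version("pocket-tts"))

  # Path of the pocket-tts CLI entry point (None if it is not on PATH)
  print(shutil.which("pocket-tts"))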

Generate Speech

  # Basic generation (default voice)
  pocket-tts generate --text "Hello world"

  # Custom voice (local file, URL, or safetensors)
  pocket-tts generate --voice ./my_voice.wav
  pocket-tts generate --voice "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
  pocket-tts generate --voice ./voice.safetensors

  # Quality tuning
  pocket-tts generate --temperature 0.7 --lsd-decode-steps 3

See docs/generate.md for the full CLI reference.

Start Web Server

  # Start FastAPI server with web UI
  pocket-tts serve

  # Custom host/port
  pocket-tts serve --host localhost --port 8080

See docs/serve.md for server options.

Export Voice Embeddings

Convert audio files to .safetensors for faster loading:

  # Single file
  pocket-tts export-voice voice.mp3 voice.safetensors

  # Batch conversion
  pocket-tts export-voice voices/ embeddings/ --truncate

See docs/export_voice.md for export options.

Basic Usage

  from pocket_tts import TTSModel
  import scipy.io.wavfile

  # Load model
  model = TTSModel.load_model()

  # Get voice state
  voice = model.get_state_for_audio_prompt(
      "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
  )

  # Generate audio
  audio = model.generate_audio(voice, "Hello world!")

  # Save
  scipy.io.wavfile.write("output.wav", model.sample_rate, audio.numpy())

Load Model

  model = TTSModel.load_model(
      config="b6369a24",    # Model variant
      temp=0.7,             # Temperature (0.5-1.0)
      lsd_decode_steps=1,   # Generation steps (1-5)
      eos_threshold=-4.0    # End-of-sequence threshold
  )

Voice State

  # From audio file/URL
  voice = model.get_state_for_audio_prompt("./voice.wav")
  voice = model.get_state_for_audio_prompt("hf://kyutai/tts-voices/alba-mackenna/casual.wav")

  # From safetensors (fast loading)
  voice = model.get_state_for_audio_prompt("./voice.safetensors")

Streaming Generation

  # Stream audio chunks
  for chunk in model.generate_audio_stream(voice, "Long text..."):
      # Process/save/play each chunk as generated
      print(f"Chunk: {chunk.shape[0]} samples")
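
To save a streamed generation as a single file, one option is to buffer the chunks and join them at the end. A minimal sketch, assuming each chunk is a 1-D torch tensor like the output of generate_audio:

  import torch
  import scipy.io.wavfile

  # Buffer chunks as they are produced
  chunks = []
  for chunk in model.generate_audio_stream(voice, "Long text..."):
      chunks.append(chunk)

  # Join the buffered chunks and write one WAV file
  audio = torch.cat(chunks)
  scipy.io.wavfile.write("streamed.wav", model.sample_rate, audio.numpy())

For true low-latency playback, hand each chunk to the audio device as it arrives instead of buffering.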

Multi-Voice Management

  # Preload multiple voices
  voices = {
      "casual": model.get_state_for_audio_prompt("hf://kyutai/tts-voices/alba-mackenna/casual.wav"),
      "announcer": model.get_state_for_audio_prompt("./announcer.safetensors"),
  }

  # Use different voices
  audio1 = model.generate_audio(voices["casual"], "Hey there!")
  audio2 = model.generate_audio(voices["announcer"], "Breaking news!")

See docs/python-api.md for the complete API reference.

Available Voices

Pre-made voices from hf://kyutai/tts-voices/:

  • alba-mackenna/casual.wav (default, female)
  • jessica-jian/casual.wav (female)
  • voice-donations/Selfie.wav (male, marius)
  • voice-donations/Butter.wav (male, javert)
  • ears/p010/freeform_speech_01.wav (male, jean)
  • vctk/p244_023.wav (female, fantine)
  • vctk/p262_023.wav (female, eponine)
  • vctk/p303_023.wav (female, azelma)

Or clone any voice from your own audio samples.
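
To compare a few of the pre-made voices, you can loop over them with the Python API shown above; this is a sketch that reuses the documented load_model, get_state_for_audio_prompt, and generate_audio calls:

  import scipy.io.wavfile
  from pocket_tts import TTSModel

  model = TTSModel.load_model()

  # Two of the pre-made voices listed above
  voice_paths = {
      "alba": "hf://kyutai/tts-voices/alba-mackenna/casual.wav",
      "jessica": "hf://kyutai/tts-voices/jessica-jian/casual.wav",
  }

  for name, path in voice_paths.items():
      voice = model.get_state_for_audio_prompt(path)
      audio = model.generate_audio(voice, "This is a quick voice comparison.")
      scipy.io.wavfile.write(f"sample_{name}.wav", model.sample_rate, audio.numpy())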

Voice Cloning Tips

  • Clean audio - Remove background noise (use Adobe Podcast Enhance)
  • Length - 3-10 seconds of speech is ideal
  • Quality - Input quality affects output quality
  • Format - WAV, MP3, or any common audio format supported
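
Putting these tips together, a minimal cloning sketch using the Python API above (my_recording.wav is a hypothetical 3-10 second clean recording of the target speaker):

  import scipy.io.wavfile
  from pocket_tts import TTSModel

  model = TTSModel.load_model()

  # Clone the voice from a short, clean recording (hypothetical file name)
  voice = model.get_state_for_audio_prompt("./my_recording.wav")

  audio = model.generate_audio(voice, "This should sound like the recorded speaker.")
  scipy.io.wavfile.write("cloned.wav", model.sample_rate, audio.numpy())

If you reuse the same cloned voice often, convert it once with pocket-tts export-voice so later loads come from a .safetensors file.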

Performance Tips

  • CPU-only - GPU provides no speedup (model too small, batch size 1)
  • 2 cores - Uses only 2 CPU cores efficiently
  • Streaming - Low latency (<200ms to first chunk)
  • Safetensors - Pre-process voices to .safetensors for instant loading
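
A rough way to check the real-time factor on your own machine is to time one generation and compare it to the duration of the produced audio; a sketch assuming the Python API above:

  import time
  from pocket_tts import TTSModel

  model = TTSModel.load_model()
  voice = model.get_state_for_audio_prompt(
      "hf://kyutai/tts-voices/alba-mackenna/casual.wav"
  )

  start = time.perf_counter()
  audio = model.generate_audio(voice, "Measuring how fast this sentence is synthesized.")
  elapsed = time.perf_counter() - start

  # Audio duration in seconds = number of samples / sample rate
  duration = audio.shape[0] / model.sample_rate
  print(f"{duration:.2f}s of audio in {elapsed:.2f}s (~{duration / elapsed:.1f}x real-time)")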

Output Format

All commands output WAV files:

  • Sample rate: 24 kHz
  • Channels: Mono
  • Bit depth: 16-bit PCM
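
To verify the format of a generated file (for example one produced by pocket-tts generate), you can inspect it with scipy:

  import scipy.io.wavfile

  rate, data = scipy.io.wavfile.read("output.wav")
  print("Sample rate:", rate)                                  # 24000 expected
  print("Channels:", 1 if data.ndim == 1 else data.shape[1])   # mono expected
  print("Dtype:", data.dtype)                                  # int16 corresponds to 16-bit PCM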

Links

  • GitHub
  • Tech Report
  • Paper (arXiv)
  • HuggingFace Model
  • Voice Repository
  • Live Demo

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package (6 docs)
  • SKILL.md (primary doc)
  • docs/export_voice.md
  • docs/generate.md
  • docs/python-api.md
  • docs/serve.md
  • README.md