Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Real-time voice conversations in Discord voice channels with Claude AI
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Real-time voice conversations in Discord voice channels. Join a voice channel, speak, and have your words transcribed, processed by Claude, and spoken back.
- Join/Leave Voice Channels: via slash commands, CLI, or agent tool
- Voice Activity Detection (VAD): automatically detects when users are speaking
- Speech-to-Text: Whisper API (OpenAI), Deepgram, or local Whisper (offline)
- Streaming STT: real-time transcription with the Deepgram WebSocket (~1s latency reduction)
- Agent Integration: transcribed speech is routed through the Clawdbot agent
- Text-to-Speech: OpenAI TTS, ElevenLabs, or Kokoro (local/offline)
- Audio Playback: responses are spoken back in the voice channel
- Barge-in Support: stops speaking immediately when a user starts talking
- Auto-reconnect: automatic heartbeat monitoring and reconnection on disconnect
- Discord bot with voice permissions (Connect, Speak, Use Voice Activity)
- API keys for STT and TTS providers
- System dependencies for voice:
  - ffmpeg (audio processing)
  - Native build tools for @discordjs/opus and sodium-native
```sh
# Ubuntu/Debian
sudo apt-get install ffmpeg build-essential python3

# Fedora/RHEL
sudo dnf install ffmpeg gcc-c++ make python3

# macOS
brew install ffmpeg
```
```sh
clawdhub install discord-voice
```

Or manually:

```sh
cd ~/.clawdbot/extensions
git clone <repository-url> discord-voice
cd discord-voice
npm install
```
```json5
{
  plugins: {
    entries: {
      "discord-voice": {
        enabled: true,
        config: {
          sttProvider: "local-whisper",
          ttsProvider: "openai",
          ttsVoice: "nova",
          vadSensitivity: "medium",
          allowedUsers: [], // Empty = allow all users
          silenceThresholdMs: 1500,
          maxRecordingMs: 30000,
          openai: {
            apiKey: "sk-...", // Or use OPENAI_API_KEY env var
          },
        },
      },
    },
  },
}
```
Ensure your Discord bot has these permissions:

- Connect - Join voice channels
- Speak - Play audio
- Use Voice Activity - Detect when users speak

Add these to your bot's OAuth2 URL or configure them in the Discord Developer Portal.
| Option | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true | Enable/disable the plugin |
| sttProvider | string | "local-whisper" | "whisper", "deepgram", or "local-whisper" |
| streamingSTT | boolean | true | Use streaming STT (Deepgram only, ~1s faster) |
| ttsProvider | string | "openai" | "openai" or "elevenlabs" |
| ttsVoice | string | "nova" | Voice ID for TTS |
| vadSensitivity | string | "medium" | "low", "medium", or "high" |
| bargeIn | boolean | true | Stop speaking when user talks |
| allowedUsers | string[] | [] | User IDs allowed (empty = all) |
| silenceThresholdMs | number | 1500 | Silence before processing (ms) |
| maxRecordingMs | number | 30000 | Max recording length (ms) |
| heartbeatIntervalMs | number | 30000 | Connection health check interval |
| autoJoinChannel | string | undefined | Channel ID to auto-join on startup |
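As a rough illustration of how the defaults above might combine with user-supplied settings, here is a hypothetical sketch. The VoiceConfig shape mirrors the options table, but the resolveConfig helper is illustrative, not the plugin's actual code.

```typescript
// Illustrative config shape mirroring the options table.
interface VoiceConfig {
  enabled: boolean;
  sttProvider: string;
  streamingSTT: boolean;
  ttsProvider: string;
  ttsVoice: string;
  vadSensitivity: "low" | "medium" | "high";
  bargeIn: boolean;
  allowedUsers: string[];
  silenceThresholdMs: number;
  maxRecordingMs: number;
  heartbeatIntervalMs: number;
}

// Defaults taken from the table above.
const DEFAULTS: VoiceConfig = {
  enabled: true,
  sttProvider: "local-whisper",
  streamingSTT: true,
  ttsProvider: "openai",
  ttsVoice: "nova",
  vadSensitivity: "medium",
  bargeIn: true,
  allowedUsers: [], // empty = allow all users
  silenceThresholdMs: 1500,
  maxRecordingMs: 30000,
  heartbeatIntervalMs: 30000,
};

// Shallow merge: any field the user omits falls back to its default.
function resolveConfig(user: Partial<VoiceConfig>): VoiceConfig {
  return { ...DEFAULTS, ...user };
}
```

For example, `resolveConfig({ ttsVoice: "alloy" })` overrides only the voice while every other option keeps its default.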
OpenAI (Whisper + TTS):

```json5
{
  openai: {
    apiKey: "sk-...",
    whisperModel: "whisper-1",
    ttsModel: "tts-1",
  },
}
```

ElevenLabs (TTS only):

```json5
{
  elevenlabs: {
    apiKey: "...",
    voiceId: "21m00Tcm4TlvDq8ikWAM", // Rachel
    modelId: "eleven_multilingual_v2",
  },
}
```

Deepgram (STT only):

```json5
{
  deepgram: {
    apiKey: "...",
    model: "nova-2",
  },
}
```
Once registered with Discord, use these commands:

- /discord_voice join <channel> - Join a voice channel
- /discord_voice leave - Leave the current voice channel
- /discord_voice status - Show voice connection status
```sh
# Join a voice channel
clawdbot discord_voice join <channelId>

# Leave voice
clawdbot discord_voice leave --guild <guildId>

# Check status
clawdbot discord_voice status
```
The agent can use the discord_voice tool:

> Join voice channel 1234567890

The tool supports these actions:

- join - Join a voice channel (requires channelId)
- leave - Leave the voice channel
- speak - Speak text in the voice channel
- status - Get the current voice status
1. Join: Bot joins the specified voice channel
2. Listen: VAD detects when users start/stop speaking
3. Record: Audio is buffered while the user speaks
4. Transcribe: On silence, audio is sent to the STT provider
5. Process: Transcribed text is sent to the Clawdbot agent
6. Synthesize: The agent response is converted to audio via TTS
7. Play: Audio is played back in the voice channel
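The transcribe-process-synthesize-play portion of that loop can be sketched as a simple async chain. The four provider functions here are placeholders injected by the caller, not real plugin APIs.

```typescript
// Placeholders for the configured STT, agent, and TTS providers.
type Providers = {
  transcribe: (audio: Uint8Array) => Promise<string>; // STT
  runAgent: (text: string) => Promise<string>;        // Clawdbot agent
  synthesize: (text: string) => Promise<Uint8Array>;  // TTS
  play: (audio: Uint8Array) => Promise<void>;         // voice-channel playback
};

// Invoked once VAD reports silence after an utterance (the audio has
// already been buffered during the listen/record phase).
async function handleUtterance(recorded: Uint8Array, p: Providers): Promise<string> {
  const text = await p.transcribe(recorded); // speech -> text
  const reply = await p.runAgent(text);      // text -> agent response
  const audio = await p.synthesize(reply);   // response -> audio
  await p.play(audio);                       // speak it back
  return reply;
}
```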
When using Deepgram as your STT provider, streaming mode is enabled by default. This provides:

- ~1 second faster end-to-end latency
- Real-time feedback with interim transcription results
- Automatic keep-alive to prevent connection timeouts
- Fallback to batch transcription if streaming fails

To use streaming STT:

```json5
{
  sttProvider: "deepgram",
  streamingSTT: true, // default
  deepgram: {
    apiKey: "...",
    model: "nova-2",
  },
}
```
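The fallback behaviour amounts to trying the streaming path first and falling back to a one-shot request on error. A minimal sketch, where both transcription functions are stand-ins rather than the real Deepgram client:

```typescript
// Try the low-latency streaming path; on any failure, fall back to
// batch transcription. Both callbacks are illustrative stand-ins.
async function transcribeWithFallback(
  audio: Uint8Array,
  streaming: (a: Uint8Array) => Promise<string>,
  batch: (a: Uint8Array) => Promise<string>,
): Promise<string> {
  try {
    return await streaming(audio); // WebSocket path (~1s faster)
  } catch {
    return await batch(audio);     // one-shot fallback
  }
}
```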
When enabled (the default), the bot immediately stops speaking if a user starts talking. This creates a more natural conversational flow where you can interrupt the bot. To disable it (and let the bot finish speaking):

```json5
{
  bargeIn: false,
}
```
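Conceptually, barge-in is just a VAD speech-start event cancelling in-progress playback. A minimal sketch, with a hypothetical Player type standing in for the real audio player:

```typescript
// Illustrative stand-in for the voice-channel audio player.
interface Player {
  playing: boolean;
  stop(): void;
}

// Called when VAD detects that a user has started speaking.
function onSpeechStart(player: Player, bargeIn: boolean): void {
  // With bargeIn enabled, interrupt the bot as soon as the user talks.
  if (bargeIn && player.playing) {
    player.stop();
  }
}
```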
The plugin includes automatic connection health monitoring:

- Heartbeat checks every 30 seconds (configurable)
- Auto-reconnect on disconnect with exponential backoff
- Maximum of 3 attempts before giving up

If the connection drops, you'll see logs like:

```
[discord-voice] Disconnected from voice channel
[discord-voice] Reconnection attempt 1/3
[discord-voice] Reconnected successfully
```
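An exponential backoff schedule capped at three attempts can be sketched as below. The 1-second base delay is an assumption for illustration; the docs only state "exponential backoff" and "max 3 attempts".

```typescript
// Compute the reconnect delay schedule: baseMs, 2*baseMs, 4*baseMs, ...
// capped at maxAttempts tries. The 1s base is an assumed value.
function reconnectDelays(maxAttempts = 3, baseMs = 1000): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    delays.push(baseMs * 2 ** attempt); // doubles each attempt
  }
  return delays;
}
```

With the defaults this yields waits of 1s, 2s, and 4s before the plugin gives up.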
- low: Picks up quiet speech; may trigger on background noise
- medium: Balanced (recommended)
- high: Requires louder, clearer speech
Ensure the Discord channel is configured and the bot is connected before using voice.
Install build tools:

```sh
npm install -g node-gyp
npm rebuild @discordjs/opus sodium-native
```
- Check that the bot has Connect + Speak permissions
- Check that the bot isn't server-muted
- Verify the TTS API key is valid
- Check that the STT API key is valid
- Check that audio is being recorded (see debug logs)
- Try adjusting the VAD sensitivity
```sh
DEBUG=discord-voice clawdbot gateway start
```
| Variable | Description |
|---|---|
| DISCORD_TOKEN | Discord bot token (required) |
| OPENAI_API_KEY | OpenAI API key (Whisper + TTS) |
| ELEVENLABS_API_KEY | ElevenLabs API key |
| DEEPGRAM_API_KEY | Deepgram API key |
- Only one voice channel per guild at a time
- Maximum recording length: 30 seconds (configurable)
- Requires a stable network for real-time audio
- TTS output may have a slight delay due to synthesis
MIT