# Send Clack to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "clack",
    "name": "Clack",
    "source": "tencent",
    "type": "skill",
    "category": "通讯协作",
    "sourceUrl": "https://clawhub.ai/fbn3799/clack",
    "canonicalUrl": "https://clawhub.ai/fbn3799/clack",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/clack",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=clack",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "server.py",
      "CHANGELOG.md",
      "README.md",
      "SKILL.md",
      "scripts/setup.sh",
      "scripts/clack.sh"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "clack",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-29T10:16:13.239Z",
      "expiresAt": "2026-05-06T10:16:13.239Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=clack",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=clack",
        "contentDisposition": "attachment; filename=\"clack-1.5.3.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "clack"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/clack"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/clack",
    "downloadUrl": "https://openagent3.xyz/downloads/clack",
    "agentUrl": "https://openagent3.xyz/skills/clack/agent",
    "manifestUrl": "https://openagent3.xyz/skills/clack/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/clack/agent.md"
  }
}
```
## Documentation

### Clack

WebSocket relay server that enables real-time voice conversations with an OpenClaw agent.

Flow: Client audio (PCM 16kHz/16-bit/mono) → STT → OpenClaw Gateway → TTS → PCM audio back to client.

Per-session provider selection: The client can independently choose STT and TTS providers per call — any combination of on-device (Apple speech frameworks) and server-side providers (ElevenLabs, OpenAI, Deepgram). The server auto-detects all available providers based on configured API keys and exposes them via /info.
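A client can inspect what the server offers before starting a call. Below is a minimal sketch of parsing a `/info` response; the field names (`sttProviders`, `ttsProviders`, `agentName`) are assumptions for illustration, not taken from the package docs:

```python
import json

# Hypothetical /info response body (field names are assumptions).
sample_info = json.loads("""
{
  "agentName": "Storm",
  "sttProviders": ["elevenlabs", "openai", "deepgram", "local"],
  "ttsProviders": ["elevenlabs", "local"]
}
""")

def available_pairs(info: dict) -> list[tuple[str, str]]:
    """Every STT/TTS combination the server advertises."""
    return [(stt, tts)
            for stt in info["sttProviders"]
            for tts in info["ttsProviders"]]

pairs = available_pairs(sample_info)
print(len(pairs))  # → 8 (4 STT options x 2 TTS options)
```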

### Prerequisites

- Python 3.10+
- An API key for at least one provider (ElevenLabs, OpenAI, or Deepgram); not needed for local speech mode
- OpenClaw Gateway with the chatCompletions endpoint enabled
- Root/sudo access (for systemd)
- Secure connection: a domain with SSL (recommended) or Tailscale

### Setup

Run the setup script. It creates a virtual environment, installs dependencies, prompts for API keys, configures a systemd service, and optionally sets up SSL.

```bash
sudo bash scripts/setup.sh
```

The script auto-detects your OpenClaw gateway config and interactively prompts for provider API keys (ElevenLabs, OpenAI, Deepgram — all optional). On re-runs, existing keys can be kept, updated, or deleted.

### Options

```bash
bash scripts/setup.sh [--port 9878] [--domain clack.example.com]
```

| Flag | Default | Description |
| --- | --- | --- |
| `--port` | 9878 | Relay server port |
| `--domain` | (none) | Domain for SSL setup (enables WSS) |

### Connection modes

All connections are encrypted. The app supports two modes:

Domain with SSL (recommended):

```bash
bash scripts/setup.sh --domain clack.yourdomain.com
# → wss://clack.yourdomain.com/voice
```

Requires a DNS A record pointing the domain to your server IP. The setup script auto-configures SSL via Caddy. You can use a free domain from DuckDNS or your own.

Tailscale:

```bash
# Install Tailscale on your server, then connect from the app using your Tailscale IP
# → ws://100.x.x.x:9878/voice (encrypted at the network level)
```

No domain or SSL setup needed. Tailscale encrypts all traffic at the network layer. Install Tailscale on both your server and phone, then use the server's Tailscale IP in the app.

Security note: Port 9878 should be firewalled from the public internet. Only allow access via localhost (for Caddy reverse proxy) and Tailscale. The app does not support unencrypted public connections.

### Enable OpenClaw Gateway endpoint

The gateway must have chatCompletions enabled. Apply this config patch:

{"http": {"endpoints": {"chatCompletions": {"enabled": true}}}}

### Management

```bash
clack status     # Check service status
clack restart    # Restart the server
clack logs       # Tail logs
clack pair       # Generate a new pairing code
clack update     # Pull latest code and restart
clack setup      # Re-run interactive setup (add SSL later, update keys, etc.)
clack uninstall  # Remove service and venv
```

### Client App

- 📱 iOS: available on the App Store (or build from source at github.com/fbn3799/clack-app)
- 🤖 Android: coming soon

### Authentication

All endpoints except GET /health and POST /pair require a valid auth token (RELAY_AUTH_TOKEN). Tokens are verified using constant-time HMAC comparison to prevent timing attacks.
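The constant-time comparison can be sketched with Python's standard library. This illustrates the technique, not the server's actual code; the token value is a placeholder:

```python
import hmac

RELAY_AUTH_TOKEN = "0123456789abcdef0123456789abcdef"  # placeholder 32-char token

def token_ok(presented: str) -> bool:
    # hmac.compare_digest takes time independent of where the strings
    # first differ, which defeats byte-by-byte timing attacks.
    return hmac.compare_digest(presented, RELAY_AUTH_TOKEN)

print(token_ok(RELAY_AUTH_TOKEN))  # → True
print(token_ok("wrong-token"))     # → False
```

A plain `==` comparison can short-circuit at the first mismatched byte, which is what leaks timing information.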

### Pairing System

- 6-character alphanumeric one-time codes (~2.1 billion combinations)
- Codes expire after 5 minutes (TTL) and are single-use
- Rate limited: 5 attempts per IP per 5 minutes; returns HTTP 429 once the limit is exceeded
- 2-second delay on failed attempts to slow brute force
- Generating a code requires the admin auth token (GET /pair)
- Redeeming a code is public but rate-limited (POST /pair)
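A generator matching those parameters might look like the sketch below; the server's actual implementation may differ, but a 6-character code over 36 symbols gives 36^6 ≈ 2.18 billion combinations, in line with the figure above:

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits  # 36 symbols

def new_pairing_code() -> str:
    # secrets.choice draws from a CSPRNG, unlike random.choice.
    return "".join(secrets.choice(ALPHABET) for _ in range(6))

code = new_pairing_code()
print(code)          # e.g. "K7Q2ZD"
print(36 ** 6)       # → 2176782336 combinations
```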

### Encrypted Connections

- Domain mode: WSS (WebSocket Secure) via Caddy with automatic SSL certificates
- Tailscale mode: WireGuard encryption at the network layer
- The app enforces encrypted connections; no unencrypted public access
- Port 9878 should be firewalled, accessible only via localhost and Tailscale

### Input Sanitization

All user-facing text inputs are sanitized before processing:

- Voice transcripts: capped at 300 characters (CLACK_MAX_INPUT_CHARS); echo detection filters feedback loops, and hallucination detection discards nonsense STT output
- User context: stripped to natural-language characters only (letters, numbers, common punctuation, whitespace). Control characters, escape sequences, and non-printable characters are removed. Capped at 1000 characters. Context is wrapped in explicit delimiters before injection into the system prompt.
- No shell execution: all external communication uses structured HTTP/WebSocket APIs. No user input is ever passed to a shell.
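The character-class step of that context sanitization could be sketched as follows. The exact allowed set here is an assumption, and the real server additionally strips IP addresses and domains (see User Context below):

```python
import re

MAX_CONTEXT_CHARS = 1000

# Keep letters, digits, whitespace, and common punctuation; everything
# else (control characters, escape sequences, non-printables) is dropped.
# This exact character class is an assumption, not the server's own regex.
_DISALLOWED = re.compile(r"[^\w\s.,!?;:'\"()\-]")

def sanitize_context(text: str) -> str:
    cleaned = _DISALLOWED.sub("", text)
    return cleaned[:MAX_CONTEXT_CHARS]

# "\x07" is an ASCII control character (BEL) and gets removed.
print(sanitize_context("Prefers tea, not coffee.\x07"))  # → Prefers tea, not coffee.
```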

### Data Privacy

- No analytics, tracking, or telemetry
- Voice audio goes to your server and only to the providers you choose
- The iOS app stores only settings locally (server address, token, preferences)
- Third-party API usage depends on your provider config (ElevenLabs, OpenAI, Deepgram)

### Session Routing

Each voice call creates a clack:<uuid> session in OpenClaw. These are small, isolated sessions — one per call — so voice conversations don't pollute your main agent context.

### Session Picker

The session picker in the iOS app provides context injection only. When you select a session key, it is added as text context to the LLM prompt — it does not change routing. All voice calls still create their own clack:<uuid> session.

### User Context

Users can provide persistent context that gets injected into the system prompt for every voice call. This lets the AI know about the user's preferences, notes, or any background information.

### How to set context

- App text field: in the Clack app under Settings → Context, enter free-form text
- Session picker: select an OpenClaw session to inject its content as context
- WebSocket message: send `{"type": "set_context", "text": "..."}` during a voice session
- HTTP API: `PUT /context?token=...&text=...` or `POST /context` with JSON body `{"text": "..."}`

Context is sanitized before saving — only natural-language characters are kept (letters, numbers, common punctuation). IP addresses and domains are stripped. The server returns the sanitized text in the response so the app can show the user exactly what will be sent as context.

Context persists across calls and server restarts. Clear it via DELETE /context or by sending an empty set_context message.

### Conversation History

The relay maintains a shared history file across calls for continuity. History is stored as JSON in CLACK_HISTORY_DIR (default: /var/lib/clack/history).

- Max messages: 50 (configurable via CLACK_MAX_HISTORY)
- History persists across calls and server restarts
- Viewable via GET /history, clearable via DELETE /history
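The capped-history behavior maps naturally onto a bounded deque. A sketch under the assumption that entries are simple role/content records (the actual file name and layout are not documented here):

```python
import json
import tempfile
from collections import deque
from pathlib import Path

MAX_HISTORY = 50  # mirrors CLACK_MAX_HISTORY

# deque(maxlen=...) silently discards the oldest entry on overflow.
history: deque = deque(maxlen=MAX_HISTORY)

for i in range(60):
    history.append({"role": "user", "content": f"message {i}"})

# Persist as JSON, as the docs describe (path here is a temp stand-in
# for CLACK_HISTORY_DIR).
path = Path(tempfile.mkdtemp()) / "history.json"
path.write_text(json.dumps(list(history)))

print(len(history))  # → 50; messages 0-9 were dropped
```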

### Echo Test Mode

For testing audio round-trips without using LLM credits:

- Server-wide: set the CLACK_ECHO_MODE=true environment variable
- Per-session: send `{"type":"start","config":{"echo":true}}` from the client

In echo mode, transcribed text is echoed back through TTS instead of being sent to the LLM. Audio is peak-normalized with capped gain to ensure consistent playback volume.
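Peak normalization with a gain cap can be sketched as below. The target peak and gain limit are assumptions for illustration, not values from the package:

```python
import array

TARGET_PEAK = 30000   # just under the int16 maximum (assumption)
MAX_GAIN = 4.0        # never amplify more than 4x (assumption)

def normalize_pcm(samples: array.array) -> array.array:
    """Peak-normalize 16-bit PCM without exceeding MAX_GAIN."""
    peak = max(1, max(abs(s) for s in samples))
    gain = min(TARGET_PEAK / peak, MAX_GAIN)
    return array.array("h", (int(s * gain) for s in samples))

quiet = array.array("h", [1000, -2000, 1500])
loud = normalize_pcm(quiet)
print(max(abs(s) for s in loud))  # → 8000 (gain capped at 4x, not full scale)
```

Capping the gain keeps very quiet recordings from being boosted into pure noise, which is why echo-mode playback stays at a consistent but bounded volume.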

### Provider Selection

STT and TTS providers can be configured independently per session. The server auto-detects all available providers at startup based on which API keys are set (ELEVENLABS_API_KEY, OPENAI_API_KEY, DEEPGRAM_API_KEY).

### Available modes per direction (STT / TTS)

- On-device (local): uses Apple's built-in speech frameworks. Zero API costs.
- Server provider: ElevenLabs, OpenAI, or Deepgram, whichever keys are configured.

### How it works

1. The app fetches GET /info to discover available providers
2. The user picks STT and TTS providers independently in Settings → Voice
3. On call start, the app sends sttProvider and ttsProvider in the session config
4. The server creates the appropriate provider instances per session

### Example combinations

| STT | TTS | Use case |
| --- | --- | --- |
| ElevenLabs | ElevenLabs | Full cloud, best quality |
| On-device | ElevenLabs | Save STT costs, keep premium voices |
| On-device | On-device | Fully local: zero API usage, works offline |
| OpenAI | Deepgram | Mix providers freely |

Cost optimization: use on-device STT (free, unlimited) with a premium cloud TTS voice to get great output quality while eliminating transcription costs entirely, or go fully on-device for zero API spend.

### Text input mode

When STT is set to on-device, the client sends transcribed text instead of audio:

{"type": "text_input", "text": "What's the weather like?"}

When TTS is set to on-device, the server returns response_text only and skips audio synthesis.

### AI Response Rules

- Responses are enforced to 1–3 sentences for natural voice conversation
- Server-side max_tokens: 150 to prevent runaway responses
- Server-side max input: 300 characters (CLACK_MAX_INPUT_CHARS); transcripts exceeding this are truncated

### HTTP Endpoints

| Endpoint | Method | Auth | Description |
| --- | --- | --- | --- |
| `/health` | GET | No | Health check; returns service status |
| `/pair` | POST | No | Redeem pairing code → get auth token (rate-limited) |
| `/pair` | GET | Yes | Generate one-time pairing code |
| `/info` | GET | Yes | Server info: agent name, available STT/TTS providers |
| `/voices` | GET | Yes | List available TTS voices |
| `/sessions` | GET | Yes | List active sessions |
| `/history` | GET | Yes | Get conversation history |
| `/history` | DELETE | Yes | Clear conversation history |
| `/context` | GET | Yes | Get current user context |
| `/context` | PUT | Yes | Set user context (query param `text`) |
| `/context` | POST | Yes | Set user context (JSON body `{"text": "..."}`) |
| `/context` | DELETE | Yes | Clear user context |
| `/voice` | WebSocket | Yes | Voice relay connection |

### WebSocket Protocol

Endpoint: `ws://<host>:<port>/voice?token=<RELAY_AUTH_TOKEN>`

### Client → Server

| Message | Format | Description |
| --- | --- | --- |
| `{"type":"start","config":{...}}` | JSON | Start session. Config: `voice`, `systemPrompt`, `echo`, `sttProvider`, `ttsProvider` |
| Binary frames | bytes | Raw PCM audio (16 kHz, 16-bit, mono) |
| `{"type":"text_input","text":"..."}` | JSON | Local speech mode; send text directly |
| `{"type":"end_speech"}` | JSON | Signal end of speech, triggers processing |
| `{"type":"interrupt"}` | JSON | Cancel current TTS playback |
| `{"type":"ping"}` | JSON | Keepalive |
| `{"type":"set_context","text":"..."}` | JSON | Set user context (sanitized before saving) |
| `{"type":"auth","token":"..."}` | JSON | Authenticate (alternative to query param) |

### Server → Client

| Message | Format | Description |
| --- | --- | --- |
| `{"type":"ready"}` | JSON | Session ready |
| `{"type":"auth_ok"}` / `{"type":"auth_failed"}` | JSON | Auth result |
| `{"type":"processing","stage":"..."}` | JSON | Stage: `transcribing`, `thinking`, `speaking`, `filtered` |
| `{"type":"transcript","text":"...","final":true}` | JSON | STT result |
| `{"type":"response_text","text":"..."}` | JSON | LLM text response |
| `{"type":"response_start","format":"pcm_16000"}` | JSON | Audio stream starting |
| Binary frames | bytes | TTS audio (PCM 16 kHz, 16-bit, mono) |
| `{"type":"response_end"}` | JSON | Audio stream done |
| `{"type":"tts_cancelled"}` | JSON | TTS playback was interrupted |
| `{"type":"context_updated","text":"..."}` | JSON | Context saved; `text` contains the sanitized version |
| `{"type":"context_cleared"}` | JSON | Context was cleared |
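The JSON frames above are plain text on the wire; a typical session sends a start frame, then binary PCM, then end_speech. A few illustrative helpers for building client messages (the helper functions themselves are not part of the package, only the message shapes are):

```python
import json

def start_msg(voice: str = "will", stt: str = "local",
              tts: str = "elevenlabs", echo: bool = False) -> str:
    # Field names follow the Client → Server table above.
    return json.dumps({
        "type": "start",
        "config": {"voice": voice, "sttProvider": stt,
                   "ttsProvider": tts, "echo": echo},
    })

def text_input_msg(text: str) -> str:
    # Used when STT runs on-device and the client sends text, not audio.
    return json.dumps({"type": "text_input", "text": text})

END_SPEECH = json.dumps({"type": "end_speech"})
INTERRUPT = json.dumps({"type": "interrupt"})

print(start_msg(echo=True))
print(text_input_msg("What's the weather like?"))
```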

### Features

- Multi-provider STT/TTS: ElevenLabs, OpenAI, and Deepgram support
- Independent voice input/output configuration: choose STT and TTS providers separately for full control over how your voice is transcribed and how the AI speaks back
- On-device speech: Apple speech frameworks for STT and/or TTS; zero API costs, mix with cloud providers freely
- Cost optimization: use free on-device transcription with premium cloud voices, or go fully local for zero spend
- Voice response rules: AI responses enforced short (1-3 sentences, max_tokens 150)
- Input length limiting: configurable max transcript length (default 300 chars)
- Confidence filtering: low-confidence STT results are discarded
- Echo detection: prevents feedback loops (TTS → mic → STT)
- Echo test mode: test the audio pipeline without the LLM (server-wide or per-session)
- Audio normalization: peak normalization with capped gain for echo mode playback
- Audio chunking: long recordings auto-split for reliable transcription
- Hallucination detection: filters repetitive/nonsense STT output
- Interrupt/TTS cancellation: cancel in-progress TTS for all providers
- Pairing system: rate-limited one-time codes for secure device pairing
- Session isolation: each call gets its own clack:<uuid> session
- Conversation history: shared across calls, 50 messages max, persistent
- Token auth: constant-time HMAC verification
- Keepalive pings: prevent client timeouts during long LLM responses
- Silence detection: default threshold 220, configurable range 20–1000
- Auto-restart: systemd restarts on crash

### Voice Configuration

20 built-in ElevenLabs voices available. Default: Will. Pass voice name or ID in session config:

{"type": "start", "config": {"voice": "aria"}}

Available aliases: will, aria, roger, sarah, laura, charlie, george, callum, river, liam, charlotte, alice, matilda, jessica, eric, chris, brian, daniel, lily, bill.

### Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| `RELAY_AUTH_TOKEN` | — | Required. Client auth token (32-char) |
| `OPENCLAW_GATEWAY_URL` | `http://127.0.0.1:18789` | OpenClaw Gateway URL |
| `OPENCLAW_GATEWAY_TOKEN` | — | Gateway bearer token |
| `STT_PROVIDER` | `elevenlabs` | STT provider (elevenlabs, openai, deepgram) |
| `TTS_PROVIDER` | `elevenlabs` | TTS provider (elevenlabs, openai, deepgram) |
| `TTS_VOICE` | `Will` | Default voice (name or ID) |
| `VOICE_RELAY_PORT` | `9878` | Server port |
| `CLACK_ECHO_MODE` | `false` | Enable echo test mode server-wide |
| `CLACK_MAX_INPUT_CHARS` | `300` | Max transcript length (chars) |
| `CLACK_HISTORY_DIR` | `/var/lib/clack/history` | History file storage directory |
| `CLACK_MAX_HISTORY` | `50` | Max conversation history messages |
| `CLACK_AGENT_NAME` | `Storm` | Agent name shown in the iOS app |

Provider API keys (ELEVENLABS_API_KEY, OPENAI_API_KEY, DEEPGRAM_API_KEY) are stored in config.json with restricted file permissions, not as environment variables. The setup script manages these interactively.
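The restricted-permissions step can be sketched like this. The key names mirror the docs, but the file layout and location are assumptions; only the owner-only (0o600) mode reflects the described behavior:

```python
import json
import os
import stat
import tempfile
from pathlib import Path

# Hypothetical config.json content; values are placeholders.
cfg = {"ELEVENLABS_API_KEY": "sk-example", "OPENAI_API_KEY": ""}

path = Path(tempfile.mkdtemp()) / "config.json"
path.write_text(json.dumps(cfg, indent=2))
os.chmod(path, 0o600)  # read/write for owner, nothing for group/other

mode = stat.S_IMODE(path.stat().st_mode)
print(oct(mode))  # → 0o600
```

Keeping secrets in an owner-only file rather than in environment variables avoids leaking them through `/proc/<pid>/environ` or systemd unit dumps.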
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: fbn3799
- Version: 1.5.3
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-04-29T10:16:13.239Z
- Expires at: 2026-05-06T10:16:13.239Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/clack)
- [Send to Agent page](https://openagent3.xyz/skills/clack/agent)
- [JSON manifest](https://openagent3.xyz/skills/clack/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/clack/agent.md)
- [Download page](https://openagent3.xyz/downloads/clack)