Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Turn ideas into podcasts, explainer videos, voice narration, and AI images via ListenHub. Use when the user wants to "make a podcast", "create an explainer v...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Generate podcasts, explainer videos, TTS audio, and AI images through shell scripts that wrap the ListenHub API.
Set LISTENHUB_API_KEY before first use. Two options:
- Option A (recommended), OpenClaw env config: add to ~/.openclaw/openclaw.json under env: { "env": { "LISTENHUB_API_KEY": "lh_sk_..." } }
- Option B, shell export: export LISTENHUB_API_KEY="lh_sk_..."
Get your key: https://listenhub.ai/settings/api-keys
For image generation, also set LISTENHUB_OUTPUT_DIR (defaults to ~/Downloads).
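Before invoking any script, an agent can fail fast when the key is missing. A minimal sketch; the function name is illustrative, only the LISTENHUB_API_KEY variable name comes from the docs above:

```shell
# Returns 0 when LISTENHUB_API_KEY is set and non-empty, non-zero otherwise.
check_listenhub_key() {
  [ -n "${LISTENHUB_API_KEY:-}" ]
}
```

Running this guard once up front gives a clearer error than letting a script fail mid-request.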
All scripts live at scripts/ relative to this SKILL.md. Resolve the path:
SCRIPTS="$(cd "$(dirname "<path-to-this-SKILL.md>")" && pwd)/scripts"
Dependencies: curl, jq (install if missing).
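The resolution step above can be wrapped as a small helper together with a dependency check. A sketch; the function name is mine, the logic mirrors the one-liner in the docs:

```shell
# Prints the absolute scripts/ directory that sits next to a given SKILL.md.
resolve_scripts_dir() {
  printf '%s/scripts\n' "$(cd "$(dirname "$1")" && pwd)"
}

# Warn about missing dependencies named in the docs.
for dep in curl jq; do
  command -v "$dep" >/dev/null 2>&1 || echo "missing dependency: $dep" >&2
done
```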
| Mode | Script | Use Case |
| --- | --- | --- |
| Podcast | create-podcast.sh | 1-2 speaker discussion |
| Explainer | create-explainer.sh + generate-video.sh | Narration + AI visuals |
| TTS | create-tts.sh | Pure voice reading |
| Speech | create-speech.sh | Multi-speaker scripted audio |
| Image | generate-image.sh | AI image generation |

Helper scripts: get-speakers.sh (list voices), check-status.sh (poll progress).
Execute ONLY through the provided scripts. Direct API calls are forbidden. Never hardcode speakerIds; call get-speakers.sh to discover them. The API is proprietary; endpoints and parameters are internal to the scripts.
Auto-detect from user input:
- Podcast: "podcast", "chat about", "discuss", "debate" → create-podcast.sh
- Explainer: "explain", "introduce", "video", "tutorial" → create-explainer.sh
- TTS: "read aloud", "convert to speech", "tts" → create-tts.sh
- Image: "generate image", "draw", "create picture" → generate-image.sh
If ambiguous, ask the user.
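The keyword routing above can be sketched as a simple case dispatch. The function name and exact patterns are illustrative, not part of the skill:

```shell
# Maps a user request to a mode name; the first matching pattern wins.
detect_mode() {
  case "$1" in
    *podcast*|*"chat about"*|*discuss*|*debate*)  echo podcast ;;
    *explain*|*introduce*|*video*|*tutorial*)     echo explainer ;;
    *"read aloud"*|*"convert to speech"*|*tts*)   echo tts ;;
    *"generate image"*|*draw*|*"create picture"*) echo image ;;
    *) echo ambiguous ;;  # ask the user which mode they want
  esac
}
```

A real agent would match more loosely (stemming, synonyms), but the first-hit ordering shown here is the important part: podcast keywords are checked before explainer keywords.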
$SCRIPTS/get-speakers.sh --language zh # or en
Returns JSON with data.items[].speakerId. If the user doesn't specify a voice, pick the first match for the language.
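When no voice is specified, the first item can be pulled from that JSON with jq. A sketch; the data.items[].speakerId path is from the docs above, the function name is mine:

```shell
# Reads get-speakers.sh JSON on stdin and prints the first speakerId.
first_speaker() {
  jq -r '.data.items[0].speakerId'
}
```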
$SCRIPTS/create-podcast.sh --query "topic" --language zh|en --mode quick|deep|debate --speakers <id1[,id2]> [--source-url URL] [--source-text TEXT]
quick is the default mode. debate requires 2 speakers. Multiple --source-url / --source-text flags are allowed.
Use only when the user wants to review/edit the script before audio generation.
Stage 1: $SCRIPTS/create-podcast-text.sh (same args as the one-stage command)
Review: poll with check-status.sh --wait, save the draft, then STOP and wait for user approval.
Stage 2: $SCRIPTS/create-podcast-audio.sh --episode <id> [--scripts modified.json]
$SCRIPTS/create-explainer.sh --content "text" --language zh|en --mode info|story --speakers <id>
$SCRIPTS/generate-video.sh --episode <id>
$SCRIPTS/create-tts.sh --type text|url --content "text or URL" --language zh|en --mode smart|direct --speakers <id>
Default mode: direct (no content modification). smart fixes grammar/punctuation. Text limit: 10,000 characters; use --type url for longer content.
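The 10,000-character limit suggests a pre-flight check before choosing --type. A sketch; the function name is illustrative:

```shell
# Prints the --type value to use: url when the text exceeds 10,000 chars, else text.
tts_input_type() {
  if [ "${#1}" -gt 10000 ]; then echo url; else echo text; fi
}
```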
$SCRIPTS/create-speech.sh --scripts scripts.json
JSON format: {"scripts": [{"content": "...", "speakerId": "..."}]}
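A payload in that shape can be assembled safely with jq rather than string interpolation, which would break on quotes in the dialogue. A sketch; the function name is mine, the JSON shape is from the docs above:

```shell
# Emits {"scripts":[{"content":...,"speakerId":...}]} for one line of dialogue.
make_speech_line() {
  jq -n --arg c "$1" --arg s "$2" '{scripts: [{content: $c, speakerId: $s}]}'
}
```

Multiple lines would be accumulated into the scripts array the same way; jq's --arg handles escaping.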
$SCRIPTS/generate-image.sh --prompt "description" [--size 1K|2K|4K] [--ratio 16:9|1:1|9:16|...] [--reference-images "url1,url2"]
Defaults: 2K, 16:9. Max 14 reference images. Output is saved to $LISTENHUB_OUTPUT_DIR (default ~/Downloads).
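The documented output-directory default can be resolved with standard parameter expansion. A sketch; only the LISTENHUB_OUTPUT_DIR variable and the ~/Downloads fallback come from the docs:

```shell
# Prints LISTENHUB_OUTPUT_DIR when set, otherwise the ~/Downloads default.
listenhub_output_dir() {
  printf '%s\n' "${LISTENHUB_OUTPUT_DIR:-$HOME/Downloads}"
}
```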
$SCRIPTS/check-status.sh --episode <id> --type podcast|flow-speech|explainer [--wait] [--timeout 300]
Exit codes: 0 = done, 1 = failed, 2 = timeout (safe to retry). Use --wait for automated polling. Run generation in the background for long tasks.
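Since exit code 2 is safe to retry, a wrapper can re-poll until a terminal status is reached. A sketch; wait_done and its arguments are illustrative, not part of the skill:

```shell
# Runs a status command (passed by name) up to $2 times while it exits 2.
# Returns the first non-timeout exit code, or 2 if every attempt timed out.
wait_done() {
  attempts=0
  while [ "$attempts" -lt "$2" ]; do
    rc=0
    "$1" || rc=$?
    if [ "$rc" -ne 2 ]; then return "$rc"; fi
    attempts=$((attempts + 1))
  done
  return 2
}
```

In practice "$1" would be a wrapper around check-status.sh with the episode id baked in; exit codes 0 and 1 are passed through unchanged.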
1. Detect the mode from user input.
2. If no speaker is specified, call get-speakers.sh and pick the first match.
3. Run the appropriate script (in the background for long tasks).
4. Report submission and give an estimated time (podcast 2-3 min, explainer 3-5 min, TTS 1-2 min).
5. On "done yet?", run check-status.sh --wait.
6. Show the result link. Offer a download only when asked.
Match the response language to the user's input language: Chinese input → Chinese responses; English → English.
- Podcast library: https://listenhub.ai/app/podcast
- Explainer library: https://listenhub.ai/app/explainer
- TTS library: https://listenhub.ai/app/text-to-speech
- API keys: https://listenhub.ai/settings/api-keys