{
  "schemaVersion": "1.0",
  "item": {
    "slug": "dialogue-audio",
    "name": "Dialogue Audio",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/okaris/dialogue-audio",
    "canonicalUrl": "https://clawhub.ai/okaris/dialogue-audio",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/dialogue-audio",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=dialogue-audio",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/dialogue-audio"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/dialogue-audio",
    "agentPageUrl": "https://openagent3.xyz/skills/dialogue-audio/agent",
    "manifestUrl": "https://openagent3.xyz/skills/dialogue-audio/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/dialogue-audio/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Dialogue Audio",
        "body": "Create realistic multi-speaker dialogue with Dia TTS via inference.sh CLI."
      },
      {
        "title": "Quick Start",
        "body": "curl -fsSL https://cli.inference.sh | sh && infsh login\n\n# Two-speaker conversation\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] Have you tried the new feature yet? [S2] Not yet, but I heard it saves a ton of time. [S1] It really does. I cut my workflow in half. [S2] Okay, I am definitely trying it today.\"\n}'\n\nInstall note: The install script only detects your OS/architecture, downloads the matching binary from dist.inference.sh, and verifies its SHA-256 checksum. No elevated permissions or background processes. Manual install & verification available."
      },
      {
        "title": "Speaker Tags",
        "body": "Dia TTS uses [S1] and [S2] to distinguish two speakers.\n\nTagRoleVoice[S1]Speaker 1Automatically assigned voice A[S2]Speaker 2Automatically assigned voice B\n\nRules:\n\nAlways start each speaker turn with the tag\nTags must be uppercase: [S1] not [s1]\nMaximum 2 speakers per generation\nEach speaker maintains consistent voice within a session"
      },
      {
        "title": "Emotion & Expression Control",
        "body": "Dia TTS interprets punctuation and non-speech cues for emotional delivery."
      },
      {
        "title": "Punctuation Effects",
        "body": "PunctuationEffectExample.Neutral, declarative, medium pause\"This is important.\"!Emphasis, excitement, energy\"This is amazing!\"?Rising intonation, questioning\"Are you sure about that?\"...Hesitation, trailing off, long pause\"I thought it would work... but it didn't.\",Short breath pause\"First, we analyze. Then, we act.\"— or --Interruption or pivot\"I was going to say — never mind.\""
      },
      {
        "title": "Non-Speech Sounds",
        "body": "Dia TTS supports parenthetical sound descriptions:\n\n(laughs)      — laughter\n(sighs)       — exasperation or relief\n(clears throat) — attention-getting pause\n(whispers)    — softer delivery\n(gasps)       — surprise"
      },
      {
        "title": "Examples with Emotion",
        "body": "# Excited conversation\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] Guess what happened today! [S2] What? Tell me! [S1] We hit ten thousand users! [S2] (gasps) No way! That is incredible! [S1] I know... I still cannot believe it.\"\n}'\n\n# Serious/thoughtful dialogue\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] We need to talk about the timeline. [S2] (sighs) I know. It is tight. [S1] Can we cut anything from the scope? [S2] Maybe... but it would mean dropping the analytics dashboard. [S1] That is a tough trade-off.\"\n}'\n\n# Teaching/explaining\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] So how does it actually work? [S2] Great question. Think of it like a pipeline. Data comes in on one end, gets processed in the middle, and comes out transformed on the other side. [S1] Like an assembly line? [S2] Exactly! Each step adds something.\"\n}'"
      },
      {
        "title": "Pause Hierarchy",
        "body": "TechniquePause LengthUse ForComma ,~0.3 secondsBetween clauses, list itemsPeriod .~0.5 secondsBetween sentencesEllipsis ...~1.0 secondsDramatic pause, thinking, hesitationNew speaker tag~0.3 secondsNatural turn-taking gap"
      },
      {
        "title": "Speed Control",
        "body": "Shorter sentences = faster perceived pace\nLonger sentences with commas = measured, thoughtful pace\nQuestions followed by answers = engaging back-and-forth rhythm\n\n# Fast-paced, energetic\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] Ready? [S2] Ready. [S1] Let us go! Three features. Five minutes. [S2] Hit it! [S1] Feature one: real-time sync.\"\n}'\n\n# Slow, contemplative\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] I have been thinking about this for a while... and I think we need to change direction. [S2] What do you mean? [S1] The market has shifted. What worked last year... is not working now.\"\n}'"
      },
      {
        "title": "Interview Format",
        "body": "infsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] Welcome to the show. Today we have a special guest. Tell us about yourself. [S2] Thanks for having me! I am a product designer, and I have been building tools for creators for about ten years. [S1] What got you started in design? [S2] Honestly? I was terrible at coding but loved making things look good. (laughs) So design was the natural path.\"\n}'"
      },
      {
        "title": "Tutorial / Explainer",
        "body": "infsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] Can you walk me through the setup process? [S2] Sure. Step one, install the CLI. It takes about thirty seconds. [S1] And then? [S2] Step two, run the login command. It will open your browser for authentication. [S1] That sounds simple. [S2] It is! Step three, you are ready to run your first app.\"\n}'"
      },
      {
        "title": "Debate / Discussion",
        "body": "infsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] I think we should go with option A. It is faster to implement. [S2] But option B scales better long-term. [S1] Sure, but we need something shipping this quarter. [S2] Fair point... what if we do A now with a migration path to B? [S1] That could work. Let us prototype it.\"\n}'"
      },
      {
        "title": "Volume Normalization",
        "body": "Both speakers should be at consistent volume. If one is louder:\n\n# Merge with balanced audio\ninfsh app run infsh/video-audio-merger --input '{\n  \"video\": \"talking-head.mp4\",\n  \"audio\": \"dialogue.mp3\",\n  \"audio_volume\": 1.0\n}'"
      },
      {
        "title": "Adding Background/Music",
        "body": "# Merge dialogue with background music\ninfsh app run infsh/media-merger --input '{\n  \"media\": [\"dialogue.mp3\", \"background-music.mp3\"]\n}'"
      },
      {
        "title": "Segmenting Long Conversations",
        "body": "For conversations longer than ~30 seconds, generate in segments:\n\n# Segment 1: Introduction\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] Welcome back to another episode...\"\n}'\n\n# Segment 2: Main content\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] So let us dive into today s topic...\"\n}'\n\n# Segment 3: Wrap-up\ninfsh app run falai/dia-tts --input '{\n  \"prompt\": \"[S1] Great conversation today...\"\n}'\n\n# Merge all segments\ninfsh app run infsh/media-merger --input '{\n  \"media\": [\"segment1.mp3\", \"segment2.mp3\", \"segment3.mp3\"]\n}'"
      },
      {
        "title": "Script Writing Tips",
        "body": "DoDon'tWrite how people talkWrite how people writeShort sentences (< 15 words)Long academic sentencesContractions (\"can't\", \"won't\")Formal (\"cannot\", \"will not\")Natural fillers (\"So,\", \"Well,\")Every sentence perfectly formedVary sentence lengthAll sentences same lengthInclude reactions (\"Exactly!\", \"Hmm.\")One-sided monologuesRead it aloud before generatingAssume it sounds right"
      },
      {
        "title": "Common Mistakes",
        "body": "MistakeProblemFixMonologues longer than 3 sentencesSounds like a lecture, not conversationBreak into exchangesNo emotional variationFlat, robotic deliveryUse punctuation and non-speech cuesMissing speaker tagsVoices don't alternateStart every turn with [S1] or [S2]Formal written languageSounds unnatural spokenUse contractions, short sentencesNo pauses between topicsFeels rushedUse ... or scene breaksAll same energy levelMonotonousVary between high/low energy moments"
      },
      {
        "title": "Related Skills",
        "body": "npx skills add inference-sh/skills@text-to-speech\nnpx skills add inference-sh/skills@ai-podcast-creation\nnpx skills add inference-sh/skills@ai-avatar-video\n\nBrowse all apps: infsh app list"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/okaris/dialogue-audio",
    "publisherUrl": "https://clawhub.ai/okaris/dialogue-audio",
    "owner": "okaris",
    "version": "0.1.5",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/dialogue-audio",
    "downloadUrl": "https://openagent3.xyz/downloads/dialogue-audio",
    "agentUrl": "https://openagent3.xyz/skills/dialogue-audio/agent",
    "manifestUrl": "https://openagent3.xyz/skills/dialogue-audio/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/dialogue-audio/agent.md"
  }
}