{
  "schemaVersion": "1.0",
  "item": {
    "slug": "seedance",
    "name": "Seedance",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/honeybee1130/seedance",
    "canonicalUrl": "https://clawhub.ai/honeybee1130/seedance",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/seedance",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=seedance",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/seedance"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/seedance",
    "agentPageUrl": "https://openagent3.xyz/skills/seedance/agent",
    "manifestUrl": "https://openagent3.xyz/skills/seedance/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/seedance/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Seedance 2.0 Prompt Generator",
        "body": "Generate production-ready prompts for ByteDance's Seedance 2.0 AI video model."
      },
      {
        "title": "Prompt Architecture",
        "body": "Every prompt follows this strict order. Deviating causes drift.\n\nSubject → Action → Camera → Style → Audio → Constraints"
      },
      {
        "title": "1. Subject (WHO/WHAT)",
        "body": "One primary subject. Multiple subjects split model attention.\nInclude: age/material, clothing, distinguishing features\nExample: \"Wooden Koda creature with glowing orange eyes, green gems on head, purple cape\""
      },
      {
        "title": "2. Action (WHAT HAPPENS)",
        "body": "Specific verb phrases, present tense\nDescribe beat by beat for complex sequences\nOne action per beat. Chain beats chronologically.\nExample: \"walks to cliff edge, pauses, turns head slowly to camera, cape billowing\""
      },
      {
        "title": "3. Camera (HOW WE SEE IT)",
        "body": "Shot size FIRST: wide / medium / close-up / extreme close-up\nMovement SECOND: dolly-in, dolly-out, track left/right, crane up/down, pan, tilt, handheld, gimbal, locked-off\nAngle: eye level, low angle, high angle, bird's eye, Dutch\nLens feel: wide (24-28mm), normal (35-50mm), telephoto (85mm+)\nONE verb per shot. Compound moves = separate beats: \"Start: slow dolly-in. Then: gentle pan right for final 2s\"\n\nShot Cheat Sheet:\n\nShotUsePair WithWideEstablish space/contextSlow dolly, locked-offMediumSubject + context, dialogueHandheld (personal), gimbal (polished)Close-upDetail, emotionTiny push-in, avoid pansTrackingMovement, energyLateral follow, side profile"
      },
      {
        "title": "4. Style (THE LOOK)",
        "body": "ONE visual anchor > six adjectives\nLighting: key light type, time of day, practical sources\nColor treatment: muted/saturated/monochrome/specific palette\nTexture: film grain, clean digital, anamorphic, etc.\nReference format: \"[film/artist/era] aesthetic\""
      },
      {
        "title": "5. Audio (WHAT WE HEAR)",
        "body": "Seedance 2.0 generates dual-channel stereo audio\nSpecify: background music genre, environmental sounds, dialogue/VO, silence\nAudio syncs to visual action automatically\nExample: \"ambient wind, distant thunder, no music, footsteps on stone\""
      },
      {
        "title": "6. Constraints (GUARDRAILS)",
        "body": "Ban list: no text overlays, no extra characters, no snap zooms, no watermarks\nTiming: hold frames, beat durations, total length (5s or 10s for testing, 15s max)\nConsistency: \"maintain character identity throughout\", \"no morphing\"\nPhysics: \"realistic cloth physics\", \"gravity-accurate\""
      },
      {
        "title": "Reference System (@Tags)",
        "body": "When user provides images/videos, use @tags:\n\n@Image1, @Image2, etc. for uploaded images\n@Video1, @Video2, etc. for uploaded videos\n@Audio1, etc. for uploaded audio\n\nUsage patterns:\n\nCharacter identity: \"@Image1 is the main character\"\nFirst/last frame: \"@Image1 as first frame, @Image2 as last frame\"\nMotion transfer: \"@Image1 performs the dance from @Video1\"\nStyle reference: \"match the color palette of @Image3\"\nMulti-reference: up to 9 images + 3 videos + 3 audio clips"
      },
      {
        "title": "Cinematic Scene",
        "body": "[Scene type] style. [Subject with details]. [Action beat 1], [action beat 2], [action beat 3].\n[Camera: shot size], [movement], [angle], [lens feel].\n[Lighting description], [color treatment], [texture/grain].\n[Audio: music/sfx/ambience].\n[Constraints: bans, timing, consistency notes]."
      },
      {
        "title": "Multi-Shot Narrative",
        "body": "Shot 1: [Wide/establishing]. [Scene description]. [Camera movement]. [Duration].\nShot 2: [Medium/close]. [Character action]. [Camera movement]. [Duration].\nShot 3: [Close-up/detail]. [Emotional beat]. [Camera movement]. [Duration].\n[Overall style], [color grade], [audio design].\n[Constraints]."
      },
      {
        "title": "Action Sequence",
        "body": "[Genre] action sequence. [Setup description].\nBeat 1: [Action], [camera follows with movement type].\nBeat 2: [Reaction/counter], [cut to shot size], [slow motion if needed].\nBeat 3: [Resolution], [camera pulls to reveal].\n[Style: reference film/show]. [Audio: impact sounds, score].\n[Constraints: physics accuracy, no artifacts]."
      },
      {
        "title": "Negative Prompt Checklist (pick 3-5 per generation)",
        "body": "Visual noise: no text overlays, no watermarks, no floating UI, no lens flares\nIdentity drift: no extra characters, no crowd, no mirrors reflecting others\nCamera chaos: no snap zooms, no whip pans, no Dutch angles, no jump cuts\nBody artifacts: no extra fingers, no deformed hands, no warped objects, no melting edges\nBranding: no logos, no labels, no recognizable brands\nColor: no neon lighting, no heavy teal/orange, no cartoon saturation\nEnvironment: no rain/fog/smoke unless stated, no confetti, no dust particles"
      },
      {
        "title": "Advanced: Clean High-Motion Technique",
        "body": "Learned from real-world results. These techniques produce sharp, blur-free motion even at extreme speed."
      },
      {
        "title": "The Continuous Shot Lock",
        "body": "Declare \"single continuous shot\" upfront — forces temporal coherence, prevents inter-scene interpolation artifacts\nThe model treats the entire generation as one fluid motion path instead of stitched segments"
      },
      {
        "title": "Physics-Motivated Camera",
        "body": "Every camera move needs a VERB with physical motivation: dive, slingshot, whip, dart, blast\nNever say \"dynamic camera\" — say WHY the camera moves (following subject, reacting to explosion, releasing into reveal)\nCamera attached to subject (\"lock-on,\" \"staying glued\") = subject stays sharp because relative motion is zero"
      },
      {
        "title": "Environmental Anchoring",
        "body": "Scatter static reference geometry throughout: walls, arches, furniture, hanging objects\nThe model needs stable background to render motion AGAINST — parallax creates perceived speed without subject blur\nStatic objects streaking past a centered subject = clean speed"
      },
      {
        "title": "Scale Progression Arc",
        "body": "Structure as Macro → Micro → Macro (wide establish → tight detail → wide reveal)\nGives model clear resolution targets at each stage — doesn't try to render everything at once\nThe \"reveal\" at the end (pulling wide after sustained close action) creates cinematic payoff"
      },
      {
        "title": "Sensory Render Instructions (Not Mood Words)",
        "body": "Replace adjectives with computable effects: \"heat haze\" not \"hot,\" \"grit snapping off ledge\" not \"dusty,\" \"mist turning into rainbow\" not \"magical\"\nEach detail should be something the model can physically simulate"
      },
      {
        "title": "Rhythm Through Verbs",
        "body": "Pacing lives in action chain length, not \"hold for Xs\" timers\nQuick beat: \"snaps a last-inch swerve\" (short clause = fast)\nSustained beat: \"threads through hanging laundry lines and open windows in one fluid line\" (long clause = flowing)\nClimax: contrast — \"sudden calm\" after chaos = tension release"
      },
      {
        "title": "Reference Prompt (Proven Clean High-Motion)",
        "body": "Speeder chase across a cliff city (single continuous shot)\nFrom a monumental cliffside city carved into stone, the camera dives toward a tiny streak of light ripping along a narrow ledge-road. Lock-on: a speeder hugging the wall at insane speed. The camera slingshots ahead, whips back, then drops tight to the rear thrusters: heat haze, grit snapping off the ledge, warning lights flashing. A collapsing balcony rains debris; the rider snaps a last-inch swerve under a falling arch, then threads through hanging laundry lines and open windows in one fluid line. The camera darts through the same openings, staying glued to the motion. One final bend and sudden calm: the camera blasts outward into a reveal of the city opening onto a boundless waterfall-fed valley, mist turning into rainbow."
      },
      {
        "title": "Pro Tips",
        "body": "High-res references — 2K/4K input images = better output. Blurry in = blurry out\nTest at 5s first — iterate fast, extend to 10-15s once the motion is right\nOne change at a time — don't rewrite the whole prompt on a miss, tweak one element\nCreativity/Consistency sliders — 60% consistency / 40% creativity is the sweet spot\nBeat timing — write \"hold for 2s\" or \"pause 1s\" to control pacing\nCompound camera = separate beats — never jam two movements in one clause\nStyle anchor > adjective soup — \"Blade Runner 2049 aesthetic\" > \"cinematic dark moody neon futuristic\"\nEnvironmental audio — even without dialogue, specify ambient sounds for immersion"
      },
      {
        "title": "⚠️ Content Policy Rules (CRITICAL)",
        "body": "Seedance is a video diffusion model with no internet access. It does not know:\n\n\"Otherside\" — means nothing to it\n\"BAYC,\" \"Koda,\" \"Other Games\" — proprietary names get flagged\nGaming/metaverse/NFT terminology — triggers content moderation\nCrowd scenes — flagged automatically\nAny brand, game title, or IP name\n\nThe rule: Never use brand names. Describe what you see, not what it's called."
      },
      {
        "title": "Translation Layer — Concept → Visual Description",
        "body": "What you meanWhat to write in the promptOtherside world\"alien landscape with bioluminescent terrain, floating rock formations, purple and teal crystal growths, twin moons visible through violet sky\"Otherside biome\"ancient stone ruins overtaken by glowing fungal growth, rivers of liquid light flowing through cracked earth\"Otherside portal\"swirling circular energy vortex of purple and gold light, crackling at the edges, suspended in midair\"Koda character\"small wooden creature with bark-textured skin, glowing amber eyes in a dark hollow face, green teardrop gems across the top of its head, dark feathered collar, flowing purple cape, small dark armored clawed hands, chibi proportions\"Other Games event\"large gathering of stylized 3D avatar characters in a glowing arena, spotlights, festive atmosphere\" (avoid \"crowd\" — use \"scattered figures\" or \"a handful of characters\")Gaming/metaverseUse: \"virtual world,\" \"digital realm,\" \"fantastical landscape,\" \"animated environment\""
      },
      {
        "title": "Banned Words (will trigger rejection)",
        "body": "Otherside, Yuga, BAYC, Koda, ApeCoin, NFT, metaverse, web3, blockchain, cryptocurrency\ncrowd, mob, large group, mass of people\nAny real brand name, game title, or IP"
      },
      {
        "title": "Koda-Specific Prompts (Other Games IP)",
        "body": "When generating Koda content:\n\nNEVER say \"Koda\" — describe the character visually every time\nUse: \"small wooden creature with bark-textured skin, glowing amber eyes in a dark hollow face, green teardrop gems across the top of its head, dark feathered collar, flowing purple cape, small dark armored clawed hands, chibi proportions\"\nNEVER say \"Otherside\" — describe the environment visually\nUse: \"alien landscape with bioluminescent terrain, floating rock formations, purple and teal crystal growths\"\nMaintain character consistency across shots\nAlways provide Honey B's Koda image as @Image1 for I2V generations (best result)\nTest at 5s first to confirm character renders correctly before extending"
      },
      {
        "title": "Platform Access",
        "body": "Jimeng AI (即梦): jimeng.jianying.com → Video Generation → Seedance 2.0\nDoubao App: dialogue box → Seedance 2.0 → select 2.0 model\nVolcano Engine: experience center → Doubao-Seedance-2.0"
      },
      {
        "title": "When User Asks for a Prompt",
        "body": "Ask what scene/concept they want (or use their description)\nDetermine: T2V (text only), I2V (image + text), or R2V (multi-reference)\nPick the right template\nFill in all 6 layers (subject → constraints)\nAdd 3-5 relevant negative constraints\nOutput the final prompt ready to paste\nSuggest aspect ratio (16:9 cinematic, 9:16 social, 1:1 square)\nSuggest starting duration (5s test → extend)"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/honeybee1130/seedance",
    "publisherUrl": "https://clawhub.ai/honeybee1130/seedance",
    "owner": "honeybee1130",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/seedance",
    "downloadUrl": "https://openagent3.xyz/downloads/seedance",
    "agentUrl": "https://openagent3.xyz/skills/seedance/agent",
    "manifestUrl": "https://openagent3.xyz/skills/seedance/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/seedance/agent.md"
  }
}