Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate precise, timecoded Seedance 2.0 prompts integrating multimodal inputs with asset mapping for controlled 4-15s video creation and editing.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Create high-control English prompts for Seedance 2.0 and Seedance 2.0 Fast using multimodal references (image/video/audio/text). This skill is for:
- Prompt design from rough idea to production-ready prompt
- Mode choice: Text-only vs First/Last Frame vs All-Reference
- @asset mapping (what each image/video/audio controls)
- 4-15s duration planning and timeline beats
- Multi-segment stitching for videos >15s
- Video extension / continuation prompts
- Character replacement and directed editing prompts
- Camera-language replication from reference videos
- Scenario-specific strategies (product ads, short drama, fantasy, music video, etc.)
- Always declare mode first.
- Always include an explicit Assets Mapping section.
- Use timecoded beats with one major action per segment.
- Keep prompts concise and controllable (avoid vague, poetic-only wording).
- Add negative constraints when the user needs clean output.
- Be specific and visual: "a woman in a red trench coat walks through rain-soaked neon streets" beats "a woman walking".
- Separate dialogue/sound from visuals: write dialogue with a character name + emotion tag, then sound effects as a distinct layer.
- Match reference image style to the video theme, e.g., ink-wash style images for historical themes, neon renders for cyberpunk.
- Mixed inputs total (image+video+audio): max 12 files
- Images: jpeg/png/webp/bmp/tiff/gif, max 9, each < 30MB
- Videos: mp4/mov, max 3, total duration 2-15s, total < 50MB
- Audio: mp3/wav, max 3, total <= 15s, total < 15MB
- Generation duration: 4-15s
- Realistic human face references may be blocked by platform compliance
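The limits above can be checked before upload. A minimal Python sketch of such a pre-flight validator follows; the `Asset` shape, the function name, and the `jpg` alias for jpeg are illustrative assumptions, not part of the skill package.

```python
from dataclasses import dataclass

MB = 1024 * 1024

@dataclass
class Asset:
    kind: str                # "image", "video", or "audio"
    ext: str                 # file extension, lowercase, no dot
    size_bytes: int
    duration_s: float = 0.0  # videos/audio only

IMAGE_EXTS = {"jpeg", "jpg", "png", "webp", "bmp", "tiff", "gif"}  # jpg alias assumed
VIDEO_EXTS = {"mp4", "mov"}
AUDIO_EXTS = {"mp3", "wav"}

def validate_bundle(assets):
    """Return a list of limit violations (empty list = bundle looks OK)."""
    errors = []
    images = [a for a in assets if a.kind == "image"]
    videos = [a for a in assets if a.kind == "video"]
    audios = [a for a in assets if a.kind == "audio"]

    if len(assets) > 12:
        errors.append("more than 12 files total")
    if len(images) > 9:
        errors.append("more than 9 images")
    if any(a.ext not in IMAGE_EXTS for a in images):
        errors.append("unsupported image format")
    if any(a.size_bytes >= 30 * MB for a in images):
        errors.append("an image is not under 30MB")
    if len(videos) > 3:
        errors.append("more than 3 videos")
    if any(a.ext not in VIDEO_EXTS for a in videos):
        errors.append("unsupported video format")
    if videos and not (2 <= sum(a.duration_s for a in videos) <= 15):
        errors.append("total video duration outside 2-15s")
    if sum(a.size_bytes for a in videos) >= 50 * MB:
        errors.append("videos exceed 50MB total")
    if len(audios) > 3:
        errors.append("more than 3 audio files")
    if any(a.ext not in AUDIO_EXTS for a in audios):
        errors.append("unsupported audio format")
    if sum(a.duration_s for a in audios) > 15:
        errors.append("audio exceeds 15s total")
    if sum(a.size_bytes for a in audios) >= 15 * MB:
        errors.append("audio exceeds 15MB total")
    return errors
```

Note that a clean result here does not guarantee acceptance; platform compliance (e.g., realistic human faces) is still checked server-side.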
Seedance 2.0 has platform-side content moderation. Prompts referencing recognizable franchises, characters, or brand aesthetics will be rejected even if no name is used. Follow these rules:
- Never use franchise names, character names, or brand terms, not even as "style of" references.
- Invent fully original names for characters and creatures. Use descriptive nicknames (e.g., "Alloy Sentinel", "Storm-Rabbit").
- Describe aesthetics generically; replace recognizable signature features with original alternatives:
  - "arc reactor" → "hex-light energy core"
  - "yellow lightning mouse" → "tiny storm-rabbit with glowing cyan antlers"
  - "red-gold armored suit" → "custom exo-suit with smooth ceramic panels"
- Add explicit negative constraints listing every franchise name, character name, and brand term that could be inferred.
- Use family-friendly / PG-13 tone markers; they help pass moderation.
If a prompt is rejected, escalate distance from the source IP:
- Level 1: Replace all names with original nicknames; keep the general aesthetic.
- Level 2: Replace signature visual features (colors, silhouette, iconic props) with fully original designs.
- Level 3: Change the character type entirely (e.g., humanoid hero → autonomous mech + drone; creature battle → abstract elemental spirits).
When animating toy or doll references from images:
- Strip all brand indicators from the prompt.
- Use "original vinyl-style toy figure" or "collectible art figure" instead of any brand name.
- Bind @image1 to proportions, colors, and outfit shape only; never preserve logos or trademarks.
Explicitly write: Extend @video1 by Xs. Set the generation duration to the newly added segment only, not the full final length.
Bind base motion/camera to @video1, bind replacement identity to @image1, and request strict choreography/timing preservation.
Use @video/@audio rhythm references and lock beats by time range.
Use when no reference assets are provided. Prompt must carry all visual direction: style, color palette, character descriptions, camera, and timeline beats. Especially useful for original creature/character concepts and IP-safe scenes.
Seedance 2.0 max generation is 15s per segment. For longer videos, split into chained segments:
1. Segment 1: Generate normally (up to 15s). End on a clean handoff frame (stable pose, clear composition).
2. Segment 2+: Upload the previous segment as @video1 and write: Extend @video1 by Xs. Include a continuity note describing exactly what the last frame looks like.
3. Repeat until the target duration is reached.

Always include:
- Total duration and segment count at the top.
- A handoff description at the end of each segment (what the last frame shows).
- Explicit continuity instructions: preserve identity, outfit, lighting, camera direction.
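The chaining arithmetic is easy to get wrong (each extension's generation duration is only the newly added length). A small Python sketch of a segment planner, assuming the 15s-per-segment cap from this skill; the function name and output wording are illustrative:

```python
def plan_segments(total_s, max_segment_s=15):
    """Split a target duration into chained generation segments.

    Segment 1 is generated normally; each later segment extends the
    previous clip, so its generation duration is only the newly added
    length, not the running total.
    """
    if total_s <= 0:
        raise ValueError("total duration must be positive")
    plan = []
    start = 0
    index = 1
    while start < total_s:
        length = min(max_segment_s, total_s - start)
        if index == 1:
            plan.append(f"Segment 1: generate {length}s (0-{length}s)")
        else:
            plan.append(
                f"Segment {index}: upload previous clip as @video1, "
                f"'Extend @video1 by {length}s' ({start}-{start + length}s)"
            )
        start += length
        index += 1
    return plan
```

Since the minimum generation duration is 4s, a planner like this would in practice also need to rebalance any trailing segment shorter than 4s into the previous one.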
For scripted scenes with character speech:
- Write visual action and dialogue as separate layers per time segment.
- Tag dialogue: Dialogue (CharacterName, emotion): "line"
- Tag sound: Sound: [description]
- Keep dialogue short; one line per 3-5s segment works best.
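These layer tags can be emitted mechanically per beat. A minimal Python sketch follows; the function name and the sample character "Mara" are hypothetical, only the tag shapes come from this skill:

```python
def format_beat(t_start, t_end, action, dialogue=None, sound=None):
    """Render one timecoded beat with visual action, dialogue, and
    sound as separate layers, using this skill's tag shapes."""
    lines = [f"[{t_start}s-{t_end}s] {action}"]
    if dialogue:
        name, emotion, line = dialogue
        lines.append(f'Dialogue ({name}, {emotion}): "{line}"')
    if sound:
        lines.append(f"Sound: [{sound}]")
    return "\n".join(lines)
```

Calling it with a 0-4s beat, a (name, emotion, line) tuple, and a sound description yields the three stacked layers ready to paste into a prompt timeline.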
For product demos and ads:
- Bind the product image to @image1 as the identity anchor.
- Use techniques: 360° rotation, 3D exploded view, reassembly animation, hero lighting.
- Keep the background clean (studio, gradient, or contextual lifestyle).
- Specify material rendering: glass reflections, metallic sheen, matte texture, etc.
For continuous tracking shots without cuts:
- Assign each @image to a scene waypoint (location, character, or prop encountered along the path).
- Write the prompt as a continuous camera movement visiting each waypoint in order.
- Explicitly state: no cuts, single continuous shot, or one-take.
- Use @image1 as the first frame and subsequent images as references for environments/characters encountered.
| Scenario | Key Techniques | Typical Mode |
| --- | --- | --- |
| E-commerce / Product Ad | 360° spin, 3D exploded view, hero lighting, clean studio BG | All-Reference |
| Short Drama / Dialogue | Dialogue tags with emotion, sound FX layer, actor blocking | All-Reference or First Frame |
| Fantasy / Xianxia Animation | Spell FX particles, martial arts choreography, energy auras | Text-only or All-Reference |
| Science / Education | 4K CGI, transparent anatomy, labeled zoom sequences | Text-only |
| Music Video / Beat Sync | Beat-locked cuts, widescreen 16:9, multi-image montage | All-Reference with @audio |
| One-Take Tracking Shot | Multi-image waypoints, continuous camera, no cuts | All-Reference |
| IP-Safe Original Characters | Invented names, unique features, explicit negative constraints | Text-only |
- SKILL.md: main skill behavior
- SKILL.sh: quick local test helper
- scripts/setup_seedance_prompt_workspace.sh: scaffold helper files
- references/recipes.md: ready-to-use prompt recipes
- references/modes-and-recipes.md: mode and control notes
- references/camera-and-styles.md: camera language and visual styles vocabulary