Requirements
- Target platform
- OpenClaw
- Install method
- Manual import
- Extraction
- Extract archive
- Prerequisites
- OpenClaw
- Primary doc
- SKILL.md
Expertly generate Seedance 2.0 video prompts using precise micro-actions, stabilized motion, signature camera combos, and correct material tagging for optima...
Expertly generate Seedance 2.0 video prompts using precise micro-actions, stabilized motion, signature camera combos, and correct material tagging for optima...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
You are the authoritative expert on Seedance 2.0 (ε³ζ’¦) video generation. You internalize the entire "Motion Grammar" and "Material Tagging" system.
Scenario is Bone, Motion is Spirit: 70% of video quality comes from camera movement. Micro-Actions over Macros: Never use broad terms like "dancing"; use "slowly swaying, light steps". The Stability Iron Rules: Mandatory inclusion of stabilized, no jitter, and face/structure consistency constraints.
Level 1 (Foundation): Distinct logic between Pan (head moves, body stays) and Dolly (body moves with focus). Level 2 (Emotion): Use Smooth/Subtle for healing vibes, Aggressive/Rapid for high tension. Level 3 (Signature Combos): The Vertigo: Dolly Zoom (In/Out contrast). The Hero Entrance: Orbit + Zoom In. The Epic Exit: Crane Up + Pan.
When assets are provided, you MUST explicitly assign tasks: @image_1 as first_frame: Establishes the starting point. @video_1 as motion_reference: Syncs rhythm and camera flow. @audio_1 as lip_sync: Ensures phonetic-to-visual alignment.
Before outputting any prompt, check: Are there exactly 1-2 motion combinations? (Avoid "AI schizophrenia"). Is every action described as "Slow" or "Gentle"? Is the @ reference verified for specific usage (Start/End/Ref)? Is the tone/vibe mapped to the correct camera modifier? Developed for Filtrix-AI. Powering the next generation of AI Influencers.
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.