Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Create AI videos with optimized prompts, motion control, and platform-ready output.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
On first use, read setup.md.
User needs to generate, edit, or scale AI videos with current models and APIs. Use this skill to choose the right model stack, write stronger motion prompts, and run reliable async video pipelines.
User preferences persist in ~/video-generation/. See memory-template.md for setup.

```
~/video-generation/
├── memory.md     # Preferred providers, model routing, reusable shot recipes
└── history.md    # Optional run log for jobs, costs, and outputs
```
| Topic | File |
|---|---|
| Initial setup | setup.md |
| Memory template | memory-template.md |
| Migration guide | migration.md |
| Model snapshot | benchmarks.md |
| Async API patterns | api-patterns.md |
| OpenAI Sora 2 | openai-sora.md |
| Google Veo 3.x | google-veo.md |
| Runway Gen-4 | runway.md |
| Luma Ray | luma.md |
| ByteDance Seedance | seedance.md |
| Kling | kling.md |
| Vidu | vidu.md |
| Pika via Fal | pika.md |
| MiniMax Hailuo | minimax-hailuo.md |
| Replicate routing | replicate.md |
| Open-source local models | open-source-video.md |
| Distribution playbook | promotion.md |
Map community names to real API model IDs first. Examples: `sora-2`, `sora-2-pro`, `veo-3.0-generate-001`, `gen4_turbo`, `gen4_aleph`.
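A minimal sketch of that mapping step in Python, using only the IDs named in this document; the `MODEL_IDS` table and `resolve_model` helper are illustrative names, not part of any provider SDK:

```python
# Pin community nicknames to the exact model IDs the provider APIs accept.
# IDs below are the ones named in this document; verify against each
# provider's current model list before relying on them.
MODEL_IDS = {
    "sora": "sora-2",
    "sora pro": "sora-2-pro",
    "veo 3": "veo-3.0-generate-001",
    "gen-4 turbo": "gen4_turbo",
    "gen-4 aleph": "gen4_aleph",
}

def resolve_model(nickname: str) -> str:
    """Fail fast on unknown nicknames instead of sending them to the API."""
    try:
        return MODEL_IDS[nickname.lower()]
    except KeyError:
        raise ValueError(f"No API model ID pinned for nickname: {nickname!r}")
```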
| Task | First choice | Backup |
|---|---|---|
| Premium prompt-only generation | sora-2-pro | veo-3.1-generate-001 |
| Fast drafts at lower cost | veo-3.1-fast-generate-001 | gen4_turbo |
| Long-form cinematic shots | gen4_aleph | ray-2 |
| Strong image-to-video control | veo-3.0-generate-001 | gen4_turbo |
| Multi-shot narrative consistency | Seedance family | hailuo-2.3 |
| Local privacy-first workflows | Wan2.2 / HunyuanVideo | CogVideoX |
Start with short durations and a lower tier, validate motion and composition, then rerender the winners with premium models or longer durations.
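A hedged sketch of that two-pass flow; the `generate` and `approve` callables are placeholders for your provider client and review step, and the model IDs follow the routing table above:

```python
from typing import Callable, Optional

def draft_then_upgrade(
    prompt: str,
    generate: Callable[[str, str, int], str],  # (model, prompt, seconds) -> clip path
    approve: Callable[[str], bool],            # human or automated review
) -> Optional[str]:
    """Render a cheap short draft first; rerender premium only if approved."""
    draft = generate("veo-3.1-fast-generate-001", prompt, 4)  # short draft tier
    if not approve(draft):
        return None  # iterate on the prompt instead of spending more credits
    return generate("sora-2-pro", prompt, 8)                  # premium rerender
```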
Always include subject, action, camera motion, lens style, lighting, and scene timing. For references and start/end frames, keep continuity constraints explicit.
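One way to keep all six elements present is to assemble prompts from named fields. The `ShotSpec` template below is illustrative, not a provider requirement:

```python
from dataclasses import dataclass

@dataclass
class ShotSpec:
    """Illustrative field set covering the six elements listed above."""
    subject: str
    action: str
    camera_motion: str
    lens_style: str
    lighting: str
    timing: str

    def to_prompt(self) -> str:
        # Ordered clauses so no element is silently omitted.
        return (
            f"{self.subject} {self.action}. "
            f"Camera: {self.camera_motion}. Lens: {self.lens_style}. "
            f"Lighting: {self.lighting}. Timing: {self.timing}."
        )

print(ShotSpec(
    subject="a lone cyclist",
    action="crests a foggy mountain pass at dawn",
    camera_motion="slow aerial pull-back",
    lens_style="35mm anamorphic",
    lighting="soft golden backlight",
    timing="action peaks in the final second",
).to_prompt())
```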
Every provider pipeline must support queued jobs, polling/backoff, retries, cancellation, and signed-URL download before expiry.
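A minimal sketch of such a job loop, assuming hypothetical `submit_job`, `get_status`, and `cancel_job` wrappers around whichever provider SDK you use; `requests` is the only real dependency here:

```python
import time
import requests

def run_job(submit_job, get_status, cancel_job, out_path: str,
            timeout_s: float = 900.0) -> str:
    """Submit, poll with exponential backoff, and download before URL expiry."""
    job_id = submit_job()
    delay, deadline = 2.0, time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status(job_id)  # assumed shape: {"state": ..., "url": ...}
        if status["state"] == "succeeded":
            # Signed URLs expire: download immediately, never store the URL.
            resp = requests.get(status["url"], timeout=60)
            resp.raise_for_status()
            with open(out_path, "wb") as f:
                f.write(resp.content)
            return out_path
        if status["state"] == "failed":
            raise RuntimeError(f"Job {job_id} failed: {status.get('error')}")
        time.sleep(delay)
        delay = min(delay * 2, 60.0)  # capped exponential backoff
    cancel_job(job_id)  # don't leave orphaned jobs queued
    raise TimeoutError(f"Job {job_id} exceeded {timeout_s}s")
```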
If the preferred model is blocked or overloaded, fall back in order: 1) same-provider lower tier, 2) equivalent cross-provider model, 3) open model or local run.
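A sketch of that order as a simple chain; the fallback list mirrors this document's routing guidance, and `generate` is again a placeholder for the actual provider call:

```python
from typing import Callable

# Fallback order from the text: same-provider lower tier, then an
# equivalent cross-provider model, then an open/local model.
FALLBACKS = {
    "sora-2-pro": ["sora-2", "veo-3.1-generate-001", "local:Wan2.2"],
}

def generate_with_fallback(model: str, prompt: str,
                           generate: Callable[[str, str], str]) -> str:
    for candidate in [model, *FALLBACKS.get(model, [])]:
        try:
            return generate(candidate, prompt)
        except Exception as exc:  # e.g. quota exhausted, overload, 5xx
            print(f"{candidate} unavailable ({exc}); trying next fallback")
    raise RuntimeError(f"All fallbacks exhausted for {model}")
```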
- Using nickname-only model labels in code -> avoidable API failures
- Pushing 8-10 second generations before validating a 3-5 second draft -> wasted credits
- Cropping after generation instead of generating native ratio -> lower composition quality
- Ignoring prompt enhancement toggles -> tone drift across providers
- Reusing expired output URLs -> broken export workflows
- Treating all providers as synchronous -> stalled jobs and bad timeout handling
| Provider | Endpoint | Data sent | Purpose |
|---|---|---|---|
| OpenAI | api.openai.com | Prompt text, optional input images/video refs | Sora 2 video generation |
| Google Vertex AI | aiplatform.googleapis.com | Prompt text, optional image input, generation params | Veo 3.x generation |
| Runway | api.dev.runwayml.com | Prompt text, optional input media | Gen-4 generation and image-to-video |
| Luma | api.lumalabs.ai | Prompt text, optional keyframes/start-end images | Ray generation |
| Fal | queue.fal.run | Prompt text, optional input media | Pika and Hailuo hosted APIs |
| Replicate | api.replicate.com | Prompt text, optional input media | Multi-model routing and experimentation |
| Vidu | api.vidu.com | Prompt text, optional start/end/reference images | Vidu text/image/reference video APIs |
| Tencent MPS | mps.tencentcloudapi.com | Prompt text and generation parameters | Unified AIGC video task APIs |

No other data is sent externally.
Data that leaves your machine:
- Prompt text
- Optional reference images or clips
- Requested rendering parameters (duration, resolution, aspect ratio)

Data that stays local:
- Provider preferences in ~/video-generation/memory.md
- Optional local job history in ~/video-generation/history.md

This skill does NOT:
- Store API keys in project files
- Upload media outside requested provider calls
- Delete local assets unless the user asks
This skill can send prompts and media references to third-party AI providers. Only install if you trust those providers with your content.
Install with `clawhub install <slug>` if the user confirms:
- image-generation - Build still concepts and keyframes before video generation
- image-edit - Prepare clean references, masks, and style frames
- video-edit - Post-process generated clips and final exports
- video-captions - Add subtitle and text overlay workflows
- ffmpeg - Compose, transcode, and package production outputs
If useful: `clawhub star video-generation`
Stay updated: `clawhub sync`
Writing, remixing, publishing, visual generation, and marketing content production.
Largest current source with strong distribution and engagement signals.