Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate videos using OpenAI's Sora API. Use when the user asks to generate, create, or make videos from text prompts or reference images. Supports image-to-video generation with automatic resizing.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Generate videos using OpenAI's Sora API.
Endpoint: POST https://api.openai.com/v1/videos
| Parameter | Values | Description |
|---|---|---|
| `prompt` | string | Text description of the video (required) |
| `input_reference` | file | Optional image that guides generation |
| `model` | `sora-2`, `sora-2-pro` | Model to use (default: `sora-2`) |
| `seconds` | 4, 8, 12 | Video duration (default: 4) |
| `size` | 720x1280, 1280x720, 1024x1792, 1792x1024 | Output resolution |
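The constraints in the parameter table can be sketched as a small request-body builder. This is a hypothetical helper, not part of the skill's scripts; the field names, allowed values, and defaults are taken from the table above.

```python
# Hypothetical helper: assemble a request body for POST /v1/videos,
# validating inputs against the values listed in the parameter table.
ALLOWED_MODELS = {"sora-2", "sora-2-pro"}
ALLOWED_SECONDS = {4, 8, 12}
ALLOWED_SIZES = {"720x1280", "1280x720", "1024x1792", "1792x1024"}

def build_video_request(prompt, model="sora-2", seconds=4, size="720x1280"):
    if not prompt:
        raise ValueError("prompt is required")
    if model not in ALLOWED_MODELS:
        raise ValueError(f"unknown model: {model}")
    if seconds not in ALLOWED_SECONDS:
        raise ValueError(f"seconds must be one of {sorted(ALLOWED_SECONDS)}")
    if size not in ALLOWED_SIZES:
        raise ValueError(f"unsupported size: {size}")
    return {"prompt": prompt, "model": model, "seconds": seconds, "size": size}
```

Catching invalid combinations before the HTTP call avoids burning a generation attempt on a request the API would reject.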
- Image dimensions must match the video size exactly; the script auto-resizes.
- Video generation typically takes 1-3 minutes.
- Videos expire after ~1 hour; download immediately.
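The exact-dimension requirement boils down to parsing the `--size` string and comparing it to the reference image's dimensions. A minimal sketch of that check (hypothetical helpers; the skill's script performs the actual resize):

```python
# Hypothetical sketch of the pre-submission dimension check: parse a
# size string like "720x1280" and decide whether the reference image
# needs resizing to match the requested video size exactly.
def parse_size(size):
    width, height = (int(part) for part in size.split("x"))
    return width, height

def needs_resize(image_wh, size):
    return image_wh != parse_size(size)
```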
```bash
# Basic text-to-video
uv run ~/.clawdbot/skills/sora/scripts/generate_video.py \
  --prompt "A cat playing piano" \
  --filename "output.mp4"

# Image-to-video (auto-resizes image)
uv run ~/.clawdbot/skills/sora/scripts/generate_video.py \
  --prompt "Slow dolly shot, steam rising, warm lighting" \
  --filename "output.mp4" \
  --input-image "reference.png" \
  --seconds 8 \
  --size 720x1280

# With specific model
uv run ~/.clawdbot/skills/sora/scripts/generate_video.py \
  --prompt "Cinematic scene" \
  --filename "output.mp4" \
  --model sora-2-pro \
  --seconds 12
```
| Flag | Description | Default |
|---|---|---|
| `--prompt`, `-p` | Video description (required) | - |
| `--filename`, `-f` | Output file path (required) | - |
| `--input-image`, `-i` | Reference image path | None |
| `--seconds`, `-s` | Duration: 4, 8, or 12 | 8 |
| `--size`, `-sz` | Resolution | 720x1280 |
| `--model`, `-m` | `sora-2` or `sora-2-pro` | sora-2 |
| `--api-key`, `-k` | OpenAI API key | env var |
| `--poll-interval` | Check status every N seconds | 10 |
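Since generation takes minutes, the `--poll-interval` flag implies a status-polling loop. A minimal sketch of that pattern (hypothetical; `check_status` stands in for the real API call, and the status strings are assumptions, not confirmed API values):

```python
import time

# Hypothetical sketch of --poll-interval behavior: repeatedly query a
# status function until the job leaves the "in_progress" state or a
# timeout elapses. The sleep function is injectable for testing.
def wait_for_video(check_status, poll_interval=10, timeout=600, sleep=time.sleep):
    waited = 0
    while waited < timeout:
        status = check_status()
        if status != "in_progress":
            return status
        sleep(poll_interval)
        waited += poll_interval
    raise TimeoutError("video generation did not finish in time")
```

Injecting `sleep` keeps the loop testable without real delays; the default matches the flag's 10-second interval.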
Set the `OPENAI_API_KEY` environment variable or pass `--api-key`.
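The precedence described above (explicit flag wins, environment variable as fallback) can be sketched as a hypothetical resolver:

```python
import os

# Hypothetical sketch of API-key resolution: an explicit --api-key
# value takes precedence; otherwise fall back to OPENAI_API_KEY.
def resolve_api_key(cli_value=None):
    key = cli_value or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("set OPENAI_API_KEY or pass --api-key")
    return key
```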
- Camera movement: dolly, pan, zoom, tracking shot
- Motion description: swirling, rising, falling, shifting
- Lighting: golden hour, candlelight, dramatic rim lighting
- Atmosphere: steam, particles, bokeh, haze
- Mood/style: cinematic, commercial, lifestyle, editorial
- Food commercial: "Slow dolly shot of gourmet dish, soft morning sunlight streaming through window, subtle steam rising, warm cozy atmosphere, premium food commercial aesthetic"
- Lifestyle: "Golden hour light slowly shifting across mountains, gentle breeze rustling leaves, serene morning atmosphere, premium lifestyle commercial"
- Product shot: "Cinematic close-up, dramatic lighting with warm highlights, slow reveal, luxury commercial style"
1. Generate an image with Nano Banana Pro (or use an existing one)
2. Pass the image as `--input-image` to Sora
3. Write a prompt describing the desired motion/atmosphere
4. The script auto-resizes the image to match the video dimensions
- Videos are saved as MP4
- Typical file size: 1.5-3 MB for 8 seconds
- Resolution matches the `--size` parameter