Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate new videos from text prompts, images, or reference inputs using EachLabs AI models. Supports text-to-video, image-to-video, transitions, motion control, talking head, and avatar generation. Use when the user wants to create new video content. For editing existing videos, see eachlabs-video-edit.
Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps by hand. Two ready-to-paste briefs follow.
Install brief:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade brief:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Generate new videos from text prompts, images, or reference inputs using 165+ AI models via the EachLabs Predictions API. For editing existing videos (upscaling, lip sync, extension, subtitles), see the eachlabs-video-edit skill.
Authentication

Header: X-API-Key: <your-api-key>

Set the EACHLABS_API_KEY environment variable or pass it directly. Get your key at eachlabs.ai.
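A minimal shell setup sketch; the key value is a placeholder you replace with your own:

```bash
# Export once per session; every example below reads the key from this variable.
export EACHLABS_API_KEY="your-api-key"
```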
Create a prediction:

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "pixverse-v5-6-text-to-video",
    "version": "0.0.1",
    "input": {
      "prompt": "A golden retriever running through a meadow at sunset, cinematic slow motion",
      "resolution": "720p",
      "duration": "5",
      "aspect_ratio": "16:9"
    }
  }'
```
Get the result:

```bash
curl https://api.eachlabs.ai/v1/prediction/{prediction_id} \
  -H "X-API-Key: $EACHLABS_API_KEY"
```

Poll until status is "success" or "failed". The output video URL is in the response.
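A minimal polling sketch, assuming jq is installed and that the response exposes the status in a top-level status field and the video URL in an output field (the exact output field name is an assumption; inspect a real response to confirm it):

```bash
#!/usr/bin/env bash
# Usage: ./poll.sh <prediction_id>
# Polls a prediction until it reaches a terminal state.
PREDICTION_ID="$1"

while true; do
  RESPONSE=$(curl -s "https://api.eachlabs.ai/v1/prediction/$PREDICTION_ID" \
    -H "X-API-Key: $EACHLABS_API_KEY")
  STATUS=$(echo "$RESPONSE" | jq -r '.status')

  if [ "$STATUS" = "success" ]; then
    echo "$RESPONSE" | jq -r '.output'   # video URL (assumed field name)
    break
  elif [ "$STATUS" = "failed" ]; then
    echo "prediction failed" >&2
    exit 1
  fi
  sleep 5   # generation can take minutes; widen the interval for long jobs
done
```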
Text-to-Video

| Model | Slug | Best For |
| --- | --- | --- |
| Pixverse v5.6 | pixverse-v5-6-text-to-video | General purpose, audio generation |
| XAI Grok Imagine | xai-grok-imagine-text-to-video | Fast creative |
| Kandinsky 5 Pro | kandinsky5-pro-text-to-video | Artistic, high quality |
| Seedance v1.5 Pro | seedance-v1-5-pro-text-to-video | Cinematic quality |
| Wan v2.6 | wan-v2-6-text-to-video | Long/narrative content |
| Kling v2.6 Pro | kling-v2-6-pro-text-to-video | Motion control |
| Pika v2.2 | pika-v2-2-text-to-video | Stylized, effects |
| Minimax Hailuo V2.3 Pro | minimax-hailuo-v2-3-pro-text-to-video | High fidelity |
| Sora 2 Pro | sora-2-text-to-video-pro | Premium quality |
| Veo 3 | veo-3 | Google's best quality |
| Veo 3.1 | veo3-1-text-to-video | Latest Google model |
| LTX v2 Fast | ltx-v-2-text-to-video-fast | Fastest generation |
| Moonvalley Marey | moonvalley-marey-text-to-video | Cinematic style |
| Ovi | ovi-text-to-video | General purpose |
Image-to-Video

| Model | Slug | Best For |
| --- | --- | --- |
| Pixverse v5.6 | pixverse-v5-6-image-to-video | General purpose |
| XAI Grok Imagine | xai-grok-imagine-image-to-video | Creative edits |
| Wan v2.6 Flash | wan-v2-6-image-to-video-flash | Fastest |
| Wan v2.6 | wan-v2-6-image-to-video | High quality |
| Seedance v1.5 Pro | seedance-v1-5-pro-image-to-video | Cinematic |
| Kandinsky 5 Pro | kandinsky5-pro-image-to-video | Artistic |
| Kling v2.6 Pro I2V | kling-v2-6-pro-image-to-video | Best Kling quality |
| Kling O1 | kling-o1-image-to-video | Latest Kling model |
| Pika v2.2 I2V | pika-v2-2-image-to-video | Effects, PikaScenes |
| Minimax Hailuo V2.3 Pro | minimax-hailuo-v2-3-pro-image-to-video | High fidelity |
| Sora 2 I2V | sora-2-image-to-video | Premium quality |
| Veo 3.1 I2V | veo3-1-image-to-video | Google's latest |
| Runway Gen4 Turbo | gen4-turbo | Fast, film quality |
| Veed Fabric 1.0 | veed-fabric-1-0 | Social media |
Transitions & Effects

| Model | Slug | Best For |
| --- | --- | --- |
| Pixverse v5.6 Transition | pixverse-v5-6-transition | Smooth transitions |
| Pika v2.2 PikaScenes | pika-v2-2-pikascenes | Scene effects |
| Pixverse v4.5 Effect | pixverse-v4-5-effect | Video effects |
| Veo 3.1 First Last Frame | veo3-1-first-last-frame-to-video | Interpolation |
Motion Control & Reference

| Model | Slug | Best For |
| --- | --- | --- |
| Kling v2.6 Pro Motion | kling-v2-6-pro-motion-control | Pro motion control |
| Kling v2.6 Standard Motion | kling-v2-6-standard-motion-control | Standard motion |
| Motion Fast | motion-fast | Fast motion transfer |
| Motion Video 14B | motion-video-14b | High quality motion |
| Wan v2.6 R2V | wan-v2-6-reference-to-video | Reference-based |
| Kling O1 Reference I2V | kling-o1-reference-image-to-video | Reference-based |
Talking Head & Avatar

| Model | Slug | Best For |
| --- | --- | --- |
| Bytedance Omnihuman v1.5 | bytedance-omnihuman-v1-5 | Full body animation |
| Creatify Aurora | creatify-aurora | Audio-driven avatar |
| Infinitalk I2V | infinitalk-image-to-video | Image talking head |
| Infinitalk V2V | infinitalk-video-to-video | Video talking head |
| Sync Lipsync v2 Pro | sync-lipsync-v2-pro | Lip sync |
| Kling Avatar v2 Pro | kling-avatar-v2-pro | Pro avatar |
| Kling Avatar v2 Standard | kling-avatar-v2-standard | Standard avatar |
| Echomimic V3 | echomimic-v3 | Face animation |
| Stable Avatar | stable-avatar | Stable talking head |
Workflow

1. Check the model: GET https://api.eachlabs.ai/v1/model?slug=<slug> confirms the model exists and returns the request_schema with the exact input parameters. Always do this before creating a prediction to ensure correct inputs.
2. Create the prediction: POST https://api.eachlabs.ai/v1/prediction with the model slug, version "0.0.1", and input parameters matching the schema.
3. Poll: GET https://api.eachlabs.ai/v1/prediction/{id} until status is "success" or "failed".
4. Extract the output video URL from the response.
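A sketch of step 1, assuming jq is installed and that the model endpoint accepts the same X-API-Key header as the prediction endpoints:

```bash
# Fetch the model record and print its input schema before building a request.
curl -s "https://api.eachlabs.ai/v1/model?slug=wan-v2-6-image-to-video-flash" \
  -H "X-API-Key: $EACHLABS_API_KEY" | jq '.request_schema'
```

Confirming the schema first catches renamed or missing parameters before you spend a generation on a malformed request.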
Image-to-video example (Wan v2.6 Flash):

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "wan-v2-6-image-to-video-flash",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/photo.jpg",
      "prompt": "The person turns to face the camera and smiles",
      "duration": "5",
      "resolution": "1080p"
    }
  }'
```
Transition example (Pixverse v5.6):

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "pixverse-v5-6-transition",
    "version": "0.0.1",
    "input": {
      "prompt": "Smooth morphing transition between the two images",
      "first_image_url": "https://example.com/start.jpg",
      "end_image_url": "https://example.com/end.jpg",
      "duration": "5",
      "resolution": "720p"
    }
  }'
```
Motion control example (Kling v2.6 Pro):

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "kling-v2-6-pro-motion-control",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/character.jpg",
      "video_url": "https://example.com/dance-reference.mp4",
      "character_orientation": "video"
    }
  }'
```
Talking head example (Bytedance Omnihuman v1.5):

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "bytedance-omnihuman-v1-5",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/portrait.jpg",
      "audio_url": "https://example.com/speech.mp3",
      "resolution": "1080p"
    }
  }'
```
Prompt tips

- Be specific about motion: "camera slowly pans left" rather than "nice camera movement"
- Include style keywords: "cinematic", "anime", "3D animation", "cyberpunk"
- Describe timing: "slow motion", "time-lapse", "fast-paced"
- For image-to-video, describe what should change from the static image
- Use negative prompts to avoid unwanted elements (where supported); see the sketch after this list
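A sketch combining several of these tips in one request. The negative_prompt parameter is supported only by some models and its exact name here is an assumption, as are the other input fields, which mirror the earlier text-to-video example; verify everything against the model's request_schema:

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "kandinsky5-pro-text-to-video",
    "version": "0.0.1",
    "input": {
      "prompt": "A lone cyclist on a coastal road at dawn, camera slowly pans left, cinematic, slow motion",
      "negative_prompt": "blurry, low quality, text overlays",
      "resolution": "720p",
      "duration": "5"
    }
  }'
```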
See references/MODELS.md for complete parameter details for each model.