Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Edit, transform, extend, upscale, and enhance videos using EachLabs AI models. Supports lip sync, video translation, subtitle generation, audio merging, style transfer, and video extension. Use when the user wants to edit or transform existing video content.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Edit, transform, and enhance existing videos using 25+ AI models via the EachLabs Predictions API.
Authentication header: `X-API-Key: <your-api-key>`. Set the `EACHLABS_API_KEY` environment variable; get your key at eachlabs.ai.
| Model | Slug | Best For |
| --- | --- | --- |
| Veo 3.1 Extend | veo3-1-extend-video | Best quality extension |
| Veo 3.1 Fast Extend | veo3-1-fast-extend-video | Fast extension |
| PixVerse v5 Extend | pixverse-v5-extend | PixVerse extension |
| PixVerse v4.5 Extend | pixverse-v4-5-extend | Older PixVerse extension |
| Model | Slug | Best For |
| --- | --- | --- |
| Sync Lipsync v2 Pro | sync-lipsync-v2-pro | Best lip sync quality |
| PixVerse Lip Sync | pixverse-lip-sync | PixVerse lip sync |
| LatentSync | latentsync | Open-source lip sync |
| Video Retalking | video-retalking | Audio-based lip sync |
| Model | Slug | Best For |
| --- | --- | --- |
| Runway Gen4 Aleph | runway-gen4-aleph | Video transformation |
| Kling O1 Video Edit | kling-o1-video-to-video-edit | AI video editing |
| Kling O1 V2V Reference | kling-o1-video-to-video-reference | Reference-based edit |
| ByteDance Video Stylize | bytedance-video-stylize | Style transfer |
| Wan v2.2 Animate Move | wan-v2-2-14b-animate-move | Motion animation |
| Wan v2.2 Animate Replace | wan-v2-2-14b-animate-replace | Object replacement |
| Model | Slug | Best For |
| --- | --- | --- |
| Topaz Upscale Video | topaz-upscale-video | Best quality upscale |
| Luma Ray 2 Video Reframe | luma-dream-machine-ray-2-video-reframe | Video reframing |
| Luma Ray 2 Flash Reframe | luma-dream-machine-ray-2-flash-video-reframe | Fast reframing |
| Model | Slug | Best For |
| --- | --- | --- |
| FFmpeg Merge Audio Video | ffmpeg-api-merge-audio-video | Merge audio track |
| MMAudio V2 | mm-audio-v-2 | Add audio to video |
| MMAudio | mmaudio | Add audio to video |
| Auto Subtitle | auto-subtitle | Generate subtitles |
| Merge Videos | merge-videos | Concatenate videos |
| Model | Slug | Best For |
| --- | --- | --- |
| Heygen Video Translate | heygen-video-translate | Translate video speech |
| Model | Slug | Best For |
| --- | --- | --- |
| Motion Fast | motion-fast | Fast motion transfer |
| Infinitalk V2V | infinitalk-video-to-video | Talking head from video |
| Model | Slug | Best For |
| --- | --- | --- |
| Faceswap Video | faceswap-video | Swap face in video |
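Before creating a prediction for any of the models above, it helps to fetch its `request_schema` and check which inputs are required. A minimal sketch, assuming `jq` is installed; the JSON here is a hypothetical response shape for illustration, not the API's guaranteed structure:

```shell
# In practice, fetch the schema from the live endpoint:
#   curl -s -H "X-API-Key: $EACHLABS_API_KEY" \
#     "https://api.eachlabs.ai/v1/model?slug=veo3-1-extend-video"
# The JSON below is a mock response used for illustration only.
schema='{"request_schema":{"required":["video_url"],"properties":{"video_url":{"type":"string"},"prompt":{"type":"string"}}}}'

# List the required input parameters from the schema
required=$(printf '%s' "$schema" | jq -r '.request_schema.required[]')
echo "$required"
```

Comparing this list against your intended `input` object before the POST avoids a failed prediction caused by a missing parameter.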
1. Check the model: `GET https://api.eachlabs.ai/v1/model?slug=<slug>` validates that the model exists and returns the `request_schema` with the exact input parameters. Always do this before creating a prediction to ensure correct inputs.
2. Create the prediction: `POST https://api.eachlabs.ai/v1/prediction` with the model slug, version "0.0.1", and an input matching the schema.
3. Poll `GET https://api.eachlabs.ai/v1/prediction/{id}` until the status is "success" or "failed".
4. Extract the output video URL from the response.
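The polling step above can be sketched as a small shell loop. This is a sketch, assuming `jq` is installed, `EACHLABS_API_KEY` is exported, and the response carries its state in a `status` field with the terminal values "success" and "failed" listed above:

```shell
# Return success (exit 0) when a prediction status is terminal.
is_terminal() {
  case "$1" in
    success|failed) return 0 ;;
    *) return 1 ;;
  esac
}

# Poll GET /v1/prediction/{id} until the status is terminal.
# $1 is the prediction ID returned by the POST call.
poll_prediction() {
  while :; do
    status=$(curl -s -H "X-API-Key: $EACHLABS_API_KEY" \
      "https://api.eachlabs.ai/v1/prediction/$1" | jq -r '.status')
    if is_terminal "$status"; then
      echo "$status"
      return 0
    fi
    sleep 5   # back off between polls to avoid hammering the API
  done
}
```

Usage would look like `poll_prediction "$prediction_id"`, which prints the terminal status once the job finishes.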
```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "veo3-1-extend-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "prompt": "Continue the scene with the camera slowly pulling back"
    }
  }'
```
```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "sync-lipsync-v2-pro",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/talking-head.mp4",
      "audio_url": "https://example.com/new-audio.mp3"
    }
  }'
```
```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "auto-subtitle",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4"
    }
  }'
```
```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "ffmpeg-api-merge-audio-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "audio_url": "https://example.com/music.mp3",
      "start_offset": 0
    }
  }'
```
```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "topaz-upscale-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/low-res-video.mp4"
    }
  }'
```
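Once a prediction reaches "success", the last step is pulling the output video URL from the final poll response and downloading it. A sketch assuming `jq` is installed; the mock JSON and the `output` field name are assumptions for illustration — check the actual response body for the real field name:

```shell
# Mock of the final GET /v1/prediction/{id} response once status is
# "success"; the "output" field name is an assumption for this sketch.
prediction_json='{"status":"success","output":"https://example.com/result.mp4"}'

# Extract the output video URL from the response
url=$(printf '%s' "$prediction_json" | jq -r '.output')
echo "$url"
# curl -sL -o result.mp4 "$url"   # uncomment to download the file
```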
See references/MODELS.md for complete parameter details for each model.