Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Edit, transform, extend, upscale, and enhance videos using EachLabs AI models. Supports lip sync, video translation, subtitle generation, audio merging, style transfer, and video extension. Use when the user wants to edit or transform existing video content.
This item's download link currently redirects to a listing or homepage instead of returning a package file.
Because no direct package file is available, use the source page and any extracted docs to guide a manual install.
I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.
I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.
Edit, transform, and enhance existing videos using 25+ AI models via the EachLabs Predictions API.
Authentication
- Header: `X-API-Key: <your-api-key>`
- Set the `EACHLABS_API_KEY` environment variable. Get your key at eachlabs.ai.
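As a minimal illustration of the authentication scheme above, the header name `X-API-Key` and the `EACHLABS_API_KEY` environment variable come from this document; the `auth_headers` helper itself is hypothetical, not part of any official client.

```python
import os

def auth_headers() -> dict:
    """Build EachLabs request headers from the environment.

    `X-API-Key` and EACHLABS_API_KEY are documented above;
    this helper is only an illustrative sketch.
    """
    key = os.environ.get("EACHLABS_API_KEY")
    if not key:
        raise RuntimeError("Set the EACHLABS_API_KEY environment variable")
    return {"Content-Type": "application/json", "X-API-Key": key}
```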
Video Extension

| Model | Slug | Best For |
|---|---|---|
| Veo 3.1 Extend | veo3-1-extend-video | Best quality extension |
| Veo 3.1 Fast Extend | veo3-1-fast-extend-video | Fast extension |
| PixVerse v5 Extend | pixverse-v5-extend | PixVerse extension |
| PixVerse v4.5 Extend | pixverse-v4-5-extend | Older PixVerse extension |

Lip Sync

| Model | Slug | Best For |
|---|---|---|
| Sync Lipsync v2 Pro | sync-lipsync-v2-pro | Best lip sync quality |
| PixVerse Lip Sync | pixverse-lip-sync | PixVerse lip sync |
| LatentSync | latentsync | Open-source lip sync |
| Video Retalking | video-retalking | Audio-based lip sync |

Video Editing & Style Transfer

| Model | Slug | Best For |
|---|---|---|
| Runway Gen4 Aleph | runway-gen4-aleph | Video transformation |
| Kling O1 Video Edit | kling-o1-video-to-video-edit | AI video editing |
| Kling O1 V2V Reference | kling-o1-video-to-video-reference | Reference-based edit |
| ByteDance Video Stylize | bytedance-video-stylize | Style transfer |
| Wan v2.2 Animate Move | wan-v2-2-14b-animate-move | Motion animation |
| Wan v2.2 Animate Replace | wan-v2-2-14b-animate-replace | Object replacement |

Upscale & Reframe

| Model | Slug | Best For |
|---|---|---|
| Topaz Upscale Video | topaz-upscale-video | Best quality upscale |
| Luma Ray 2 Video Reframe | luma-dream-machine-ray-2-video-reframe | Video reframing |
| Luma Ray 2 Flash Reframe | luma-dream-machine-ray-2-flash-video-reframe | Fast reframing |

Audio & Subtitles

| Model | Slug | Best For |
|---|---|---|
| FFmpeg Merge Audio Video | ffmpeg-api-merge-audio-video | Merge audio track |
| MMAudio V2 | mm-audio-v-2 | Add audio to video |
| MMAudio | mmaudio | Add audio to video |
| Auto Subtitle | auto-subtitle | Generate subtitles |
| Merge Videos | merge-videos | Concatenate videos |

Video Translation

| Model | Slug | Best For |
|---|---|---|
| Heygen Video Translate | heygen-video-translate | Translate video speech |

Motion Transfer

| Model | Slug | Best For |
|---|---|---|
| Motion Fast | motion-fast | Fast motion transfer |
| Infinitalk V2V | infinitalk-video-to-video | Talking head from video |

Face Swap

| Model | Slug | Best For |
|---|---|---|
| Faceswap Video | faceswap-video | Swap face in video |
Workflow
1. Check the model: `GET https://api.eachlabs.ai/v1/model?slug=<slug>` validates that the model exists and returns the `request_schema` with its exact input parameters. Always do this before creating a prediction to ensure correct inputs.
2. Create a prediction: `POST https://api.eachlabs.ai/v1/prediction` with the model slug, version `"0.0.1"`, and an `input` object matching the schema.
3. Poll `GET https://api.eachlabs.ai/v1/prediction/{id}` until `status` is `"success"` or `"failed"`.
4. Extract the output video URL from the response.
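The check/create/poll workflow above can be sketched in Python. The endpoints, the `X-API-Key` header, the `"0.0.1"` version, and the `success`/`failed` statuses are documented here; the response field names (`id`, and whichever key holds the output URL) and the `run_model` helper are assumptions, not a confirmed client. The `get`/`post` transports are injectable so the flow can be exercised without a network.

```python
import json
import time
import urllib.request

API = "https://api.eachlabs.ai/v1"

def _get_json(url, headers):
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def _post_json(url, payload, headers):
    data = json.dumps(payload).encode()
    req = urllib.request.Request(url, data=data, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run_model(slug, inputs, api_key, get=_get_json, post=_post_json, poll_seconds=5):
    """Check the model, create a prediction, and poll until it settles."""
    headers = {"Content-Type": "application/json", "X-API-Key": api_key}
    # 1. Validate the slug; the response's request_schema describes the
    #    expected keys of `inputs` and can be checked before submitting.
    schema = get(f"{API}/model?slug={slug}", headers).get("request_schema")
    # 2. Create the prediction with version "0.0.1".
    pred = post(f"{API}/prediction",
                {"model": slug, "version": "0.0.1", "input": inputs}, headers)
    # 3. Poll until the prediction succeeds or fails (field names assumed).
    while True:
        status = get(f"{API}/prediction/{pred['id']}", headers)
        if status.get("status") in ("success", "failed"):
            return status
        time.sleep(poll_seconds)
```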
Extend a video

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "veo3-1-extend-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "prompt": "Continue the scene with the camera slowly pulling back"
    }
  }'
```

Lip sync

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "sync-lipsync-v2-pro",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/talking-head.mp4",
      "audio_url": "https://example.com/new-audio.mp3"
    }
  }'
```

Generate subtitles

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "auto-subtitle",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4"
    }
  }'
```

Merge audio and video

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "ffmpeg-api-merge-audio-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "audio_url": "https://example.com/music.mp3",
      "start_offset": 0
    }
  }'
```

Upscale a video

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "topaz-upscale-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/low-res-video.mp4"
    }
  }'
```
See references/MODELS.md for complete parameter details for each model.