โ† All skills
Tencent SkillHub ยท Content Creation

EachLabs Video Generation

Generate new videos from text prompts, images, or reference inputs using EachLabs AI models. Supports text-to-video, image-to-video, transitions, motion control, talking head, and avatar generation. Use when the user wants to create new video content. For editing existing videos, see eachlabs-video-edit.

0 Downloads
0 Stars
0 Installs
0 Score
High Signal



Known item issue.

This item's download link currently redirects to a listing or homepage instead of returning a package file.

Quick setup
  1. Open the source page and confirm the package flow manually.
  2. Review SKILL.md if you can obtain the files.
  3. Treat this source as manual setup until the download is verified.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Manual review
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: SKILL.md, references/MODELS.md

Validation

  • Open the source listing and confirm there is a real package or setup artifact available.
  • Review SKILL.md before asking your agent to continue.
  • Treat this source as manual setup until the upstream download flow is fixed.

Install with your agent

Agent handoff

Use the source page and any available docs to guide the install because the item currently does not return a direct package file.

  1. Open the source page via the Open source listing link.
  2. If you can obtain the package, extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the source page and extracted files.
New install

I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.

Upgrade existing

I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 0.1.0

Documentation

Primary doc: SKILL.md (16 sections). Open the source page for the full listing.

EachLabs Video Generation

Generate new videos from text prompts, images, or reference inputs using 165+ AI models via the EachLabs Predictions API. For editing existing videos (upscaling, lip sync, extension, subtitles), see the eachlabs-video-edit skill.

Authentication

Header: X-API-Key: <your-api-key>

Set the EACHLABS_API_KEY environment variable or pass it directly. Get your key at eachlabs.ai.
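
A minimal shell sketch of both options; the prediction-lookup URL below is only there to show where the header goes, and {prediction_id} is a placeholder.

# Option 1: export the key once so the examples below can read it
export EACHLABS_API_KEY="<your-api-key>"

# Option 2: pass the key inline on a single request
curl -s "https://api.eachlabs.ai/v1/prediction/{prediction_id}" \
  -H "X-API-Key: <your-api-key>"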

1. Create a Prediction

curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "pixverse-v5-6-text-to-video",
    "version": "0.0.1",
    "input": {
      "prompt": "A golden retriever running through a meadow at sunset, cinematic slow motion",
      "resolution": "720p",
      "duration": "5",
      "aspect_ratio": "16:9"
    }
  }'

2. Poll for Result

curl https://api.eachlabs.ai/v1/prediction/{prediction_id} \
  -H "X-API-Key: $EACHLABS_API_KEY"

Poll until status is "success" or "failed". The output video URL is in the response.
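
A rough polling loop, assuming jq is installed; the .status path and the shape of the final payload are assumptions based on the description above, so inspect the raw response if they do not match.

PREDICTION_ID="<prediction_id from the create response>"
while true; do
  RESPONSE=$(curl -s "https://api.eachlabs.ai/v1/prediction/$PREDICTION_ID" \
    -H "X-API-Key: $EACHLABS_API_KEY")
  STATUS=$(echo "$RESPONSE" | jq -r '.status')
  echo "status: $STATUS"
  if [ "$STATUS" = "success" ] || [ "$STATUS" = "failed" ]; then
    # The output video URL is somewhere in this payload.
    echo "$RESPONSE" | jq .
    break
  fi
  sleep 5
done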

Text-to-Video

Model | Slug | Best For
Pixverse v5.6 | pixverse-v5-6-text-to-video | General purpose, audio generation
XAI Grok Imagine | xai-grok-imagine-text-to-video | Fast creative
Kandinsky 5 Pro | kandinsky5-pro-text-to-video | Artistic, high quality
Seedance v1.5 Pro | seedance-v1-5-pro-text-to-video | Cinematic quality
Wan v2.6 | wan-v2-6-text-to-video | Long/narrative content
Kling v2.6 Pro | kling-v2-6-pro-text-to-video | Motion control
Pika v2.2 | pika-v2-2-text-to-video | Stylized, effects
Minimax Hailuo V2.3 Pro | minimax-hailuo-v2-3-pro-text-to-video | High fidelity
Sora 2 Pro | sora-2-text-to-video-pro | Premium quality
Veo 3 | veo-3 | Google's best quality
Veo 3.1 | veo3-1-text-to-video | Latest Google model
LTX v2 Fast | ltx-v-2-text-to-video-fast | Fastest generation
Moonvalley Marey | moonvalley-marey-text-to-video | Cinematic style
Ovi | ovi-text-to-video | General purpose

Image-to-Video

Model | Slug | Best For
Pixverse v5.6 | pixverse-v5-6-image-to-video | General purpose
XAI Grok Imagine | xai-grok-imagine-image-to-video | Creative edits
Wan v2.6 Flash | wan-v2-6-image-to-video-flash | Fastest
Wan v2.6 | wan-v2-6-image-to-video | High quality
Seedance v1.5 Pro | seedance-v1-5-pro-image-to-video | Cinematic
Kandinsky 5 Pro | kandinsky5-pro-image-to-video | Artistic
Kling v2.6 Pro I2V | kling-v2-6-pro-image-to-video | Best Kling quality
Kling O1 | kling-o1-image-to-video | Latest Kling model
Pika v2.2 I2V | pika-v2-2-image-to-video | Effects, PikaScenes
Minimax Hailuo V2.3 Pro | minimax-hailuo-v2-3-pro-image-to-video | High fidelity
Sora 2 I2V | sora-2-image-to-video | Premium quality
Veo 3.1 I2V | veo3-1-image-to-video | Google's latest
Runway Gen4 Turbo | gen4-turbo | Fast, film quality
Veed Fabric 1.0 | veed-fabric-1-0 | Social media

Transitions & Effects

Model | Slug | Best For
Pixverse v5.6 Transition | pixverse-v5-6-transition | Smooth transitions
Pika v2.2 PikaScenes | pika-v2-2-pikascenes | Scene effects
Pixverse v4.5 Effect | pixverse-v4-5-effect | Video effects
Veo 3.1 First Last Frame | veo3-1-first-last-frame-to-video | Interpolation

Motion Control & Animation

Model | Slug | Best For
Kling v2.6 Pro Motion | kling-v2-6-pro-motion-control | Pro motion control
Kling v2.6 Standard Motion | kling-v2-6-standard-motion-control | Standard motion
Motion Fast | motion-fast | Fast motion transfer
Motion Video 14B | motion-video-14b | High quality motion
Wan v2.6 R2V | wan-v2-6-reference-to-video | Reference-based
Kling O1 Reference I2V | kling-o1-reference-image-to-video | Reference-based

Talking Head & Lip Sync

Model | Slug | Best For
Bytedance Omnihuman v1.5 | bytedance-omnihuman-v1-5 | Full body animation
Creatify Aurora | creatify-aurora | Audio-driven avatar
Infinitalk I2V | infinitalk-image-to-video | Image talking head
Infinitalk V2V | infinitalk-video-to-video | Video talking head
Sync Lipsync v2 Pro | sync-lipsync-v2-pro | Lip sync
Kling Avatar v2 Pro | kling-avatar-v2-pro | Pro avatar
Kling Avatar v2 Standard | kling-avatar-v2-standard | Standard avatar
Echomimic V3 | echomimic-v3 | Face animation
Stable Avatar | stable-avatar | Stable talking head

Prediction Flow

  1. Check the model: GET https://api.eachlabs.ai/v1/model?slug=<slug> validates that the model exists and returns the request_schema with the exact input parameters. Always do this before creating a prediction to ensure correct inputs.
  2. Create the prediction: POST https://api.eachlabs.ai/v1/prediction with the model slug, version "0.0.1", and input parameters matching the schema.
  3. Poll: GET https://api.eachlabs.ai/v1/prediction/{id} until status is "success" or "failed".
  4. Extract the output video URL from the response.
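
A quick sketch of step 1, assuming jq is available; whether request_schema sits at the top level of the model response is an assumption, so fall back to the raw JSON if the path comes back empty.

# Look up a model and print its input schema before building the prediction request
curl -s "https://api.eachlabs.ai/v1/model?slug=wan-v2-6-image-to-video-flash" \
  -H "X-API-Key: $EACHLABS_API_KEY" | jq '.request_schema'

# If that path is empty, inspect the full response instead
curl -s "https://api.eachlabs.ai/v1/model?slug=wan-v2-6-image-to-video-flash" \
  -H "X-API-Key: $EACHLABS_API_KEY" | jq .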

Image-to-Video with Wan v2.6 Flash

curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "wan-v2-6-image-to-video-flash",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/photo.jpg",
      "prompt": "The person turns to face the camera and smiles",
      "duration": "5",
      "resolution": "1080p"
    }
  }'

Video Transition with Pixverse

curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "pixverse-v5-6-transition",
    "version": "0.0.1",
    "input": {
      "prompt": "Smooth morphing transition between the two images",
      "first_image_url": "https://example.com/start.jpg",
      "end_image_url": "https://example.com/end.jpg",
      "duration": "5",
      "resolution": "720p"
    }
  }'

Motion Control with Kling v2.6

curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "kling-v2-6-pro-motion-control",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/character.jpg",
      "video_url": "https://example.com/dance-reference.mp4",
      "character_orientation": "video"
    }
  }'

Talking Head with Omnihuman

curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "bytedance-omnihuman-v1-5",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/portrait.jpg",
      "audio_url": "https://example.com/speech.mp3",
      "resolution": "1080p"
    }
  }'

Prompt Tips

  • Be specific about motion: "camera slowly pans left" rather than "nice camera movement".
  • Include style keywords: "cinematic", "anime", "3D animation", "cyberpunk".
  • Describe timing: "slow motion", "time-lapse", "fast-paced".
  • For image-to-video, describe what should change from the static image.
  • Use negative prompts to avoid unwanted elements (where supported).
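
As an illustration, a vague request like "nice video of a city" can be rewritten along these lines; this reuses the text-to-video example from above, and the prompt itself is only a sample, not a tested recipe.

curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "pixverse-v5-6-text-to-video",
    "version": "0.0.1",
    "input": {
      "prompt": "Cyberpunk city street at night, neon signs reflected in rain puddles, camera slowly pans left, cinematic slow motion",
      "resolution": "720p",
      "duration": "5",
      "aspect_ratio": "16:9"
    }
  }'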

Parameter Reference

See references/MODELS.md for complete parameter details for each model.

Category context

Writing, remixing, publishing, visual generation, and marketing content production.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package (2 docs):
  • SKILL.md (primary doc)
  • references/MODELS.md (docs)