# Send Eachlabs Video Generation to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of walking through the setup manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "eachlabs-video-generation",
    "name": "Eachlabs Video Generation",
    "source": "tencent",
    "type": "skill",
    "category": "Content Creation",
    "sourceUrl": "https://clawhub.ai/eftalyurtseven/eachlabs-video-generation",
    "canonicalUrl": "https://clawhub.ai/eftalyurtseven/eachlabs-video-generation",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/eachlabs-video-generation",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=eachlabs-video-generation",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md",
      "references/MODELS.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=eachlabs-video-generation",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=eachlabs-video-generation",
        "contentDisposition": "attachment; filename=\"eachlabs-video-generation-0.1.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/eachlabs-video-generation"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/eachlabs-video-generation",
    "downloadUrl": "https://openagent3.xyz/downloads/eachlabs-video-generation",
    "agentUrl": "https://openagent3.xyz/skills/eachlabs-video-generation/agent",
    "manifestUrl": "https://openagent3.xyz/skills/eachlabs-video-generation/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/eachlabs-video-generation/agent.md"
  }
}
```
## Documentation

### EachLabs Video Generation

Generate new videos from text prompts, images, or reference inputs using 165+ AI models via the EachLabs Predictions API. For editing existing videos (upscaling, lip sync, extension, subtitles), see the eachlabs-video-edit skill.

### Authentication

Header: `X-API-Key: <your-api-key>`

Set the `EACHLABS_API_KEY` environment variable or pass the key directly. Get your key at eachlabs.ai.
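
As a sketch, the auth header can be assembled from the environment variable in Python (the `EACHLABS_API_KEY` variable name is from this doc; the helper name is ours):

```python
import os

def auth_headers() -> dict:
    """Build the EachLabs request headers from the EACHLABS_API_KEY env var.

    Fails loudly rather than sending an unauthenticated request.
    """
    api_key = os.environ.get("EACHLABS_API_KEY")
    if not api_key:
        raise RuntimeError("Set EACHLABS_API_KEY (get a key at eachlabs.ai)")
    return {"X-API-Key": api_key, "Content-Type": "application/json"}
```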

### 1. Create a Prediction

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "pixverse-v5-6-text-to-video",
    "version": "0.0.1",
    "input": {
      "prompt": "A golden retriever running through a meadow at sunset, cinematic slow motion",
      "resolution": "720p",
      "duration": "5",
      "aspect_ratio": "16:9"
    }
  }'
```

### 2. Poll for Result

```bash
curl https://api.eachlabs.ai/v1/prediction/{prediction_id} \
  -H "X-API-Key: $EACHLABS_API_KEY"
```

Poll until `status` is `"success"` or `"failed"`. The output video URL is in the response.
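
A minimal polling loop might look like this in Python. The fetch step is injected as a callable so the loop itself runs without the network; the function names are illustrative, not part of any EachLabs SDK:

```python
import time
from typing import Callable

def wait_for_prediction(
    fetch: Callable[[], dict],
    interval: float = 2.0,
    timeout: float = 600.0,
) -> dict:
    """Call `fetch` (e.g. a GET on /v1/prediction/{id} returning parsed JSON)
    until the result's status is "success" or "failed", or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if result.get("status") in ("success", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError("prediction did not reach a terminal status in time")
```

With `requests`, `fetch` could be `lambda: requests.get(url, headers={"X-API-Key": key}).json()`.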

### Text-to-Video

| Model | Slug | Best For |
| --- | --- | --- |
| Pixverse v5.6 | `pixverse-v5-6-text-to-video` | General purpose, audio generation |
| XAI Grok Imagine | `xai-grok-imagine-text-to-video` | Fast creative |
| Kandinsky 5 Pro | `kandinsky5-pro-text-to-video` | Artistic, high quality |
| Seedance v1.5 Pro | `seedance-v1-5-pro-text-to-video` | Cinematic quality |
| Wan v2.6 | `wan-v2-6-text-to-video` | Long/narrative content |
| Kling v2.6 Pro | `kling-v2-6-pro-text-to-video` | Motion control |
| Pika v2.2 | `pika-v2-2-text-to-video` | Stylized, effects |
| Minimax Hailuo V2.3 Pro | `minimax-hailuo-v2-3-pro-text-to-video` | High fidelity |
| Sora 2 Pro | `sora-2-text-to-video-pro` | Premium quality |
| Veo 3 | `veo-3` | Google's best quality |
| Veo 3.1 | `veo3-1-text-to-video` | Latest Google model |
| LTX v2 Fast | `ltx-v-2-text-to-video-fast` | Fastest generation |
| Moonvalley Marey | `moonvalley-marey-text-to-video` | Cinematic style |
| Ovi | `ovi-text-to-video` | General purpose |

### Image-to-Video

| Model | Slug | Best For |
| --- | --- | --- |
| Pixverse v5.6 | `pixverse-v5-6-image-to-video` | General purpose |
| XAI Grok Imagine | `xai-grok-imagine-image-to-video` | Creative edits |
| Wan v2.6 Flash | `wan-v2-6-image-to-video-flash` | Fastest |
| Wan v2.6 | `wan-v2-6-image-to-video` | High quality |
| Seedance v1.5 Pro | `seedance-v1-5-pro-image-to-video` | Cinematic |
| Kandinsky 5 Pro | `kandinsky5-pro-image-to-video` | Artistic |
| Kling v2.6 Pro I2V | `kling-v2-6-pro-image-to-video` | Best Kling quality |
| Kling O1 | `kling-o1-image-to-video` | Latest Kling model |
| Pika v2.2 I2V | `pika-v2-2-image-to-video` | Effects, PikaScenes |
| Minimax Hailuo V2.3 Pro | `minimax-hailuo-v2-3-pro-image-to-video` | High fidelity |
| Sora 2 I2V | `sora-2-image-to-video` | Premium quality |
| Veo 3.1 I2V | `veo3-1-image-to-video` | Google's latest |
| Runway Gen4 Turbo | `gen4-turbo` | Fast, film quality |
| Veed Fabric 1.0 | `veed-fabric-1-0` | Social media |

### Transitions & Effects

| Model | Slug | Best For |
| --- | --- | --- |
| Pixverse v5.6 Transition | `pixverse-v5-6-transition` | Smooth transitions |
| Pika v2.2 PikaScenes | `pika-v2-2-pikascenes` | Scene effects |
| Pixverse v4.5 Effect | `pixverse-v4-5-effect` | Video effects |
| Veo 3.1 First Last Frame | `veo3-1-first-last-frame-to-video` | Interpolation |

### Motion Control & Animation

| Model | Slug | Best For |
| --- | --- | --- |
| Kling v2.6 Pro Motion | `kling-v2-6-pro-motion-control` | Pro motion control |
| Kling v2.6 Standard Motion | `kling-v2-6-standard-motion-control` | Standard motion |
| Motion Fast | `motion-fast` | Fast motion transfer |
| Motion Video 14B | `motion-video-14b` | High quality motion |
| Wan v2.6 R2V | `wan-v2-6-reference-to-video` | Reference-based |
| Kling O1 Reference I2V | `kling-o1-reference-image-to-video` | Reference-based |

### Talking Head & Lip Sync

| Model | Slug | Best For |
| --- | --- | --- |
| Bytedance Omnihuman v1.5 | `bytedance-omnihuman-v1-5` | Full body animation |
| Creatify Aurora | `creatify-aurora` | Audio-driven avatar |
| Infinitalk I2V | `infinitalk-image-to-video` | Image talking head |
| Infinitalk V2V | `infinitalk-video-to-video` | Video talking head |
| Sync Lipsync v2 Pro | `sync-lipsync-v2-pro` | Lip sync |
| Kling Avatar v2 Pro | `kling-avatar-v2-pro` | Pro avatar |
| Kling Avatar v2 Standard | `kling-avatar-v2-standard` | Standard avatar |
| Echomimic V3 | `echomimic-v3` | Face animation |
| Stable Avatar | `stable-avatar` | Stable talking head |

### Prediction Flow

1. Check the model: `GET https://api.eachlabs.ai/v1/model?slug=<slug>` validates that the model exists and returns the `request_schema` with the exact input parameters. Always do this before creating a prediction to ensure correct inputs.
2. `POST https://api.eachlabs.ai/v1/prediction` with the model slug, version `"0.0.1"`, and input parameters matching the schema.
3. Poll `GET https://api.eachlabs.ai/v1/prediction/{id}` until `status` is `"success"` or `"failed"`.
4. Extract the output video URL from the response.
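
The steps above can be sketched end-to-end in Python. This is untested against the live API: the response field names (`id`, `status`) are assumptions to verify against the actual response body, and the HTTP client is injectable so the flow can be exercised without the network:

```python
import os
import time

API = "https://api.eachlabs.ai/v1"

def generate_video(model: str, inputs: dict, http=None, poll_interval: float = 2.0) -> dict:
    """Run the check/create/poll flow. `http` is anything with requests-style
    get/post methods; defaults to the `requests` module when available."""
    if http is None:
        import requests  # imported lazily so stubs can be used without requests
        http = requests
    headers = {
        "X-API-Key": os.environ.get("EACHLABS_API_KEY", ""),
        "Content-Type": "application/json",
    }
    # 1. Validate the model; the response carries request_schema for the inputs.
    http.get(f"{API}/model", params={"slug": model}, headers=headers).raise_for_status()
    # 2. Create the prediction.
    created = http.post(
        f"{API}/prediction",
        json={"model": model, "version": "0.0.1", "input": inputs},
        headers=headers,
    )
    created.raise_for_status()
    prediction_id = created.json()["id"]  # field name assumed; check your response
    # 3. Poll until a terminal status, then return the full result payload.
    while True:
        result = http.get(f"{API}/prediction/{prediction_id}", headers=headers).json()
        if result.get("status") in ("success", "failed"):
            return result
        time.sleep(poll_interval)
```

The caller then reads the output video URL from the returned payload (step 4), raising or retrying on `"failed"` as appropriate.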

### Image-to-Video with Wan v2.6 Flash

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "wan-v2-6-image-to-video-flash",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/photo.jpg",
      "prompt": "The person turns to face the camera and smiles",
      "duration": "5",
      "resolution": "1080p"
    }
  }'
```

### Video Transition with Pixverse

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "pixverse-v5-6-transition",
    "version": "0.0.1",
    "input": {
      "prompt": "Smooth morphing transition between the two images",
      "first_image_url": "https://example.com/start.jpg",
      "end_image_url": "https://example.com/end.jpg",
      "duration": "5",
      "resolution": "720p"
    }
  }'
```

### Motion Control with Kling v2.6

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "kling-v2-6-pro-motion-control",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/character.jpg",
      "video_url": "https://example.com/dance-reference.mp4",
      "character_orientation": "video"
    }
  }'
```

### Talking Head with Omnihuman

```bash
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "bytedance-omnihuman-v1-5",
    "version": "0.0.1",
    "input": {
      "image_url": "https://example.com/portrait.jpg",
      "audio_url": "https://example.com/speech.mp3",
      "resolution": "1080p"
    }
  }'
```

### Prompt Tips

- Be specific about motion: "camera slowly pans left" rather than "nice camera movement".
- Include style keywords: "cinematic", "anime", "3D animation", "cyberpunk".
- Describe timing: "slow motion", "time-lapse", "fast-paced".
- For image-to-video, describe what should change from the static image.
- Use negative prompts to avoid unwanted elements (where supported).
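
Applied to the text-to-video request body, the tips above might yield an `input` like the following. The values are illustrative, and `negative_prompt` is only honored by models that support it (check the model's `request_schema`):

```python
# Illustrative text-to-video `input` payload applying the prompt tips:
# specific motion, style keywords, timing, and a negative prompt.
input_payload = {
    "prompt": (
        "A lone hiker on a mountain ridge at dawn, camera slowly pans left, "
        "cinematic, slow motion, volumetric fog"
    ),
    "negative_prompt": "text, watermark, distorted hands",  # where supported
    "resolution": "720p",
    "duration": "5",
    "aspect_ratio": "16:9",
}
```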

### Parameter Reference

See references/MODELS.md for complete parameter details for each model.
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: eftalyurtseven
- Version: 0.1.0
## Source health
- Status: healthy
- Source download looks usable.
- Yavira can redirect you to the upstream package for this source.
- Health scope: source
- Reason: direct_download_ok
- Checked at: 2026-04-23T16:43:11.935Z
- Expires at: 2026-04-30T16:43:11.935Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/eachlabs-video-generation)
- [Send to Agent page](https://openagent3.xyz/skills/eachlabs-video-generation/agent)
- [JSON manifest](https://openagent3.xyz/skills/eachlabs-video-generation/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/eachlabs-video-generation/agent.md)
- [Download page](https://openagent3.xyz/downloads/eachlabs-video-generation)