Tencent SkillHub · AI

VEED UGC

Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.

Skill · openclaw/clawhub · Free · 0 downloads · 0 stars · 0 installs · Score: 0 · High Signal

Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: SKILL.md, scripts/generate.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 1.0.1

Documentation

Primary doc: SKILL.md (12 sections)

Veed-UGC

Generate UGC (User Generated Content) style promotional videos with AI lip-sync using ComfyDeploy's Veed-UGC workflow.

Overview

Veed-UGC transforms static images into dynamic promotional videos:
  • Takes a photo of a person with a product (from Morpheus or Ad-Ready)
  • Receives a script (pure dialogue text)
  • Creates a lip-synced video of the person speaking the script
Perfect for creating authentic-feeling promotional content at scale.

API Details

Endpoint: https://api.comfydeploy.com/api/run/deployment/queue
Deployment ID: 627c8fb5-1285-4074-a17c-ae54f8a5b5c6

Required Inputs

| Input | Description | Example |
| --- | --- | --- |
| image | URL of person+product image | Output from Morpheus/Ad-Ready |
| script | Pure dialogue text | "Hola che! Cómo anda todo por allá?" |
| voice_id | ElevenLabs voice ID | Default: PBi4M0xL4G7oVYxKgqww |

⚠️ CRITICAL: Script Format

The script input must be PURE DIALOGUE ONLY.

✅ CORRECT:
Hola che! Cómo anda todo por allá? Mirá esto que acabo de probar, una locura total.

❌ WRONG (no annotations):
[Entusiasta] Hola che! (pausa) Cómo anda?

❌ WRONG (no tone directions):
Tono argentino informal: Hola che!

❌ WRONG (no stage directions):
*sonríe* Hola che! *levanta el producto*

❌ WRONG (no titles/labels):
ESCENA 1: Hola che!

Just write exactly what the person should say. Nothing else.
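The rules above can be enforced with a quick pre-flight check before queuing a run. A minimal sketch; the function name and the heuristic patterns are my own, not part of the skill, and they will not catch every non-dialogue prefix:

```python
import re

# Patterns that indicate non-dialogue content, per the format rules above.
FORBIDDEN = [
    (re.compile(r"\[[^\]]*\]"), "bracketed annotation, e.g. [Entusiasta]"),
    (re.compile(r"\([^)]*\)"), "parenthetical direction, e.g. (pausa)"),
    (re.compile(r"\*[^*]+\*"), "stage direction, e.g. *sonríe*"),
    (re.compile(r"^\s*[A-ZÁÉÍÓÚÑ0-9 ]{2,}:", re.MULTILINE),
     "title/label prefix, e.g. ESCENA 1:"),
]

def check_script(script: str) -> list[str]:
    """Return a list of problems; an empty list means the script looks like pure dialogue."""
    return [reason for pattern, reason in FORBIDDEN if pattern.search(script)]
```

Running this on a candidate script and refusing to submit when the list is non-empty avoids burning a 2-5 minute render on input the workflow will speak verbatim, annotations and all.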

Voice IDs (ElevenLabs)

| Voice | ID | Description |
| --- | --- | --- |
| Default | PBi4M0xL4G7oVYxKgqww | Main voice |

More voices can be added from ElevenLabs.

Usage

uv run ~/.clawdbot/skills/veed-ugc/scripts/generate.py \
  --image "https://example.com/person-with-product.png" \
  --script "Hola! Les quiero mostrar este producto increíble que acabo de probar." \
  --output "ugc-video.mp4"

With local image file:

uv run ~/.clawdbot/skills/veed-ugc/scripts/generate.py \
  --image "./morpheus-output.png" \
  --script "Mirá, yo antes no usaba esto pero ahora no puedo vivir sin él." \
  --voice-id "PBi4M0xL4G7oVYxKgqww" \
  --output "promo-video.mp4"

Direct API Call

const response = await fetch("https://api.comfydeploy.com/api/run/deployment/queue", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  body: JSON.stringify({
    "deployment_id": "627c8fb5-1285-4074-a17c-ae54f8a5b5c6",
    "inputs": {
      "image": "/* put your image url here */",
      "voice_id": "PBi4M0xL4G7oVYxKgqww",
      "script": "Hola che! Cómo anda todo por allá?"
    }
  })
});

Typical Pipeline

  1. Generate image with Morpheus/Ad-Ready: uv run morpheus... --output product-shot.png
  2. Write the script (pure dialogue)
  3. Create UGC video from the image: uv run veed-ugc... --image product-shot.png --script "..." --output promo.mp4
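The final step of the pipeline can be scripted. A sketch that builds the generate.py invocation shown in the Usage section as an argv list; the helper name is my own, and the Morpheus steps are assumed to have already produced the image:

```python
import subprocess
from pathlib import Path

# Skill location per the Usage section above.
SKILLS = Path.home() / ".clawdbot" / "skills"

def veed_ugc_cmd(image: str, script: str, output: str) -> list[str]:
    """Build the veed-ugc generate.py invocation as an argv list."""
    return [
        "uv", "run", str(SKILLS / "veed-ugc" / "scripts" / "generate.py"),
        "--image", image,
        "--script", script,
        "--output", output,
    ]

# Step 3 of the pipeline; steps 1-2 (Morpheus image + script) happen upstream.
cmd = veed_ugc_cmd("product-shot.png", "Mirá esto que acabo de probar.", "promo.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually execute
```

Passing an argv list (rather than a shell string) avoids quoting problems when the script contains apostrophes or exclamation marks.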

Output

The workflow outputs an MP4 video file with:
  • The original image animated with lip-sync
  • AI-generated voiceover from the script
  • Natural head movements and expressions

Notes

  • Image should clearly show a person's face (frontal or 3/4 view works best)
  • Script is spoken exactly as written, with no interpretation
  • Video length depends on script length
  • Processing time: ~2-5 minutes depending on script length
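Since video length tracks script length, a rough estimate helps budget scripts before rendering. A sketch assuming a conversational rate of about 150 spoken words per minute; that rate is my assumption, not something the workflow documents:

```python
ASSUMED_WPM = 150  # assumed conversational speaking rate; not documented by the workflow

def estimated_duration_seconds(script: str, wpm: int = ASSUMED_WPM) -> float:
    """Rough video-length estimate from the dialogue word count."""
    words = len(script.split())
    return words / wpm * 60
```

For example, a 75-word script comes out around 30 seconds at the assumed rate, a typical length for short-form UGC placements.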

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 doc · 1 script
  • SKILL.md (primary doc)
  • scripts/generate.py (script)