Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate photorealistic images, videos, talking heads, and natural TTS audio using GPU-accelerated AI models and scripts on a remote server.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Full-stack AI media generation powered by a GPU server (RTX 3090/3080/2070S).
- Image Generation – Photorealistic images via ComfyUI (z-image, Juggernaut XL)
- Video Generation – Video synthesis via ComfyUI (AnimateDiff, LTX-2)
- Talking Heads – Animated talking faces via SadTalker
- Voice Synthesis – Natural TTS via Voxtral (whisper.cpp)
- Host: ${GPU_USER}@${GPU_HOST}
- SSH Key: ~/.ssh/id_ed25519_gpu
- ComfyUI: /data/ai-stack/comfyui/ComfyUI/ (port 8188)
- SadTalker: /data/ai-stack/sadtalker/
- Voxtral: /data/ai-stack/whisper/
- Output: /data/ai-stack/output/
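The `${GPU_USER}`/`${GPU_HOST}` placeholders suggest the scripts read the server coordinates from the environment, so export them before first use. A minimal connectivity check, assuming the placeholder values below are replaced with your own:

```bash
# Placeholder values – substitute your actual GPU server credentials.
export GPU_USER="youruser"
export GPU_HOST="gpu.example.internal"

# Confirm the SSH key works and the documented output directory is visible.
ssh -i ~/.ssh/id_ed25519_gpu -o ConnectTimeout=5 "${GPU_USER}@${GPU_HOST}" \
  "ls /data/ai-stack/output/ > /dev/null && echo 'GPU server reachable'"
```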
./scripts/image.sh "lady on beach at sunset" realistic
./scripts/image.sh "cyberpunk cityscape" artistic

Arguments:
- $1: Prompt text
- $2: Style (realistic|artistic) – optional, default: realistic

Output: Path to generated image (e.g., /data/ai-stack/output/image_001.png)
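Since the script prints the remote output path, you can capture it and pull the file down in one step. A hedged sketch, assuming stdout contains only that path:

```bash
# Generate on the server, then copy the result to a local folder.
mkdir -p ./generated
OUT="$(./scripts/image.sh "lady on beach at sunset" realistic)"
scp -i ~/.ssh/id_ed25519_gpu "${GPU_USER}@${GPU_HOST}:${OUT}" ./generated/
```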
./scripts/video.sh "waves crashing on shore" animatediff 4
./scripts/video.sh "city traffic timelapse" ltx2 8

Arguments:
- $1: Prompt text
- $2: Model (animatediff|ltx2) – optional, default: animatediff
- $3: Duration in seconds – optional, default: 4

Output: Path to generated video (e.g., /data/ai-stack/output/video_001.mp4)
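Batch generation is still listed as a planned feature, so a client-side loop is the interim pattern. A minimal sketch:

```bash
# Interim client-side batching: one video.sh call per prompt.
for prompt in "waves crashing on shore" "city traffic timelapse"; do
  ./scripts/video.sh "$prompt" animatediff 4
done
```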
./scripts/talking-head.sh "Hello, I'm Agent" gentle input.jpg
./scripts/talking-head.sh "Welcome to the future" neutral photo.png

Arguments:
- $1: Speech text
- $2: Voice style (gentle|neutral|energetic) – optional, default: gentle
- $3: Avatar image path – optional, generates a default avatar if not provided

Output: Path to talking head video (e.g., /data/ai-stack/output/talking_001.mp4)
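The docs leave open whether the avatar path is local or server-side. A conservative sketch that stages the image on the server first (the staging path is an assumption, not part of the package):

```bash
# Assumption: the script expects a server-side avatar path, so stage it first.
scp -i ~/.ssh/id_ed25519_gpu ./avatar.jpg \
  "${GPU_USER}@${GPU_HOST}:/data/ai-stack/output/avatar.jpg"
./scripts/talking-head.sh "Welcome to the future" neutral /data/ai-stack/output/avatar.jpg
```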
./scripts/audio.sh "This is a test message" en male
./scripts/audio.sh "Bonjour le monde" fr female

Arguments:
- $1: Text to speak
- $2: Language code (en|fr|es|etc) – optional, default: en
- $3: Voice gender (male|female) – optional, default: male

Output: Path to audio file (e.g., /data/ai-stack/output/audio_001.wav)
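To compare the two documented voices quickly, loop over the gender argument with the same text. A small sketch using only documented arguments:

```bash
# Render the same line in both documented voices for comparison.
for voice in male female; do
  ./scripts/audio.sh "This is a test message" en "$voice"
done
```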
- z-image – 6B params, S3-DiT, photorealistic (downloading, 43% complete)
- Juggernaut XL v9 – SDXL-based, versatile (7.1GB, ready)
- AnimateDiff – SD 1.5 motion module (512x512, working ✓)
- LTX-2 – 19B params, high quality (14GB checkpoint ready, Gemma encoder ready)
- SadTalker – Audio-driven head animation (working ✓)
- Voxtral – whisper.cpp-based TTS (installed)
All dependencies are pre-installed on the GPU server:
- ComfyUI with custom nodes (AnimateDiff-Evolved, VideoHelperSuite)
- SadTalker with face enhancer
- Voxtral with whisper.cpp
- FFmpeg for video encoding
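If an install looks incomplete, the documented paths above make a quick spot-check possible. A hedged sketch (the directory layout comes from the configuration section; the `command -v` ffmpeg probe is an assumption):

```bash
# Spot-check the pre-installed stacks at their documented locations.
ssh -i ~/.ssh/id_ed25519_gpu "${GPU_USER}@${GPU_HOST}" '
  for dir in /data/ai-stack/comfyui/ComfyUI /data/ai-stack/sadtalker /data/ai-stack/whisper; do
    [ -d "$dir" ] && echo "ok: $dir" || echo "missing: $dir"
  done
  command -v ffmpeg > /dev/null && echo "ok: ffmpeg" || echo "missing: ffmpeg"
'
```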
Scripts will:
- Check SSH connectivity before execution
- Validate that the GPU server is running
- Return meaningful error messages
- Clean up failed generations automatically
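Those checks can also be run by hand before a long batch. A hypothetical preflight helper mirroring the first two checks (the ComfyUI health probe assumes its HTTP server answers on the documented port 8188):

```bash
# Hypothetical preflight: not part of the package, mirrors its documented checks.
preflight() {
  # 1. SSH connectivity with a short timeout
  ssh -i ~/.ssh/id_ed25519_gpu -o ConnectTimeout=5 "${GPU_USER}@${GPU_HOST}" true \
    || { echo "error: GPU server unreachable" >&2; return 1; }
  # 2. ComfyUI answering on its documented port
  ssh -i ~/.ssh/id_ed25519_gpu "${GPU_USER}@${GPU_HOST}" \
    "curl -sf http://localhost:8188/ > /dev/null" \
    || { echo "error: ComfyUI not responding on port 8188" >&2; return 1; }
  echo "preflight ok"
}
```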
- Image: ~10-20s for 1024x1024
- Video (AnimateDiff): ~20-30s for 512x512, 16 frames
- Video (LTX-2): ~60-90s for 768x512, 4s @ 24fps
- Talking Head: ~30-40s for 10s video
- Audio: ~2-5s for 30s speech
- Batch generation support
- Style transfer capabilities
- Video upscaling (spatial + temporal)
- Multi-language voice cloning
- Real-time preview streaming

Status: Active development
Maintainer: Agent
GPU Server: ${GPU_USER}@${GPU_HOST}