Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate images and videos via Runware API. Access to FLUX, Stable Diffusion, Kling AI, and other top models. Supports text-to-image, image-to-image, upscaling, text-to-video, and image-to-video. Use when generating images, creating videos from prompts or images, upscaling images, or doing AI image transformation.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Image and video generation via Runware's unified API. Access FLUX, Stable Diffusion XL, Kling AI, and more.
Set the RUNWARE_API_KEY environment variable, or pass --api-key to the scripts. Get an API key at https://runware.ai
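The scripts' actual key-resolution code is not shown here; below is a minimal sketch of flag-over-environment precedence, assuming only the RUNWARE_API_KEY variable name and --api-key flag mentioned above (the precedence order itself is an assumption about how such scripts typically behave).

```python
import argparse
import os

def resolve_api_key(argv):
    """Prefer an explicit --api-key flag, then fall back to the environment.

    Only the variable name and flag come from the docs above; the
    precedence order is an assumption, not the scripts' verified behavior.
    """
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--api-key", dest="api_key", default=None)
    args, _ = parser.parse_known_args(argv)
    return args.api_key or os.environ.get("RUNWARE_API_KEY")
```

`parse_known_args` is used so unrelated script flags pass through untouched.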
python3 scripts/image.py gen "a cyberpunk city at sunset, neon lights, rain" --count 2 -o ./images

Options:
- --model: Model ID (default: runware:101@1 / FLUX.1 Dev)
- --width/--height: Dimensions (default: 1024x1024)
- --steps: Inference steps (default: 25)
- --cfg: CFG scale (default: 7.5)
- --count/-n: Number of images
- --negative: Negative prompt
- --seed: Reproducible seed
- --lora: LoRA model ID
- --format: png/jpg/webp
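For batch work it can help to assemble the gen invocation programmatically. A sketch using only the flags documented above; the helper name is hypothetical:

```python
import shlex

def build_gen_cmd(prompt, count=2, out_dir="./images", seed=None, negative=None):
    """Assemble the `image.py gen` command line from the documented flags."""
    cmd = ["python3", "scripts/image.py", "gen", prompt,
           "--count", str(count), "-o", out_dir]
    if seed is not None:
        cmd += ["--seed", str(seed)]   # documented: reproducible seed
    if negative:
        cmd += ["--negative", negative]
    return cmd

print(shlex.join(build_gen_cmd("a cyberpunk city at sunset", seed=42)))
# → python3 scripts/image.py gen 'a cyberpunk city at sunset' --count 2 -o ./images --seed 42
```

Building the argument list (rather than interpolating a string) avoids shell-quoting bugs when prompts contain quotes or commas.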
Transform an existing image:

python3 scripts/image.py img2img ./photo.jpg "watercolor painting style" --strength 0.7

- --strength: How much to transform (0 = keep original, 1 = ignore original)
python3 scripts/image.py upscale ./small.png --factor 4 -o ./large.png
python3 scripts/image.py models
python3 scripts/video.py gen "a cat playing with yarn, cute, high quality" --duration 5 -o ./cat.mp4

Options:
- --model: Model ID (default: klingai:5@3 / Kling AI 1.6 Pro)
- --duration: Length in seconds
- --width/--height: Resolution (default: 1920x1080)
- --negative: Negative prompt
- --format: mp4/webm/mov
- --max-wait: Polling timeout (default: 600s)
Animate an image or interpolate between frames:

# Single image (becomes first frame)
python3 scripts/video.py img2vid ./start.png --prompt "zoom out slowly" -o ./animated.mp4

# Two images (first and last frame)
python3 scripts/video.py img2vid ./start.png ./end.png --duration 5
python3 scripts/video.py models
| Model | ID |
| --- | --- |
| FLUX.1 Dev | runware:101@1 |
| FLUX.1 Schnell (fast) | runware:100@1 |
| FLUX.1 Kontext | runware:106@1 |
| Stable Diffusion XL | civitai:101055@128080 |
| RealVisXL | civitai:139562@297320 |
| Model | ID |
| --- | --- |
| Kling AI 1.6 Pro | klingai:5@3 |
| Kling AI 1.5 Pro | klingai:3@2 |
| Runway Gen-3 | runwayml:1@1 |

Browse all: https://runware.ai/models
- Video generation is async; scripts poll until complete
- Costs vary by model; check https://runware.ai/pricing
- FLUX models are excellent for quality; Schnell is faster
- For best video results, use descriptive prompts with motion words
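The video scripts poll until the job completes, bounded by --max-wait. The general shape of such a loop is sketched below; this is illustrative, not the scripts' actual code, and only the 600s default comes from the docs above:

```python
import time

def poll_until_done(check, max_wait=600, interval=5,
                    clock=time.monotonic, sleep=time.sleep):
    """Call `check` until it returns a non-None result or max_wait elapses.

    Mirrors the documented --max-wait default of 600s; the interval and
    injectable clock/sleep are illustrative choices, not from the scripts.
    """
    deadline = clock() + max_wait
    while clock() < deadline:
        result = check()
        if result is not None:
            return result
        sleep(interval)
    raise TimeoutError(f"job not complete after {max_wait}s")
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting.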