Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generates images and videos using MuleRouter or MuleRun multimodal APIs. Text-to-Image, Image-to-Image, Text-to-Video, Image-to-Video, video editing (VACE, k...
I tried to install a skill package from Yavira, but the item is currently unstable or timing out. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required. Then review README.md for any prerequisites, environment setup, or post-install checks.
I tried to upgrade a skill package from Yavira, but the item is currently unstable or timing out. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need. Then review README.md for any prerequisites, environment setup, or post-install checks.
Generate images and videos using MuleRouter or MuleRun multimodal APIs.
This skill requires the following environment variables to be set before use:

- MULEROUTER_API_KEY (required): API key for authentication.
- MULEROUTER_BASE_URL (required*): Custom API base URL (e.g., https://api.mulerouter.ai). Takes priority over SITE.
- MULEROUTER_SITE (required*): API site, either mulerouter or mulerun. Used if BASE_URL is not set.

*At least one of MULEROUTER_BASE_URL or MULEROUTER_SITE must be set.

The API key is included in Authorization: Bearer headers when making network calls to the configured API endpoint. If any of these variables are missing, the scripts will fail with a configuration error. Check the Configuration section below to set them up.
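The precedence rule above (BASE_URL wins over SITE) can be sketched as follows. This is a hypothetical illustration, not the skill's actual core.config code; the site-to-URL mapping is an assumption for the sketch.

```python
import os

# Assumed mapping from site names to endpoints; the real skill may differ.
SITE_URLS = {
    "mulerouter": "https://api.mulerouter.ai",
    "mulerun": "https://api.mulerun.com",  # assumed URL for illustration
}

def resolve_base_url() -> str:
    """Return the API base URL, preferring MULEROUTER_BASE_URL over MULEROUTER_SITE."""
    base_url = os.environ.get("MULEROUTER_BASE_URL")
    if base_url:  # BASE_URL wins whenever it is set
        return base_url
    site = os.environ.get("MULEROUTER_SITE", "")
    if site in SITE_URLS:
        return SITE_URLS[site]
    raise ValueError("Set MULEROUTER_BASE_URL or MULEROUTER_SITE")
```

This mirrors the documented behavior: when both variables are set, the explicit base URL is used and the site name is ignored.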
Before running any commands, verify the environment is configured:
Run the built-in config check script:

```
uv run python -c "from core.config import load_config; load_config(); print('Configuration OK')"
```

If this prints "Configuration OK", skip to Step 3. If it raises a ValueError, proceed to Step 2.
If the variables above are not set, ask the user to provide their API key and preferred endpoint, then create a .env file in the skill's working directory:

```
# Option 1: Use custom base URL (takes priority over SITE)
MULEROUTER_BASE_URL=https://api.mulerouter.ai
MULEROUTER_API_KEY=your-api-key

# Option 2: Use site (if BASE_URL not set)
# MULEROUTER_SITE=mulerun
# MULEROUTER_API_KEY=your-api-key
```

Note: MULEROUTER_BASE_URL takes priority over MULEROUTER_SITE; if both are set, MULEROUTER_BASE_URL is used. The skill only loads variables prefixed with MULEROUTER_ from the .env file; other variables in the file are ignored.

Important: Do NOT use export shell commands to set credentials. Use a .env file, or ensure the variables are already present in your shell environment before invoking the skill.
The skill uses uv for dependency management and execution. Make sure uv is installed and available in your PATH. Run uv sync to install dependencies.
uv run python scripts/list_models.py
uv run python models/alibaba/wan2.6-t2v/generation.py --list-params
Text-to-Video:

```
uv run python models/alibaba/wan2.6-t2v/generation.py --prompt "A cat walking through a garden"
```

Text-to-Image:

```
uv run python models/alibaba/wan2.6-t2i/generation.py --prompt "A serene mountain lake"
```

Image-to-Video:

```
# remote image URL
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "https://example.com/photo.jpg"
# local image path
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "/path/to/local/image.png"
```
For image parameters (--image, --images, etc.), prefer local file paths over base64:

```
# Preferred: local file path (auto-converted to base64)
--image /tmp/photo.png
--images ["/tmp/photo.png"]
```

Local file paths are validated before reading: only files with recognized image extensions (.png, .jpg, .jpeg, .gif, .bmp, .webp, .tiff, .tif, .svg, .ico, .heic, .heif, .avif) are accepted; paths pointing to sensitive system directories or non-image files are rejected. Valid image files are converted to base64 and sent to the API, avoiding the command-line length limits that occur with raw base64 strings.
1. Check configuration: verify MULEROUTER_API_KEY and either MULEROUTER_BASE_URL or MULEROUTER_SITE are set.
2. Install dependencies: run uv sync.
3. Run uv run python scripts/list_models.py to discover available models.
4. Run uv run python models/<path>/<action>.py --list-params to see parameters.
5. Execute with appropriate parameters.
6. Parse output URLs from results.
When listing models, each model's tags (e.g., [SOTA]) are displayed by default next to its name. Tags help identify model characteristics at a glance; for example, SOTA marks a state-of-the-art model. You can also filter models by tag using --tag:

```
uv run python scripts/list_models.py --tag SOTA
```

If you are unsure which model to use, present the available options to the user and let them choose, using the AskUserQuestion tool (or an equivalent interactive prompt). For example, if the user asks to "generate an image" without specifying a model, list the relevant image generation models with their tags and descriptions, and ask the user to pick one.
For an image generation model, a suggested timeout is 5 minutes. For a video generation model, a suggested timeout is 15 minutes.
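The final workflow steps (run with a timeout, then parse output URLs) can be sketched as below. The output format of the generation scripts is not documented here, so extracting URLs with a generic regex is an assumption; the suggested timeouts from above are applied via subprocess.

```python
import re
import subprocess

def extract_urls(output: str) -> list[str]:
    """Pull anything URL-shaped out of a script's stdout (assumed format)."""
    return re.findall(r"https?://\S+", output)

def run_generation(script: str, prompt: str, timeout_s: int = 300) -> list[str]:
    """Run a generation script via uv; 300 s suits images, use 900 s for video."""
    proc = subprocess.run(
        ["uv", "run", "python", script, "--prompt", prompt],
        capture_output=True, text=True, timeout=timeout_s,
    )
    proc.check_returncode()  # surface a non-zero exit as an exception
    return extract_urls(proc.stdout)
```

subprocess.run raises subprocess.TimeoutExpired when the deadline passes, so a caller can distinguish a slow model from a failed one.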
- REFERENCE.md: API configuration and CLI options
- MODELS.md: Complete model specifications