Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Generate or edit images through OpenRouter's multimodal image generation endpoint (`/api/v1/chat/completions`) using OpenRouter-compatible image models. Use...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Generate new images or edit existing ones using OpenRouter image-capable models via the Chat Completions API.
Run the script using its absolute path (do NOT `cd` to the skill directory first).

Generate new image:

```shell
# Ensure outbound directory exists first
mkdir -p ~/.openclaw/media/outbound
uv run ~/.openclaw/workspace/skills/openrouter-image-generation/scripts/generate_image.py \
  --prompt "your image description" \
  --filename "~/.openclaw/media/outbound/output-name.png" \
  --model google/gemini-2.5-flash-image \
  [--aspect-ratio 16:9] \
  [--image-size 2K]
```

Edit existing image (image-to-image):

```shell
# Ensure outbound directory exists first
mkdir -p ~/.openclaw/media/outbound
uv run ~/.openclaw/workspace/skills/openrouter-image-generation/scripts/generate_image.py \
  --prompt "editing instructions" \
  --filename "~/.openclaw/media/outbound/output-name.png" \
  --input-image "path/to/input.png" \
  --model google/gemini-2.5-flash-image
```

Important: the default OpenClaw delivery path is `~/.openclaw/media/outbound/`. Save generated images there so other OpenClaw flows can pick them up easily.
The script checks for an API key in this order:

1. `--api-key` argument
2. `OPENROUTER_API_KEY` environment variable

Optional OpenRouter attribution headers:

- `--site-url` or `OPENROUTER_SITE_URL`
- `--app-name` or `OPENROUTER_APP_NAME`
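The precedence above can be sketched in shell. This is a minimal illustration, not the script's code; `CLI_API_KEY` is a hypothetical name standing in for the parsed `--api-key` value:

```shell
# Resolve the API key: prefer the CLI argument, fall back to the environment.
OPENROUTER_API_KEY="sk-or-env-example"   # stand-in for the environment variable
CLI_API_KEY="sk-or-cli-example"          # stand-in for the parsed --api-key value

# ${var:-fallback} uses the fallback when var is unset or empty
RESOLVED="${CLI_API_KEY:-$OPENROUTER_API_KEY}"
echo "$RESOLVED"
```

With `--api-key` supplied this prints the CLI value; unset `CLI_API_KEY` and it falls back to the environment variable.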
- `--model <openrouter-model-id>` is required (no script default). Example model: `google/gemini-2.5-flash-image`
- Use `--aspect-ratio` for `image_config.aspect_ratio` (for example `1:1`, `16:9`)
- Use `--image-size` for `image_config.image_size` (`1K`, `2K`, `4K`)
- Use `--image-config-json '{"key":"value"}'` for advanced/provider-specific extras (merged into `image_config`)

Note: OpenRouter docs show `aspect_ratio` and `image_size` as the common image config fields for image generation. Additional keys may exist for specific providers/models (for example Sourceful features). If a request fails, remove unsupported options or switch models.

Note: the script always sends `modalities: ["image", "text"]`. Image-only models (some FLUX variants) may reject this; if you get an unexpected error with a non-Gemini model, this may be the cause. No workaround is currently exposed via CLI args.
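Because `--image-config-json` takes a raw JSON string, a malformed value fails only at request time. A quick sanity check before invoking the script can save a round trip; this sketch assumes `python3` is on the PATH (the keys shown are just the common fields from above):

```shell
# Validate the image_config extras before handing them to --image-config-json.
CONFIG='{"aspect_ratio":"16:9","image_size":"2K"}'

if echo "$CONFIG" | python3 -m json.tool > /dev/null 2>&1; then
  echo "valid"
else
  echo "invalid"
fi
```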
Goal: iterate quickly before spending time on higher-quality settings.

- Draft: smaller size / faster model (`--image-size 1K`)
- Iterate: adjust the prompt in small diffs and use a new filename each run
- Final: larger size or higher quality if the selected model supports it (for example `--image-size 4K --aspect-ratio 16:9`)
Preflight:

```shell
command -v uv
test -n "$OPENROUTER_API_KEY"   # or pass --api-key
test -d ~/.openclaw/media/outbound || mkdir -p ~/.openclaw/media/outbound
# If editing:
test -f "path/to/input.png"
```

Common failures:

- `Error: No API key provided.` -> set `OPENROUTER_API_KEY` or pass `--api-key`
- `Error loading input image:` -> bad path or unreadable file
- HTTP 400 with a model/image config error -> unsupported model or invalid `image_config.aspect_ratio` / `image_config.image_size`
- HTTP 401/403 -> invalid key, no model access, or quota/credits issue
- `No image found in response` -> the model may not support image output, or the request format was rejected
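The preflight checks can be rolled into a single fail-fast guard. A sketch only: the function name and messages are illustrative, not part of the skill:

```shell
# Fail-fast preflight: stop at the first unmet requirement with a message.
preflight() {
  command -v uv > /dev/null 2>&1 || { echo "missing: uv"; return 1; }
  [ -n "$OPENROUTER_API_KEY" ]   || { echo "missing: OPENROUTER_API_KEY (or pass --api-key)"; return 1; }
  mkdir -p ~/.openclaw/media/outbound || { echo "cannot create outbound dir"; return 1; }
  echo "preflight ok"
}

preflight || echo "fix the item above before running the script"
```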
Generate filenames with the pattern:

```
~/.openclaw/media/outbound/yyyy-mm-dd-hh-mm-ss-name.png
```

Examples:

- `~/.openclaw/media/outbound/2026-02-26-14-23-05-product-shot.png`
- `~/.openclaw/media/outbound/2026-02-26-14-25-30-sky-edit.png`
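A filename matching this pattern can be built with `date` rather than typed by hand (`sky-edit` below is an arbitrary example slug):

```shell
# Build a timestamped output path: yyyy-mm-dd-hh-mm-ss-name.png
STAMP="$(date +%Y-%m-%d-%H-%M-%S)"
OUTFILE="$HOME/.openclaw/media/outbound/$STAMP-sky-edit.png"
echo "$OUTFILE"
```

Pass `$OUTFILE` as the `--filename` argument.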
- For generation: pass the user's description as-is unless it is too vague to be actionable.
- For editing: make the requested change explicit and preserve everything else.

Prompt template for precise edits:

```
Change ONLY: <change>. Keep identical: subject, composition/crop, pose, lighting, color palette, background, text, and overall style. Do not add new objects.
```
- Saves the first returned image to `~/.openclaw/media/outbound/output-name.png` by default (pass that full path in `--filename`)
- Supports OpenRouter's base64 data URL image responses (`message.images[0].image_url.url`)
- Prints the saved file path
- Do not read the image back unless the user asks
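What "supports base64 data URL responses" means mechanically can be sketched in shell: strip the `data:image/png;base64,` prefix and decode the remainder to a file. The payload below is just the 8-byte PNG signature built inline for illustration, not a real image from the API:

```shell
# Build an illustrative data URL whose payload is the PNG file signature
SIG="$(printf '\211PNG\r\n\032\n' | base64)"
DATA_URL="data:image/png;base64,$SIG"

# Strip the data-URL prefix (as the script does for image_url.url), then decode
B64="${DATA_URL#data:image/png;base64,}"
printf '%s' "$B64" | base64 -d > /tmp/decoded.bin
```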
Generate new image:

```shell
mkdir -p ~/.openclaw/media/outbound
uv run ~/.openclaw/workspace/skills/openrouter-image-generation/scripts/generate_image.py \
  --prompt "A cinematic product photo of a matte black mechanical keyboard on a wooden desk, warm window light" \
  --filename "~/.openclaw/media/outbound/2026-02-26-14-23-05-keyboard-product-shot.png" \
  --model google/gemini-2.5-flash-image \
  --aspect-ratio 16:9 \
  --image-size 2K
```

Edit existing image:

```shell
mkdir -p ~/.openclaw/media/outbound
uv run ~/.openclaw/workspace/skills/openrouter-image-generation/scripts/generate_image.py \
  --prompt "Change ONLY: make the sky dramatic with orange sunset clouds. Keep identical: subject, composition, lighting on foreground, and overall style." \
  --filename "~/.openclaw/media/outbound/2026-02-26-14-25-30-sunset-sky-edit.png" \
  --model google/gemini-2.5-flash-image \
  --input-image "original-photo.jpg"
```
OpenRouter docs: https://openrouter.ai/docs/guides/overview/multimodal/image-generation