Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Agentic Vision via Gemini's native Code Execution sandbox. Use for spatial grounding, visual math, and UI auditing.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Leverage Gemini's native code execution to analyze images with high precision. The model writes and runs Python code in a Google-hosted sandbox to verify visual data, making it well suited for UI auditing, spatial grounding, and visual reasoning.
clawhub install vision-sandbox
uv run vision-sandbox --image "path/to/image.png" --prompt "Identify all buttons and provide [x, y] coordinates."
Ask the model to find specific items and return coordinates. Prompt: "Locate the 'Submit' button in this screenshot. Use code execution to verify its center point and return the [x, y] coordinates in a [0, 1000] scale."
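The [0, 1000] scale is normalized to the image, so returned points need to be rescaled before you click or crop at them. A minimal sketch of that conversion (the coordinates and screenshot size below are illustrative, not output from the skill):

```python
def to_pixels(coord, image_size):
    """Convert an [x, y] point on Gemini's [0, 1000] scale to pixel coordinates.

    coord: [x, y] as returned by the model, each value in [0, 1000].
    image_size: (width, height) of the original image in pixels.
    """
    x, y = coord
    width, height = image_size
    return (round(x / 1000 * width), round(y / 1000 * height))

# Example: a button center reported at [512, 250] on a 1920x1080 screenshot.
print(to_pixels([512, 250], (1920, 1080)))  # → (983, 270)
```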
Ask the model to count or calculate based on the image. Prompt: "Count the number of items in the list. Use Python to sum their values if prices are visible."
Check layout and readability. Prompt: "Check if the header text overlaps with any icons. Use the sandbox to calculate the bounding box intersections."
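Overlap checks like this reduce to axis-aligned bounding-box intersection. A hedged sketch of the arithmetic the sandbox would run, assuming boxes are given as (x_min, y_min, x_max, y_max) in pixels:

```python
def intersection_area(a, b):
    """Area of overlap between two boxes given as (x_min, y_min, x_max, y_max)."""
    left = max(a[0], b[0])
    top = max(a[1], b[1])
    right = min(a[2], b[2])
    bottom = min(a[3], b[3])
    if right <= left or bottom <= top:
        return 0  # boxes do not overlap
    return (right - left) * (bottom - top)

# Illustrative boxes: a nonzero area means the header text collides with the icon.
header_text = (10, 10, 300, 40)
icon = (280, 20, 320, 60)
print(intersection_area(header_text, icon))  # → 400
```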
Solve visual counting tasks with code verification. Prompt: "Count the number of fingers on this hand. Use code execution to identify the bounding box for each finger and return the total count."
This skill is designed to provide Visual Grounding for automated coding agents like OpenCode. Step 1: Use vision-sandbox to extract UI metadata (coordinates, sizes, colors). Step 2: Pass the JSON output to OpenCode to generate or fix CSS/HTML.
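The Step 1 to Step 2 hand-off is just structured data, so one way to picture it is a small transform from the extracted metadata to CSS. The JSON schema below (element, x, y, width, height in pixels) is a hypothetical example for illustration, not the skill's documented output format:

```python
import json

# Hypothetical vision-sandbox output for two UI elements (schema assumed).
metadata = json.loads("""
[
  {"element": "submit-button", "x": 640, "y": 520, "width": 120, "height": 40},
  {"element": "header", "x": 0, "y": 0, "width": 1280, "height": 64}
]
""")

def to_css(items):
    """Turn extracted coordinates into absolutely positioned CSS rules."""
    rules = []
    for item in items:
        rules.append(
            f".{item['element']} {{ position: absolute; "
            f"left: {item['x']}px; top: {item['y']}px; "
            f"width: {item['width']}px; height: {item['height']}px; }}"
        )
    return "\n".join(rules)

print(to_css(metadata))
```

In practice a coding agent would consume the JSON directly rather than via a script like this; the sketch only shows that the payload carries everything needed to generate layout code.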
Configuration
- GEMINI_API_KEY: required environment variable.
- Model: defaults to gemini-3-flash-preview.