Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Build and deploy ML demo interfaces with proper state management, queuing, and production patterns.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
- gr.Interface is for single-function demos; use gr.Blocks for anything with multiple steps, conditional UI, or custom layout
- Blocks gives you .click(), .change(), and .submit() event handlers; Interface only wraps one function
- Mixing Interface inside Blocks works but creates confusing state; pick one pattern per app
- gr.State() creates per-session state; it resets when the user refreshes the page
- State values must be JSON-serializable or Gradio silently drops them; no custom classes without serialization
- Pass State as both input AND output to persist changes (fn(state) -> state); forgetting the output loses updates
- Global variables are shared across users and cause race conditions; always use gr.State() for user-specific data
- Without .queue(), long-running functions block all other users; always call demo.queue() before .launch()
- concurrency_limit=1 on a function serializes calls; use it for GPU-bound inference that can't parallelize
- max_size in queue() limits waiting users; without it, memory grows unbounded under load
- Generator functions with yield enable streaming, but they hold a queue slot until complete
- Uploaded files are temp paths that get deleted after the request; copy them if you need persistence
- gr.File(type="binary") returns bytes, type="filepath" returns a string path; mismatching causes silent failures
- Return gr.File(value="path/to/file") for downloads, not raw bytes; the component handles Content-Disposition headers
- File uploads have a default 200MB limit; set max_file_size in launch() to change it
- gr.Dropdown(value=None) with allow_custom_value=False crashes if the user submits nothing; set a default or make it optional
- gr.Image(type="pil") returns a PIL Image, type="numpy" returns an array, type="filepath" returns a path; inconsistent inputs break functions
- gr.Chatbot expects a list of tuples [(user, bot), ...]; returning bare strings doesn't render
- visible=False components still run their functions; use gr.update(interactive=False) to disable without hiding
- auth=("user", "pass") is plaintext in code; use auth=auth_function for production with proper credential checking
- Auth applies to the whole app; there's no per-route or per-component auth without custom middleware
- share=True with auth still exposes auth to Gradio's servers; use your own tunnel for sensitive apps
- share=True creates a 72-hour public URL through Gradio's servers; not for production, just demos
- Environment variables from local dev don't exist in Hugging Face Spaces; use Spaces secrets or the Settings UI
- Set server_name="0.0.0.0" to accept external connections; the default 127.0.0.1 only allows localhost
- Behind a reverse proxy, set root_path="/subpath" or assets and API routes break
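The deployment settings above collected into one launch() sketch; the values are illustrative and left commented out so nothing binds a port on import:

```python
import gradio as gr

demo = gr.Interface(fn=str.strip, inputs="text", outputs="text")

# Deployment-oriented launch settings (illustrative values):
# demo.launch(
#     server_name="0.0.0.0",   # accept external connections (default 127.0.0.1)
#     root_path="/subpath",    # required behind a reverse proxy serving /subpath
#     share=False,             # share=True is a temporary demo tunnel, not production
# )
```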
- Return gr.update(value=x, visible=True) to modify component properties; returning just the value only changes the value
- Chain events with .then() for sequential operations; parallel .click() handlers race
- every=5 on a function polls every 5 seconds, but it holds connections open; scale carefully
- trigger_mode="once" prevents double-clicks from firing twice; the default allows rapid duplicate submissions
- cache_examples=True pre-computes example outputs at startup; it speeds up demos but increases load time
- Loading a large model inside the function runs per-request; load it in global scope or use gr.State with initialization
- batch=True with max_batch_size=N groups concurrent requests; essential for GPU throughput