Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Complete Open WebUI API integration for managing LLM models, chat completions, Ollama proxy operations, file uploads, knowledge bases (RAG), image generation, audio processing, and pipelines. Use this skill when interacting with Open WebUI instances via REST API - listing models, chatting with LLMs, uploading files for RAG, managing knowledge collections, or executing Ollama commands through the Open WebUI proxy. Requires OPENWEBUI_URL and OPENWEBUI_TOKEN environment variables or explicit parameters.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Complete API integration for Open WebUI - a unified interface for LLMs including Ollama, OpenAI, and other providers.
Activate this skill when the user wants to:
- List available models from their Open WebUI instance
- Send chat completions to models through Open WebUI
- Upload files for RAG (Retrieval-Augmented Generation)
- Manage knowledge collections and add files to them
- Use Ollama proxy endpoints (generate, embed, pull models)
- Generate images or process audio through Open WebUI
- Check Ollama status or manage models (load, unload, delete)
- Create or manage pipelines

Do NOT activate for:
- Installing or configuring the Open WebUI server itself (use system admin skills)
- General questions about what Open WebUI is (use general knowledge)
- Troubleshooting Open WebUI server issues (use troubleshooting guides)
- Local file operations unrelated to the Open WebUI API
```bash
export OPENWEBUI_URL="http://localhost:3000"   # Your Open WebUI instance URL
export OPENWEBUI_TOKEN="your-api-key-here"     # From Settings > Account in Open WebUI
```
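Configuration resolution can be sketched as follows; `resolve_config` is a hypothetical helper (not part of the skill's CLI) showing the documented precedence: explicit parameters win over the environment variables, and a missing value is a hard error.

```python
import os

def resolve_config(url=None, token=None):
    """Resolve Open WebUI settings: explicit parameters override
    OPENWEBUI_URL / OPENWEBUI_TOKEN from the environment.
    Hypothetical helper, illustrative only."""
    url = url or os.environ.get("OPENWEBUI_URL")
    token = token or os.environ.get("OPENWEBUI_TOKEN")
    if not url or not token:
        raise RuntimeError(
            "Set OPENWEBUI_URL and OPENWEBUI_TOKEN or pass them explicitly"
        )
    # Strip a trailing slash so endpoint paths can be appended directly.
    return url.rstrip("/"), token
```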
- Bearer token authentication required
- Token obtained from Open WebUI: Settings > Account
- Alternative: JWT token for advanced use cases
Example requests that SHOULD activate this skill:
- "List all models available in my Open WebUI"
- "Send a chat completion to llama3.2 via Open WebUI with prompt 'Explain quantum computing'"
- "Upload /path/to/document.pdf to Open WebUI knowledge base"
- "Create a new knowledge collection called 'Research Papers' in Open WebUI"
- "Generate an embedding for 'Open WebUI is great' using the nomic-embed-text model"
- "Pull the llama3.2 model through Open WebUI Ollama proxy"
- "Get Ollama status from my Open WebUI instance"
- "Chat with gpt-4 using my Open WebUI with RAG enabled on collection 'docs'"
- "Generate an image using Open WebUI with prompt 'A futuristic city'"
- "Delete the old-model from Open WebUI Ollama"

Example requests that should NOT activate this skill:
- "How do I install Open WebUI?" (installation/admin)
- "What is Open WebUI?" (general knowledge)
- "Configure the Open WebUI environment variables" (server config)
- "Troubleshoot why Open WebUI won't start" (server troubleshooting)
- "Compare Open WebUI to other UIs" (general comparison)
1. Verify `OPENWEBUI_URL` and `OPENWEBUI_TOKEN` are set
2. Validate the URL format (http/https)
3. Test the connection with `GET /api/models` or `GET /ollama/api/tags`
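The first two pre-flight checks can be done offline before any API call. A minimal sketch, assuming a hypothetical `preflight` helper that returns a list of problems (empty means the configuration looks usable):

```python
from urllib.parse import urlparse

def preflight(url, token):
    """Return a list of configuration problems found before any API call.
    Hypothetical helper mirroring the pre-flight checks; the connection
    test itself (GET /api/models) still has to hit the server."""
    problems = []
    if not token:
        problems.append("OPENWEBUI_TOKEN is not set")
    parsed = urlparse(url or "")
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append(
            "OPENWEBUI_URL must be an http(s) URL, e.g. http://localhost:3000"
        )
    return problems
```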
Use the CLI tool or direct API calls:

```bash
# Using the CLI tool (recommended)
python3 scripts/openwebui-cli.py --help
python3 scripts/openwebui-cli.py models list
python3 scripts/openwebui-cli.py chat --model llama3.2 --message "Hello"

# Using curl (alternative)
curl -H "Authorization: Bearer $OPENWEBUI_TOKEN" \
  "$OPENWEBUI_URL/api/models"
```
- HTTP 200: success, parse and present the JSON
- HTTP 401: authentication failed, check the token
- HTTP 404: endpoint or model not found
- HTTP 422: validation error, check request parameters
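The status handling above can be sketched as a small dispatch table; `classify_response` is an illustrative helper, not part of the CLI:

```python
def classify_response(status):
    """Map an HTTP status code to the handling described above.
    Hypothetical helper for illustration."""
    actions = {
        200: "success: parse and present the JSON",
        401: "authentication failed: check OPENWEBUI_TOKEN",
        404: "endpoint or model not found: check the name",
        422: "validation error: check request parameters",
    }
    return actions.get(status, f"unexpected HTTP {status}: inspect the response body")
```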
| Endpoint | Method | Description |
|---|---|---|
| `/api/chat/completions` | POST | OpenAI-compatible chat completions |
| `/api/models` | GET | List all available models |
| `/ollama/api/chat` | POST | Native Ollama chat completion |
| `/ollama/api/generate` | POST | Ollama text generation |
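Since `/api/chat/completions` is OpenAI-compatible, the request body follows the familiar `model` + `messages` shape. This sketch only builds the request pieces and performs no network I/O; the helper name and placeholder token are illustrative:

```python
import json

def chat_completion_request(model, message, stream=False):
    """Build (path, headers, body) for POST /api/chat/completions.
    No network I/O; the Authorization value is a placeholder.
    Hypothetical helper, illustrative only."""
    headers = {
        "Authorization": "Bearer <OPENWEBUI_TOKEN>",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "stream": stream,
    })
    return "/api/chat/completions", headers, body
```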
| Endpoint | Method | Description |
|---|---|---|
| `/ollama/api/tags` | GET | List Ollama models |
| `/ollama/api/pull` | POST | Pull/download a model |
| `/ollama/api/delete` | DELETE | Delete a model |
| `/ollama/api/embed` | POST | Generate embeddings |
| `/ollama/api/ps` | GET | List loaded models |
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/files/` | POST | Upload file for RAG |
| `/api/v1/files/{id}/process/status` | GET | Check file processing status |
| `/api/v1/knowledge/` | GET/POST | List/create knowledge collections |
| `/api/v1/knowledge/{id}/file/add` | POST | Add file to knowledge base |
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/images/generations` | POST | Generate images |
| `/api/v1/audio/speech` | POST | Text-to-speech |
| `/api/v1/audio/transcriptions` | POST | Speech-to-text |
Always confirm before:
- Deleting models (`DELETE /ollama/api/delete`): irreversible
- Pulling large models: may take significant time and bandwidth
- Deleting knowledge collections: risk of data loss
- Uploading sensitive files: privacy consideration
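A minimal confirmation gate, defaulting to "no", could look like this; the `ask` parameter is injectable so an agent or test can supply the answer instead of a terminal prompt (hypothetical helper, not the CLI's real interface):

```python
def confirm(prompt, ask=input):
    """Ask before a destructive or expensive operation; default to No.
    `ask` is an injectable input function. Illustrative only."""
    return ask(f"{prompt} [y/N] ").strip().lower() in ("y", "yes")
```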
- Never log the full API token; redact it to the `sk-...XXXX` form
- Sanitize file paths; verify files exist before upload
- Validate URLs; require HTTPS for external instances
- Handle errors gracefully; don't expose stack traces that contain tokens
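The token redaction rule can be sketched as a one-liner helper (hypothetical name; keeps the prefix and last four characters, masking everything between):

```python
def redact_token(token):
    """Redact an API token to the sk-...XXXX form for logs.
    Hypothetical helper; tokens too short to redact safely are fully masked."""
    if len(token) <= 8:
        return "****"
    return f"{token[:3]}...{token[-4:]}"
```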
- File uploads default to the workspace directory
- Confirm before accessing files outside the workspace
- No sudo/root operations required (pure API client)
```bash
python3 scripts/openwebui-cli.py models list
```
```bash
python3 scripts/openwebui-cli.py chat \
  --model llama3.2 \
  --message "Explain the benefits of RAG" \
  --stream
```
```bash
python3 scripts/openwebui-cli.py files upload \
  --file /path/to/document.pdf \
  --process
```
```bash
python3 scripts/openwebui-cli.py knowledge add-file \
  --collection-id "research-papers" \
  --file-id "doc-123-uuid"
```
```bash
python3 scripts/openwebui-cli.py ollama embed \
  --model nomic-embed-text \
  --input "Open WebUI is great for LLM management"
```
```bash
python3 scripts/openwebui-cli.py ollama pull \
  --model llama3.2:70b
# Agent must confirm: "This will download ~40GB. Proceed? [y/N]"
```
```bash
python3 scripts/openwebui-cli.py ollama status
```
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid or missing token | Verify `OPENWEBUI_TOKEN` |
| 404 Not Found | Model/endpoint doesn't exist | Check model name spelling |
| 422 Validation Error | Invalid parameters | Check request body format |
| 400 Bad Request | File still processing | Wait for processing to complete |
| Connection refused | Wrong URL | Verify `OPENWEBUI_URL` |
Files uploaded for RAG are processed asynchronously. Before adding a file to a knowledge collection:
1. Upload the file → get `file_id`
2. Poll `/api/v1/files/{id}/process/status` until `status: "completed"`
3. Then add the file to the knowledge collection
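The polling step above can be sketched with an injected status fetcher, so the loop is independent of how the status endpoint is called. The helper name, the `failed` status value, and the defaults are assumptions, not guaranteed by the API:

```python
import time

def wait_for_processing(file_id, get_status, timeout=300, interval=2.0):
    """Poll until a file's processing status is "completed".
    `get_status(file_id)` is an injected callable returning the status
    string (e.g. a wrapper around GET /api/v1/files/{id}/process/status).
    Hypothetical helper; adjust status values to your server's responses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(file_id)
        if status == "completed":
            return True
        if status == "failed":
            raise RuntimeError(f"processing failed for file {file_id}")
        time.sleep(interval)
    raise TimeoutError(f"file {file_id} not processed within {timeout}s")
```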
Pulling large models (e.g., 70B parameters) can take hours. Always:
- Confirm with the user before starting
- Show progress if possible
- Allow cancellation
Chat completions support streaming. Use --stream flag for real-time output or collect full response for non-streaming.
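Assuming the server follows the OpenAI-compatible server-sent-events format (`data: {...}` lines terminated by `data: [DONE]`), the streamed text can be extracted like this; verify the exact wire format against your Open WebUI version:

```python
import json

def stream_text(lines):
    """Yield content deltas from OpenAI-style SSE lines.
    Assumes the OpenAI-compatible streaming format; sketch only."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separator lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```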
The included CLI tool (`scripts/openwebui-cli.py`) provides:
- Automatic authentication from environment variables
- Structured JSON output with optional formatting
- Built-in help for all commands
- Error handling with user-friendly messages
- Progress indicators for long operations

Run `python3 scripts/openwebui-cli.py --help` for full usage.