Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Manage local Ollama models autonomously with health monitoring, automatic fallback, self-healing, and offline operation without internet dependency.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Autonomously manage and use local Ollama models for continuous operation without internet dependency. Includes model health monitoring, automatic fallback, and self-healing capabilities.
This skill enables autonomous operation with local Ollama models. It monitors model health, automatically switches between models when issues occur, and maintains functionality even without internet connectivity. The skill includes self-healing capabilities to restart services and clear resources when needed.
- Health Monitoring: Continuously check model availability and performance
- Automatic Fallback: Switch to alternative models when the primary fails
- Model Switching: Dynamically select the best available model for the task
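The availability check above can be sketched against Ollama's local HTTP API. This is a minimal sketch, assuming Ollama's default endpoint at `localhost:11434`; the `is_healthy` convention (matching bare names against `name:tag` entries) is an illustrative choice, not the skill's actual implementation.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint


def list_local_models(timeout: float = 2.0) -> list[str]:
    """Ask the Ollama daemon which models are installed; empty list = unreachable."""
    try:
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=timeout) as resp:
            payload = json.load(resp)
    except OSError:
        return []  # daemon down or unreachable counts as "no models"
    return [m["name"] for m in payload.get("models", [])]


def is_healthy(model: str, available: list[str]) -> bool:
    """A model counts as healthy when the daemon reports it as installed,
    with or without an explicit tag suffix (e.g. ":latest")."""
    return any(name == model or name.startswith(model + ":") for name in available)
```

Separating the network probe (`list_local_models`) from the pure check (`is_healthy`) keeps the health logic testable without a running daemon.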
- Service Restart: Automatically restart Ollama when models become unavailable
- Resource Management: Clear cache and temporary files to free resources
- Model Reinstallation: Reinstall problematic models automatically
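One way to order these recovery steps is a small decision function that picks the least invasive action matching the symptoms. The thresholds and action names below are illustrative assumptions, not the skill's real values.

```python
from dataclasses import dataclass


@dataclass
class Health:
    model_available: bool
    memory_pct: float        # system memory in use, 0-100
    consecutive_failures: int


# Illustrative thresholds; tune for your hardware.
MEMORY_LIMIT = 90.0
REINSTALL_AFTER = 3


def healing_action(h: Health) -> str:
    """Pick the least invasive recovery step that matches the symptoms."""
    if h.consecutive_failures >= REINSTALL_AFTER:
        return "reinstall_model"   # persistent failures: pull the model again
    if not h.model_available:
        return "restart_ollama"    # daemon lost the model: restart the service
    if h.memory_pct > MEMORY_LIMIT:
        return "clear_cache"       # memory pressure only: free cache and temp files
    return "none"
```

Checking the most severe condition first means a model that keeps failing gets reinstalled rather than endlessly restarted.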
- Internet Detection: Monitor internet connectivity status
- Smart Fallback: Switch to remote models when local models are unavailable and internet is present
- Offline Mode: Maintain full functionality without internet
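The detection-plus-routing behavior can be sketched as a cheap TCP probe feeding a pure routing decision. The probe target (a public DNS server) and the mode names are assumptions for illustration.

```python
import socket


def internet_available(host: str = "8.8.8.8", port: int = 53,
                       timeout: float = 1.5) -> bool:
    """Cheap connectivity probe: can we open a TCP socket to a public DNS server?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def route_request(local_ok: bool, online: bool) -> str:
    """Prefer local models; reach for a remote model only when local ones are
    down and the network is up. Otherwise degrade rather than fail."""
    if local_ok:
        return "local"
    if online:
        return "remote"
    return "degraded"
```

Keeping `route_request` pure means the fallback policy can be unit-tested without any network access.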
- Primary: llama-3.1-8b-instruct (general tasks)
- Secondary: mistral-7b-instruct (faster responses)
- Specialized: code-llama-7b (coding tasks)
- Model Status: Monitor availability every 30 seconds
- Latency Tracking: Monitor response times every minute
- Resource Usage: Monitor GPU/CPU and memory every 5 minutes
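A monitoring loop on those cadences only needs to know which checks are due at a given moment. This is a minimal sketch; the check names and the `last_run` bookkeeping are assumptions.

```python
# Check intervals in seconds, matching the cadence above.
INTERVALS = {"model_status": 30, "latency": 60, "resources": 300}


def due_checks(last_run: dict[str, float], now: float) -> list[str]:
    """Return the checks whose interval has elapsed since their last run.
    Checks that have never run (missing from last_run) are always due."""
    return [name for name, every in INTERVALS.items()
            if now - last_run.get(name, float("-inf")) >= every]
```

A caller would loop, run the due checks, and record `now` as each check's new `last_run` entry.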
- Model Switching: Automatically switch to alternative local models
- Response Retry: Retry failed requests with exponential backoff
- Degraded Mode: Continue with limited functionality if all models are unavailable
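The retry behavior can be sketched as a standard exponential-backoff wrapper; the attempt count and base delay are illustrative defaults, not the skill's configured values.

```python
import time


def retry_with_backoff(call, attempts: int = 4, base_delay: float = 0.5,
                       sleep=time.sleep):
    """Retry a failing request, doubling the wait each time (0.5s, 1s, 2s, ...).
    The final failure is re-raised so the caller can drop to degraded mode."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` keeps tests fast and lets a real deployment swap in a jittered delay.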
When internet is available:
- Use local models primarily
- Fall back to remote models if local models are unavailable
- Maintain optimal performance

When offline:
- Use local models exclusively
- Continue all operations without interruption
- Provide degraded functionality if needed
- model_status - Check current model health
- switch_model - Manually switch between models
- restart_ollama - Restart the Ollama service
- check_health - Run a comprehensive health check
- monitor_resources - Monitor system resources
- clear_cache - Clear model cache and temporary files
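One plausible way to wire commands like these to handlers is a small registry; the skill's real entry points live in SKILL.md, so the decorator, registry, and the placeholder handler body here are all assumptions.

```python
# Hypothetical command registry; real handler names come from SKILL.md.
COMMANDS = {}


def command(name):
    """Register a handler function under a command name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register


@command("model_status")
def model_status() -> str:
    # Placeholder: a real handler would query the Ollama daemon.
    return "primary: available"


def run(name: str) -> str:
    """Dispatch a command by name, failing loudly on unknown commands."""
    if name not in COMMANDS:
        raise KeyError(f"unknown command: {name}")
    return COMMANDS[name]()
```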
- Service Restart: Triggered when a model becomes unavailable
- Resource Cleanup: Triggered when high memory usage is detected
- Model Reinstallation: Triggered when persistent failures occur
- Manual Restart: Users can manually restart services
- Cache Clearing: Users can manually clear resources
- Model Updates: Users can update models as needed
- All operations performed locally
- No external dependencies required
- Secure model management
- Privacy-preserving by default
- Resource Monitoring: Track GPU/CPU usage and memory
- Latency Tracking: Monitor response times and performance
- Model Selection: Choose the optimal model based on task requirements
- Health Checks: Run periodic health checks
- Cache Management: Clear unused cache regularly
- Model Updates: Keep models updated when possible
- Log Analysis: Monitor logs for issues
- Performance Metrics: Track performance over time
- Error Handling: Graceful error handling and recovery
This skill integrates with:
- Ollama: Local model management
- System Resources: Monitor and manage system resources
- Network: Detect internet connectivity
- OpenClaw: Seamless integration with existing tools
- Model Training: Support for custom model training
- Advanced Routing: Intelligent model selection based on task
- Multi-GPU Support: Scale across multiple GPUs
- Cloud Sync: Optional cloud backup and synchronization
This skill is part of the OpenClaw ecosystem and follows the same licensing terms as OpenClaw itself.