Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
ML engineering skill for productionizing models, building MLOps pipelines, and integrating LLMs. Covers model deployment, feature stores, drift monitoring, RAG systems, and cost optimization.
Hand the extracted package to your coding agent with a concrete install brief rather than stepping through the setup manually.

Fresh install prompt:
"I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."

Upgrade prompt:
"I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
Production ML engineering patterns for model deployment, MLOps infrastructure, and LLM integration.
Contents:
- Model Deployment Workflow
- MLOps Pipeline Setup
- LLM Integration Workflow
- RAG System Implementation
- Model Monitoring
- Reference Documentation
- Tools
Model Deployment Workflow

Deploy a trained model to production with monitoring:

1. Export the model to a standardized format (ONNX, TorchScript, SavedModel).
2. Package the model and its dependencies in a Docker container.
3. Deploy to a staging environment.
4. Run integration tests against staging.
5. Deploy a canary (5% of traffic) to production.
6. Monitor latency and error rates for 1 hour.
7. Promote to full production if metrics pass (a promotion-gate sketch follows the serving options table).

Validation: p95 latency < 100ms, error rate < 0.1%.
```dockerfile
FROM python:3.11-slim
# WORKDIR added so the CMD below can resolve the src package copied into /app
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model/ /app/model/
COPY src/ /app/src/
# Note: python:3.11-slim does not ship curl; install it or use a Python-based healthcheck
HEALTHCHECK CMD curl -f http://localhost:8080/health || exit 1
EXPOSE 8080
CMD ["uvicorn", "src.server:app", "--host", "0.0.0.0", "--port", "8080"]
```
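The Dockerfile's CMD and HEALTHCHECK assume an ASGI app at src/server.py exposing a /health route and a prediction endpoint. A minimal sketch of such a server is below; the module layout, request schema, and model path are illustrative assumptions, not part of the package.

```python
# src/server.py -- minimal serving app matching the Dockerfile's CMD and HEALTHCHECK (illustrative)
import pickle
from pathlib import Path

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

MODEL_PATH = Path("/app/model/model.pkl")  # assumed artifact location inside the image
model = pickle.loads(MODEL_PATH.read_bytes()) if MODEL_PATH.exists() else None


class PredictRequest(BaseModel):
    features: list[float]


@app.get("/health")
def health() -> dict:
    # Used by the Docker HEALTHCHECK; also reports whether the model loaded
    return {"status": "ok", "model_loaded": model is not None}


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Assumes a scikit-learn style estimator with a .predict() method
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}
```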
| Option | Latency | Throughput | Use Case |
|---|---|---|---|
| FastAPI + Uvicorn | Low | Medium | REST APIs, small models |
| Triton Inference Server | Very Low | Very High | GPU inference, batching |
| TensorFlow Serving | Low | High | TensorFlow models |
| TorchServe | Low | High | PyTorch models |
| Ray Serve | Medium | High | Complex pipelines, multi-model |
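To make the promotion decision in step 7 concrete, here is a minimal promotion-gate sketch using the validation thresholds above. The metric values would normally come from your observability stack (Prometheus, Datadog, etc.); the names and numbers here are illustrative.

```python
from dataclasses import dataclass


@dataclass
class CanaryMetrics:
    p95_latency_ms: float
    error_rate: float


# Thresholds mirror the validation criteria above; tune per service.
P95_LATENCY_MS_MAX = 100.0
ERROR_RATE_MAX = 0.001  # 0.1%


def should_promote(metrics: CanaryMetrics) -> bool:
    """Return True if the canary meets the promotion gate."""
    return (
        metrics.p95_latency_ms < P95_LATENCY_MS_MAX
        and metrics.error_rate < ERROR_RATE_MAX
    )


# Example with hypothetical canary measurements after the 1-hour observation window
print(should_promote(CanaryMetrics(p95_latency_ms=82.0, error_rate=0.0004)))  # True
```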
MLOps Pipeline Setup

Establish automated training and deployment:

1. Configure a feature store (Feast, Tecton) for training data.
2. Set up experiment tracking (MLflow, Weights & Biases).
3. Create a training pipeline with hyperparameter logging.
4. Register the model in a model registry with version metadata (an MLflow sketch follows the feature store example).
5. Configure staging deployment triggered by registry events.
6. Set up A/B testing infrastructure for model comparison.
7. Enable drift monitoring with alerting.

Validation: New models are automatically evaluated against the baseline.
```python
from datetime import timedelta

from feast import Entity, Feature, FeatureView, FileSource, ValueType

# Exact keyword arguments vary across Feast releases (e.g. source vs. batch_source,
# Feature vs. Field); this follows the pre-1.0 Feature/ValueType API.
user = Entity(name="user_id", value_type=ValueType.INT64)

user_features = FeatureView(
    name="user_features",
    entities=["user_id"],
    ttl=timedelta(days=1),
    features=[
        Feature(name="purchase_count_30d", dtype=ValueType.INT64),
        Feature(name="avg_order_value", dtype=ValueType.FLOAT),
    ],
    online=True,
    source=FileSource(path="data/user_features.parquet"),
)
```
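For the experiment-tracking and registry steps, a minimal MLflow sketch is shown below. The experiment name, hyperparameters, and registered model name are illustrative assumptions; the tracking server is whatever your MLflow deployment exposes.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative training data and hyperparameters
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
params = {"n_estimators": 200, "max_depth": 8}

mlflow.set_experiment("user-churn")  # assumed experiment name
with mlflow.start_run():
    model = RandomForestClassifier(**params, random_state=42).fit(X, y)
    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering under a name makes the run visible to registry-triggered deployments
    mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="churn-classifier"
    )
```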
| Trigger | Detection | Action |
|---|---|---|
| Scheduled | Cron (weekly/monthly) | Full retrain |
| Performance drop | Accuracy < threshold | Immediate retrain |
| Data drift | PSI > 0.2 | Evaluate, then retrain |
| New data volume | X new samples | Incremental update |
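The data drift row above uses the Population Stability Index (PSI). A minimal sketch of computing PSI between a reference (training) sample and a current (production) sample follows; the bin count and the 0.2 cutoff are the conventional defaults, not values mandated by this package.

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference sample (expected) and a production sample (actual)."""
    # Equal-width bins spanning both samples
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    edges = np.linspace(lo, hi, bins + 1)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0)
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))


# Illustrative check against the PSI > 0.2 retraining trigger
reference = np.random.normal(0.0, 1.0, 10_000)
production = np.random.normal(0.4, 1.0, 2_000)
psi = population_stability_index(reference, production)
print(psi, psi > 0.2)
```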
LLM Integration Workflow

Integrate LLM APIs into production applications:

1. Create a provider abstraction layer for vendor flexibility.
2. Implement retry logic with exponential backoff.
3. Configure a fallback to a secondary provider (a sketch follows the provider abstraction below).
4. Set up token counting and context truncation.
5. Add response caching for repeated queries.
6. Implement cost tracking per request.
7. Add structured output validation with Pydantic.

Validation: The response parses correctly and cost stays within budget.
```python
from abc import ABC, abstractmethod

from tenacity import retry, stop_after_attempt, wait_exponential


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str, **kwargs) -> str:
        pass


@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=10))
def call_llm_with_retry(provider: LLMProvider, prompt: str) -> str:
    return provider.complete(prompt)
```
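Building on the abstraction above, a minimal sketch of the fallback and structured-output steps follows. The secondary-provider choice and the TicketTriage schema are illustrative, and model_validate_json assumes Pydantic v2.

```python
from pydantic import BaseModel, ValidationError

# Builds on LLMProvider and call_llm_with_retry defined in the previous block.


class TicketTriage(BaseModel):
    # Illustrative structured-output schema
    category: str
    priority: int


def complete_with_fallback(primary: LLMProvider, secondary: LLMProvider, prompt: str) -> str:
    """Try the primary provider (with retries); fall back to the secondary on failure."""
    try:
        return call_llm_with_retry(primary, prompt)
    except Exception:
        return call_llm_with_retry(secondary, prompt)


def parse_structured(raw: str) -> TicketTriage | None:
    """Validate the model's JSON output; return None so callers can re-prompt or escalate."""
    try:
        return TicketTriage.model_validate_json(raw)
    except ValidationError:
        return None
```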
| Provider | Input Cost (per 1K tokens) | Output Cost (per 1K tokens) |
|---|---|---|
| GPT-4 | $0.03 | $0.06 |
| GPT-3.5 | $0.0005 | $0.0015 |
| Claude 3 Opus | $0.015 | $0.075 |
| Claude 3 Haiku | $0.00025 | $0.00125 |
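A minimal per-request cost sketch using the per-1K-token rates from the table; token counts would come from the provider's usage metadata or a tokenizer such as tiktoken, and the model keys here are just labels.

```python
# Per-1K-token rates from the table above (USD); update as pricing changes
PRICING = {
    "gpt-4": {"input": 0.03, "output": 0.06},
    "gpt-3.5": {"input": 0.0005, "output": 0.0015},
    "claude-3-opus": {"input": 0.015, "output": 0.075},
    "claude-3-haiku": {"input": 0.00025, "output": 0.00125},
}


def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request from token counts."""
    rates = PRICING[model]
    return (input_tokens / 1000) * rates["input"] + (output_tokens / 1000) * rates["output"]


# Example: 1,200 prompt tokens and 300 completion tokens on GPT-4
print(round(request_cost("gpt-4", 1200, 300), 4))  # 0.054
```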
RAG System Implementation

Build a retrieval-augmented generation pipeline:

1. Choose a vector database (Pinecone, Qdrant, Weaviate).
2. Select an embedding model based on the quality/cost tradeoff.
3. Implement a document chunking strategy (a fixed-size sketch follows the chunking table).
4. Create an ingestion pipeline with metadata extraction.
5. Build retrieval with query embedding.
6. Add reranking to improve relevance.
7. Format the context and send it to the LLM.

Validation: The response references the retrieved context, with no hallucinations.
| Database | Hosting | Scale | Latency | Best For |
|---|---|---|---|---|
| Pinecone | Managed | High | Low | Production, managed |
| Qdrant | Both | High | Very Low | Performance-critical |
| Weaviate | Both | High | Low | Hybrid search |
| Chroma | Self-hosted | Medium | Low | Prototyping |
| pgvector | Self-hosted | Medium | Medium | Existing Postgres |
| Strategy | Chunk Size | Overlap | Best For |
|---|---|---|---|
| Fixed | 500-1000 tokens | 50-100 | General text |
| Sentence | 3-5 sentences | 1 sentence | Structured text |
| Semantic | Variable | Based on meaning | Research papers |
| Recursive | Hierarchical | Parent-child | Long documents |
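A minimal sketch of the fixed-size strategy from the table, approximating token counts with whitespace tokens; production code would typically use a real tokenizer (e.g., tiktoken), and the 800/80 defaults are illustrative.

```python
def chunk_fixed(text: str, chunk_size: int = 800, overlap: int = 80) -> list[str]:
    """Split text into fixed-size chunks with overlap (sizes in whitespace tokens)."""
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunk = tokens[start:start + chunk_size]
        if chunk:
            chunks.append(" ".join(chunk))
        # Stop once the final chunk has reached the end of the document
        if start + chunk_size >= len(tokens):
            break
    return chunks


# Example: a 2,000-token document yields overlapping ~800-token chunks
print(len(chunk_fixed("word " * 2000)))
```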
Model Monitoring

Monitor production models for drift and degradation:

1. Set up latency tracking (p50, p95, p99).
2. Configure error rate alerting.
3. Implement input data drift detection.
4. Track prediction distribution shifts.
5. Log ground truth when available.
6. Compare model versions with A/B metrics.
7. Set up automated retraining triggers.

Validation: Alerts fire before user-visible degradation.
```python
from scipy.stats import ks_2samp


def detect_drift(reference, current, threshold=0.05):
    statistic, p_value = ks_2samp(reference, current)
    return {
        "drift_detected": p_value < threshold,
        "ks_statistic": statistic,
        "p_value": p_value,
    }
```
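A usage sketch for detect_drift, applying the KS test feature by feature. The feature names and distributions are synthetic; in production the reference sample would come from training data and the current sample from recent requests.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic example: reference = training-time values, current = recent production values
reference = {"age": rng.normal(35, 8, 5_000), "income": rng.normal(60_000, 15_000, 5_000)}
current = {"age": rng.normal(39, 8, 1_000), "income": rng.normal(61_000, 15_000, 1_000)}

for feature in reference:
    result = detect_drift(reference[feature], current[feature])
    if result["drift_detected"]:
        print(f"{feature}: KS={result['ks_statistic']:.3f}, p={result['p_value']:.4g}")
```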
| Metric | Warning | Critical |
|---|---|---|
| p95 latency | > 100ms | > 200ms |
| Error rate | > 0.1% | > 1% |
| PSI (drift) | > 0.1 | > 0.2 |
| Accuracy drop | > 2% | > 5% |
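A small sketch of mapping a metric value against the warning/critical thresholds above, for metrics where higher is worse; wiring this to an actual alerting backend is out of scope here.

```python
def alert_level(value: float, warning: float, critical: float) -> str:
    """Map a metric value to an alert level using warning/critical thresholds (higher is worse)."""
    if value > critical:
        return "critical"
    if value > warning:
        return "warning"
    return "ok"


# Example: a p95 latency of 150 ms breaches the warning threshold but not the critical one
print(alert_level(150, warning=100, critical=200))  # "warning"
```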
Reference Documentation

references/mlops_production_patterns.md contains:
- Model deployment pipeline with Kubernetes manifests
- Feature store architecture with Feast examples
- Model monitoring with drift detection code
- A/B testing infrastructure with traffic splitting
- Automated retraining pipeline with MLflow

references/llm_integration_guide.md contains:
- Provider abstraction layer pattern
- Retry and fallback strategies with tenacity
- Prompt engineering templates (few-shot, CoT)
- Token optimization with tiktoken
- Cost calculation and tracking

references/rag_system_architecture.md contains:
- RAG pipeline implementation with code
- Vector database comparison and integration
- Chunking strategies (fixed, semantic, recursive)
- Embedding model selection guide
- Hybrid search and reranking patterns
Tools

`python scripts/model_deployment_pipeline.py --model model.pkl --target staging`
Generates deployment artifacts: Dockerfile, Kubernetes manifests, health checks.

`python scripts/rag_system_builder.py --config rag_config.yaml --analyze`
Scaffolds a RAG pipeline with vector store integration and retrieval logic.

`python scripts/ml_monitoring_suite.py --config monitoring.yaml --deploy`
Sets up drift detection, alerting, and performance dashboards.
| Category | Tools |
|---|---|
| ML Frameworks | PyTorch, TensorFlow, Scikit-learn, XGBoost |
| LLM Frameworks | LangChain, LlamaIndex, DSPy |
| MLOps | MLflow, Weights & Biases, Kubeflow |
| Data | Spark, Airflow, dbt, Kafka |
| Deployment | Docker, Kubernetes, Triton |
| Databases | PostgreSQL, BigQuery, Pinecone, Redis |