Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Audit and optimize your company's AI spending by identifying waste, measuring ROI, right-sizing tool tiers, and consolidating vendors for cost savings.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Audit your company's AI spending: find waste, measure ROI, and right-size your tool stack.
- Quarterly AI budget reviews
- Before renewing AI tool subscriptions
- When AI spend exceeds 3% of revenue without clear ROI
- Evaluating build-vs-buy decisions for AI capabilities
Map all AI spending across these categories:

| Category | Examples | Typical Waste |
| --- | --- | --- |
| Foundation Models | OpenAI, Anthropic, Google API keys | 40-60% (unused capacity, wrong model tier) |
| SaaS with AI | Salesforce Einstein, HubSpot AI, Notion AI | 30-50% (features enabled but unused) |
| Custom Development | Internal ML teams, fine-tuning, RAG pipelines | 25-45% (duplicate efforts, over-engineering) |
| Infrastructure | GPU instances, vector DBs, embedding compute | 35-55% (over-provisioned, always-on dev instances) |
| Data & Training | Labeling services, training data, synthetic data | 20-40% (one-time costs recurring unnecessarily) |
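The category map can be encoded as a small data structure so the waste bands apply directly to your own numbers. This is a sketch; the dictionary and function names are ours, not part of the skill package:

```python
# Typical-waste bands per category, as (low, high) fractions of spend.
CATEGORY_WASTE_BANDS = {
    "Foundation Models": (0.40, 0.60),
    "SaaS with AI": (0.30, 0.50),
    "Custom Development": (0.25, 0.45),
    "Infrastructure": (0.35, 0.55),
    "Data & Training": (0.20, 0.40),
}

def waste_estimate(spend_by_category):
    """Apply each category's typical waste band to actual monthly spend,
    returning a (low, high) recoverable-dollar range per category."""
    return {
        cat: (amount * CATEGORY_WASTE_BANDS[cat][0],
              amount * CATEGORY_WASTE_BANDS[cat][1])
        for cat, amount in spend_by_category.items()
    }

print(waste_estimate({"Infrastructure": 10_000}))
# {'Infrastructure': (3500.0, 5500.0)}
```

Feed it your actual per-category monthly spend to get a first-pass recoverable range before the detailed audit.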
Usage Score (0-30)
- 0: Nobody uses it
- 10: <25% of licensed users active
- 20: 25-75% active
- 30: >75% active, daily use

ROI Score (0-40)
- 0: No measurable business impact
- 10: Saves time but no revenue/cost link
- 20: Measurable cost reduction (<2x spend)
- 30: Clear ROI (2-5x spend)
- 40: High ROI (>5x spend)

Replaceability Score (0-30)
- 0: Commodity (10+ alternatives at lower cost)
- 10: Some alternatives exist
- 20: Few alternatives, moderate switching cost
- 30: Irreplaceable, deep integration

Action Thresholds:
- Score 0-30: CUT (cancel immediately)
- Score 31-50: REVIEW (renegotiate or find alternative)
- Score 51-70: OPTIMIZE (right-size tier/usage)
- Score 71-100: KEEP (monitor quarterly)
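The rubric above reduces to a small scoring helper. A hypothetical sketch; the function name and signature are ours, not part of the skill package:

```python
def audit_score(usage, roi, replaceability):
    """Combine the three rubric scores (0-30, 0-40, 0-30) into a total
    and map it to the action thresholds above."""
    if not (0 <= usage <= 30 and 0 <= roi <= 40 and 0 <= replaceability <= 30):
        raise ValueError("score out of rubric range")
    total = usage + roi + replaceability
    if total <= 30:
        action = "CUT"
    elif total <= 50:
        action = "REVIEW"
    elif total <= 70:
        action = "OPTIMIZE"
    else:
        action = "KEEP"
    return total, action

print(audit_score(10, 10, 10))  # (30, 'CUT')
print(audit_score(30, 40, 30))  # (100, 'KEEP')
```

Scoring each tool this way makes the quarterly review a sortable list rather than a debate.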
For every API-based AI tool, check:

Model Selection
- Are you using GPT-4 where GPT-3.5 suffices? Claude Opus where Sonnet works?
- Rule: use the cheapest model that meets the quality threshold
- Test: run 100 production queries through the cheaper model, measure the quality delta

Caching
- Are you re-processing identical or similar queries?
- A semantic cache can cut 20-40% of API calls
- An exact-match cache catches another 5-15%

Batch vs Real-time
- Which requests actually need sub-second response?
- Batch processing is 50% cheaper on most providers
- Queue non-urgent requests for batch windows

Token Optimization
- Trim system prompts (every token costs money at scale)
- Use structured output to reduce response tokens
- Implement max_tokens limits per use case
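To illustrate the exact-match caching idea, here is a minimal sketch; `call_model` is a placeholder for whatever API client you use, and all names here are ours, not a real SDK:

```python
import hashlib

# Hypothetical exact-match cache keyed on a hash of the prompt.
_cache = {}

def cached_completion(prompt, call_model):
    """Return a cached response for an identical prompt, invoking the
    model only on a cache miss."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]

# Demo with a stub model that counts how often it is actually invoked.
calls = 0
def stub_model(prompt):
    global calls
    calls += 1
    return prompt.upper()

cached_completion("summarize Q3 report", stub_model)
cached_completion("summarize Q3 report", stub_model)
print(calls)  # 1 -- the second identical request never reaches the model
```

A semantic cache works the same way but keys on embedding similarity instead of an exact hash, which is why it catches the larger share of repeat traffic.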
Map overlapping capabilities (current state → target state):

- ChatGPT Teams + Claude Pro + Gemini → pick ONE primary + ONE backup
- Jasper + Copy.ai + ChatGPT for content → single content tool
- 3 different vector databases → consolidate to 1
- Internal embeddings + OpenAI embeddings → standardize on one

Consolidation savings: typically 25-40% of total AI spend.
AI SPEND AUDIT – [Company Name] – [Quarter/Year]

Total AI Spend: $___/month ($___/year)
AI Spend as % Revenue: ___%
Industry Benchmark: 2-5% (early adopter) / 0.5-2% (mainstream)

WASTE IDENTIFIED
├── Unused licenses: $___/month
├── Over-provisioned infra: $___/month
├── Model tier downgrades: $___/month
├── Vendor consolidation: $___/month
└── TOTAL RECOVERABLE: $___/month ($___/year)

ACTIONS
├─ CUT (Score 0-30): [list tools]
├─ REVIEW (Score 31-50): [list tools]
├─ OPTIMIZE (Score 51-70): [list tools]
└─ KEEP (Score 71-100): [list tools]

90-DAY PLAN
Week 1-2: Cancel CUT items, begin REVIEW negotiations
Week 3-4: Implement model downgrades and caching
Week 5-8: Vendor consolidation migration
Week 9-12: Measure savings, establish ongoing monitoring
| Company Size | Typical AI Spend | Typical Waste | Recoverable |
| --- | --- | --- | --- |
| 10-25 employees | $2K-$8K/mo | 35-50% | $700-$4K/mo |
| 25-50 employees | $8K-$25K/mo | 30-45% | $2.4K-$11K/mo |
| 50-200 employees | $25K-$80K/mo | 25-40% | $6K-$32K/mo |
| 200-500 employees | $80K-$300K/mo | 20-35% | $16K-$105K/mo |
| 500+ employees | $300K-$1M+/mo | 15-30% | $45K-$300K/mo |
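The benchmark bands can be encoded for lookup. This is our encoding, not part of the skill package; the table's overlapping size ranges (e.g. "10-25" and "25-50") are resolved here by treating each upper edge as exclusive, which is an assumption:

```python
# (upper headcount bound, waste low, waste high); None means unbounded.
BENCHMARKS = [
    (25, 0.35, 0.50),    # 10-25 employees
    (50, 0.30, 0.45),    # 25-50
    (200, 0.25, 0.40),   # 50-200
    (500, 0.20, 0.35),   # 200-500
    (None, 0.15, 0.30),  # 500+
]

def recoverable(headcount, monthly_spend):
    """Monthly recoverable-savings range for a company of this size."""
    for upper, w_lo, w_hi in BENCHMARKS:
        if upper is None or headcount < upper:
            return monthly_spend * w_lo, monthly_spend * w_hi

print(recoverable(100, 25_000))  # (6250.0, 10000.0)
```

For a 100-person company spending $25K/month, the band suggests roughly $6.3K-$10K/month is recoverable, matching the table's 50-200 employee row.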
- AI spend growing faster than revenue (unsustainable)
- More than 3 overlapping tools in the same category
- No usage tracking on AI SaaS licenses
- GPU instances running 24/7 for dev/test workloads
- Paying for enterprise tiers with startup-level usage
- No A/B testing between model tiers
- "Innovation budget" with no success metrics
- SaaS/Tech: Higher AI spend is acceptable (5-8%) if it's in the product
- Professional Services: Focus on billable-hour impact; $1 of AI spend should save $5+ in labor
- Manufacturing: AI spend should tie to defect reduction or throughput gains
- Healthcare: Compliance costs inflate spend 20-30%; factor this in before judging waste
- Financial Services: Model risk management adds 15-25% overhead, a legitimate cost
- Ecommerce: Measure AI spend per order; it should decrease as volume scales

Built by AfrexAI: AI operations context packs for business teams. Run the AI Revenue Calculator to find your biggest automation opportunities.