Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Conduct a comprehensive AI readiness audit scoring 8 dimensions, identifying gaps, and delivering a prioritized 90-day actionable plan with budget estimates.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Run a structured AI readiness audit for any organization. It scores 8 dimensions, identifies gaps, and produces a prioritized 90-day action plan with budget ranges.
- Before investing in AI/automation tools
- Board or leadership requesting AI strategy
- Evaluating build vs buy decisions
- Annual technology planning
Score each dimension 1-5 (1 = not started, 5 = optimized):
- Centralized data warehouse or lakehouse operational
- Data quality monitoring automated (freshness, completeness, accuracy)
- API-first architecture for core systems
- Data governance policy documented and enforced
- PII/PHI classification and access controls active

- Score 1: Spreadsheets and siloed databases
- Score 3: Warehouse exists, some pipelines automated
- Score 5: Real-time streaming, quality >99%, full lineage
- Top 20 revenue-impacting processes mapped end-to-end
- Decision trees documented for each process
- Exception handling paths defined
- Time-per-task benchmarks established
- Process owners assigned

- Score 1: Tribal knowledge, nothing written down
- Score 3: Major processes documented, some outdated
- Score 5: Living documentation, updated quarterly, covers 80%+ of operations
- At least 1 person understands ML/AI concepts at implementation level
- Engineering team comfortable with APIs and integrations
- DevOps/infrastructure person can deploy and monitor services
- Data analyst can query and interpret model outputs
- Security team understands AI-specific attack surfaces

- Score 1: No technical staff beyond basic IT
- Score 3: Good engineering team, AI knowledge is theoretical
- Score 5: Dedicated AI/ML engineer, cross-functional AI literacy program
- AI budget allocated (not pulled from "innovation" slush fund)
- ROI measurement criteria defined before project starts
- Kill criteria established (when to stop a failing project)
- Total cost of ownership model includes maintenance, retraining, monitoring
- Benchmarks set against current manual process costs

Budget Reality by Company Size:

| Company Size | Year 1 Investment | Expected ROI Timeline |
| --- | --- | --- |
| 15-50 employees | $24K-$80K | 4-8 months |
| 50-200 employees | $80K-$300K | 3-6 months |
| 200-1000 employees | $300K-$1.2M | 6-12 months |
| 1000+ employees | $1.2M-$5M+ | 8-18 months |
- Executive sponsor identified and actively involved
- Communication plan for affected teams drafted
- Training budget allocated
- Pilot team identified (volunteers, not voluntolds)
- Success metrics shared openly with organization

- Score 1: Leadership says "just do AI" with no plan
- Score 3: Exec sponsor exists, some team buy-in
- Score 5: Change management playbook active, regular town halls, feedback loops
- AI-specific data handling policy written
- Vendor security assessment process includes AI criteria
- Model output logging and audit trail planned
- Regulatory requirements mapped (GDPR, HIPAA, SOX, SOC 2, EU AI Act)
- Incident response plan covers AI failures

- Score 1: No AI-specific security considerations
- Score 3: General security strong, AI gaps identified
- Score 5: AI governance framework active, regular audits, compliance automated
- Core systems have APIs (CRM, ERP, HRIS, etc.)
- Authentication/authorization supports service accounts
- Webhook or event-driven architecture available
- Test/staging environment mirrors production
- Rollback procedures documented

- Score 1: Legacy systems, no APIs, manual data entry
- Score 3: Major systems have APIs, some manual bridges
- Score 5: API-first architecture, event-driven, CI/CD for integrations
- AI initiatives map to specific business objectives (not "innovation")
- 3-year technology roadmap includes AI milestones
- Competitive landscape analysis includes AI adoption by rivals
- Board/leadership educated on AI capabilities and limitations
- Failure tolerance defined (acceptable experiment failure rate)

- Score 1: AI is a buzzword, no concrete strategy
- Score 3: Strategy exists, loosely connected to business goals
- Score 5: AI embedded in strategic plan, quarterly reviews, competitive moat building
Weighted Total = Sum of (Score × Weight) / Max Possible × 100

| Range | Rating | Recommendation |
| --- | --- | --- |
| 0-25 | 🔴 Not Ready | Fix foundations first. 6-12 months of groundwork before AI projects. |
| 26-50 | 🟡 Early Stage | Pick ONE high-impact, low-risk pilot. Build muscle. |
| 51-75 | 🟢 Ready | Deploy 2-3 agents in validated use cases. Scale what works. |
| 76-100 | 🔵 Advanced | Multi-agent deployment, autonomous operations, competitive moat. |
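The weighted-total formula and rating bands above can be sketched in Python. Note that the eight dimension keys and the equal weights below are hypothetical placeholders (the package does not list its actual weights); substitute whatever weights your audit assigns.

```python
# Hypothetical dimension keys and equal weights; replace with the
# weights your audit actually uses.
WEIGHTS = {
    "data_infrastructructure".replace("ructr", "r"): 1.0,  # "data_infrastructure"
    "process_documentation": 1.0,
    "technical_talent": 1.0,
    "budget_and_roi": 1.0,
    "change_management": 1.0,
    "security_compliance": 1.0,
    "integration_readiness": 1.0,
    "strategy_alignment": 1.0,
}

def readiness_score(scores: dict) -> float:
    """Weighted Total = sum(score * weight) / max possible * 100."""
    total = sum(scores[dim] * w for dim, w in WEIGHTS.items())
    max_possible = sum(5 * w for w in WEIGHTS.values())  # 5 is the top score
    return total / max_possible * 100

def rating(score: float) -> str:
    """Map a 0-100 weighted total to the rating bands above."""
    if score <= 25:
        return "Not Ready"
    if score <= 50:
        return "Early Stage"
    if score <= 75:
        return "Ready"
    return "Advanced"
```

For example, scoring every dimension 3 with equal weights yields 60, which lands in the "Ready" band.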
Days 1-30: Foundation
- Complete this assessment with honest scores
- Document top 5 processes by time spent × error rate
- Audit data infrastructure gaps
- Set budget and kill criteria

Days 31-60: Pilot
- Select highest-scoring use case (high data readiness + clear ROI)
- Deploy single agent or automation
- Measure daily: time saved, error rate, cost
- Weekly review with stakeholders

Days 61-90: Scale or Kill
- If pilot ROI > 2x: plan 2 more deployments
- If pilot ROI < 1x: diagnose root cause, pivot or kill
- Document learnings regardless of outcome
- Update 3-year roadmap based on reality
- Scoring yourself too high → External validation beats internal optimism
- Ignoring data quality → AI on bad data = faster wrong answers
- Skipping change management → Technical success + team rejection = failure
- No kill criteria → Zombie projects drain budget and credibility
- Buying before understanding → Tool purchases before process documentation = shelfware
- Ignoring security until audit → Retrofitting AI security costs 3-5x more than building it in
- Comparing to tech companies → Your readiness bar is YOUR industry, not Silicon Valley
| Industry | Avg Score | Top Quartile | First AI Win |
| --- | --- | --- | --- |
| Fintech | 62 | 78+ | Fraud detection, KYC |
| Healthcare | 41 | 58+ | Clinical documentation, scheduling |
| Legal | 38 | 52+ | Contract review, research |
| Construction | 29 | 44+ | Safety monitoring, estimation |
| Ecommerce | 58 | 74+ | Personalization, inventory |
| SaaS | 65 | 82+ | Support, onboarding, churn prediction |
| Real Estate | 35 | 48+ | Lead scoring, valuation |
| Recruitment | 45 | 62+ | Screening, outreach |
| Manufacturing | 42 | 56+ | QC, predictive maintenance |
| Professional Services | 48 | 64+ | Proposal generation, time tracking |

- Get your industry-specific context pack ($47) → https://afrexai-cto.github.io/context-packs/
- Calculate your AI revenue leak → https://afrexai-cto.github.io/ai-revenue-calculator/
- Set up your first AI agent → https://afrexai-cto.github.io/agent-setup/

Bundles: Pick 3 for $97 | All 10 for $197 | Everything Pack $247