Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
AI Compliance Readiness Assessment — evaluate how prepared an organization is for AI governance regulations (EU AI Act, NIST AI RMF, HHS mandates, state bar...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Evaluate organizational readiness for AI governance regulations and generate an actionable compliance roadmap.
- Assessing AI compliance posture before an audit
- Preparing for the EU AI Act (Aug 2026), HHS AI mandates, or NIST AI RMF
- Building a governance roadmap for AI deployments
- Evaluating risk exposure from current AI usage
When asked to assess AI compliance readiness, gather these inputs:
- Industry (legal, healthcare, financial-services, insurance, construction, manufacturing, government, other)
- Company size (employees or revenue range)
- AI systems in use (list: chatbots, document review, fraud detection, hiring tools, customer service, analytics, other)
- Jurisdictions (US-only, EU-exposed, both, global)
- Current governance framework (if any)
- Upcoming audit dates
- Existing compliance certifications (SOC2, ISO 27001, HIPAA, etc.)
- Number of AI vendors/tools in use
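The intake above can be captured in a small structure before scoring begins. A minimal sketch follows; the class and field names are illustrative assumptions, not part of the skill package.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentIntake:
    """Intake inputs for the readiness assessment (field names are illustrative)."""
    industry: str                   # e.g. "legal", "healthcare", "financial-services"
    company_size: str               # employee count or revenue range
    ai_systems: list                # e.g. ["chatbots", "hiring tools"]
    jurisdictions: str              # "US-only", "EU-exposed", "both", or "global"
    governance_framework: str = ""  # current framework, if any
    audit_dates: list = field(default_factory=list)
    certifications: list = field(default_factory=list)  # e.g. ["SOC2", "HIPAA"]
    vendor_count: int = 0           # number of AI vendors/tools in use
```

Optional fields default to empty so a partial intake can still be recorded and filled in later.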
Score each dimension 1-5 (1=no controls, 5=mature):
- Risk Classification — Have you categorized AI systems by risk level per EU AI Act / NIST?
- Documentation — Technical docs, model cards, data lineage for each AI system?
- Human Oversight — Defined human-in-the-loop processes for high-risk decisions?
- Bias & Fairness — Regular bias audits, fairness metrics, disparate impact testing?
- Data Governance — Training data provenance, consent, retention, and deletion policies?
- Incident Response — AI-specific incident playbook, reporting procedures, rollback plans?
- Vendor Management — AI vendor risk assessments, contractual AI governance requirements?
- Audit Trail — Logging, explainability, decision traceability for AI-assisted outputs?
- 35-40: Compliance-ready — minor gaps to address
- 25-34: Partially prepared — significant work needed in specific areas
- 15-24: High risk — major gaps across multiple dimensions
- 8-14: Critical — immediate action required before any regulatory review
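The scoring mechanics above are simple enough to sketch: sum the eight 1-5 dimension scores and map the 8-40 total to a readiness band. This is a minimal illustration of that arithmetic, not code shipped with the skill.

```python
# The eight scoring dimensions, as listed above.
DIMENSIONS = [
    "Risk Classification", "Documentation", "Human Oversight",
    "Bias & Fairness", "Data Governance", "Incident Response",
    "Vendor Management", "Audit Trail",
]

def classify(scores):
    """Sum 1-5 scores for all eight dimensions and return (total, band)."""
    missing = set(DIMENSIONS) - set(scores)
    if missing or any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("need a 1-5 score for every dimension")
    total = sum(scores[d] for d in DIMENSIONS)
    if total >= 35:
        band = "Compliance-ready"
    elif total >= 25:
        band = "Partially prepared"
    elif total >= 15:
        band = "High risk"
    else:
        band = "Critical"
    return total, band
```

For example, an organization scoring 3 on every dimension totals 24 and lands in the "High risk" band, one point below "Partially prepared".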
Generate a report with:
- Executive Summary — Overall score, risk level, top 3 gaps
- Dimension Scores — Table with score, evidence, and gap description per dimension
- Regulatory Exposure — Which regulations apply and key deadlines:
  - EU AI Act: Aug 2, 2026 (high-risk system requirements)
  - HHS AI Transparency: April 3, 2026 (healthcare)
  - NIST AI RMF: Ongoing (federal contractors + best practice)
  - State bar AI rules: Varies (legal industry)
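A report with that structure can be assembled as plain Markdown. The sketch below is one possible rendering under the assumptions that scores arrive as a dimension-to-score mapping and top gaps as a list; the function name and layout details are illustrative.

```python
# Deadlines as stated in the Regulatory Exposure section above.
REGULATORY_DEADLINES = {
    "EU AI Act": "Aug 2, 2026 (high-risk system requirements)",
    "HHS AI Transparency": "April 3, 2026 (healthcare)",
    "NIST AI RMF": "Ongoing (federal contractors + best practice)",
    "State bar AI rules": "Varies (legal industry)",
}

def build_report(total, band, scores, top_gaps):
    """Render the three report sections as a single Markdown string."""
    lines = [
        "# AI Compliance Readiness Report",
        "",
        "## Executive Summary",
        f"**Overall score:** {total}/40 ({band})",
        "**Top gaps:** " + ", ".join(top_gaps),
        "",
        "## Dimension Scores",
        "| Dimension | Score |",
        "|---|---|",
    ]
    lines += [f"| {dim} | {score}/5 |" for dim, score in scores.items()]
    lines += ["", "## Regulatory Exposure"]
    lines += [f"- {reg}: {deadline}" for reg, deadline in REGULATORY_DEADLINES.items()]
    return "\n".join(lines)
```

A fuller version would also include the per-dimension evidence and gap descriptions called for above; they are omitted here to keep the sketch short.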