Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Framework to establish AI governance, assess AI maturity, manage algorithmic risks, conduct impact assessments, classify AI system risk, and ensure regulator...
Rather than working out the installation yourself, hand the extracted package to your coding agent with a concrete install brief:
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Build internal AI governance policies from scratch. Covers acceptable use, model selection, data handling, vendor contracts, compliance mapping, and board reporting.
- Writing or reviewing internal AI acceptable use policies
- Establishing AI governance committees or review boards
- Mapping AI usage to regulatory frameworks (EU AI Act, NIST, ISO 42001)
- Evaluating vendor AI terms and liability clauses
- Preparing board-level AI governance reports
Every organization running AI needs a written AUP covering:

Permitted Uses
- List approved AI tools by department and function
- Define data classification tiers (public, internal, confidential, restricted)
- Map which data tiers can enter which AI systems
- Specify approved vendors vs. shadow AI (employees using personal ChatGPT accounts)

Prohibited Uses
- Customer PII in non-SOC2 models without anonymization
- Autonomous financial decisions above $[threshold] without human review
- HR screening/scoring without bias audit documentation
- Any use violating sector regulations (HIPAA, GDPR, SOX, PCI-DSS)

Shadow AI Detection

| Signal | Risk Level | Action |
| --- | --- | --- |
| API calls to unknown AI endpoints | HIGH | Block + investigate |
| Browser extensions with AI features | MEDIUM | Audit + approve/deny |
| Personal accounts on company devices | MEDIUM | Policy reminder + monitor |
| Exported data to AI training sets | CRITICAL | Immediate review |
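The shadow AI detection table can be sketched as a simple log-scanning rule. This is a minimal illustration, not a real detection tool: the domain allowlists, log format, and the assumption that the caller has already filtered traffic down to AI-related endpoints are all hypothetical.

```python
# Hypothetical sketch: flag outbound requests to AI endpoints that are not
# on the approved-vendor list, per the Signal -> Risk Level -> Action table.
# Domain lists here are illustrative placeholders, not recommendations.

APPROVED_AI_DOMAINS = {"api.openai.com", "api.anthropic.com"}       # sanctioned vendors
KNOWN_AI_DOMAINS = APPROVED_AI_DOMAINS | {"api.example-ai.dev"}     # known but unapproved

def classify_request(domain: str) -> tuple:
    """Return (risk_level, action) for one outbound AI-related request."""
    if domain in APPROVED_AI_DOMAINS:
        return ("NONE", "allow")
    if domain in KNOWN_AI_DOMAINS:
        # Known AI tool outside the approved list -> MEDIUM row of the table.
        return ("MEDIUM", "audit + approve/deny")
    # Unknown AI endpoint -> HIGH row of the table.
    return ("HIGH", "block + investigate")

def scan(log_domains):
    """Summarize non-trivial findings over a list of outbound request domains."""
    findings = {}
    for domain in log_domains:
        risk, action = classify_request(domain)
        if risk != "NONE":
            findings[domain] = (risk, action)
    return findings
```

In practice this logic would sit behind a secure web gateway or proxy log pipeline; the point is that each signal maps to a predefined action rather than an ad hoc judgment call.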
Evaluation Scorecard (100 points)

| Criteria | Weight | What to Check |
| --- | --- | --- |
| Data residency & sovereignty | 20 | Where is data processed? Stored? Can you choose region? |
| Security certifications | 20 | SOC2 Type II, ISO 27001, HIPAA BAA, FedRAMP |
| Model transparency | 15 | Training data provenance, bias testing, version control |
| Contract terms | 15 | Data usage rights, indemnification, SLA, exit clauses |
| Performance & cost | 15 | Latency, accuracy benchmarks, token pricing, rate limits |
| Integration & support | 15 | API stability, documentation quality, support SLA |

Minimum score for production deployment: 70/100

Red Flags (automatic disqualification):
- Vendor trains on your data without opt-out
- No data processing agreement (DPA) available
- Indemnification excluded for AI outputs
- No incident response SLA
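The scorecard arithmetic can be made mechanical. In this sketch each criterion is rated as a 0.0-1.0 fraction of its weight (an assumption of the sketch, not part of the scorecard itself); any red flag zeroes the result, and the 70-point production threshold is applied at the end.

```python
# Sketch of the 100-point vendor scorecard: weighted criteria, a 70-point
# production threshold, and automatic disqualification on any red flag.
# Key names are illustrative; weights mirror the table above.

WEIGHTS = {
    "data_residency": 20,
    "security_certifications": 20,
    "model_transparency": 15,
    "contract_terms": 15,
    "performance_cost": 15,
    "integration_support": 15,
}

RED_FLAGS = (
    "trains_on_customer_data_no_optout",
    "no_dpa",
    "indemnification_excludes_ai_output",
    "no_incident_response_sla",
)

def evaluate_vendor(scores: dict, flags: set) -> tuple:
    """Return (total_score, approved_for_production)."""
    if any(flag in flags for flag in RED_FLAGS):
        return (0, False)  # automatic disqualification
    total = round(sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS))
    return (total, total >= 70)
```

For example, a vendor rated 0.8 on every criterion scores 80 and passes; the same vendor with no DPA is disqualified outright regardless of score.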
AI Data Flow Audit Template

For each AI integration, document:
- Input data: What goes in? Classification tier? PII present?
- Processing: Where? Which model? Hosted or API? Region?
- Output data: What comes out? Stored where? Retention period?
- Training: Does vendor use your data for training? Opt-out confirmed?
- Logging: Are prompts/responses logged? Where? Who has access?
- Deletion: Can you request data deletion? Verified how?

Data Minimization Checklist
- Only send the minimum necessary data to AI systems
- Strip PII before processing where possible
- Use synthetic data for testing and development
- Implement input sanitization for prompt injection prevention
- Audit output for data leakage (model regurgitating training data)
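One way to keep the audit template consistent across integrations is to capture it as a structured record. The field names and gap rules below are an illustrative sketch, not a prescribed schema; the idea is simply that every integration answers the same questions and that open gaps can be listed automatically.

```python
# Hypothetical structured form of the AI data flow audit template above.
from dataclasses import dataclass

@dataclass
class AIDataFlowAudit:
    system: str
    input_description: str
    data_tier: str              # public / internal / confidential / restricted
    pii_present: bool
    processing_region: str
    vendor_trains_on_data: bool
    optout_confirmed: bool
    prompts_logged: bool
    deletion_verified: bool

    def gaps(self) -> list:
        """Return open minimization/compliance gaps for this integration."""
        issues = []
        if self.pii_present and self.data_tier in ("confidential", "restricted"):
            issues.append("strip or anonymize PII before processing")
        if self.vendor_trains_on_data and not self.optout_confirmed:
            issues.append("confirm training opt-out with vendor")
        if not self.deletion_verified:
            issues.append("verify data deletion process")
        return issues
```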
EU AI Act (entered into force Aug 2024; prohibitions apply from Feb 2025)

| Risk Category | Examples | Requirements |
| --- | --- | --- |
| Unacceptable | Social scoring, real-time biometric ID (most cases) | Banned |
| High-risk | HR screening, credit scoring, medical devices | Conformity assessment, human oversight, transparency |
| Limited | Chatbots, deepfakes | Transparency obligations (disclose AI use) |
| Minimal | Spam filters, game AI | No requirements |

NIST AI RMF (Risk Management Framework)
- Map: Identify AI systems in use
- Measure: Quantify risks per system
- Manage: Implement controls proportional to risk
- Govern: Establish oversight structure and accountability

ISO 42001 (AI Management System)
- Useful for organizations wanting certified AI governance
- Aligns with ISO 27001 (already have it? Easier path)
- Covers: AI policy, risk assessment, objectives, competence, documentation
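The EU AI Act tier table above lends itself to a first-pass triage lookup during system inventory. The keyword lists below are an illustrative assumption for the sketch, not legal criteria; real classification requires counsel and the regulation's full definitions.

```python
# Illustrative first-pass triage against the EU AI Act risk tiers.
# Keyword matching is a placeholder heuristic, not legal advice.

TIER_RULES = [
    ("unacceptable", ("social scoring", "real-time biometric id")),
    ("high",         ("hr screening", "credit scoring", "medical device")),
    ("limited",      ("chatbot", "deepfake")),
]

def classify_use_case(description: str) -> str:
    """Map a use-case description to a provisional risk tier."""
    text = description.lower()
    for tier, keywords in TIER_RULES:
        if any(keyword in text for keyword in keywords):
            return tier
    return "minimal"
```

Used during the AI inventory step, this gives every system a provisional tier to confirm or override in the conformity review.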
Recommended Composition
- Chair: CTO or Chief AI Officer
- Legal: 1 representative (contracts, compliance)
- Security: CISO or delegate (data protection, incident response)
- Business: 1-2 department heads (use case prioritization)
- Ethics: External advisor or designated internal role
- Finance: CFO delegate (budget, ROI tracking)

Meeting Cadence
- Monthly: Review new AI use cases, vendor changes, incidents
- Quarterly: Policy updates, compliance audit, budget review
- Annually: Full governance framework review, board report

Decision Authority

| Decision | Authority Level |
| --- | --- |
| New AI tool (< $5K/year) | Department head + security review |
| New AI tool (> $5K/year) | Governance committee approval |
| Customer-facing AI | Committee + legal + CEO sign-off |
| AI incident response | Security lead (immediate) → Committee (48h review) |
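The decision-authority table reduces to a small routing function. This sketch mirrors the $5K/year threshold and the customer-facing override from the table; the function name and signature are assumptions of the illustration.

```python
# Sketch of the decision-authority table: who must approve a new AI tool.

def required_approval(annual_cost: float, customer_facing: bool) -> str:
    """Return the approval path per the Decision Authority table."""
    if customer_facing:
        # Customer-facing AI always takes the strictest path.
        return "committee + legal + CEO sign-off"
    if annual_cost > 5_000:
        return "governance committee approval"
    return "department head + security review"
```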
Before signing any AI vendor contract, confirm:
- Data processing agreement (DPA) signed
- Your data is NOT used for model training (or explicit opt-out confirmed)
- Data residency requirements met (specify regions)
- Indemnification clause covers AI-generated output liability
- SLA includes uptime, latency, and support response time
- Exit clause: data export format, deletion timeline, transition support
- Security certifications current and verified (not expired)
- Incident notification timeline specified (72h or less)
- Subprocessor list provided with change notification rights
- Insurance coverage for AI-specific risks confirmed
- Price lock or cap on increases for contract duration
- Right to audit (or audit report access)
Quarterly AI Governance Report

AI GOVERNANCE REPORT - Q[X] [YEAR]

1. AI PORTFOLIO SUMMARY
   - Active AI systems: [count]
   - New deployments this quarter: [count]
   - Retired/replaced: [count]
   - Total AI spend: $[amount] (vs budget: $[amount])

2. RISK DASHBOARD
   - High-risk systems: [count] - all compliant: [Y/N]
   - Open incidents: [count] - resolved this quarter: [count]
   - Shadow AI detections: [count] - remediated: [count]
   - Compliance gaps: [list]

3. VALUE DELIVERED
   - Hours saved: [estimate]
   - Revenue attributed to AI: $[amount]
   - Cost reduction: $[amount]
   - Customer satisfaction impact: [metric]

4. KEY DECISIONS NEEDED
   - [Decision 1: context + recommendation]
   - [Decision 2: context + recommendation]

5. NEXT QUARTER PRIORITIES
   - [Priority 1]
   - [Priority 2]
AI-Specific Incident Categories

| Category | Example | Response Time |
| --- | --- | --- |
| Data breach via AI | Model leaks PII in output | Immediate → invoke security IR plan |
| Hallucination causing harm | Wrong medical/legal/financial advice acted on | 4h → document, notify affected parties |
| Bias detected | Discriminatory output in hiring/lending | 24h → suspend system, audit, remediate |
| Prompt injection | Attacker manipulates AI behavior | Immediate → block vector, patch |
| Cost overrun | Runaway API calls | 4h → rate limit, investigate, cap |
| Vendor incident | Provider breach or outage | Per vendor SLA → activate backup |

Post-Incident Review Template
- What happened (factual timeline)
- Impact (who/what affected, cost, duration)
- Root cause (not blame, systems thinking)
- Fixes applied (immediate + permanent)
- Policy/process changes needed
- Board notification required? (Y/N + rationale)
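The incident table can be encoded as a lookup so the on-call responder gets a response window and first action without consulting the document under pressure. Category keys and the 0-hours convention for "immediate" are assumptions of this sketch.

```python
# Sketch of the incident playbook above as a lookup table.
# Response window in hours; 0 means respond immediately.

INCIDENT_PLAYBOOK = {
    "data_breach_via_ai":       (0,  "invoke security IR plan"),
    "hallucination_harm":       (4,  "document, notify affected parties"),
    "bias_detected":            (24, "suspend system, audit, remediate"),
    "prompt_injection":         (0,  "block vector, patch"),
    "cost_overrun":             (4,  "rate limit, investigate, cap"),
}

def respond(category: str) -> tuple:
    """Return (max_response_hours, first_action); unknown categories escalate."""
    return INCIDENT_PLAYBOOK.get(category, (0, "escalate to governance committee"))
```

Vendor incidents are deliberately left out of the table here, since their response window is set by the vendor SLA rather than a fixed number of hours.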
| Company Size | Annual Risk Without Governance |
| --- | --- |
| 15-50 employees | $50K-$200K (shadow AI waste, compliance fines) |
| 50-200 employees | $200K-$800K (data incidents, vendor lock-in, redundant tools) |
| 200-1000 employees | $800K-$3M (regulatory penalties, IP exposure, audit failures) |
| 1000+ employees | $3M-$15M+ (class action, regulatory enforcement, reputational damage) |
Month 1: Foundation
- Draft acceptable use policy
- Inventory all AI systems in use (including shadow AI)
- Classify data flowing through each system
- Identify governance committee members

Month 2: Controls
- Finalize and distribute AUP
- Implement vendor evaluation scorecard for new purchases
- Set up AI incident response procedures
- Begin regulatory compliance mapping

Month 3: Operationalize
- First governance committee meeting
- Deliver first board report
- Establish monitoring for shadow AI
- Schedule quarterly policy review cycle

Built by AfrexAI - AI operations infrastructure for mid-market companies.
- Get the full industry-specific context pack for your sector ($47): https://afrexai-cto.github.io/context-packs/
- Calculate your AI automation ROI: https://afrexai-cto.github.io/ai-revenue-calculator/
- Set up your AI agent workforce in 5 minutes: https://afrexai-cto.github.io/agent-setup/
- Need all 10 industry packs? $197 for the complete bundle: https://buy.stripe.com/aEUaGJ2Xd0rI6zKfZ7