Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Evaluate SOC 2 report quality using the SOC 2 Quality Guild rubric (Structure, Substance, Source). Use when reviewing a vendor SOC 2 Type 1/Type 2 report, tr...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
This skill was built using the SOC 2 Quality Guild resources at s2guild.org as a baseline for quality-focused SOC 2 vendor attestation reviews. This project was the first GRC agent I wanted to build with OpenClaw after setting it up across multiple environments, including a Raspberry Pi, an Intel NUC, several LXC containers, and a cluster of 3 Mac Studios using EXO. Big thanks to the SOC 2 Quality Guild community for sharing excellent, practical guidance that helped shape this agent.
Author: Simon Tin-Yul Kok
LinkedIn: https://www.linkedin.com/in/simonkok/
GitHub: https://github.com/mangopudding/

Review SOC 2 quality before trusting conclusions.
Do not use this skill for:
- Legal advice or legal conclusions about regulatory compliance.
- Formal certification decisions (this is a quality review aid, not an issuing authority).
- Deep technical penetration testing or exploit validation.
- Historical incident forensics requiring endpoint/network-level evidence collection.
- Vendor contract drafting as a substitute for legal/procurement review.
1. Confirm review profile (audience, risk posture, strictness).
2. Confirm scope.
3. Score all 11 signals.
4. Run S12+ advanced diligence.
5. Summarize critical gaps.
6. Produce decision + follow-up requests.
Before scoring, capture these user-selectable settings:
- Primary audience: Security, Procurement, Customer Trust, or All
- Risk posture: Conservative / Balanced / Lenient
- Data sensitivity baseline: High / Medium / Low
- Evidence strictness: Escalate on Unknown / Conditional acceptance with deadline / Case-by-case
- Output style: Executive memo, Full analyst report, or Both

Default to user-provided settings when available. If not provided, ask once before the final verdict.
Capture:
- Report type: Type 1 or Type 2
- Period covered
- Trust Services Categories in scope
- In-scope system boundary
- Auditor firm + signer
- Qualification status (unqualified / qualified / adverse / disclaimer)

If key sections are missing, stop and request a full report.
Read references/rubric.md and score each signal:
- 2 = strong evidence
- 1 = partial or ambiguous
- 0 = missing, contradictory, or weak

Use a strict standard for Section 4 testing detail and source credibility checks.
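The 0/1/2 scale above can be tallied mechanically once each signal is scored. A minimal sketch in Python; the signal IDs and example scores below are hypothetical illustrations, and the real scoring criteria live in references/rubric.md:

```python
# Tally S1-S11 signal scores on the 0/1/2 scale described above.
# Signal IDs and the example scores are hypothetical illustrations.
scores = {
    "S1": 2,  # 2 = strong evidence
    "S2": 1,  # 1 = partial or ambiguous
    "S7": 0,  # 0 = missing, contradictory, or weak
}

total = sum(scores.values())
max_total = 2 * len(scores)
# Any signal below 2 gets a follow-up request in the scorecard.
weak_signals = [s for s, v in scores.items() if v < 2]

print(f"Score: {total}/{max_total}, follow-ups needed for: {weak_signals}")
```

In practice the dict would hold all eleven signals; the follow-up list feeds directly into the vendor-facing request artifact.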
After S1–S11 scoring, run references/advanced-diligence.md and collect answers for the additional diligence set. Rules:
- Treat S12+ as decision-strengthening checks, not replacements for S1–S11.
- If an answer is unavailable, mark it explicitly as Unknown and create a follow-up request.
- Elevate risk when multiple S12+ items remain Unknown for high-sensitivity data use cases.
Treat these as high-severity findings by default:
- Missing required auditor report structure (S1)
- Missing, incomplete, or unsigned management assertion (S2)
- Unlicensed or unverified CPA firm (S8)
- Pervasive testing vagueness on critical controls (S7)

If one or more hard fails exist, recommend compensating evidence even if the opinion is unqualified.
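The hard-fail rule above reduces to a simple predicate over the scorecard. A sketch, assuming scores are kept in a dict keyed by signal ID and that a score of 0 on one of the listed signals counts as a hard fail (that mapping is an assumption, not stated in the rubric):

```python
# Signals whose failure is treated as high-severity by default.
HARD_FAIL_SIGNALS = {"S1", "S2", "S7", "S8"}

def hard_fails(scores: dict[str, int]) -> list[str]:
    """Return the hard-fail signals that scored 0 (missing or weak).

    A signal absent from the scorecard is treated as 0, i.e. missing.
    """
    return sorted(s for s in HARD_FAIL_SIGNALS if scores.get(s, 0) == 0)

# Even with an unqualified opinion, any hard fail should trigger
# a request for compensating evidence.
if hard_fails({"S1": 2, "S2": 0, "S7": 1, "S8": 2}):
    print("Recommend compensating evidence")
```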
Always return three artifacts.
- Overall confidence: High / Medium / Low (use references/confidence-rubric.md)
- Decision: Accept / Accept with conditions / Escalate / Reject
- Top 3 reasons
List S1–S11 with:
- Score (0/1/2)
- Evidence citation (use references/evidence-citation-format.md)
- Why it matters
- Follow-up request (if score < 2)
Create a vendor-facing request list using references/vendor-request-templates.md:
- Direct evidence needed
- Clarifications required
- Deadline recommendation
- Decision gate (what must be resolved)
- Prioritize evidence quality over report polish.
- Penalize boilerplate language that could apply to any company.
- Penalize weak control-to-criteria logic.
- Penalize mismatch between exceptions and opinion severity.
- Separate auditor credibility concerns from control design concerns.
Use references/decision-matrix.md with the selected risk posture and evidence strictness. Baseline outcomes:
- Accept: no hard fails, most signals strong, no unresolved critical gaps.
- Accept with conditions: limited gaps, clear compensating evidence path.
- Escalate: mixed evidence, source credibility concerns, or unclear testing sufficiency.
- Reject: fundamental structure/source failures or severe unresolved substance failures.
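The baseline outcomes above can be sketched as a decision function. This is an illustrative simplification of references/decision-matrix.md, not a substitute for it; the numeric thresholds are assumptions chosen only to make the mapping concrete:

```python
def baseline_decision(hard_fail_count: int, unresolved_critical_gaps: int,
                      strong_signals: int, total_signals: int = 11) -> str:
    """Map scoring results to the four baseline outcomes.

    Thresholds are illustrative assumptions; the real cutoffs come
    from the decision matrix plus the selected risk posture.
    """
    if hard_fail_count > 0:
        return "Reject"  # fundamental structure/source failures
    if unresolved_critical_gaps == 0 and strong_signals >= total_signals - 1:
        return "Accept"  # most signals strong, no critical gaps
    if unresolved_critical_gaps <= 2:
        return "Accept with conditions"  # limited gaps, compensating path
    return "Escalate"  # mixed evidence or unclear testing sufficiency
```

The selected risk posture would shift these cutoffs: a Conservative posture might route any unresolved critical gap to Escalate, while a Lenient one tolerates more partials.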
Use this exact section order:
1. Executive verdict
2. Signal-by-signal scorecard (S1–S11)
3. Advanced diligence (S12+) findings
4. Critical risks
5. Vendor follow-up questions
6. Interim compensating controls (what your org should do now)

For structure and quality calibration, mirror references/output-example.md.
Apply thresholds using the selected profile:
- High sensitivity (PII/PHI/financial, including candidate resume and employer/company data): require strong minimums on S4/S6/S7/S8 and tighter follow-up deadlines.
- Medium sensitivity: allow limited partials with compensating evidence.
- Low sensitivity: tolerate minor source/substance weaknesses with conditions.

Apply the evidence strictness setting:
- Escalate on Unknown: Unknowns on critical areas force Escalate.
- Conditional acceptance with deadline: permit temporary acceptance only with explicit due dates and owners.
- Case-by-case: weigh Unknowns by control criticality and data sensitivity.
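The strictness settings above can be folded into the same kind of sketch. Assumptions: Unknowns are counted per critical area, the setting names are encoded as snake_case strings, and "Escalate on Unknown" maps directly to the Escalate outcome:

```python
def apply_strictness(decision: str, critical_unknowns: int,
                     strictness: str) -> str:
    """Adjust a baseline decision per the evidence strictness setting.

    Setting names (snake_case) are illustrative encodings of the
    three options described above.
    """
    if critical_unknowns == 0:
        return decision  # nothing unresolved on critical areas
    if strictness == "escalate_on_unknown":
        return "Escalate"  # Unknowns on critical areas force Escalate
    if strictness == "conditional_with_deadline":
        # Temporary acceptance only, with explicit due dates and owners.
        return "Accept with conditions" if decision == "Accept" else decision
    # case_by_case: weigh Unknowns by criticality; left to the analyst.
    return decision
```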