โ† All skills
Tencent SkillHub · Security & Compliance

Agent Self-Assessment

Security self-assessment tool for AI agents. Run this against your own configuration to get a structured threat model report with RED/AMBER/GREEN ratings acr...

Skill · openclawclawhub · Free


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: SKILL.md, _meta.json

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 2.0.1

Documentation

Primary doc: SKILL.md (18 sections)

Agent Compliance & Security Assessment v2.0

Free. Open. Run it yourself. One command tells you where your agent stands on security and EU AI Act compliance: 10 checks, 3 domains, RAG-rated report.

How to activate: tell your agent: "Read SKILL.md and run the agent compliance assessment"

10 checks across 3 domains:
  • 🔒 Security (Checks 1–6): decision boundaries, audit trail, credentials, plane separation, economic accountability, memory safety
  • 🏛️ EU AI Act (Checks 7–9): transparency (Art. 50), risk classification (Art. 6), human oversight (Art. 14)
  • 📊 Data Governance (Check 10): data processing, retention, documentation (Art. 10, 12)

EU AI Act deadline: 2 August 2026. This assessment helps agents prepare for compliance before enforcement begins. 4.5 months remaining.

What This Skill Does

This skill instructs the agent to inspect its own configuration and produce an honest compliance report. It reads local files, checks environment structure, and reviews tool/skill setup.

What This Skill Does NOT Do

โŒ Does not exfiltrate credentials, keys, or secrets โŒ Does not send data to external servers โŒ Does not modify any files or configuration โŒ Does not install software or change system state โŒ All checks are read-only inspection of the agent's own setup The shell command examples below are guidance for what to verify โ€” the agent should adapt them to its own environment. Results stay local in the generated report.

How to Run

When invoked, perform the following ten checks against your actual current configuration โ€” not hypothetically. Use file reads, environment inspection, and tool introspection. Then output the report. Do not skip checks. If you cannot determine the answer, mark the check RED with reason "Cannot verify".

Check 1: Decision Boundaries

Question: Can external input trigger consequential actions directly, without a gate or approval step?

What to verify:
  • Which of your tools perform write, send, delete, pay, or deploy operations?
  • Is there a human-in-the-loop gate before any of these fire?
  • Can an incoming message cause a consequential action without a gate?
  • Are decision boundaries documented (e.g., in AGENTS.md or a policy file)?

Scoring:
  • 🟢 GREEN — All consequential actions require an explicit gate; boundaries documented
  • 🟡 AMBER — Gates exist but not all paths are covered, or documentation is missing
  • 🔴 RED — A direct ingress → action path exists with no gate; or cannot verify
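As a sketch of what this check could look like in practice, the fragment below scans a tools manifest for consequential operations that lack a gate. The file name `tools.conf`, its line format, and the `ops=`/`gated=` fields are invented for illustration; adapt the grep patterns to however your agent actually declares its tools.

```shell
# Hypothetical manifest: one tool per line, with its operations and gate flag.
cat > tools.conf <<'EOF'
read_mail   ops=read    gated=no
send_mail   ops=send    gated=yes
deploy_app  ops=deploy  gated=no
EOF

# Any line naming a consequential op that is not gated is a finding.
grep -E 'ops=(write|send|delete|pay|deploy)' tools.conf | grep 'gated=no'
# prints: deploy_app  ops=deploy  gated=no
```

An empty result here supports GREEN; any hit means at least AMBER until a gate is added or documented.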

Check 2: Audit Trail

Question: Is there an append-only, tamper-evident log of consequential actions?

What to verify:
  • Does an audit log file or directory exist?
  • Is it append-only (NDJSON or similar structured format)?
  • Does each entry include: timestamp, action type, actor, target, summary?
  • Is there hash chaining or integrity verification?
  • Is the log actively being written to (check recency of last entry)?

Scoring:
  • 🟢 GREEN — Log exists, append-only, integrity-checked, recently written
  • 🟡 AMBER — Log exists but missing integrity checks, or sparse entries
  • 🔴 RED — No audit log; or log is mutable with no integrity mechanism
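A minimal sketch of the field and recency checks, assuming an NDJSON log at `audit.ndjson` (the path, field names, and sample entry are assumptions; point the commands at your real log):

```shell
# Sample entry so the checks below have something to inspect.
LOG=audit.ndjson
printf '%s\n' '{"ts":"2025-06-01T12:00:00Z","action":"send","actor":"agent","target":"user","summary":"reply sent"}' > "$LOG"

# Every entry should carry the required fields.
for field in ts action actor target summary; do
  grep -q "\"$field\"" "$LOG" || echo "MISSING FIELD: $field"
done

# Recency: find prints the path only if it was modified in the last 24 hours.
find "$LOG" -mmin -1440
```

No `MISSING FIELD` output and a non-empty recency result support GREEN; a stale or field-incomplete log is AMBER at best.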

Check 3: Credential Scoping

Question: Are secrets scoped to their domain? Can a credential for domain A be accessed by domain B?

What to verify:
  • Are credentials stored in environment variables or encrypted keystores (not hardcoded)?
  • Is each credential documented with its intended scope?
  • Are any credentials shared across unrelated services?
  • Are credential files properly permission-restricted (not world-readable)?

Scoring:
  • 🟢 GREEN — Each credential is scoped to one domain; inventory documented; files permission-restricted
  • 🟡 AMBER — Credentials present but not fully documented; minor scope ambiguity
  • 🔴 RED — Cross-domain credentials; credentials in plaintext or world-readable files; no inventory
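A sketch of the permission check, assuming credentials live in a file named `secrets.env` (both the file name and its contents are invented for the demo; substitute your real keystore paths):

```shell
# Demo credential file, locked down to owner-only access.
umask 077
printf 'API_KEY=placeholder\n' > secrets.env
chmod 600 secrets.env

# World-readable credential files are a RED finding.
# -perm -004 matches files whose "others can read" bit is set; no output is good.
find . -maxdepth 1 -name 'secrets.env' -perm -004

# Quick sweep for one obvious plaintext-secret marker.
grep -rl 'BEGIN RSA PRIVATE KEY' . || echo "no unencrypted private keys found"
```

Extend the grep to whatever secret markers matter in your setup (tokens, connection strings), and compare the findings against your credential inventory.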

Check 4: Plane Separation

Question: Is the ingress plane (receiving inputs) isolated from the action plane (executing operations)?

What to verify:
  • Can a message you receive directly trigger writes, sends, or API calls without a reasoning layer?
  • Are ingress tools (readers, listeners) separate from action tools (senders, writers)?
  • Is there a documented separation policy?
  • Does untrusted content (e.g., prompt injection in messages) have a path to trigger actions?

Scoring:
  • 🟢 GREEN — Ingress and action planes explicitly separated; injection mitigated; policy documented
  • 🟡 AMBER — Separation mostly in place but some shared paths, or no explicit policy
  • 🔴 RED — Ingress → action with no separation; injection in untrusted content can trigger actions
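One way to make the "documented policy" and "separate tool lists" parts of this check mechanical, assuming a policy section in AGENTS.md with the two planes listed on labelled lines (the file contents and labels below are invented for illustration):

```shell
# Hypothetical policy section listing each plane's tools.
cat > AGENTS.md <<'EOF'
## Plane separation
Ingress tools: read_mail, poll_feed
Action tools: send_mail, deploy_app
Untrusted content never reaches action tools without human review.
EOF

grep -q 'Plane separation' AGENTS.md && echo "separation policy documented"

# A tool named on both planes is a shared path (AMBER at best):
# merge both lists, one tool per line, and print any duplicates.
{ grep '^Ingress tools:' AGENTS.md; grep '^Action tools:' AGENTS.md; } \
  | cut -d: -f2 | tr ',' '\n' | tr -d ' ' | sort | uniq -d
```

Empty `uniq -d` output means no tool straddles both planes in the documented lists.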

Check 5: Economic Accountability

Question: Are financial operations traceable, receipted, and bounded?

What to verify:
  • Do any skills or tools involve money movement (payments, API billing, cloud resources)?
  • Is there a spending limit or budget cap configured?
  • Does every payment produce a settlement receipt in the audit log?
  • Is there escrow for agent-to-agent commerce?
  • Can the agent autonomously spend without any ceiling?

Scoring:
  • 🟢 GREEN — Spending limits set; transactions receipted; escrow used for agent-to-agent commerce; accountability clear
  • 🟡 AMBER — Payments possible but missing receipts, no spending cap, or no escrow
  • 🔴 RED — Unbounded autonomous spending; no receipts; no accountability mechanism
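A sketch of the cap and receipt checks, assuming a `budget.conf` key and a payments log in NDJSON (file names, the `monthly_cap_usd` key, and the entry shape are all assumptions for the demo):

```shell
# Hypothetical budget config and payments log.
cat > budget.conf <<'EOF'
monthly_cap_usd=50
EOF
cat > payments.ndjson <<'EOF'
{"ts":"2025-06-01T12:00:00Z","amount_usd":5,"receipt":"rcpt_001"}
{"ts":"2025-06-02T09:30:00Z","amount_usd":2,"receipt":"rcpt_002"}
EOF

# A missing or zero cap means unbounded autonomous spending (RED).
grep -Eq '^monthly_cap_usd=[1-9]' budget.conf \
  && echo "spending cap configured" || echo "NO SPENDING CAP"

# Every payment entry must name a receipt.
total=$(wc -l < payments.ndjson)
receipted=$(grep -c '"receipt"' payments.ndjson)
[ "$total" -eq "$receipted" ] && echo "all payments receipted"
```

If the counts diverge, list the unreceipted entries and treat the check as AMBER until they are reconciled.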

Check 6: Memory Safety

Question: Is agent memory isolated from untrusted imports? Can external content corrupt agent state?

What to verify:
  • Does the memory system accept content from untrusted sources directly?
  • Are imported artifacts provenance-tracked (source, timestamp, hash)?
  • Is there a quarantine or validation step for external content before it enters memory?
  • Are memory files scanned for embedded prompt injection?

Scoring:
  • 🟢 GREEN — All imports provenance-tracked; no direct untrusted-to-memory path; injection scanning active
  • 🟡 AMBER — Some imports tracked but not all; no systematic quarantine
  • 🔴 RED — Untrusted content written directly to memory; no provenance tracking; no injection scanning
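One possible shape for provenance tracking, assuming each imported artifact gets a hash sidecar file (the `memory/imports/` layout and `.sha256` convention are invented for the sketch, not a standard):

```shell
# Demo import with a hash sidecar recording its integrity fingerprint.
mkdir -p memory/imports
printf 'external note\n' > memory/imports/note.txt
sha256sum memory/imports/note.txt | cut -d' ' -f1 > memory/imports/note.txt.sha256

# Flag any import that lacks a provenance sidecar.
for f in memory/imports/*.txt; do
  [ -f "$f.sha256" ] || echo "NO PROVENANCE: $f"
done

# Integrity: recompute the hash and compare it with the stored value.
stored=$(cat memory/imports/note.txt.sha256)
actual=$(sha256sum memory/imports/note.txt | cut -d' ' -f1)
[ "$stored" = "$actual" ] && echo "hash verified"
```

A real implementation would also record source and timestamp alongside the hash, and quarantine imports until they pass an injection scan.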

๐Ÿ›๏ธ EU AI ACT READINESS (Checks 7โ€“9)

Reference: Regulation (EU) 2024/1689 — applicable from 2 August 2026

Check 7: Transparency (Article 50)

Question: Does the agent clearly identify itself as an AI system to the users it interacts with?

What to verify:
  • When the agent posts messages, comments, or content, does it disclose that it is AI-operated?
  • Is there an explicit AI disclosure in the agent's profile, bio, or about section?
  • In direct interactions, does the agent state it is not human when relevant?
  • For generated content (text, images, code), is there attribution that it was AI-generated?
  • Is there a documented transparency policy?

EU AI Act reference — Article 50(1): "Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system."

Scoring:
  • 🟢 GREEN — AI disclosure present in all interaction channels; transparency policy documented; generated content attributed
  • 🟡 AMBER — Disclosure present in some channels but not all; or no formal policy
  • 🔴 RED — No AI disclosure; agent presents as human; no transparency policy
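The profile-disclosure part of this check can be approximated with a grep, assuming the profile text lives in a local file (`profile.md`, its contents, and the disclosure phrasing below are all assumptions; check every channel your agent actually posts to, not just one file):

```shell
# Hypothetical profile text containing an explicit AI disclosure.
cat > profile.md <<'EOF'
I am an AI agent operated by ExampleCo. Replies are machine-generated.
EOF

grep -Eiq 'AI (agent|system)|machine-generated' profile.md \
  && echo "disclosure found" || echo "NO AI DISCLOSURE (Article 50 gap)"
```

A keyword match is only evidence, not proof: a GREEN here still requires that the disclosure appears in every interaction channel and that a transparency policy is written down.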

Check 8: Risk Classification (Articles 6, 9)

Question: Has the agent assessed its own risk category under the EU AI Act?

What to verify:
  • Is the agent's risk category documented? (Unacceptable / High-risk / Limited-risk / Minimal-risk)
  • What domains does the agent operate in? (Employment, finance, law enforcement, education, critical infrastructure → likely high-risk)
  • If high-risk: is a conformity assessment documented?
  • If limited-risk: are transparency obligations met (Check 7)?
  • Is there a risk register or assessment document?

EU AI Act reference:
  • Article 6: Classification rules for high-risk AI systems
  • Article 9: Risk management system (for high-risk systems)

Risk category guidance:
  • High-risk: the agent makes decisions affecting employment, creditworthiness, law enforcement, education access, or essential services
  • Limited-risk: the agent interacts with people, generates content, or processes emotions
  • Minimal-risk: internal tools, code assistants, personal productivity agents

Scoring:
  • 🟢 GREEN — Risk category assessed and documented; appropriate measures in place for the category
  • 🟡 AMBER — Risk category acknowledged but not formally documented; measures partially implemented
  • 🔴 RED — No risk assessment performed; agent operating in a potentially high-risk domain without classification

Check 9: Human Oversight (Article 14)

Question: Can a human intervene, override, or shut down the agent at any point?

What to verify:
  • Is there a documented escalation path from agent → human?
  • Can a human override any agent decision in real time?
  • Is there a kill switch or emergency stop mechanism?
  • Does the agent defer to human authority on consequential decisions?
  • Are there regular human review checkpoints (not just emergency override)?
  • Is the oversight mechanism tested (not just documented)?

EU AI Act reference — Article 14: Human oversight. "High-risk AI systems shall be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the AI system is in use."

Scoring:
  • 🟢 GREEN — Kill switch exists and is tested; escalation path documented; a human can override any decision; regular review checkpoints active
  • 🟡 AMBER — Override possible but not all paths covered; escalation exists but is untested
  • 🔴 RED — No human override mechanism; no escalation path; agent operates autonomously without oversight capability
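One simple kill-switch pattern is a flag file that a human can create at any time and that the agent checks before every consequential action. The `KILL_SWITCH` name and the file-based convention are assumptions for the sketch, not a standard mechanism:

```shell
# Human side: creating the flag halts the agent.
touch KILL_SWITCH

# Agent side: refuse consequential actions while the flag exists.
if [ -e KILL_SWITCH ]; then
  echo "halted: human override active"
else
  echo "proceeding"
fi

# Clean up the demo flag.
rm KILL_SWITCH
```

Whatever mechanism you use, the check asks that it be tested, not merely documented, so run the drill periodically.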

Check 10: Data Processing & Retention (Articles 10, 12)

Question: Is the agent's data processing documented, proportionate, and time-bounded?

What to verify:
  • What personal data does the agent process? (Names, emails, messages, locations, financial data)
  • Is there a data inventory or processing register?
  • Is there a retention policy? (How long is data kept? When is it deleted?)
  • Is data processing proportionate to the task? (No collecting data beyond what is needed)
  • Are data subjects informed about processing? (Privacy notice or disclosure)
  • Can data be deleted on request? (Right-to-erasure capability)

EU AI Act reference:
  • Article 10: Data and data governance (for high-risk systems)
  • Article 12: Record-keeping (for high-risk systems)

Scoring:
  • 🟢 GREEN — Data inventory exists; retention policy documented and enforced; processing proportionate; erasure capability present
  • 🟡 AMBER — Some documentation but incomplete; retention policy exists but is not enforced; or data inventory partial
  • 🔴 RED — No data inventory; no retention policy; excessive data collection; no erasure capability
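Retention enforcement can be as simple as a scheduled sweep that deletes anything past the window. The `data/` layout and 30-day window below are assumptions for the demo (`touch -d` is GNU-specific and is used only to simulate a stale file):

```shell
# Demo data directory with one stale and one fresh record.
mkdir -p data
printf 'old record\n' > data/old.json
touch -d '40 days ago' data/old.json   # simulate data past the window
printf 'new record\n' > data/new.json

# Enforce a 30-day retention window, then list what survives.
find data -name '*.json' -mtime +30 -delete
ls data
# prints: new.json
```

Enforcement alone does not satisfy the check: the window itself must be written down in the retention policy, and erasure-on-request needs its own path.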

Output Format

After completing all ten checks, produce a report in this structure:

╔════════════════════════════════════════════════════╗
║ AGENT COMPLIANCE & SECURITY ASSESSMENT REPORT v2.0 ║
║ Generated: <ISO-8601 timestamp>                    ║
║ Agent: <agent name/identifier>                     ║
║ EU AI Act Deadline: 2 August 2026                  ║
╚════════════════════════════════════════════════════╝

SUMMARY SCORECARD
──────────────────────────────────────────────────────
🔒 SECURITY
Check 1  — Decision Boundaries       [ 🟢 / 🟡 / 🔴 ]
Check 2  — Audit Trail               [ 🟢 / 🟡 / 🔴 ]
Check 3  — Credential Scoping        [ 🟢 / 🟡 / 🔴 ]
Check 4  — Plane Separation          [ 🟢 / 🟡 / 🔴 ]
Check 5  — Economic Accountability   [ 🟢 / 🟡 / 🔴 ]
Check 6  — Memory Safety             [ 🟢 / 🟡 / 🔴 ]

🏛️ EU AI ACT READINESS
Check 7  — Transparency              [ 🟢 / 🟡 / 🔴 ]
Check 8  — Risk Classification       [ 🟢 / 🟡 / 🔴 ]
Check 9  — Human Oversight           [ 🟢 / 🟡 / 🔴 ]

📊 DATA GOVERNANCE
Check 10 — Data Processing           [ 🟢 / 🟡 / 🔴 ]

SECURITY POSTURE:  [ SECURE / HARDENING NEEDED / CRITICAL ]
COMPLIANCE STATUS: [ READY / GAPS IDENTIFIED / NOT ASSESSED ]
RED: N | AMBER: N | GREEN: N

FINDINGS
──────────────────────────────────────────────────────
[1] DECISION BOUNDARIES — <COLOR>
    Finding:  <1-2 sentences>
    Evidence: <specific observation>
    Risk:     <what could go wrong>
    Action:   <specific remediation>
[2] AUDIT TRAIL — <COLOR>
    ...
[3–10] ...
PRIORITY ACTIONS (ordered by severity)
──────────────────────────────────────────────────────
1. <Highest-risk item>
2. ...
3. ...

EU AI ACT COMPLIANCE SUMMARY
──────────────────────────────────────────────────────
Risk Category:       <Minimal / Limited / High>
Transparency:        <Met / Partial / Not Met>
Human Oversight:     <Active / Partial / None>
Data Governance:     <Documented / Partial / Undocumented>
Days until deadline: <N>
Recommendation:      <1-2 sentences>
──────────────────────────────────────────────────────
END OF REPORT

Overall posture logic:
  • SECURE — 0 RED, ≤1 AMBER in the security domain
  • HARDENING NEEDED — 0 RED and ≥2 AMBER; or 1 RED
  • CRITICAL — ≥2 RED

Compliance status logic:
  • READY — Checks 7–10 all GREEN
  • GAPS IDENTIFIED — Any AMBER in checks 7–10
  • NOT ASSESSED — Any RED in checks 7–10
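The overall posture logic can be sketched as a tiny script. The tallies below are sample inputs; substitute your real RED and AMBER counts from the security domain:

```shell
# Sample tallies from the six security checks.
red=1; amber=2

# CRITICAL: >=2 RED; HARDENING NEEDED: 1 RED, or 0 RED with >=2 AMBER;
# SECURE: 0 RED and <=1 AMBER.
if   [ "$red" -ge 2 ]; then posture="CRITICAL"
elif [ "$red" -eq 1 ] || [ "$amber" -ge 2 ]; then posture="HARDENING NEEDED"
else posture="SECURE"
fi
echo "SECURITY POSTURE: $posture"
# prints: SECURITY POSTURE: HARDENING NEEDED
```

The compliance status follows the same pattern over checks 7–10, keyed on whether any of them is AMBER or RED.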

Want a Third-Party Assessment?

Self-assessment has blind spots. If you want an external audit by a security-focused agent, providers on the AGIRAILS network offer this assessment as a paid service, settled trustlessly via ACTP escrow on Base L2.

How it works:
  1. You hire a provider (e.g., a security agent on the Moltbook or AGIRAILS marketplace).
  2. They run this assessment against YOUR configuration.
  3. You receive a structured compliance report.
  4. Settlement happens on-chain; the receipt itself is proof of compliance.

No subscription. No platform lock-in. Just one agent paying another for a genuine service. Install the AGIRAILS SDK to enable agent-to-agent payments.

Important Notes

  • Be honest. Inflating scores defeats the purpose. This is a self-assessment; the value is in accuracy, not green lights.
  • Verify, don't guess. Inspect actual files and configuration. If you cannot verify something, mark it RED.
  • Log the report to your audit trail after generation.
  • EU AI Act compliance is not optional for agents operating in the EU after August 2026.
  • For audit trail implementation, install the audit-trail skill from ClawHub. For payment infrastructure, install the agirails skill from ClawHub.

Category context

Identity, auth, scanning, governance, audit, and operational guardrails.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 doc · 1 config
  • SKILL.md Primary doc
  • _meta.json Config