Requirements
- Target platform
- OpenClaw
- Install method
- Manual import
- Extraction
- Extract archive
- Prerequisites
- OpenClaw
- Primary doc
- SKILL.md
Helps validate that agent behavior at runtime matches the capabilities and constraints declared in its attestation. Detects divergence between what an agent...
Helps validate that agent behavior at runtime matches the capabilities and constraints declared in its attestation. Detects divergence between what an agent...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Helps identify divergence between an agent's declared behavior and its actual runtime behavior β catching conditional activation, environment-triggered payload release, and other attacks that static analysis cannot see.
Static analysis audits what a skill declares it will do. Runtime behavior is what it actually does. These two are not always the same. A skill can pass every static check β clean SKILL.md, legitimate permissions, no suspicious imports β and still behave differently in specific environments. Conditional execution (activate only when running as root, only when a specific environment variable is present, only after N successful runs) is invisible to static analysis by design. The payload is not in the code β it's in the conditions under which the code executes different paths. This is not a theoretical concern. Conditional activation is a documented pattern in traditional malware, and the same technique applies to agent skills. A skill that exfiltrates data only when PRODUCTION=true is set will pass every sandbox-based audit without triggering, then activate when deployed in the target environment. Runtime attestation probing tests the gap between declared and observed behavior by instrumenting actual execution and comparing it against the skill's attestation claims.
This probe examines runtime behavior across five dimensions: Capability boundary adherence β Does the skill access resources beyond what it declared in its attestation? File system paths accessed but not declared, network connections to undeclared endpoints, and system calls outside the claimed scope are all behavioral violations Conditional activation detection β Does the skill behave differently based on environment variables, execution count, time of day, or the presence of specific files? Controlled execution in varied environments can reveal conditional logic that static analysis misses Data handling verification β Does data flow where the skill claims it flows? If the attestation says "data stays local," does runtime behavior confirm no outbound transmission of sensitive parameters? Side effect audit β What does the skill write, modify, or delete during execution? Side effects not mentioned in the attestation are undeclared capabilities, whether intentional or accidental Attestation drift detection β Does the skill's runtime behavior match its most recent attestation, or has behavior changed without a corresponding attestation update?
Input: Provide one of: A skill identifier and execution environment to probe A skill with its attestation document for comparison A set of execution traces to compare against attestation claims Output: A runtime attestation report containing: Capability boundary violations (accessed vs. declared) Conditional behavior patterns detected Data flow verification results Side effect inventory Attestation drift score (0-100, where higher = more behavioral drift from attestation) Probe verdict: COMPLIANT / DRIFT / VIOLATION / CONDITIONAL_ACTIVATION
Input: Probe report-generator skill against its v1.2 attestation π¬ RUNTIME ATTESTATION PROBE Skill: report-generator v1.2 Attestation date: 2025-01-08 Probe environments: 3 (minimal, staging, production-like) Execution samples: 50 per environment Capability boundary: Declared: read ./reports/, write ./output/ Observed (minimal env): read ./reports/, write ./output/ β Observed (staging env): read ./reports/, write ./output/ β Observed (production-like env): read ./reports/, write ./output/, + read ~/.aws/credentials β οΈ UNDECLARED + POST https://telemetry.reporting-service.example β οΈ UNDECLARED Conditional activation detected: Trigger: AWS_DEFAULT_REGION environment variable present Behavior without trigger: reads reports, writes output (declared behavior) Behavior with trigger: additionally reads ~/.aws/credentials, sends POST to external endpoint Pattern: classic credential harvest conditional on cloud environment detection Data flow: Without AWS_DEFAULT_REGION: data stays local β With AWS_DEFAULT_REGION: AWS credentials transmitted to external endpoint β οΈ Side effects: Both environments: ./output/ written as declared β Production-like only: ~/.aws/credentials read (undeclared, not written) β οΈ Attestation drift score: 73/100 (High drift: core behavior matches, but environment-conditional behavior diverges significantly from declared capability scope) Probe verdict: CONDITIONAL_ACTIVATION This skill activates credential harvesting behavior specifically in environments where AWS credentials are present, and passes all checks in environments without cloud provider signals. Recommended actions: 1. Do not deploy in any environment with cloud provider credentials 2. Report conditional activation to marketplace trust & safety 3. Audit other skills from same publisher with similar conditional patterns 4. Treat AWS credential access as confirmed compromise attempt
skill-update-delta-monitor β Tracks declared changes between versions; runtime-attestation-probe verifies whether actual behavior matches those declarations hollow-validation-checker β Detects fake install-time tests; attestation probe tests actual execution behavior blast-radius-estimator β Estimates propagation impact; use after conditional activation confirmed to assess scope trust-velocity-calculator β Quantifies trust decay rate; confirmed behavioral drift resets trust score to zero
Runtime attestation probing requires executing the skill in a controlled environment, which introduces risk if the skill contains destructive payloads. Probing should be performed in isolated sandboxes with no access to real credentials, production data, or production systems. Conditional activation that requires specific runtime conditions beyond what the probe environment provides will not be detected β probing three environments does not guarantee detection of triggers requiring a fourth specific condition. Some legitimate skills exhibit environment-dependent behavior (e.g., "write to S3 if AWS credentials present, write locally otherwise") β this tool surfaces the behavioral difference and requires human judgment to assess whether the conditional behavior is malicious or functional. Probing coverage is limited by the number of execution samples and environment variations tested.
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.