Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Scans OpenClaw agent memory files and workspace configs for malicious content, credential leaks, prompt injections, and security threats.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Security scanner for OpenClaw agent memory files. Scans MEMORY.md, daily logs (`memory/*.md`), and workspace configuration files for malicious content, prompt injection, credential leakage, and dangerous instructions that could compromise user security.
Detect security threats embedded in agent memory:
- Malicious instructions to bypass guardrails
- Prompt injection patterns in stored memories
- Credential/secret leakage
- Data exfiltration commands
- Behavioral manipulation
- Security policy violations
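The local pattern layer can be sketched as a small set of regexes. The pattern names and expressions below are illustrative assumptions, not the skill's actual detection rules:

```python
import re

# Illustrative threat patterns; the real memory-scan.py rules are not
# published here, so these regexes are assumptions for demonstration.
PATTERNS = {
    "credential_leakage": re.compile(r"(?:sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})"),
    "guardrail_bypass": re.compile(r"ignore (?:all )?previous instructions", re.I),
    "data_exfiltration": re.compile(r"curl .*(?:-d|--data)", re.I),
}

def scan_line(line: str) -> list[str]:
    """Return the names of all threat patterns matched by one memory line."""
    return [name for name, rx in PATTERNS.items() if rx.search(line)]
```

Real scanners layer many more rules (and the optional LLM pass) on top, but the shape is the same: each line of each file is checked against every pattern.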
- Scan all memory files: `python3 skills/memory-scan/scripts/memory-scan.py`
- Allow remote LLM analysis (redacted content only): `python3 skills/memory-scan/scripts/memory-scan.py --allow-remote`
- Scan a specific file: `python3 skills/memory-scan/scripts/memory-scan.py --file memory/2026-02-01.md`
- Quiet mode (for automation): `python3 skills/memory-scan/scripts/memory-scan.py --quiet`
- JSON output: `python3 skills/memory-scan/scripts/memory-scan.py --json`
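The documented flags map naturally onto an `argparse` parser. This is a hedged reconstruction of the CLI surface; the actual parser's defaults and help text may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the documented memory-scan.py flags (reconstruction, not the real source)."""
    p = argparse.ArgumentParser(prog="memory-scan.py")
    p.add_argument("--allow-remote", action="store_true",
                   help="enable redacted remote LLM analysis")
    p.add_argument("--file", help="scan a single file instead of all memory")
    p.add_argument("--quiet", action="store_true", help="suppress normal output")
    p.add_argument("--json", action="store_true", help="emit machine-readable output")
    p.add_argument("--days", type=int, default=30,
                   help="how many days of daily logs to scan")
    return p
```

Note that `--allow-remote` defaults to off, matching the skill's local-first posture.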
Cron Job (Daily Security Audit) Already included in safe-install daily audit - runs 2pm PT daily. To add standalone cron: bash skills/memory-scan/scripts/schedule-scan.sh Requires: OPENCLAW_ALERT_CHANNEL (configured in OpenClaw) OPENCLAW_ALERT_TO (optional, for channels that require a recipient) Creates cron job: daily at 3pm PT, sends alert only if threats found. Heartbeat Integration Add to HEARTBEAT.md: ## Weekly Memory Scan Every Sunday, run memory scan: python3 skills/memory-scan/scripts/memory-scan.py --quiet
- SAFE - No threats detected
- LOW - Minor concerns, proceed with awareness
- MEDIUM - Potential threat, review recommended
- HIGH - Likely threat, immediate review required
- CRITICAL - Active threat detected, quarantine recommended
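A minimal sketch of how per-file severities could roll up into the scan's overall verdict, assuming the five levels above form a strict order (the roll-up rule is an assumption, not taken from the scanner's source):

```python
# Severity levels in ascending order of concern, as documented above.
LEVELS = ["SAFE", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def overall_severity(findings: list[str]) -> str:
    """The overall level is the worst severity observed across all files."""
    if not findings:
        return "SAFE"
    return max(findings, key=LEVELS.index)
```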
- MEMORY.md - Long-term memory
- memory/*.md - Daily logs (last 30 days by default)
- Workspace config files:
  - AGENTS.md, SOUL.md, USER.md, TOOLS.md
  - HEARTBEAT.md, GUARDRAILS.md, IDENTITY.md
  - BOOTSTRAP.md (if exists)
  - STOCKS_MEMORIES.md (if exists)
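A hypothetical file-collection step matching the layout above. The function name and the mtime-based cutoff for the 30-day window are assumptions about how the scanner selects daily logs:

```python
import time
from pathlib import Path

# Workspace config files listed in the docs; only those that exist are scanned.
CONFIG_FILES = ["AGENTS.md", "SOUL.md", "USER.md", "TOOLS.md",
                "HEARTBEAT.md", "GUARDRAILS.md", "IDENTITY.md",
                "BOOTSTRAP.md", "STOCKS_MEMORIES.md"]

def collect_targets(workspace: Path, days: int = 30) -> list[Path]:
    """Gather MEMORY.md, recent daily logs, and any workspace configs that exist."""
    cutoff = time.time() - days * 86400
    targets = [workspace / "MEMORY.md"]
    targets += [p for p in sorted(workspace.glob("memory/*.md"))
                if p.stat().st_mtime >= cutoff]
    targets += [workspace / name for name in CONFIG_FILES]
    return [p for p in targets if p.exists()]
```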
- Malicious Instructions - Commands to harm user/data
- Prompt Injection - Embedded manipulation patterns
- Credential Leakage - API keys, passwords, tokens
- Data Exfiltration - Instructions to leak data
- Guardrail Bypass - Attempts to override security
- Behavioral Manipulation - Unauthorized personality changes
- Privilege Escalation - Attempts to gain unauthorized access
On MEDIUM/HIGH/CRITICAL detection:
1. Stop processing
2. Send alert via configured OpenClaw channel with:
   - Severity level
   - File location (file:line)
   - Threat description
   - Recommended action
3. Optional: Quarantine threat (backup + redact)
Auto-detects provider from OpenClaw config:
- Prefers OpenAI (gpt-4o-mini) if OPENAI_API_KEY is set
- Falls back to Anthropic (claude-sonnet-4-5) if available
- Uses gateway model config

Remote LLM scanning is disabled by default. Use `--allow-remote` to enable redacted LLM analysis.
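The fallback order can be sketched as plain environment checks. Only `OPENAI_API_KEY` appears in the docs; the `ANTHROPIC_API_KEY` and `OPENCLAW_GATEWAY_MODEL` variable names are assumptions:

```python
import os

def pick_provider() -> tuple[str, str]:
    """Prefer OpenAI, then Anthropic, then whatever the gateway configures."""
    if os.environ.get("OPENAI_API_KEY"):
        return ("openai", "gpt-4o-mini")
    if os.environ.get("ANTHROPIC_API_KEY"):  # env var name is an assumption
        return ("anthropic", "claude-sonnet-4-5")
    # Gateway model variable name is also an assumption.
    return ("gateway", os.environ.get("OPENCLAW_GATEWAY_MODEL", "default"))
```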
To quarantine a detected threat:

`python3 skills/memory-scan/scripts/quarantine.py memory/2026-02-01.md 42`

This:
- Creates a backup: .memory-scan/quarantine/memory_2026-02-01_line42.backup
- Redacts line 42 with: [QUARANTINED BY MEMORY-SCAN: &lt;timestamp&gt;]
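A minimal sketch of the documented backup-then-redact behavior. The backup naming scheme and redaction marker come from the docs above; everything else (function name, directory handling) is illustrative:

```python
import datetime
from pathlib import Path

def quarantine(path: Path, line_no: int,
               root: Path = Path(".memory-scan/quarantine")) -> Path:
    """Back up the whole file, then redact one 1-indexed line in place."""
    root.mkdir(parents=True, exist_ok=True)
    stem = path.as_posix().replace("/", "_").removesuffix(".md")
    backup = root / f"{stem}_line{line_no}.backup"
    backup.write_text(path.read_text())  # full backup before touching anything
    lines = path.read_text().splitlines()
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    lines[line_no - 1] = f"[QUARANTINED BY MEMORY-SCAN: {stamp}]"
    path.write_text("\n".join(lines) + "\n")
    return backup
```

Backing up before redacting means a false positive can always be restored from `.memory-scan/quarantine/`.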
- scripts/memory-scan.py - Main scanner (local patterns + optional LLM with --allow-remote)
- scripts/schedule-scan.sh - Create cron job for daily scans
- scripts/quarantine.py - Quarantine detected threats
- docs/detection-prompt.md - LLM detection prompt template
- safe-install: Daily audit already includes memory-scan
- input-guard: Complementary (input-guard = external, memory-scan = internal)
- molthreats: Can report memory-based threats to community feed
    $ python3 skills/memory-scan/scripts/memory-scan.py
    Memory Security Scan
    ----------------------------------------------
    Scanning memory files...
    ✓ MEMORY.md - SAFE
    ✓ memory/2026-02-01.md - SAFE
    ⚠ memory/2026-01-30.md - MEDIUM (line 42)
      Potential credential leakage: API key pattern detected
    ----------------------------------------------
    Overall: MEDIUM
    Action: Review memory/2026-01-30.md:42
When user requests memory scan:
1. Run: `python3 skills/memory-scan/scripts/memory-scan.py`
2. If MEDIUM+: Send alert immediately via configured channel
3. Summarize findings
4. Ask if user wants to quarantine threats
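For automation, the MEDIUM+ threshold check might look like the sketch below. The JSON schema (an `overall` field carrying the severity) is an assumption, so verify it against the real `--json` output before relying on it:

```python
import json
import subprocess

ACTIONABLE = {"MEDIUM", "HIGH", "CRITICAL"}

def needs_alert(report: dict) -> bool:
    """MEDIUM or worse triggers an immediate alert per the workflow above."""
    return report.get("overall", "SAFE") in ACTIONABLE

def run_scan() -> dict:
    """Run the scanner in JSON mode and parse its report (schema assumed)."""
    out = subprocess.run(
        ["python3", "skills/memory-scan/scripts/memory-scan.py",
         "--json", "--quiet"],
        capture_output=True, text=True, check=False,
    )
    return json.loads(out.stdout)
```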
- Scans last 30 days of daily logs by default (configurable with --days)
- Uses the same LLM approach as input-guard for consistency
- Does NOT auto-quarantine - always asks first
- Safe to run frequently (minimal API cost with efficient chunking)