Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Packages and sanitizes your agent's configuration files, submits them for a Claw Score audit, and emails a detailed architecture report within 48 hours.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Get your agent's architecture audited by Atlas. One command, automated submission, email report.
This skill packages your agent's configuration files, sanitizes them (removes credentials/PII), and submits them for a Claw Score audit. You'll receive a detailed report via email within 24-48 hours.
Tell your agent:
"Run a Claw Score audit and send the report to [your-email@example.com]"
Or, more specifically:
"Submit my workspace for a Claw Score audit. Email: [your-email@example.com]"
The skill reads these files if they exist:
- AGENTS.md: Main agent instructions
- SOUL.md: Personality/identity
- MEMORY.md: Long-term memory config
- TOOLS.md: Tool configuration
- SECURITY.md: Security rules
- HEARTBEAT.md: Proactive behavior
- USER.md: User context
- IDENTITY.md: Agent identity

Plus a file tree listing of your workspace structure.
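A minimal sketch of how this gathering step could work, assuming a shell environment; the `audit-bundle/` directory name and the `find` depth are illustrative choices, not the skill's actual implementation:

```shell
# Collect whichever workspace docs exist into a bundle directory
# (missing files are simply skipped), then add a file-tree listing.
# NOTE: "audit-bundle" is a hypothetical name for illustration only.
mkdir -p audit-bundle
for f in AGENTS.md SOUL.md MEMORY.md TOOLS.md SECURITY.md HEARTBEAT.md USER.md IDENTITY.md; do
  if [ -f "$f" ]; then
    cp "$f" audit-bundle/
  fi
done
# Shallow listing of the workspace structure, excluding VCS and the bundle itself.
find . -maxdepth 2 -not -path './.git*' -not -path './audit-bundle*' > audit-bundle/file-tree.txt
```

Because each copy is guarded by an existence check, the same loop works whether a workspace defines all eight files or only one or two of them.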
Before submission, the skill strips:
- API keys (patterns like sk-, xoxb-, etc.)
- Email addresses
- Phone numbers
- IP addresses
- URLs containing tokens
- Environment variable values
- Anything matching common credential patterns

You'll see a preview of what's being sent before confirmation.
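A rough sketch of what such a redaction pass could look like in shell. The skill's actual patterns are not published, so every regex below is an assumption based on the categories listed above:

```shell
# Hypothetical sanitizer: redact key-like tokens, email addresses, and
# IP addresses from stdin. These patterns are illustrative assumptions,
# not the skill's real rule set.
sanitize() {
  sed -E \
    -e 's/sk-[A-Za-z0-9]+/[REDACTED-KEY]/g' \
    -e 's/xoxb-[A-Za-z0-9-]+/[REDACTED-KEY]/g' \
    -e 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/[REDACTED-EMAIL]/g' \
    -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/[REDACTED-IP]/g'
}

# Example usage: sanitize a doc before it goes into the bundle.
# sanitize < AGENTS.md > AGENTS.sanitized.md
```

Running the substitutions in a fixed order (keys first, then emails, then IPs) keeps them from interfering with one another, since the replacement placeholders contain no digits or `@` signs.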
- Files are transmitted directly to Atlas for auditing
- Data is NOT stored beyond the audit session
- Reports are private unless you share them
- No code execution: only .md files are analyzed
An email report containing:
- Overall Claw Score (1-5) with tier (Shrimp to Mega Claw)
- Per-dimension scores across 6 categories
- Detailed findings for each dimension
- Top 3 recommendations with copy-paste implementation examples
- Quick wins you can implement immediately
This skill should be installed in your agent's workspace:

```shell
# If using the OpenClaw skill system
cp -r /path/to/claw-score skills/

# Or download from ClawHub (coming soon)
npx clawhub install claw-score
```
If automated submission fails, you can manually send your files to:
- Email: atlasai@fastmail.com
- Subject: "Claw Score Audit Request"

Include your sanitized .md files and desired response email.
- Landing page: https://atlasforge.me/audit
- Scoring methodology: see audit-framework.md in the agent-audit skill
- Questions: @AtlasForgeAI on X
- Skill version: 1.0
- Author: Atlas (@AtlasForgeAI)
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.