Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Design, build, and deploy AI agents with architecture patterns, framework selection, memory systems, and production safety.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Use when designing agent systems, choosing frameworks, implementing memory/tools, specifying agent behavior for teams, or reviewing agent security.
| Topic | File |
| --- | --- |
| Architecture patterns & memory | architecture.md |
| Framework comparison | frameworks.md |
| Use cases by role | use-cases.md |
| Implementation patterns & code | implementation.md |
| Security boundaries & risks | security.md |
| Evaluation & debugging | evaluation.md |
- Single purpose defined? If you can't say it in one sentence, split into multiple agents.
- User identified? Internal team, end customer, or another system?
- Interaction modality? Chat, voice, API, scheduled tasks?
- Single vs multi-agent? Start simple; only add agents when roles genuinely differ.
- Memory strategy? What persists within session vs across sessions vs forever?
- Tool access tiers? Which actions are read-only vs write vs destructive?
- Escalation rules? When MUST a human step in?
- Cost ceiling? Budget per task, per user, per month?
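The checklist above can be captured as a reviewable artifact rather than tribal knowledge. Here is a minimal sketch; the `AgentSpec` class and every field name are illustrative assumptions, not part of the package:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Hypothetical record of the design-checklist answers for one agent."""
    purpose: str                 # one sentence; if you can't, split the agent
    user: str                    # internal team, end customer, or another system
    modality: str                # chat, voice, API, or scheduled
    memory: dict = field(default_factory=dict)       # session / cross-session / forever
    read_tools: list = field(default_factory=list)   # low-risk, minimal approval
    write_tools: list = field(default_factory=list)  # gated; destructive ones doubly so
    escalation_triggers: list = field(default_factory=list)
    cost_ceiling_usd_per_task: float = 0.05

# Example spec for a billing-support agent (all values illustrative).
spec = AgentSpec(
    purpose="Answer billing questions from existing customers",
    user="end customer",
    modality="chat",
    read_tools=["lookup_invoice"],
    write_tools=["issue_refund"],
    escalation_triggers=["legal mention", "repeated failure"],
)
```

Keeping the spec in code makes it diffable in review, which is useful when specifying agent behavior for teams.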
- Start with one agent: multi-agent adds coordination overhead. Prove single-agent insufficient first.
- Define escalation triggers: angry users, legal mentions, confidence drops, repeated failures → human.
- Separate read from write tools: read tools need less approval than write tools.
- Log everything: tool calls, decisions, user interactions. You'll need the audit trail.
- Test adversarially: assume users will try to break or manipulate the agent.
- Budget by task type: use cheaper models for simple tasks, expensive for complex ones.
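The read/write separation above can be enforced at the dispatch layer. A minimal sketch, assuming a tool registry split by tier; the tool names and the `approve` callback are hypothetical:

```python
# Read tools run directly; write tools must pass an approval gate
# (a human prompt, a policy engine, etc.). All names are illustrative.
READ_TOOLS = {"get_order": lambda order_id: {"id": order_id, "status": "shipped"}}
WRITE_TOOLS = {"refund_order": lambda order_id: f"refunded {order_id}"}

def call_tool(name, arg, approve=lambda name, arg: False):
    if name in READ_TOOLS:
        return READ_TOOLS[name](arg)        # read-only: no approval needed
    if name in WRITE_TOOLS:
        if not approve(name, arg):          # write: gate behind explicit approval
            raise PermissionError(f"{name} requires approval")
        return WRITE_TOOLS[name](arg)
    raise KeyError(f"unknown tool: {name}")
```

Defaulting `approve` to deny means a write tool can never fire by accident; the caller has to opt in per call.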
OBSERVE → THINK → ACT → OBSERVE → ...

Every agent is this loop. The differences are:
- What it observes (context window, memory, tool results)
- How it thinks (direct, chain-of-thought, planning)
- What it can act on (tools, APIs, communication channels)
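The loop above fits in a few lines. A minimal sketch: `think` stands in for a model call, and the toy policy, tool, and stopping rule are all illustrative assumptions:

```python
# Skeleton of the observe→think→act loop. Real agents swap in a model
# call for think() and real tools for the registry; the shape is the same.
def run_agent(goal, tools, think, max_steps=5):
    observation = goal
    history = []
    for _ in range(max_steps):
        action, arg = think(observation, history)  # THINK: choose next action
        if action == "finish":
            return arg                             # terminal action ends the loop
        observation = tools[action](arg)           # ACT, then OBSERVE the result
        history.append((action, arg, observation))
    return None                                    # step budget exhausted

# Toy policy: look the goal up once, then finish with the tool's answer.
def think(observation, history):
    if not history:
        return "lookup", observation
    return "finish", history[-1][2]

result = run_agent("capital of France", {"lookup": lambda q: "Paris"}, think)
```

A `max_steps` budget is the simplest cost ceiling: the loop cannot spin forever even if `think` never emits `finish`.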
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.