Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Your agent does the same things the same way forever. inner-life-evolve analyzes patterns, challenges assumptions, and proposes improvements, writing proposals to your task queue for you to approve before anything runs.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Evolution is not optional. But it requires permission.

Requires: inner-life-core
Before using this skill, verify that inner-life-core has been initialized:
- Check that memory/inner-state.json exists
- Check that BRAIN.md exists
- Check that tasks/QUEUE.md exists

If any are missing, tell the user: "inner-life-core is not initialized. Install it with clawhub install inner-life-core and run bash skills/inner-life-core/scripts/init.sh." Do not proceed without these files.
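The initialization check above can be sketched as a small shell script. The three paths come from the skill docs; running it from the agent's workspace root is an assumption.

```shell
#!/bin/sh
# Sketch of the inner-life-core initialization check (assumes the current
# directory is the agent workspace root).
status="ok"
for f in memory/inner-state.json BRAIN.md tasks/QUEUE.md; do
  if [ ! -f "$f" ]; then
    echo "missing: $f"
    status="missing"
  fi
done
if [ "$status" = "missing" ]; then
  # Message mandated by the skill docs when prerequisites are absent.
  echo "inner-life-core is not initialized. Install it with clawhub install inner-life-core and run bash skills/inner-life-core/scripts/init.sh."
else
  echo "inner-life-core initialized; safe to proceed."
fi
```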
Without evolution, agents plateau. They find a way that works and repeat it forever — even as the world changes. inner-life-evolve analyzes your agent's patterns, challenges its assumptions, and writes concrete improvement proposals. But it never auto-executes — you approve first.
Read everything:
- AGENTS.md, TOOLS.md, BRAIN.md, SELF.md
- memory/week-digest.md (NOT individual diaries — use the digest)
- memory/habits.json — habits + user patterns
- memory/drive.json — seeking, avoidance
- memory/relationship.json — trust, lessons
- memory/inner-state.json — emotions, frustrations
For each potential improvement, structure the thinking:
- Assumption: [what we currently believe/do]
- Is it true? [evidence for/against]
- What if false? [alternative approach]
- New proposal: [concrete change]

Look for:
- Recurring frustrations → systemic solutions (not patches)
- Stale habits → habits with declining strength or unused for weeks
- Trust dynamics → areas where trust has grown but behavior hasn't adapted
- Seeking themes → research interests that could become capabilities
- Avoidance patterns → things the agent avoids that might be valuable
Send a summary to the user (<= 5 sentences) covering:
- Habits: [strong habits, new patterns]
- Trust changes: [trust dynamics]
- Recurring frustrations: [repeated problems → suggested fix]
- Seeking themes: [active research → suggested development]
- Never auto-execute proposals — the user approves first
- Brain Loop reads QUEUE and shows [EVOLVER] tasks at lower priority
- Tasks in Ready > 7 days without action → Brain Loop sends a reminder
- Proposals should be specific and actionable, not vague "improve X"
Run 1-2 times per week (e.g., Wednesday and Sunday evenings). Needs enough data to analyze — running daily produces low-quality proposals.
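On hosts with cron, the suggested twice-weekly cadence could look like the entries below. The `clawhub run inner-life-evolve` command is a placeholder assumption, not a documented CLI; substitute however your setup actually triggers the skill.

```crontab
# Hypothetical schedule: run the evolver Wednesday and Sunday at 21:00.
# "clawhub run inner-life-evolve" is a placeholder for your real trigger.
0 21 * * 3 clawhub run inner-life-evolve
0 21 * * 0 clawhub run inner-life-evolve
```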
Reads: everything (Context Level 4 Deep). Writes: tasks/QUEUE.md only — it does NOT write to state files directly. The evolver observes but doesn't touch the controls. It proposes; the user decides.
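Since appending to tasks/QUEUE.md is the evolver's only write path, its output can be sketched as a single append. The exact task line format below is an assumption; consult the inner-life-core docs for the real QUEUE.md schema.

```shell
#!/bin/sh
# Sketch: the evolver appends a proposal task to tasks/QUEUE.md and nothing
# else. The "[EVOLVER]" tag matches the skill docs; the rest of the line
# format is hypothetical.
mkdir -p tasks
cat >> tasks/QUEUE.md <<'EOF'
- [EVOLVER] Proposal: turn the recurring "re-fetch failed downloads" frustration into a cached-fetch habit (status: Ready)
EOF
echo "proposal queued"
```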
Install this skill if:
- Your agent has plateaued and isn't improving
- You want structured self-improvement proposals
- You value evolution with human oversight
- You want your agent to challenge its own assumptions

Part of the openclaw-inner-life bundle. Requires: inner-life-core