Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Give your AI emotions that grow from its own memories. Emoclaw builds a unique emotional state that shifts with every conversation, decays between sessions, and evolves over time through self-calibration. Train it on your agent's identity files and watch it develop its own emotional fingerprint.
Hand the extracted package to your coding agent with a concrete install brief, rather than working through the install steps yourself.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Emotions for AI agents: built from memories, shaped by relationships, always changing. Emoclaw trains a lightweight model on your agent's identity and conversation history, producing a persistent emotional state that evolves with every interaction. Emotions decay naturally between sessions, shift based on who's talking and what's being said, and gradually recalibrate as your agent grows. The result is injected into the system prompt as an [EMOTIONAL STATE] block, giving your AI a felt sense of its own inner life.
- **Bootstrap**: `extract.py` reads your agent's identity/memory files. `label.py` scores each passage via the Claude API (opt-in). `train` builds a small neural net from those scores. One-time setup.
- **Encode**: Each incoming message is turned into a 384-dim vector by a frozen MiniLM sentence encoder. No fine-tuning, no network call; it runs from a local cache.
- **Feel**: The encoding plus context (who's talking, what channel, previous emotion) flows through a GRU and MLP head, outputting an N-dimensional emotion vector (0-1 per dimension). The GRU hidden state persists across sessions; this is the "emotional residue" that carries forward mood.
- **Decay**: Between sessions, each dimension drifts back toward its baseline at a configurable half-life (fast for arousal, slow for safety/groundedness). Time apart means cooling off.
- **Inject**: The emotion vector is formatted as an [EMOTIONAL STATE] block and inserted into the agent's system prompt, giving the AI a felt sense of its own inner state.

The model is ~2MB, runs on CPU, and adds <50ms per message. Network access is only used during bootstrap (opt-in).
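The decay step is plain exponential relaxation toward baseline. A minimal sketch of that math, assuming half-lives are expressed in hours (the shipped implementation may differ in detail):

```python
# Exponential relaxation toward baseline with a per-dimension half-life,
# as described in the Decay step above. A sketch of the math, not the
# shipped implementation.
def decay(value: float, baseline: float, half_life_h: float, elapsed_h: float) -> float:
    return baseline + (value - baseline) * 0.5 ** (elapsed_h / half_life_h)

# Arousal (fast half-life) cools off quickly; safety (slow) barely moves.
print(decay(0.9, 0.35, half_life_h=4.0, elapsed_h=8.0))   # 0.9 -> ~0.4875
print(decay(0.9, 0.70, half_life_h=72.0, elapsed_h=8.0))  # 0.9 -> ~0.885
```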
| Situation | Action |
| --- | --- |
| First-time setup | `python scripts/setup.py` (or manual steps below) |
| Check current state | `python -m emotion_model.scripts.status` |
| Inject state into prompt | `python -m emotion_model.scripts.inject_state` |
| Start the daemon | `bash scripts/daemon.sh start` |
| Send a message to daemon | See Daemon Protocol |
| Retrain after new data | `python -m emotion_model.scripts.train` |
| Resume interrupted training | `python -m emotion_model.scripts.train --resume` |
| Add new training data | Add .jsonl entries to `emotion_model/data/`, re-run prepare + train |
| Upgrade from v0.1 | See references/upgrading.md |
| Change baselines | Edit `emoclaw.yaml` → `dimensions[].baseline` |
| Add a new channel | Edit `emoclaw.yaml` → `channels` list |
| Add a relationship | Edit `emoclaw.yaml` → `relationships.known` |
| Customize summaries | Create a `summary-templates.yaml` and point config at it |
```bash
python skills/emoclaw/scripts/setup.py
```

This copies the bundled emotion_model engine to your project root, creates a venv, installs the package, and copies the config template. Then edit `emoclaw.yaml` to customize for your agent.
If you prefer to set up manually:

1. Install the package:

```bash
cd <project-root>

# Copy engine and pyproject.toml from the skill
cp -r skills/emoclaw/engine/emotion_model ./emotion_model
cp skills/emoclaw/engine/pyproject.toml ./pyproject.toml

# Create venv and install
python3 -m venv emotion_model/.venv
source emotion_model/.venv/bin/activate
pip install -e .
```

Required: Python 3.10+, PyTorch, sentence-transformers, PyYAML.

2. Copy and customize the config:

```bash
cp skills/emoclaw/assets/emoclaw.yaml ./emoclaw.yaml
```

Edit `emoclaw.yaml` to set:

- `name`: your agent's name
- `dimensions`: emotional dimensions with baselines and decay rates
- `relationships.known`: map of relationship names to embedding indices
- `channels`: communication channels your agent uses
- `longing`: absence-based desire growth (can be disabled)
- `model.device`: `cpu` recommended (MPS has issues with sentence-transformers)

See references/config-reference.md for the full schema.
If starting from scratch with identity/memory files:

```bash
# Extract passages from your identity files
python scripts/extract.py

# Auto-label passages using Claude API (requires ANTHROPIC_API_KEY)
python scripts/label.py

# Prepare train/val split and train
python -m emotion_model.scripts.prepare_dataset
python -m emotion_model.scripts.train
```

Or run the full pipeline:

```bash
python scripts/bootstrap.py
```
```bash
python -m emotion_model.scripts.status
python -m emotion_model.scripts.diagnose
```
The daemon loads the model once and listens on a Unix socket, avoiding the ~2s sentence-transformer load time per message.

```bash
# Start
bash scripts/daemon.sh start

# Or directly
python -m emotion_model.daemon
python -m emotion_model.daemon --config path/to/emoclaw.yaml
```
```python
from emotion_model.inference import EmotionEngine

engine = EmotionEngine(
    model_path="emotion_model/checkpoints/best_model.pt",
    state_path="memory/emotional-state.json",
)

block = engine.process_message(
    message_text="Good morning!",
    sender="alice",         # or None for config default
    channel="chat",         # or None for config default
    recent_context="...",   # optional conversation context
)
print(block)
```
For system prompt injection without the daemon:

```bash
python -m emotion_model.scripts.inject_state
```

This reads the persisted state, applies time-based decay, and outputs the [EMOTIONAL STATE] block.
Add the output block to your system prompt. The block format:

```
[EMOTIONAL STATE]
Valence: 0.55 (balanced)
Arousal: 0.35 (balanced)
Dominance: 0.50 (balanced)
Safety: 0.70 (open)
Desire: 0.20 (neutral)
Connection: 0.50 (balanced)
Playfulness: 0.40 (balanced)
Curiosity: 0.50 (balanced)
Warmth: 0.45 (balanced)
Tension: 0.20 (relaxed)
Groundedness: 0.60 (balanced)

This feels like: present, alive, between one thing and the next
[/EMOTIONAL STATE]
```
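One way to wire this into an agent launcher, as a sketch: the subprocess call mirrors the `inject_state` command above, while `BASE_PROMPT` and the prompt assembly are illustrative placeholders, not part of Emoclaw.

```python
import subprocess

# Capture the [EMOTIONAL STATE] block from the CLI shown above.
result = subprocess.run(
    ["python", "-m", "emotion_model.scripts.inject_state"],
    capture_output=True, text=True, check=True,
)
emotional_state = result.stdout.strip()

# Prepend it to whatever base system prompt your agent already uses
# (BASE_PROMPT is a placeholder, not something Emoclaw provides).
BASE_PROMPT = "You are a helpful agent."
system_prompt = f"{emotional_state}\n\n{BASE_PROMPT}"
```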
Send JSON over the Unix socket:

```json
{"text": "Good morning!", "sender": "alice", "channel": "chat"}
```

Special commands:

```json
{"command": "ping"}
{"command": "state"}
```
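For reference, a minimal Python client sketch. The socket path follows the `/tmp/{name}-emotion.sock` pattern from the files table below, and the EOF-based framing is an assumption rather than documented protocol:

```python
import json
import socket

# Replace "myagent" with your agent's configured name (assumption based on
# the /tmp/{name}-emotion.sock pattern; check paths.socket_path in emoclaw.yaml).
SOCKET_PATH = "/tmp/myagent-emotion.sock"

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCKET_PATH)
    sock.sendall(json.dumps(
        {"text": "Good morning!", "sender": "alice", "channel": "chat"}
    ).encode("utf-8"))
    sock.shutdown(socket.SHUT_WR)        # signal end of request
    response = sock.makefile("r").read() # read until the daemon closes

print(response)
```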
The model processes each message through this pipeline:

```
Message Text ──> [Frozen MiniLM Encoder] ──> 384-dim embedding ──┐
Conversation Context ──> [Feature Builder] ──> context vector ───┤
Previous Emotion ───────────────────────────> emotion vector ────┤
                                                                 │
                                                ┌────────────────┴┐
                                                │  Input Project  │
                                                │ (Linear+LN+GELU)│
                                                └────────┬────────┘
                                                         │
                                                ┌────────┴────────┐
                                                │       GRU       │ <- emotional
                                                │  (hidden state) │    residue
                                                └────────┬────────┘
                                                         │
                                                ┌────────┴────────┐
                                                │  Emotion Head   │
                                                │  (MLP+Sigmoid)  │
                                                └────────┬────────┘
                                                         │
                                          N-dim emotion vector [0,1]
```

The GRU hidden state persists across sessions; this is the "emotional residue" that carries forward mood, context, and relational memory. See references/architecture.md for full details.
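As a rough illustration of that pipeline in PyTorch: the 384-dim embedding comes from the description above, but every other size here is an assumption; the real hyperparameters live in `emoclaw.yaml` → `model`.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the pipeline above, not the shipped implementation.
# Sizes other than the 384-dim MiniLM embedding are assumptions.
EMBED_DIM, CTX_DIM, N_DIMS, HIDDEN = 384, 16, 11, 128

class EmotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = EMBED_DIM + CTX_DIM + N_DIMS  # embedding + context + previous emotion
        self.project = nn.Sequential(
            nn.Linear(in_dim, HIDDEN), nn.LayerNorm(HIDDEN), nn.GELU(),
        )
        self.gru = nn.GRUCell(HIDDEN, HIDDEN)  # hidden state = "emotional residue"
        self.head = nn.Sequential(nn.Linear(HIDDEN, N_DIMS), nn.Sigmoid())  # each dim in [0,1]

    def forward(self, embedding, context, prev_emotion, hidden):
        x = self.project(torch.cat([embedding, context, prev_emotion], dim=-1))
        hidden = self.gru(x, hidden)  # persists across messages/sessions
        return self.head(hidden), hidden

net = EmotionNet()
emb, ctx = torch.randn(1, EMBED_DIM), torch.randn(1, CTX_DIM)
prev, h = torch.rand(1, N_DIMS), torch.zeros(1, HIDDEN)
emotion, h = net(emb, ctx, prev, h)  # emotion: [1, N_DIMS] in [0,1]
```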
- **Extraction** (`scripts/extract.py`) reads markdown files listed in `emoclaw.yaml` → `bootstrap.source_files` and `bootstrap.memory_patterns`. These are configurable and default to identity/memory files within the repo. Extracted passages are written to `emotion_model/data/extracted_passages.jsonl`.
- **Redaction**: Before writing, extracted text is passed through configurable regex patterns (`bootstrap.redact_patterns`) that replace API keys, tokens, passwords, and other secrets with [REDACTED]. Default patterns cover Anthropic keys, GitHub PATs, bearer tokens, SSH keys, and generic key=value credentials. Add custom patterns in `emoclaw.yaml`.
- **Labeling** (`scripts/label.py`): Opt-in only. Sends extracted passages to the Anthropic API for emotional scoring. Requires both `ANTHROPIC_API_KEY` and explicit user consent (an interactive prompt before any API call). Use `--yes` to skip the prompt for automation. Use `--dry-run` to preview without any network calls.
- **Training** runs entirely locally. No data leaves the machine during `prepare_dataset` or `train`.
- **Inference** runs entirely locally. The daemon and `inject_state` script make no network calls.
Network access is optional and limited to a single script:

| Script | Network? | Purpose |
| --- | --- | --- |
| `extract.py` | No | Reads local files only |
| `label.py` | Yes (opt-in) | Sends passages to Anthropic API |
| `prepare_dataset` | No | Local data processing |
| `train` | No | Local model training |
| `daemon` / `inject_state` | No | Local inference |

The sentence-transformers encoder downloads model weights on first use (from Hugging Face). After that, it runs from cache with no network needed.
| Path | Purpose | Created by |
| --- | --- | --- |
| `memory/emotional-state.json` | Persisted emotion vector + trajectory | daemon / inference |
| `emotion_model/data/*.jsonl` | Training data (extracted/labeled passages) | `extract.py` / `label.py` |
| `emotion_model/checkpoints/` | Model weights | train script |
| `/tmp/{name}-emotion.sock` | Daemon Unix socket | daemon |

The daemon socket is created with permissions `0o660` (owner + group read/write) and cleaned up on shutdown. The socket path is configurable in `emoclaw.yaml` → `paths.socket_path`.
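To peek at the persisted state, reading the JSON directly works; the exact schema is not documented here, so treat the structure as opaque:

```python
import json
from pathlib import Path

# Quick inspection of the persisted state file from the table above.
# The JSON schema is not specified in this doc, so just print it.
state = json.loads(Path("memory/emotional-state.json").read_text())
print(json.dumps(state, indent=2)[:500])  # peek at the structure
```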
extract.py validates that every file path resolves to within the repository root before reading. Symlink chains and ../ sequences that would escape the repo boundary are rejected. This prevents a misconfigured source_files or memory_patterns from reading arbitrary files.
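The check amounts to standard path containment. A minimal sketch (not the shipped code):

```python
from pathlib import Path

# Path.resolve() follows symlinks and collapses "..", so a path that
# escapes the repo root fails the relative_to() test below.
def is_within_repo(path: str, repo_root: str) -> bool:
    resolved = Path(path).resolve()
    try:
        resolved.relative_to(Path(repo_root).resolve())
        return True
    except ValueError:
        return False

assert is_within_repo("identity/self.md", ".")
assert not is_within_repo("../../etc/passwd", ".")
```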
Add or modify patterns in `emoclaw.yaml`:

```yaml
bootstrap:
  redact_patterns:
    - '(?i)sk-ant-[a-zA-Z0-9_-]{20,}'   # Anthropic API keys
    - '(?i)(?:api[_-]?key|token|secret|password|credential)\s*[:=]\s*\S+'
    - 'your-custom-pattern-here'
```

Set `redact_patterns: []` to disable redaction entirely (not recommended).
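Before committing a new pattern, it's worth sanity-checking it against sample text. A quick sketch mirroring the replace-with-[REDACTED] behavior described above (the actual implementation may differ):

```python
import re

# Patterns copied from the YAML example above.
patterns = [
    r'(?i)sk-ant-[a-zA-Z0-9_-]{20,}',
    r'(?i)(?:api[_-]?key|token|secret|password|credential)\s*[:=]\s*\S+',
]

text = "api_key = abc123 and sk-ant-aaaaaaaaaaaaaaaaaaaaaa"
for p in patterns:
    text = re.sub(p, "[REDACTED]", text)
print(text)  # -> [REDACTED] and [REDACTED]
```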
- Run the bootstrap pipeline (extract → label → train) in an isolated environment, or review the source file list before running
- Audit `bootstrap.source_files` and `bootstrap.memory_patterns` in your `emoclaw.yaml` to ensure only intended files are included
- Review `emotion_model/data/extracted_passages.jsonl` before running `label.py` to confirm no sensitive content will be sent externally
- Run the daemon under the same user as your agent process; avoid running it as root
All configuration lives in `emoclaw.yaml`. The package falls back to built-in defaults if no YAML is found.

Config search order (sketched below):

1. `EMOCLAW_CONFIG` environment variable
2. `./emoclaw.yaml` (project root)
3. `./skills/emoclaw/emoclaw.yaml`

Key sections:

- `dimensions`: name, labels, baseline, decay half-life, loss weight
- `relationships`: known senders with embedding indices
- `channels`: communication channels (determines context vector size)
- `longing`: absence-based desire modulation
- `model`: architecture hyperparameters
- `training`: training hyperparameters
- `calibration`: self-calibrating baseline drift (opt-in)

See references/config-reference.md for the complete schema.
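The search order amounts to first-match-wins. A sketch, assuming this outline; how the shipped loader merges defaults is not documented here:

```python
import os
from pathlib import Path

def find_config() -> Path | None:
    candidates = [
        os.environ.get("EMOCLAW_CONFIG"),   # 1. environment variable
        "emoclaw.yaml",                     # 2. project root
        "skills/emoclaw/emoclaw.yaml",      # 3. bundled skill copy
    ]
    for c in candidates:
        if c and Path(c).is_file():
            return Path(c)
    return None  # fall back to built-in defaults
```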
`scripts/extract.py` reads identity and memory files, splitting them into labeled passages:

```bash
python scripts/extract.py
# Output: emotion_model/data/extracted_passages.jsonl
```

Source files are configured in `emoclaw.yaml` → `bootstrap.source_files` and `bootstrap.memory_patterns`.
`scripts/label.py` uses the Claude API to score each passage on every emotion dimension:

```bash
export ANTHROPIC_API_KEY=sk-ant-...
python scripts/label.py
# Output: emotion_model/data/passage_labels.jsonl
```

Each passage gets a 0.0-1.0 score per dimension plus a natural-language summary.
```bash
python -m emotion_model.scripts.prepare_dataset
python -m emotion_model.scripts.train
```
To add new training data:

1. Add entries to `emotion_model/data/` in JSONL format:

```
{"text": "message text", "labels": {"valence": 0.7, "arousal": 0.4, ...}}
```

2. Re-run the preparation and training:

```bash
python -m emotion_model.scripts.prepare_dataset
python -m emotion_model.scripts.train
```
The training script saves a rich checkpoint (`training_checkpoint.pt`) that preserves the full optimizer state, learning rate schedule, and early-stopping counter. To continue training from where you left off:

```bash
# Resume from the last checkpoint automatically
python -m emotion_model.scripts.train --resume

# Or specify a checkpoint file
python -m emotion_model.scripts.train --resume emotion_model/checkpoints/training_checkpoint.pt
```

This is a true continuation: optimizer momentum, cosine annealing position, and patience counter all pick up exactly where they stopped.
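In PyTorch terms, a rich checkpoint of this kind typically looks like the following sketch; the actual keys the train script uses are assumptions here:

```python
import torch
import torch.nn as nn

# Stand-ins so the sketch is self-contained; the real objects come from
# the train script, and the checkpoint keys are illustrative assumptions.
model = nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

checkpoint = {
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),   # includes momentum buffers
    "scheduler": scheduler.state_dict(),   # cosine annealing position
    "epoch": 12,
    "patience_counter": 3,                 # early-stopping state
}
torch.save(checkpoint, "training_checkpoint.pt")

# Resuming restores all of it, so training continues rather than restarts:
ckpt = torch.load("training_checkpoint.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
scheduler.load_state_dict(ckpt["scheduler"])
```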
As the AI accumulates real conversation data:

- **Passive collection**: log messages and model predictions
- **Correction events**: when an emotion feels wrong, log the correction (see the sketch below)
- **Periodic retraining**: incorporate new data and retrain
- **Baseline adjustment**: baselines may shift as the AI develops

The system is designed to grow with the AI, not remain static.
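For correction events, one workable convention is to append them as JSONL next to the other training data. The schema and the `corrections.jsonl` filename below are assumptions modeled on the training-data format, not a documented API:

```python
import json
import time

# Hypothetical correction-event record: what the model predicted versus
# what the emotion should have been, for use in the next retraining pass.
correction = {
    "text": "I'm fine, really.",
    "predicted": {"valence": 0.8},   # model output
    "labels": {"valence": 0.3},      # corrected value
    "timestamp": time.time(),
}
with open("emotion_model/data/corrections.jsonl", "a") as f:
    f.write(json.dumps(correction) + "\n")
```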
- `references/architecture.md`: model architecture deep-dive
- `references/config-reference.md`: full YAML config schema
- `references/dimensions.md`: emotion dimension documentation
- `references/calibration-guide.md`: baseline, decay, and self-calibration tuning
- `references/upgrading.md`: version upgrade guide
- `assets/emoclaw.yaml`: template config for new AIs
- `assets/summary-templates.yaml`: generic summary templates
- `assets/example-summary-templates.yaml`: example personality-specific templates
- `engine/`: bundled emotion_model Python package (copied to project root by setup.py)