{
  "schemaVersion": "1.0",
  "item": {
    "slug": "emoclaw",
    "name": "EmoClaw",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/fenrirlabsnl/emoclaw",
    "canonicalUrl": "https://clawhub.ai/fenrirlabsnl/emoclaw",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/emoclaw",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=emoclaw",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "_meta.json",
      "SKILL.md",
      "engine/pyproject.toml",
      "engine/emotion_model/conftest.py",
      "engine/emotion_model/config.py",
      "engine/emotion_model/requirements.txt"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/emoclaw"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/emoclaw",
    "agentPageUrl": "https://openagent3.xyz/skills/emoclaw/agent",
    "manifestUrl": "https://openagent3.xyz/skills/emoclaw/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/emoclaw/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Emotion Skill",
        "body": "Emotions for AI agents — built from memories, shaped by relationships, always changing.\n\nEmoclaw trains a lightweight model on your agent's identity and conversation history, producing a persistent emotional state that evolves with every interaction. Emotions decay naturally between sessions, shift based on who's talking and what's being said, and gradually recalibrate as your agent grows. The result is injected into the system prompt as an [EMOTIONAL STATE] block, giving your AI a felt sense of its own inner life."
      },
      {
        "title": "How it works",
        "body": "Bootstrap — extract.py reads your agent's identity/memory files. label.py scores each passage via the Claude API (opt-in). train builds a small neural net from those scores. One-time setup.\nEncode — Each incoming message is turned into a 384-dim vector by a frozen MiniLM sentence encoder. No fine-tuning, no network call — runs from a local cache.\nFeel — The encoding + context (who's talking, what channel, previous emotion) flows through a GRU and MLP head, outputting an N-dimensional emotion vector (0-1 per dimension). The GRU hidden state persists across sessions — this is the \"emotional residue\" that carries forward mood.\nDecay — Between sessions, each dimension drifts back toward its baseline at a configurable half-life (fast for arousal, slow for safety/groundedness). Time apart = cooling off.\nInject — The emotion vector is formatted as an [EMOTIONAL STATE] block and inserted into the agent's system prompt, giving the AI a felt sense of its own inner state.\n\nModel is ~2MB, runs on CPU, adds <50ms per message. Network access is only used during bootstrap (opt-in)."
      },
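      {
        "title": "Decay formula (sketch)",
        "body": "The half-life decay described above can be written out explicitly. This is an illustrative reconstruction from the description, not code taken from the package: given a dimension's previous value v, its baseline b, elapsed time Δt, and configured half-life h, the decayed value is\n\nv' = b + (v − b) · 2^(−Δt / h)\n\nAfter one half-life (Δt = h) the distance to baseline halves; as Δt grows, v' approaches b. A short half-life (e.g. arousal) cools off within a session gap, while a long one (safety, groundedness) carries mood across days."
      },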
      {
        "title": "Quick Reference",
        "body": "SituationActionFirst-time setuppython scripts/setup.py (or manual steps below)Check current statepython -m emotion_model.scripts.statusInject state into promptpython -m emotion_model.scripts.inject_stateStart the daemonbash scripts/daemon.sh startSend a message to daemonSee Daemon ProtocolRetrain after new datapython -m emotion_model.scripts.trainResume interrupted trainingpython -m emotion_model.scripts.train --resumeAdd new training dataAdd .jsonl entries to emotion_model/data/, re-run prepare + trainUpgrade from v0.1See references/upgrading.mdChange baselinesEdit emoclaw.yaml → dimensions[].baselineAdd a new channelEdit emoclaw.yaml → channels listAdd a relationshipEdit emoclaw.yaml → relationships.knownCustomize summariesCreate a summary-templates.yaml and point config at it"
      },
      {
        "title": "Quick Setup",
        "body": "python skills/emoclaw/scripts/setup.py\n\nThis copies the bundled emotion_model engine to your project root, creates a venv, installs the package, and copies the config template. Then edit emoclaw.yaml to customize for your agent."
      },
      {
        "title": "Manual Setup",
        "body": "If you prefer to set up manually:\n\n1. Install the package\n\ncd <project-root>\n# Copy engine and pyproject.toml from the skill\ncp -r skills/emoclaw/engine/emotion_model ./emotion_model\ncp skills/emoclaw/engine/pyproject.toml ./pyproject.toml\n\n# Create venv and install\npython3 -m venv emotion_model/.venv\nsource emotion_model/.venv/bin/activate\npip install -e .\n\nRequired: Python 3.10+, PyTorch, sentence-transformers, PyYAML.\n\n2. Copy and customize the config\n\ncp skills/emoclaw/assets/emoclaw.yaml ./emoclaw.yaml\n\nEdit emoclaw.yaml to set:\n\nname — your agent's name\ndimensions — emotional dimensions with baselines and decay rates\nrelationships.known — map of relationship names to embedding indices\nchannels — communication channels your agent uses\nlonging — absence-based desire growth (can be disabled)\nmodel.device — cpu recommended (MPS has issues with sentence-transformers)\n\nSee references/config-reference.md for the full schema."
      },
      {
        "title": "3. Bootstrap (new agent)",
        "body": "If starting from scratch with identity/memory files:\n\n# Extract passages from your identity files\npython scripts/extract.py\n\n# Auto-label passages using Claude API (requires ANTHROPIC_API_KEY)\npython scripts/label.py\n\n# Prepare train/val split and train\npython -m emotion_model.scripts.prepare_dataset\npython -m emotion_model.scripts.train\n\nOr run the full pipeline:\n\npython scripts/bootstrap.py"
      },
      {
        "title": "4. Verify",
        "body": "python -m emotion_model.scripts.status\npython -m emotion_model.scripts.diagnose"
      },
      {
        "title": "Option A: Daemon (Recommended)",
        "body": "The daemon loads the model once and listens on a Unix socket, avoiding the ~2s sentence-transformer load time per message.\n\n# Start\nbash scripts/daemon.sh start\n\n# Or directly\npython -m emotion_model.daemon\npython -m emotion_model.daemon --config path/to/emoclaw.yaml"
      },
      {
        "title": "Option B: Direct Python Import",
        "body": "from emotion_model.inference import EmotionEngine\n\nengine = EmotionEngine(\n    model_path=\"emotion_model/checkpoints/best_model.pt\",\n    state_path=\"memory/emotional-state.json\",\n)\n\nblock = engine.process_message(\n    message_text=\"Good morning!\",\n    sender=\"alice\",        # or None for config default\n    channel=\"chat\",        # or None for config default\n    recent_context=\"...\",  # optional conversation context\n)\nprint(block)"
      },
      {
        "title": "Option C: One-shot State Injection",
        "body": "For system prompt injection without the daemon:\n\npython -m emotion_model.scripts.inject_state\n\nThis reads the persisted state, applies time-based decay, and outputs the [EMOTIONAL STATE] block."
      },
      {
        "title": "System Prompt Injection",
        "body": "Add the output block to your system prompt. The block format:\n\n[EMOTIONAL STATE]\nValence: 0.55 (balanced)\nArousal: 0.35 (balanced)\nDominance: 0.50 (balanced)\nSafety: 0.70 (open)\nDesire: 0.20 (neutral)\nConnection: 0.50 (balanced)\nPlayfulness: 0.40 (balanced)\nCuriosity: 0.50 (balanced)\nWarmth: 0.45 (balanced)\nTension: 0.20 (relaxed)\nGroundedness: 0.60 (balanced)\n\nThis feels like: present, alive, between one thing and the next\n[/EMOTIONAL STATE]"
      },
      {
        "title": "Daemon Protocol",
        "body": "Send JSON over the Unix socket:\n\n{\"text\": \"Good morning!\", \"sender\": \"alice\", \"channel\": \"chat\"}\n\nSpecial commands:\n\n{\"command\": \"ping\"}\n{\"command\": \"state\"}"
      },
      {
        "title": "Heartbeat Integration",
        "body": "The emotional state decays over time and needs to be refreshed at each session start. Add this entry to your HEARTBEAT.md:\n\n- task: Refresh emotional state\n  schedule: session_start\n  run: python skills/emoclaw/scripts/inject_state.py\n  inject: system_prompt  # append output as [EMOTIONAL STATE] block\n\nOr call the daemon / inject_state script from your heartbeat/cron:\n\n# In your heartbeat script\nSTATE_BLOCK=$(python -m emotion_model.scripts.inject_state 2>/dev/null)\n# Inject $STATE_BLOCK into system prompt\n\nImportant: Without heartbeat integration, the emotional state block will go stale between sessions. The inject_state script applies time-based decay and outputs the current state — it must be called at least once per session."
      },
      {
        "title": "Architecture",
        "body": "The model processes each message through this pipeline:\n\nMessage Text ──→ [Frozen MiniLM Encoder] ──→ 384-dim embedding\n                                                    │\nConversation Context ──→ [Feature Builder] ──→ context vector\n                                                    │\nPrevious Emotion ──────────────────────────→ emotion vector\n                                                    │\n                                            ┌───────┴───────┐\n                                            │ Input Project  │\n                                            │ (Linear+LN+GELU)│\n                                            └───────┬───────┘\n                                                    │\n                                            ┌───────┴───────┐\n                                            │     GRU       │\n                                            │ (hidden state) │ ← emotional residue\n                                            └───────┬───────┘\n                                                    │\n                                            ┌───────┴───────┐\n                                            │ Emotion Head  │\n                                            │ (MLP+Sigmoid) │\n                                            └───────┬───────┘\n                                                    │\n                                            N-dim emotion vector [0,1]\n\nThe GRU hidden state persists across sessions — this is the \"emotional residue\" that carries forward mood, context, and relational memory.\n\nSee references/architecture.md for full details."
      },
      {
        "title": "Data Flow",
        "body": "Extraction (scripts/extract.py) reads markdown files listed in emoclaw.yaml → bootstrap.source_files and bootstrap.memory_patterns. These are configurable and default to identity/memory files within the repo. Extracted passages are written to emotion_model/data/extracted_passages.jsonl.\n\n\nRedaction — Before writing, extracted text is passed through configurable regex patterns (bootstrap.redact_patterns) that replace API keys, tokens, passwords, and other secrets with [REDACTED]. Default patterns cover Anthropic keys, GitHub PATs, bearer tokens, SSH keys, and generic key=value credentials. Add custom patterns in emoclaw.yaml.\n\n\nLabeling (scripts/label.py) — opt-in only. Sends extracted passages to the Anthropic API for emotional scoring. Requires both ANTHROPIC_API_KEY and explicit user consent (interactive prompt before any API call). Use --yes to skip the prompt for automation. Use --dry-run to preview without any network calls.\n\n\nTraining runs entirely locally. No data leaves the machine during prepare_dataset or train.\n\n\nInference runs entirely locally. The daemon and inject_state script make no network calls."
      },
      {
        "title": "Network Access",
        "body": "Network access is optional and limited to a single script:\n\nScriptNetwork?Purposeextract.pyNoReads local files onlylabel.pyYes (opt-in)Sends passages to Anthropic APIprepare_datasetNoLocal data processingtrainNoLocal model trainingdaemon / inject_stateNoLocal inference\n\nThe sentence-transformers encoder downloads model weights on first use (from Hugging Face). After that, it runs from cache with no network needed."
      },
      {
        "title": "File Permissions",
        "body": "PathPurposeCreated bymemory/emotional-state.jsonPersisted emotion vector + trajectorydaemon / inferenceemotion_model/data/*.jsonlTraining data (extracted/labeled passages)extract.py / label.pyemotion_model/checkpoints/Model weightstrain script/tmp/{name}-emotion.sockDaemon Unix socketdaemon\n\nThe daemon socket is created with permissions 0o660 (owner + group read/write) and cleaned up on shutdown. The socket path is configurable in emoclaw.yaml → paths.socket_path."
      },
      {
        "title": "Path Validation",
        "body": "extract.py validates that every file path resolves to within the repository root before reading. Symlink chains and ../ sequences that would escape the repo boundary are rejected. This prevents a misconfigured source_files or memory_patterns from reading arbitrary files."
      },
      {
        "title": "Configuring Redaction",
        "body": "Add or modify patterns in emoclaw.yaml:\n\nbootstrap:\n  redact_patterns:\n    - '(?i)sk-ant-[a-zA-Z0-9_-]{20,}'    # Anthropic API keys\n    - '(?i)(?:api[_-]?key|token|secret|password|credential)\\s*[:=]\\s*\\S+'\n    - 'your-custom-pattern-here'\n\nSet redact_patterns: [] to disable redaction entirely (not recommended)."
      },
      {
        "title": "Isolation Recommendations",
        "body": "Run the bootstrap pipeline (extract → label → train) in an isolated environment or review the source file list before running\nAudit bootstrap.source_files and bootstrap.memory_patterns in your emoclaw.yaml to ensure only intended files are included\nReview emotion_model/data/extracted_passages.jsonl before running label.py to confirm no sensitive content will be sent externally\nThe daemon should run under the same user as your agent process — avoid running as root"
      },
      {
        "title": "Configuration",
        "body": "All configuration lives in emoclaw.yaml. The package falls back to built-in defaults if no YAML is found.\n\nConfig search order:\n\nEMOCLAW_CONFIG environment variable\n./emoclaw.yaml (project root)\n./skills/emoclaw/emoclaw.yaml\n\nKey sections:\n\ndimensions — name, labels, baseline, decay half-life, loss weight\nrelationships — known senders with embedding indices\nchannels — communication channels (determines context vector size)\nlonging — absence-based desire modulation\nmodel — architecture hyperparameters\ntraining — training hyperparameters\ncalibration — self-calibrating baseline drift (opt-in)\n\nSee references/config-reference.md for the complete schema."
      },
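      {
        "title": "Example config fragment (illustrative)",
        "body": "A minimal sketch of emoclaw.yaml assembled from the key sections listed above. The exact key names inside each block (e.g. half_life_hours) are hypothetical here — treat references/config-reference.md as the authoritative schema:\n\nname: my-agent\ndimensions:\n  - name: valence\n    baseline: 0.5\n    half_life_hours: 12   # hypothetical key name\nrelationships:\n  known:\n    alice: 0\n    bob: 1\nchannels:\n  - chat\n  - email\nmodel:\n  device: cpu   # recommended; MPS has issues with sentence-transformers"
      },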
      {
        "title": "Step 1: Extract Passages",
        "body": "scripts/extract.py reads identity and memory files, splitting them into labeled passages:\n\npython scripts/extract.py\n# Output: emotion_model/data/extracted_passages.jsonl\n\nSource files are configured in emoclaw.yaml → bootstrap.source_files and bootstrap.memory_patterns."
      },
      {
        "title": "Step 2: Auto-Label",
        "body": "scripts/label.py uses the Claude API to score each passage on every emotion dimension:\n\nexport ANTHROPIC_API_KEY=sk-ant-...\npython scripts/label.py\n# Output: emotion_model/data/passage_labels.jsonl\n\nEach passage gets a 0.0-1.0 score per dimension plus a natural language summary."
      },
      {
        "title": "Step 3: Prepare & Train",
        "body": "python -m emotion_model.scripts.prepare_dataset\npython -m emotion_model.scripts.train"
      },
      {
        "title": "Retraining",
        "body": "To add new training data:\n\nAdd entries to emotion_model/data/ in JSONL format:\n{\"text\": \"message text\", \"labels\": {\"valence\": 0.7, \"arousal\": 0.4, ...}}\n\n\nRe-run the preparation and training:\npython -m emotion_model.scripts.prepare_dataset\npython -m emotion_model.scripts.train"
      },
      {
        "title": "Incremental Retraining",
        "body": "The training script saves a rich checkpoint (training_checkpoint.pt) that preserves the full optimizer state, learning rate schedule, and early stopping counter. To continue training from where you left off:\n\n# Resume from the last checkpoint automatically\npython -m emotion_model.scripts.train --resume\n\n# Or specify a checkpoint file\npython -m emotion_model.scripts.train --resume emotion_model/checkpoints/training_checkpoint.pt\n\nThis is a true continuation — optimizer momentum, cosine annealing position, and patience counter all pick up exactly where they stopped."
      },
      {
        "title": "Growth Model",
        "body": "As the AI accumulates real conversation data:\n\nPassive collection — Log messages + model predictions\nCorrection events — When emotion feels wrong, log the correction\nPeriodic retraining — Incorporate new data, retrain\nBaseline adjustment — Baselines may shift as the AI develops\n\nThe system is designed to grow with the AI, not remain static."
      },
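      {
        "title": "Logging corrections (sketch)",
        "body": "One simple way to realize the correction-events step above, assuming you reuse the training JSONL format from the Retraining section: when a prediction feels wrong, append the message with corrected labels to emotion_model/data/, then re-run prepare_dataset + train.\n\n{\"text\": \"that joke landed badly\", \"labels\": {\"valence\": 0.25, \"tension\": 0.6, ...}}\n\nLabels for the remaining dimensions are elided here for brevity; the trainer may require a score for every configured dimension."
      },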
      {
        "title": "Resources",
        "body": "references/architecture.md — Model architecture deep-dive\nreferences/config-reference.md — Full YAML config schema\nreferences/dimensions.md — Emotion dimension documentation\nreferences/calibration-guide.md — Baseline, decay, and self-calibration tuning\nreferences/upgrading.md — Version upgrade guide\nassets/emoclaw.yaml — Template config for new AIs\nassets/summary-templates.yaml — Generic summary templates\nassets/example-summary-templates.yaml — Example personality-specific templates\nengine/ — Bundled emotion_model Python package (copied to project root by setup.py)"
      }
    ],
    "body": "Emotion Skill\n\nEmotions for AI agents — built from memories, shaped by relationships, always changing.\n\nEmoclaw trains a lightweight model on your agent's identity and conversation history, producing a persistent emotional state that evolves with every interaction. Emotions decay naturally between sessions, shift based on who's talking and what's being said, and gradually recalibrate as your agent grows. The result is injected into the system prompt as an [EMOTIONAL STATE] block, giving your AI a felt sense of its own inner life.\n\nHow it works\nBootstrap — extract.py reads your agent's identity/memory files. label.py scores each passage via the Claude API (opt-in). train builds a small neural net from those scores. One-time setup.\nEncode — Each incoming message is turned into a 384-dim vector by a frozen MiniLM sentence encoder. No fine-tuning, no network call — runs from a local cache.\nFeel — The encoding + context (who's talking, what channel, previous emotion) flows through a GRU and MLP head, outputting an N-dimensional emotion vector (0-1 per dimension). The GRU hidden state persists across sessions — this is the \"emotional residue\" that carries forward mood.\nDecay — Between sessions, each dimension drifts back toward its baseline at a configurable half-life (fast for arousal, slow for safety/groundedness). Time apart = cooling off.\nInject — The emotion vector is formatted as an [EMOTIONAL STATE] block and inserted into the agent's system prompt, giving the AI a felt sense of its own inner state.\n\nModel is ~2MB, runs on CPU, adds <50ms per message. 
Network access is only used during bootstrap (opt-in).\n\nQuick Reference\nSituation\tAction\nFirst-time setup\tpython scripts/setup.py (or manual steps below)\nCheck current state\tpython -m emotion_model.scripts.status\nInject state into prompt\tpython -m emotion_model.scripts.inject_state\nStart the daemon\tbash scripts/daemon.sh start\nSend a message to daemon\tSee Daemon Protocol\nRetrain after new data\tpython -m emotion_model.scripts.train\nResume interrupted training\tpython -m emotion_model.scripts.train --resume\nAdd new training data\tAdd .jsonl entries to emotion_model/data/, re-run prepare + train\nUpgrade from v0.1\tSee references/upgrading.md\nChange baselines\tEdit emoclaw.yaml → dimensions[].baseline\nAdd a new channel\tEdit emoclaw.yaml → channels list\nAdd a relationship\tEdit emoclaw.yaml → relationships.known\nCustomize summaries\tCreate a summary-templates.yaml and point config at it\nSetup\nQuick Setup\npython skills/emoclaw/scripts/setup.py\n\n\nThis copies the bundled emotion_model engine to your project root, creates a venv, installs the package, and copies the config template. Then edit emoclaw.yaml to customize for your agent.\n\nManual Setup\n\nIf you prefer to set up manually:\n\n1. Install the package\ncd <project-root>\n# Copy engine and pyproject.toml from the skill\ncp -r skills/emoclaw/engine/emotion_model ./emotion_model\ncp skills/emoclaw/engine/pyproject.toml ./pyproject.toml\n\n# Create venv and install\npython3 -m venv emotion_model/.venv\nsource emotion_model/.venv/bin/activate\npip install -e .\n\n\nRequired: Python 3.10+, PyTorch, sentence-transformers, PyYAML.\n\n2. 
Copy and customize the config\ncp skills/emoclaw/assets/emoclaw.yaml ./emoclaw.yaml\n\n\nEdit emoclaw.yaml to set:\n\nname — your agent's name\ndimensions — emotional dimensions with baselines and decay rates\nrelationships.known — map of relationship names to embedding indices\nchannels — communication channels your agent uses\nlonging — absence-based desire growth (can be disabled)\nmodel.device — cpu recommended (MPS has issues with sentence-transformers)\n\nSee references/config-reference.md for the full schema.\n\n3. Bootstrap (new agent)\n\nIf starting from scratch with identity/memory files:\n\n# Extract passages from your identity files\npython scripts/extract.py\n\n# Auto-label passages using Claude API (requires ANTHROPIC_API_KEY)\npython scripts/label.py\n\n# Prepare train/val split and train\npython -m emotion_model.scripts.prepare_dataset\npython -m emotion_model.scripts.train\n\n\nOr run the full pipeline:\n\npython scripts/bootstrap.py\n\n4. Verify\npython -m emotion_model.scripts.status\npython -m emotion_model.scripts.diagnose\n\nUsage\nOption A: Daemon (Recommended)\n\nThe daemon loads the model once and listens on a Unix socket, avoiding the ~2s sentence-transformer load time per message.\n\n# Start\nbash scripts/daemon.sh start\n\n# Or directly\npython -m emotion_model.daemon\npython -m emotion_model.daemon --config path/to/emoclaw.yaml\n\nOption B: Direct Python Import\nfrom emotion_model.inference import EmotionEngine\n\nengine = EmotionEngine(\n    model_path=\"emotion_model/checkpoints/best_model.pt\",\n    state_path=\"memory/emotional-state.json\",\n)\n\nblock = engine.process_message(\n    message_text=\"Good morning!\",\n    sender=\"alice\",        # or None for config default\n    channel=\"chat\",        # or None for config default\n    recent_context=\"...\",  # optional conversation context\n)\nprint(block)\n\nOption C: One-shot State Injection\n\nFor system prompt injection without the daemon:\n\npython -m 
emotion_model.scripts.inject_state\n\n\nThis reads the persisted state, applies time-based decay, and outputs the [EMOTIONAL STATE] block.\n\nIntegration\nSystem Prompt Injection\n\nAdd the output block to your system prompt. The block format:\n\n[EMOTIONAL STATE]\nValence: 0.55 (balanced)\nArousal: 0.35 (balanced)\nDominance: 0.50 (balanced)\nSafety: 0.70 (open)\nDesire: 0.20 (neutral)\nConnection: 0.50 (balanced)\nPlayfulness: 0.40 (balanced)\nCuriosity: 0.50 (balanced)\nWarmth: 0.45 (balanced)\nTension: 0.20 (relaxed)\nGroundedness: 0.60 (balanced)\n\nThis feels like: present, alive, between one thing and the next\n[/EMOTIONAL STATE]\n\nDaemon Protocol\n\nSend JSON over the Unix socket:\n\n{\"text\": \"Good morning!\", \"sender\": \"alice\", \"channel\": \"chat\"}\n\n\nSpecial commands:\n\n{\"command\": \"ping\"}\n{\"command\": \"state\"}\n\nHeartbeat Integration\n\nThe emotional state decays over time and needs to be refreshed at each session start. Add this entry to your HEARTBEAT.md:\n\n- task: Refresh emotional state\n  schedule: session_start\n  run: python skills/emoclaw/scripts/inject_state.py\n  inject: system_prompt  # append output as [EMOTIONAL STATE] block\n\n\nOr call the daemon / inject_state script from your heartbeat/cron:\n\n# In your heartbeat script\nSTATE_BLOCK=$(python -m emotion_model.scripts.inject_state 2>/dev/null)\n# Inject $STATE_BLOCK into system prompt\n\n\nImportant: Without heartbeat integration, the emotional state block will go stale between sessions. 
The inject_state script applies time-based decay and outputs the current state — it must be called at least once per session.\n\nArchitecture\n\nThe model processes each message through this pipeline:\n\nMessage Text ──→ [Frozen MiniLM Encoder] ──→ 384-dim embedding\n                                                    │\nConversation Context ──→ [Feature Builder] ──→ context vector\n                                                    │\nPrevious Emotion ──────────────────────────→ emotion vector\n                                                    │\n                                            ┌───────┴───────┐\n                                            │ Input Project  │\n                                            │ (Linear+LN+GELU)│\n                                            └───────┬───────┘\n                                                    │\n                                            ┌───────┴───────┐\n                                            │     GRU       │\n                                            │ (hidden state) │ ← emotional residue\n                                            └───────┬───────┘\n                                                    │\n                                            ┌───────┴───────┐\n                                            │ Emotion Head  │\n                                            │ (MLP+Sigmoid) │\n                                            └───────┬───────┘\n                                                    │\n                                            N-dim emotion vector [0,1]\n\n\nThe GRU hidden state persists across sessions — this is the \"emotional residue\" that carries forward mood, context, and relational memory.\n\nSee references/architecture.md for full details.\n\nSecurity & Privacy\nData Flow\n\nExtraction (scripts/extract.py) reads markdown files listed in emoclaw.yaml → bootstrap.source_files and bootstrap.memory_patterns. 
These are configurable and default to identity/memory files within the repo. Extracted passages are written to emotion_model/data/extracted_passages.jsonl.\n\nRedaction — Before writing, extracted text is passed through configurable regex patterns (bootstrap.redact_patterns) that replace API keys, tokens, passwords, and other secrets with [REDACTED]. Default patterns cover Anthropic keys, GitHub PATs, bearer tokens, SSH keys, and generic key=value credentials. Add custom patterns in emoclaw.yaml.\n\nLabeling (scripts/label.py) — opt-in only. Sends extracted passages to the Anthropic API for emotional scoring. Requires both ANTHROPIC_API_KEY and explicit user consent (interactive prompt before any API call). Use --yes to skip the prompt for automation. Use --dry-run to preview without any network calls.\n\nTraining runs entirely locally. No data leaves the machine during prepare_dataset or train.\n\nInference runs entirely locally. The daemon and inject_state script make no network calls.\n\nNetwork Access\n\nNetwork access is optional and limited to a single script:\n\nScript\tNetwork?\tPurpose\nextract.py\tNo\tReads local files only\nlabel.py\tYes (opt-in)\tSends passages to Anthropic API\nprepare_dataset\tNo\tLocal data processing\ntrain\tNo\tLocal model training\ndaemon / inject_state\tNo\tLocal inference\n\nThe sentence-transformers encoder downloads model weights on first use (from Hugging Face). After that, it runs from cache with no network needed.\n\nFile Permissions\nPath\tPurpose\tCreated by\nmemory/emotional-state.json\tPersisted emotion vector + trajectory\tdaemon / inference\nemotion_model/data/*.jsonl\tTraining data (extracted/labeled passages)\textract.py / label.py\nemotion_model/checkpoints/\tModel weights\ttrain script\n/tmp/{name}-emotion.sock\tDaemon Unix socket\tdaemon\n\nThe daemon socket is created with permissions 0o660 (owner + group read/write) and cleaned up on shutdown. 
The socket path is configurable in emoclaw.yaml → paths.socket_path.\n\nPath Validation\n\nextract.py validates that every file path resolves to within the repository root before reading. Symlink chains and ../ sequences that would escape the repo boundary are rejected. This prevents a misconfigured source_files or memory_patterns from reading arbitrary files.\n\nConfiguring Redaction\n\nAdd or modify patterns in emoclaw.yaml:\n\nbootstrap:\n  redact_patterns:\n    - '(?i)sk-ant-[a-zA-Z0-9_-]{20,}'    # Anthropic API keys\n    - '(?i)(?:api[_-]?key|token|secret|password|credential)\\s*[:=]\\s*\\S+'\n    - 'your-custom-pattern-here'\n\n\nSet redact_patterns: [] to disable redaction entirely (not recommended).\n\nIsolation Recommendations\nRun the bootstrap pipeline (extract → label → train) in an isolated environment or review the source file list before running\nAudit bootstrap.source_files and bootstrap.memory_patterns in your emoclaw.yaml to ensure only intended files are included\nReview emotion_model/data/extracted_passages.jsonl before running label.py to confirm no sensitive content will be sent externally\nThe daemon should run under the same user as your agent process — avoid running as root\nConfiguration\n\nAll configuration lives in emoclaw.yaml. 
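A rough sketch of that lookup, mirroring the search order listed below (the helper name is ours; the package's actual loader may differ):

```python
import os

def find_config():
    """Return the first emoclaw.yaml found, or None to fall back to built-in defaults."""
    candidates = [
        os.environ.get("EMOCLAW_CONFIG"),                   # explicit override
        "emoclaw.yaml",                                     # project root
        os.path.join("skills", "emoclaw", "emoclaw.yaml"),  # skill directory
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None
```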
The package falls back to built-in defaults if no YAML is found.\n\nConfig search order:\n\nEMOCLAW_CONFIG environment variable\n./emoclaw.yaml (project root)\n./skills/emoclaw/emoclaw.yaml\n\nKey sections:\n\ndimensions — name, labels, baseline, decay half-life, loss weight\nrelationships — known senders with embedding indices\nchannels — communication channels (determines context vector size)\nlonging — absence-based desire modulation\nmodel — architecture hyperparameters\ntraining — training hyperparameters\ncalibration — self-calibrating baseline drift (opt-in)\n\nSee references/config-reference.md for the complete schema.\n\nBootstrap Pipeline\nStep 1: Extract Passages\n\nscripts/extract.py reads identity and memory files, splitting them into labeled passages:\n\npython scripts/extract.py\n# Output: emotion_model/data/extracted_passages.jsonl\n\n\nSource files are configured in emoclaw.yaml → bootstrap.source_files and bootstrap.memory_patterns.\n\nStep 2: Auto-Label\n\nscripts/label.py uses the Claude API to score each passage on every emotion dimension:\n\nexport ANTHROPIC_API_KEY=sk-ant-...\npython scripts/label.py\n# Output: emotion_model/data/passage_labels.jsonl\n\n\nEach passage gets a 0.0-1.0 score per dimension plus a natural language summary.\n\nStep 3: Prepare & Train\npython -m emotion_model.scripts.prepare_dataset\npython -m emotion_model.scripts.train\n\nRetraining\n\nTo add new training data:\n\nAdd entries to emotion_model/data/ in JSONL format:\n{\"text\": \"message text\", \"labels\": {\"valence\": 0.7, \"arousal\": 0.4, ...}}\n\nRe-run the preparation and training:\npython -m emotion_model.scripts.prepare_dataset\npython -m emotion_model.scripts.train\n\nIncremental Retraining\n\nThe training script saves a rich checkpoint (training_checkpoint.pt) that preserves the full optimizer state, learning rate schedule, and early stopping counter. 
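The shape of such a resume can be illustrated with a plain-JSON stand-in (the real training_checkpoint.pt is a PyTorch checkpoint; these keys and helpers are ours, not the package's):

```python
import json
import os

def save_checkpoint(path, epoch, optimizer_state, scheduler_step, patience_left):
    """Persist everything needed to continue training exactly where it stopped."""
    state = {"epoch": epoch, "optimizer_state": optimizer_state,
             "scheduler_step": scheduler_step, "patience_left": patience_left}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(state, f)

def resume_or_init(path):
    """Load the saved state if a checkpoint exists; otherwise start fresh."""
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    return {"epoch": 0, "optimizer_state": {}, "scheduler_step": 0,
            "patience_left": 10}
```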
To continue training from where you left off:\n\n# Resume from the last checkpoint automatically\npython -m emotion_model.scripts.train --resume\n\n# Or specify a checkpoint file\npython -m emotion_model.scripts.train --resume emotion_model/checkpoints/training_checkpoint.pt\n\n\nThis is a true continuation — optimizer momentum, cosine annealing position, and patience counter all pick up exactly where they stopped.\n\nGrowth Model\n\nAs the AI accumulates real conversation data:\n\nPassive collection — Log messages + model predictions\nCorrection events — When emotion feels wrong, log the correction\nPeriodic retraining — Incorporate new data, retrain\nBaseline adjustment — Baselines may shift as the AI develops\n\nThe system is designed to grow with the AI, not remain static.\n\nResources\nreferences/architecture.md — Model architecture deep-dive\nreferences/config-reference.md — Full YAML config schema\nreferences/dimensions.md — Emotion dimension documentation\nreferences/calibration-guide.md — Baseline, decay, and self-calibration tuning\nreferences/upgrading.md — Version upgrade guide\nassets/emoclaw.yaml — Template config for new AIs\nassets/summary-templates.yaml — Generic summary templates\nassets/example-summary-templates.yaml — Example personality-specific templates\nengine/ — Bundled emotion_model Python package (copied to project root by setup.py)"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/fenrirlabsnl/emoclaw",
    "publisherUrl": "https://clawhub.ai/fenrirlabsnl/emoclaw",
    "owner": "fenrirlabsnl",
    "version": "1.0.6",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/emoclaw",
    "downloadUrl": "https://openagent3.xyz/downloads/emoclaw",
    "agentUrl": "https://openagent3.xyz/skills/emoclaw/agent",
    "manifestUrl": "https://openagent3.xyz/skills/emoclaw/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/emoclaw/agent.md"
  }
}