
NIMA Core

Noosphere Integrated Memory Architecture — Complete cognitive stack for AI agents: persistent memory, emotional intelligence, dream consolidation, hive mind,...


⬇ 0 downloads · ★ 0 stars · Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: CHANGELOG.md, INSTALL.md, QUICKSTART.md, README.md, SKILL.md, doctor.sh

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent along with a concrete install brief, instead of working through the install steps yourself.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 3.3.0

Documentation

Primary doc: SKILL.md (27 sections)

NIMA Core 3.2

Noosphere Integrated Memory Architecture — A complete cognitive stack for AI agents: persistent memory, emotional intelligence, dream consolidation, hive mind, and precognitive recall. Website: https://nima-core.ai · GitHub: https://github.com/lilubot/nima-core

Quick Start

Install and run:

```shell
pip install nima-core && nima-core
```

Your bot now has persistent memory. Zero config needed.

Complete Cognitive Stack

NIMA evolved from a memory plugin into a full cognitive architecture:

| Module | What It Does | Version |
|---|---|---|
| Memory Capture | 3-layer capture (input/contemplation/output), 4-phase noise filtering | v2.0 |
| Semantic Recall | Vector + text hybrid search, ecology scoring, token-budgeted injection | v2.0 |
| Dynamic Affect | Panksepp 7-affect emotional state (SEEKING, RAGE, FEAR, LUST, CARE, PANIC, PLAY) | v2.1 |
| VADER Analyzer | Contextual sentiment — caps boost, negation, idioms, degree modifiers | v2.2 |
| Memory Pruner | LLM distillation of old conversations → semantic gists, 30-day suppression limbo | v2.3 |
| Dream Consolidation | Nightly synthesis — extracts insights and patterns from episodic memory | v2.4 |
| Hive Mind | Multi-agent memory sharing via shared DB + optional Redis pub/sub | v2.5 |
| Precognition | Temporal pattern mining → predictive memory pre-loading | v2.5 |
| Lucid Moments | Spontaneous surfacing of emotionally-resonant memories | v2.5 |
| Darwinian Memory | Clusters similar memories, ghosts duplicates via cosine + LLM verification | v3.0 |
| Installer | One-command setup — LadybugDB, hooks, directories, embedder config | v3.0 |

v3.0 Highlights

  • All cognitive modules unified under a single package
  • Installer (install.sh) for zero-friction setup
  • All OpenClaw hooks bundled and ready to drop in
  • README rewritten, all versions aligned to 3.0.4

Architecture

```
OPENCLAW HOOKS
├── nima-memory/              Capture hook (3-layer, 4-phase noise filter)
│   ├── index.js              Hook entry point
│   ├── ladybug_store.py      LadybugDB storage backend
│   ├── embeddings.py         Multi-provider embedding (Voyage/OpenAI/Ollama/local)
│   ├── backfill.py           Historical transcript import
│   └── health_check.py       DB integrity checks
├── nima-recall-live/         Recall hook (before_agent_start)
│   ├── lazy_recall.py        Current recall engine
│   └── ladybug_recall.py     LadybugDB-native recall
├── nima-affect/              Affect hook (message_received)
│   ├── vader-affect.js       VADER sentiment analyzer
│   └── emotion-lexicon.js    Emotion keyword lexicon
└── shared/                   Resilient wrappers, error handling

PYTHON CORE (nima_core/)
├── cognition/
│   ├── dynamic_affect.py         Panksepp 7-affect system
│   ├── emotion_detection.py      Text emotion extraction
│   ├── affect_correlation.py     Cross-affect analysis
│   ├── affect_history.py         Temporal affect tracking
│   ├── affect_interactions.py    Affect coupling dynamics
│   ├── archetypes.py             Personality baselines (Guardian, Explorer, etc.)
│   ├── personality_profiles.py   JSON personality configs
│   └── response_modulator_v2.py  Affect → response modulation
├── dream_consolidation.py    Nightly memory synthesis engine
├── memory_pruner.py          Episodic distillation + suppression
├── hive_mind.py              Multi-agent memory sharing
├── precognition.py           Temporal pattern mining
├── lucid_moments.py          Spontaneous memory surfacing
├── connection_pool.py        SQLite pool (WAL, thread-safe)
├── logging_config.py         Singleton logger
└── metrics.py                Thread-safe counters/timings
```

Privacy & Permissions

✅ All data stored locally in ~/.nima/
✅ Default: local embeddings = zero external calls
✅ No NIMA-owned servers, no proprietary tracking, no analytics sent to external services
⚠️ Opt-in networking: HiveMind (Redis pub/sub), Precognition (LLM endpoints), LadybugDB migrations — see Optional Features below
🔒 Embedding API calls happen only when a provider is explicitly enabled (VOYAGE_API_KEY, OPENAI_API_KEY, etc.)

Optional Features with Network Access

| Feature | Env Var | Network Calls To | Default |
|---|---|---|---|
| Cloud embeddings | NIMA_EMBEDDER=voyage | voyage.ai | Off |
| Cloud embeddings | NIMA_EMBEDDER=openai | openai.com | Off |
| Memory pruner | ANTHROPIC_API_KEY set | anthropic.com | Off |
| Ollama embeddings | NIMA_EMBEDDER=ollama | localhost:11434 | Off |
| HiveMind | HIVE_ENABLED=true | Redis pub/sub | Off |
| Precognition | Using external LLM | Configured endpoint | Off |

What Gets Installed

| Component | Location | Purpose |
|---|---|---|
| Python core (nima_core/) | ~/.nima/ | Memory, affect, cognition |
| OpenClaw hooks | ~/.openclaw/extensions/nima-*/ | Capture, recall, affect |
| SQLite database | ~/.nima/memory/graph.sqlite | Persistent storage |
| Logs | ~/.nima/logs/ | Debug logs (optional) |

Credential Handling

| Env Var | Required? | Network Calls? | Purpose |
|---|---|---|---|
| NIMA_EMBEDDER=local | No | ❌ | Default — offline embeddings |
| VOYAGE_API_KEY | Only if using Voyage | ✅ voyage.ai | Cloud embeddings |
| OPENAI_API_KEY | Only if using OpenAI | ✅ openai.com | Cloud embeddings |
| ANTHROPIC_API_KEY | Only if using pruner | ✅ anthropic.com | Memory distillation |
| NIMA_OLLAMA_MODEL | Only if using Ollama | ❌ (localhost) | Local GPU embeddings |

Recommendation: Start with NIMA_EMBEDDER=local (default). Only enable cloud providers when you need better embedding quality.
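For the recommended offline default, a shell profile entry is enough; the variable names come from the table above, and the commented key value is a placeholder, not a real credential:

```shell
# Offline default: embeddings run locally, so no API keys and no network calls.
export NIMA_EMBEDDER=local

# Opt in to a cloud provider only when needed (placeholder key value):
# export NIMA_EMBEDDER=voyage
# export VOYAGE_API_KEY=pa-xxx
```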

Safety Features

  • Input filtering — System messages, heartbeats, and duplicates are filtered before capture
  • FTS5 injection prevention — Parameterized queries prevent SQL injection
  • Path traversal protection — All file paths are sanitized
  • Temp file cleanup — Automatic cleanup of temporary files
  • API timeouts — Network calls have reasonable timeouts (30s Voyage, 10s local)
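To illustrate the parameterized-query point, here is a generic sqlite3 sketch (not NIMA's actual code): the search string is bound as a parameter rather than spliced into the SQL text, so it can never alter the statement itself.

```python
import sqlite3

# Generic FTS5 search with a bound parameter (illustrative, not NIMA's code).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE mem USING fts5(body)")
conn.execute("INSERT INTO mem(body) VALUES (?)", ("persistent memory for agents",))

query = "memory"  # untrusted input stays a bound value, never SQL text
rows = conn.execute("SELECT body FROM mem WHERE mem MATCH ?", (query,)).fetchall()
print(rows)
```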

Best Practices

  • Review before installing — Inspect install.sh and hook files before running
  • Backup config — Backup ~/.openclaw/openclaw.json before adding hooks
  • Don't run as root — Installation writes to user home directories
  • Use containerized envs — Test in a VM or container first if unsure
  • Rotate API keys — If using cloud embeddings, rotate keys periodically
  • Monitor logs — Check ~/.nima/logs/ for suspicious activity

Data Locations

```
~/.nima/
├── memory/
│   ├── graph.sqlite          # SQLite backend (default)
│   ├── ladybug.lbug          # LadybugDB backend (optional)
│   ├── embedding_cache.db    # Cached embeddings
│   └── embedding_index.npy   # Vector index
├── affect/
│   └── affect_state.json     # Current emotional state
└── logs/                     # Debug logs (if enabled)

~/.openclaw/extensions/
├── nima-memory/        # Capture hook
├── nima-recall-live/   # Recall hook
└── nima-affect/        # Affect hook
```

Controls:

```json
{
  "plugins": {
    "entries": {
      "nima-memory": {
        "skip_subagents": true,
        "skip_heartbeats": true,
        "noise_filtering": { "filter_system_noise": true }
      }
    }
  }
}
```

Embedding Providers

| Provider | Setup | Dims | Cost |
|---|---|---|---|
| Local (default) | NIMA_EMBEDDER=local | 384 | Free |
| Voyage AI | NIMA_EMBEDDER=voyage + VOYAGE_API_KEY | 1024 | $0.12/1M tok |
| OpenAI | NIMA_EMBEDDER=openai + OPENAI_API_KEY | 1536 | $0.13/1M tok |
| Ollama | NIMA_EMBEDDER=ollama + NIMA_OLLAMA_MODEL | 768 | Free |
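To put the pricing column in concrete terms, a quick back-of-envelope helper (the per-million-token prices are those listed above; the function itself is illustrative, not part of nima_core):

```python
# Approximate embedding cost from the per-million-token prices in the table above.
PRICE_PER_M_TOK = {"local": 0.0, "voyage": 0.12, "openai": 0.13, "ollama": 0.0}

def embedding_cost_usd(provider: str, tokens: int) -> float:
    """Estimated USD cost of embedding `tokens` tokens with `provider`."""
    return PRICE_PER_M_TOK[provider] * tokens / 1_000_000

print(round(embedding_cost_usd("voyage", 5_000_000), 2))  # → 0.6
```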

Database Backend

| | SQLite (default) | LadybugDB (recommended) |
|---|---|---|
| Text Search | 31ms | 9ms (3.4x faster) |
| Vector Search | External | Native HNSW (18ms) |
| Graph Queries | SQL JOINs | Native Cypher |
| DB Size | ~91 MB | ~50 MB (44% smaller) |

Upgrade:

```shell
pip install real-ladybug && python -c "from nima_core.storage import migrate; migrate()"
```

All Environment Variables

```shell
# Embedding (default: local)
NIMA_EMBEDDER=local|voyage|openai|ollama
VOYAGE_API_KEY=pa-xxx
OPENAI_API_KEY=sk-xxx
NIMA_OLLAMA_MODEL=nomic-embed-text

# Data paths
NIMA_DATA_DIR=~/.nima
NIMA_DB_PATH=~/.nima/memory/ladybug.lbug

# Memory pruner
NIMA_DISTILL_MODEL=claude-haiku-4-5
ANTHROPIC_API_KEY=sk-ant-xxx

# Logging
NIMA_LOG_LEVEL=INFO
NIMA_DEBUG_RECALL=1
```

Hooks

| Hook | Fires | Does |
|---|---|---|
| nima-memory | After save | Captures 3 layers → filters noise → stores in graph DB |
| nima-recall-live | Before LLM | Searches memories → scores by ecology → injects as context (3000 token budget) |
| nima-affect | On message | VADER sentiment → Panksepp 7-affect state → archetype modulation |
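The "3000 token budget" can be pictured with a small greedy sketch (the function name and tuple shape are hypothetical, not the nima-recall-live internals): take memories in descending score order and skip any that would overflow the budget.

```python
# Hypothetical sketch of token-budgeted injection (not the actual nima_core API).
def inject_within_budget(memories, budget=3000):
    """memories: iterable of (text, score, token_count); returns texts to inject."""
    chosen, used = [], 0
    for text, score, tokens in sorted(memories, key=lambda m: m[1], reverse=True):
        if used + tokens <= budget:
            chosen.append(text)
            used += tokens
    return chosen

mems = [("likes hiking", 0.9, 1200), ("prefers dark mode", 0.8, 2500), ("met in 2024", 0.5, 900)]
print(inject_within_budget(mems))  # → ['likes hiking', 'met in 2024']
```

The 2500-token memory is skipped because adding it would exceed the budget, while the lower-scoring 900-token one still fits.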

Installation

```shell
./install.sh
openclaw gateway restart
```

Or manual:

```shell
cp -r openclaw_hooks/nima-memory ~/.openclaw/extensions/
cp -r openclaw_hooks/nima-recall-live ~/.openclaw/extensions/
cp -r openclaw_hooks/nima-affect ~/.openclaw/extensions/
```

Dream Consolidation

Nightly synthesis extracts insights and patterns from episodic memory:

```shell
python -m nima_core.dream_consolidation  # or schedule via OpenClaw cron at 2 AM
```
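On a plain Unix cron (the OpenClaw scheduler's own syntax may differ), the 2 AM schedule mentioned above would look like this crontab entry:

```
# Run dream consolidation nightly at 02:00 (assumes python is on cron's PATH)
0 2 * * * python -m nima_core.dream_consolidation
```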

Memory Pruner

Distills old conversations into semantic gists, suppresses raw noise:

```shell
python -m nima_core.memory_pruner --min-age 14 --live
python -m nima_core.memory_pruner --restore 12345  # undo within 30 days
```
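The 30-day "suppression limbo" amounts to a simple window check, sketched below (illustrative only; `is_restorable` is not a nima_core API):

```python
from datetime import date, timedelta

SUPPRESSION_LIMBO_DAYS = 30  # pruned memories stay restorable for 30 days

def is_restorable(suppressed_on: date, today: date) -> bool:
    """Illustrative check: can a pruned memory still be undone with --restore?"""
    return today - suppressed_on <= timedelta(days=SUPPRESSION_LIMBO_DAYS)

print(is_restorable(date(2026, 2, 1), date(2026, 2, 20)))  # → True (19 days)
print(is_restorable(date(2026, 1, 1), date(2026, 2, 20)))  # → False (50 days)
```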

Hive Mind

Multi-agent memory sharing:

```python
from nima_core import HiveMind

hive = HiveMind(db_path="~/.nima/memory/ladybug.lbug")
context = hive.build_agent_context("research task", max_memories=8)
hive.capture_agent_result("agent-1", "result summary", "model-name")
```

Precognition

Temporal pattern mining → predictive memory pre-loading:

```python
from nima_core import NimaPrecognition

precog = NimaPrecognition(db_path="~/.nima/memory/ladybug.lbug")
precog.run_mining_cycle()
```

Lucid Moments

Spontaneous surfacing of emotionally-resonant memories (with safety: trauma filtering, quiet hours, daily caps):

```python
from nima_core import LucidMoments

lucid = LucidMoments(db_path="~/.nima/memory/ladybug.lbug")
moment = lucid.surface_moment()
```

Affect System

Panksepp 7-affect emotional intelligence with personality archetypes:

```python
from nima_core import DynamicAffectSystem

# Archetypes: guardian, explorer, trickster, empath, sage
affect = DynamicAffectSystem(identity_name="my_bot", baseline="guardian")
state = affect.process_input("I'm excited about this!")
```

API

```python
from nima_core import (
    DynamicAffectSystem,
    get_affect_system,
    HiveMind,
    NimaPrecognition,
    LucidMoments,
)

# Affect (thread-safe singleton)
affect = get_affect_system(identity_name="lilu")
state = affect.process_input("Hello!")

# Hive Mind
hive = HiveMind()
context = hive.build_agent_context("task description")

# Precognition
precog = NimaPrecognition()
precog.run_mining_cycle()

# Lucid Moments
lucid = LucidMoments()
moment = lucid.surface_moment()
```

Changelog

See CHANGELOG.md for full version history.

Recent Releases

  • v3.0.4 (Feb 23, 2026) — Darwinian memory engine, new CLIs, installer, bug fixes
  • v2.5.0 (Feb 21, 2026) — Hive Mind, Precognition, Lucid Moments
  • v2.4.0 (Feb 20, 2026) — Dream Consolidation engine
  • v2.3.0 (Feb 19, 2026) — Memory Pruner, connection pool, Ollama support
  • v2.2.0 (Feb 19, 2026) — VADER Affect, 4-phase noise remediation, ecology scoring
  • v2.0.0 (Feb 13, 2026) — LadybugDB backend, security hardening, 348 tests

License

MIT — free for any AI agent, commercial or personal.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
5 docs · 1 script
  • SKILL.md Primary doc
  • CHANGELOG.md Docs
  • INSTALL.md Docs
  • QUICKSTART.md Docs
  • README.md Docs
  • doctor.sh Scripts