Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Continuously captures and summarizes ambient conversations to build a local knowledge graph for context-aware assistance without explicit commands.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install prompt:
"I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."

Upgrade prompt:
"I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
Ambient intelligence mode: continuous context awareness without explicit commands.
Runs in the background, building a knowledge graph of conversations, entities, and relationships over time. Your agent passively learns context from ambient speech (who you talk to, what projects are active, what decisions were made) without needing explicit commands.
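The knowledge graph described above can be pictured as a store of (subject, relation, object) triples. A minimal in-memory sketch follows; the class, entity names, and relation labels are illustrative assumptions, not Percept's actual schema:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Tiny triple store: (subject, relation, object). Illustrative only."""

    def __init__(self):
        self.by_subject = defaultdict(set)

    def add(self, subject, relation, obj):
        self.by_subject[subject].add((relation, obj))

    def about(self, entity):
        # Everything known about an entity, sorted for stable output;
        # this is the shape of an answer to "what do you know about X?"
        return sorted(self.by_subject[entity])

g = KnowledgeGraph()
g.add("Alice", "works_on", "Project Atlas")
g.add("Acme Corp", "client_of", "Project Atlas")
g.add("Alice", "mentioned_with", "Q3 deadline")
print(g.about("Alice"))
```

In a real deployment the triples would be extracted from summarized conversations rather than added by hand.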
Use this skill when:
- The user wants always-on context awareness
- The agent needs background knowledge from daily conversations
- The user asks "what do you know about [person/project]?" based on overheard context
Prerequisites:
- percept-listen skill installed and running
- percept-summarize skill installed (for entity extraction)
- All conversations are continuously captured and summarized
- Entities (people, companies, projects, topics) are extracted automatically
- Relationships are mapped between entities (works_on, client_of, mentioned_with)
- Context packets are assembled on demand for any agent action
- Full-text search (FTS5) + vector search (LanceDB) for retrieval
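The full-text half of that retrieval stack can be illustrated with SQLite's built-in FTS5 extension. This is a hedged sketch: the table name, columns, and sample rows are assumptions for illustration, not Percept's real schema.

```python
import sqlite3

# Hypothetical utterance table indexed with FTS5 (requires an SQLite
# build with FTS5 enabled, which standard CPython ships with).
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE utterances USING fts5(speaker, text)")
db.executemany(
    "INSERT INTO utterances VALUES (?, ?)",
    [
        ("Alice", "The project deadline moved to Friday"),
        ("Bob", "Lunch order is sorted"),
        ("Alice", "Ship the deadline update to the client"),
    ],
)
# MATCH runs a ranked full-text query over the indexed columns.
rows = db.execute(
    "SELECT speaker, text FROM utterances WHERE utterances MATCH ? ORDER BY rank",
    ("deadline",),
).fetchall()
print(rows)
```

Hybrid retrieval would merge these ranked rows with nearest-neighbor hits from the vector index.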
When your agent needs context, Percept assembles a Context Packet:

```json
{
  "recent_conversations": [...],
  "resolved_entities": [...],
  "relationships": [...],
  "relevant_history": [...]
}
```

This gives the agent rich situational awareness without loading entire conversation histories.
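A minimal sketch of consuming such a packet, flattening it into a short text block an agent could prepend to a prompt. The field names follow the packet shape above; the sample contents and the helper function are hypothetical:

```python
import json

# Example packet shaped like the structure above (contents are made up).
packet = json.loads("""
{
  "recent_conversations": ["Standup: deadline moved to Friday"],
  "resolved_entities": ["Alice", "Project Atlas"],
  "relationships": [["Alice", "works_on", "Project Atlas"]],
  "relevant_history": ["Atlas kickoff notes"]
}
""")

def packet_to_context(p):
    """Flatten a context packet into prompt-ready text (illustrative)."""
    lines = ["Recent:"] + p["recent_conversations"]
    lines.append("Entities: " + ", ".join(p["resolved_entities"]))
    lines += [f"{s} {r} {o}" for s, r, o in p["relationships"]]
    lines += ["History:"] + p["relevant_history"]
    return "\n".join(lines)

print(packet_to_context(packet))
```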
Semantic search over utterances uses NVIDIA NIM embeddings (primary) with all-MiniLM-L6-v2 as an offline fallback. Vectors are stored in LanceDB (local, zero-infra).

```shell
# Search via dashboard (port 8960) or API.
# Quote the URL so the shell does not treat "&" as a background operator.
curl "localhost:8960/api/search?q=project+deadline&mode=hybrid"
```
- All data stored locally in SQLite + LanceDB
- TTL auto-purge (configurable retention periods)
- No audio stored, only transcripts
- Dashboard > Settings > Privacy for granular controls
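TTL auto-purge amounts to a retention query over timestamped rows. A minimal sketch follows; the table name, column names, and 30-day window are assumptions for illustration, not Percept's actual retention logic:

```python
import sqlite3
import time

RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical 30-day TTL

# Hypothetical transcripts table with a creation timestamp per row.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transcripts (text TEXT, created_at REAL)")
now = time.time()
db.executemany(
    "INSERT INTO transcripts VALUES (?, ?)",
    [
        ("old note", now - 40 * 24 * 3600),   # past the retention window
        ("fresh note", now - 3600),           # one hour old
    ],
)
# Purge everything older than the retention window.
db.execute("DELETE FROM transcripts WHERE created_at < ?", (now - RETENTION_SECONDS,))
remaining = [text for (text,) in db.execute("SELECT text FROM transcripts")]
print(remaining)
```

A background job would run a query like this on a schedule, with the window driven by the configured retention period.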
Monitor ambient intelligence at http://localhost:8960:
- Live conversation feed
- Entity graph visualization
- Search across all conversations
- Analytics and usage stats
GitHub: https://github.com/GetPercept/percept
Category: agent frameworks, memory systems, reasoning layers, and model-native orchestration.