Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Persistent agent operating system for OpenClaw. Agents remember across sessions, learn from experience, and coordinate on complex projects without duplicating work.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Agents that remember. Learn. Coordinate.
Agent OS enables multi-agent project execution with persistent memory:
- Agent Memory – each agent remembers past tasks, lessons learned, and success rates
- Task Decomposition – break high-level goals into executable task sequences
- Smart Routing – assign tasks to agents based on capability fit
- Execution Tracking – live progress board showing what every agent is doing
- State Persistence – project state survives restarts (resume mid-project)
clawhub install nova/agent-os
```js
const { AgentOS } = require('agent-os');

const os = new AgentOS('my-project');

// Register agents with capabilities
os.registerAgent('research', 'Research', ['research', 'planning']);
os.registerAgent('design', 'Design', ['design', 'planning']);
os.registerAgent('dev', 'Development', ['development']);

os.initialize();

// Run a project
const result = await os.runProject('Build a feature', [
  'planning',
  'design',
  'development',
]);

console.log(result.progress); // 100
```
Persistent worker with:
- Memory – past tasks, lessons learned, success rates
- State – current task, progress, blockers
- Capabilities – what it's good at (research, design, development, etc.)
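A minimal sketch of what such a per-agent memory record could look like. The class and field names here are illustrative assumptions, not the actual `core/agent.js` schema:

```js
// Illustrative sketch of a persistent agent's memory record.
// Field names are assumptions, not the real agent.js schema.
class AgentMemory {
  constructor() {
    this.history = [];                      // past tasks with outcomes
    this.lessons = {};                      // lessons keyed by category
    this.stats = { completed: 0, failed: 0 };
  }

  recordTask(task, succeeded) {
    this.history.push({ task, succeeded, at: Date.now() });
    this.stats[succeeded ? 'completed' : 'failed'] += 1;
  }

  learnLesson(category, lesson) {
    (this.lessons[category] ??= []).push(lesson);
  }

  successRate() {
    const total = this.stats.completed + this.stats.failed;
    return total === 0 ? 0 : this.stats.completed / total;
  }
}

const memory = new AgentMemory();
memory.recordTask('summarize sources', true);
memory.recordTask('draft outline', true);
memory.recordTask('compile report', false);
memory.learnLesson('research', 'Prefer primary sources over summaries');
console.log(memory.successRate().toFixed(2)); // "0.67"
```

Serializing an object like this to JSON is what lets success rates and lessons survive across sessions.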
Decomposes goals into executable tasks:
- Breaks "Build a feature" into plan → design → develop → test
- Matches tasks to agents based on capability fit
- Tracks dependencies (task A must finish before task B)
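The decomposition and matching steps above can be sketched as follows. The task shape and the linear dependency chain are assumptions for illustration; the real `task-router.js` may use richer templates:

```js
// Sketch of goal decomposition and capability matching.
// Task fields and the agent table are illustrative assumptions.
const agents = [
  { id: 'research', capabilities: ['research', 'planning'] },
  { id: 'dev', capabilities: ['development'] },
];

// Decompose a goal into ordered tasks, each depending on the previous one.
function decompose(goal, taskTypes) {
  return taskTypes.map((type, i) => ({
    id: `task-${i + 1}`,
    goal,
    type,
    dependsOn: i === 0 ? null : `task-${i}`,
  }));
}

// Match a task to the first agent whose capabilities cover its type.
function matchAgent(task) {
  return agents.find((a) => a.capabilities.includes(task.type)) ?? null;
}

const tasks = decompose('Build a feature', ['planning', 'development']);
console.log(tasks[1].dependsOn);      // "task-1"
console.log(matchAgent(tasks[1]).id); // "dev"
```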
Runs tasks sequentially:
- Assigns tasks to agents
- Tracks progress in real-time
- Persists state so projects survive restarts
- Handles blockers and errors
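A compact sketch of that sequential loop. The real `executor.js` is presumably async and persists state as it goes; this synchronous version only shows the ordering, progress, and blocker-handling ideas:

```js
// Sketch of a sequential executor: runs tasks in order, tracks progress,
// and stops on the first blocker. Shapes are illustrative assumptions.
function executeAll(tasks, runTask) {
  const state = { done: 0, total: tasks.length, blockers: [] };
  for (const task of tasks) {
    try {
      runTask(task);                 // one task at a time, in order
      state.done += 1;
    } catch (err) {
      state.blockers.push({ task: task.id, error: err.message });
      break;                         // blocked: stop and surface the error
    }
  }
  state.progress = Math.round((state.done / state.total) * 100);
  return state;
}

const result = executeAll([{ id: 't1' }, { id: 't2' }], (task) => `${task.id} ok`);
console.log(result.progress); // 100

const blocked = executeAll([{ id: 't1' }], () => {
  throw new Error('missing API key');
});
console.log(blocked.blockers[0].error); // "missing API key"
```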
Orchestrates everything:
- Register agents
- Initialize system
- Run projects
- Get status
AgentOS (top-level orchestration)
├── Agent (persistent worker)
│   ├── Memory (lessons, capabilities, history)
│   └── State (current task, progress)
├── TaskRouter (goal decomposition)
│   ├── Templates (planning, design, development, etc.)
│   └── Matcher (task → agent assignment)
└── Executor (task execution)
    ├── Sequential runner
    ├── Progress tracking
    └── State persistence
All state is saved to the data/ directory:
- [agent-id]-memory.json – agent knowledge base
- [agent-id]-state.json – current agent status
- [project-id]-project.json – project task list + status

This means:
- Projects survive restarts
- Agents remember past work
- Resume mid-project seamlessly
agent-os/
├── core/
│   ├── agent.js            # Agent class
│   ├── task-router.js      # Task decomposition
│   ├── executor.js         # Execution scheduler
│   └── index.js            # AgentOS class
├── ui/
│   ├── dashboard.html      # Live progress UI
│   ├── dashboard.js        # Dashboard logic
│   └── style.css           # Styling
├── examples/
│   └── research-project.js # Full working example
├── data/                   # Auto-created (persistent state)
└── package.json
AgentOS:
- new AgentOS(projectId?)
- registerAgent(id, name, capabilities)
- initialize()
- runProject(goal, taskTypes)
- getStatus()
- getAgentStatus(agentId)
- toJSON()
Agent:
- startTask(task)
- updateProgress(percentage, message)
- completeTask(output)
- setBlocker(message)
- recordError(error)
- learnLesson(category, lesson)
- reset()
- getStatus()
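A hedged sketch of how that task lifecycle might be driven. The semantics are inferred from the method names only, so a minimal stand-in class is used here; check `core/agent.js` for the real behavior:

```js
// Minimal stand-in implementing the lifecycle methods above so the
// call sequence can be shown end to end. Semantics are assumptions.
class MiniAgent {
  constructor(id) {
    this.id = id;
    this.state = { task: null, progress: 0, blocker: null, errors: [] };
  }
  startTask(task) {
    this.state = { task, progress: 0, blocker: null, errors: [] };
  }
  updateProgress(percentage, message) {
    this.state.progress = percentage;
    this.state.message = message;
  }
  setBlocker(message) { this.state.blocker = message; }
  recordError(error) { this.state.errors.push(error); }
  completeTask(output) {
    this.state.progress = 100;
    this.state.output = output;
    this.state.task = null;
  }
  getStatus() { return { id: this.id, ...this.state }; }
}

const agent = new MiniAgent('dev');
agent.startTask('implement parser');
agent.updateProgress(50, 'grammar done');
agent.completeTask('parser module');
console.log(agent.getStatus().progress); // 100
```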
TaskRouter:
- decompose(goal, taskTypes)
- matchAgent(taskType)
- getTasksForAgent(agentId, tasks)
- canExecuteTask(task, allTasks)
- getNextTask(tasks)
- completeTask(taskId, tasks, output)
- getProjectStatus(tasks)
Executor:
- initializeProject(goal, taskTypes)
- execute()
- executeTask(task)
- getStatus()
See examples/research-project.js for the canonical example:

npm start

This demonstrates:
- 3 agents with different capabilities
- 12 tasks across 3 phases (planning, design, development)
- Sequential execution with progress tracking
- State persistence to disk
- A final status report

Expected output:

Registered 3 agents
Task Plan: 12 tasks
Starting execution...
[Task 1] Complete
[Task 2] Complete
...
PROJECT COMPLETE - 100% progress
- HTTP server + live dashboard
- Parallel task execution (DAG solver)
- Capability learning system (auto-score agents)
- Smart agent routing (match to best agent)
- Failure recovery + retry logic
- Cost tracking (token usage per agent)
- Human checkpoints (review high-risk outputs)
Agents should remember what they learn. Most agent frameworks are stateless. Agent OS keeps persistent memory so agents:
- Remember – no redundant context resets
- Learn – capability scores improve over time
- Coordinate – shared state prevents duplication
- Cost less – less context = cheaper API calls
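One way "capability scores improve over time" could work is an exponential moving average over task outcomes. The scoring rule below is an assumption for illustration; the actual scheme is not specified in this package:

```js
// Sketch: update a capability score from a task outcome using an
// exponential moving average. alpha and the rule are assumptions.
function updateScore(score, succeeded, alpha = 0.3) {
  return score + alpha * ((succeeded ? 1 : 0) - score);
}

let score = 0.5; // neutral prior before any history exists
for (const ok of [true, true, false, true]) {
  score = updateScore(score, ok);
}
console.log(score.toFixed(2)); // "0.67"
```

Recent outcomes weigh more than old ones, so an agent's score tracks its current reliability at a capability rather than its lifetime average.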
MIT. Built with ❤️ by Nova for OpenClaw. See README.md and ARCHITECTURE.md for complete documentation.