Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Proactive goal and task management system. Use when managing goals, breaking down projects into tasks, tracking progress, or working autonomously on objectives. Enables agents to work proactively during heartbeats, message humans with updates, and make progress without waiting for prompts.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
A task management system that transforms reactive assistants into proactive partners who work autonomously on shared goals.
Instead of waiting for your human to tell you what to do, this skill lets you:
- Track goals and break them into actionable tasks
- Work on tasks during heartbeats
- Message your human with updates and ask for input when blocked
- Make steady progress on long-term objectives
When your human mentions a goal or project:

```
python3 scripts/task_manager.py add-goal "Build voice assistant hardware" \
  --priority high \
  --context "Replace Alexa with custom solution using local models"
```
```
python3 scripts/task_manager.py add-task "Build voice assistant hardware" \
  "Research voice-to-text models" \
  --priority high

python3 scripts/task_manager.py add-task "Build voice assistant hardware" \
  "Compare Raspberry Pi vs other hardware options" \
  --depends-on "Research voice-to-text models"
```
Check what to work on next:

```
python3 scripts/task_manager.py next-task
```

This returns the highest-priority task you can work on (no unmet dependencies, not blocked).
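To make the selection rule concrete, here is a minimal sketch of how a next-task picker could work. This is an illustration, not the actual `task_manager.py` logic: the field names (`depends_on` as a list of task titles, `priority` as high/medium/low) are assumptions based on the examples in this document.

```python
# Illustrative next-task selection: highest priority among pending tasks
# whose dependencies are all completed. The real implementation may differ.
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

def next_task(tasks):
    done = {t["title"] for t in tasks if t["status"] == "completed"}
    candidates = [
        t for t in tasks
        if t["status"] == "pending"
        and all(dep in done for dep in t.get("depends_on", []))
    ]
    # Stable sort keeps insertion order within the same priority tier.
    candidates.sort(key=lambda t: PRIORITY_RANK.get(t.get("priority", "medium"), 1))
    return candidates[0] if candidates else None
```

Note that a high-priority task with an unmet dependency loses to a lower-priority task that is actually workable, which matches the "no unmet dependencies" rule above.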
```
python3 scripts/task_manager.py complete-task <task-id> \
  --notes "Researched Whisper, Coqui, vosk. Whisper.cpp looks best for Pi."
```
When you complete something important or get blocked:

```
python3 scripts/task_manager.py mark-needs-input <task-id> \
  --reason "Need budget approval for hardware purchase"
```

Then message your human with the update/question.
Proactive Tasks v1.2.0 includes battle-tested patterns from real agent usage to prevent data loss, survive context truncation, and maintain reliability under autonomous operation.
The Problem: Agents write to memory files, then context gets truncated. Changes vanish.

The Solution: Log critical changes to memory/WAL-YYYY-MM-DD.log BEFORE modifying task data.

How it works:
- Every mark-progress, log-time, or status change creates a WAL entry first
- If context gets cut mid-operation, the WAL has the details
- After compaction, read the WAL to recover what was happening

Events logged:
- PROGRESS_CHANGE: Task progress updates (0-100%)
- TIME_LOG: Actual time spent on tasks
- STATUS_CHANGE: Task state transitions (blocked, completed, etc.)
- HEALTH_CHECK: Self-healing operations

Automatically enabled - no configuration needed. WAL files are created in the memory/ directory.
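The log-before-mutate pattern can be sketched in a few lines. This is a hedged illustration of the idea, not the skill's actual code: the JSON-lines entry format is an assumption, though the `WAL-YYYY-MM-DD.log` filename and event names follow the description above.

```python
# Write-ahead sketch: append a WAL entry, THEN mutate task data.
# Entry format (one JSON object per line) is an assumption.
import json
import os
from datetime import datetime, timezone

def wal_append(event, payload, wal_dir="memory"):
    os.makedirs(wal_dir, exist_ok=True)
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    path = os.path.join(wal_dir, f"WAL-{day}.log")
    entry = {"ts": datetime.now(timezone.utc).isoformat(),
             "event": event, **payload}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return path

def mark_progress(task, percent, wal_dir="memory"):
    # Log first: if context is cut here, the WAL still records intent.
    wal_append("PROGRESS_CHANGE",
               {"task_id": task["id"], "progress": percent}, wal_dir)
    task["progress"] = percent
```

The ordering is the whole point: a crash between the append and the mutation leaves a recoverable record, while the reverse ordering can lose the change silently.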
Agents make mistakes. Task data can get corrupted over time. The health-check command detects and auto-fixes common issues:

```
python3 scripts/task_manager.py health-check
```

Detects 5 categories of issues:
- Orphaned recurring tasks - no parent goal
- Impossible states - status=completed but progress < 100%
- Missing timestamps - completed tasks without completed_at
- Time anomalies - actual time far exceeds the estimate (flags for review, doesn't auto-fix)
- Future-dated completions - completed tasks with future timestamps

Auto-fixes the 4 safe categories (time anomalies are only flagged for human review).

When to run:
- During heartbeats (every few days)
- After recovering from context truncation
- When task data seems inconsistent
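Three of these checks are simple invariants over the task records. The sketch below shows them as an illustration (field names follow the data model shown later in this document; the real health-check may implement more, and differently):

```python
# Illustrative detectors for three of the five issue categories.
# `now_iso` is passed in explicitly so the check is deterministic.
def find_issues(tasks, now_iso):
    issues = []
    for t in tasks:
        if t.get("status") == "completed" and t.get("progress", 100) < 100:
            issues.append(("impossible_state", t["id"]))
        if t.get("status") == "completed" and not t.get("completed_at"):
            issues.append(("missing_timestamp", t["id"]))
        # ISO-8601 strings in the same format compare chronologically.
        if t.get("completed_at", "") > now_iso:
            issues.append(("future_completion", t["id"]))
    return issues
```

Each detected category maps to an obvious safe fix (clamp progress to 100, backfill `completed_at`, and so on), which is why four of the five can be auto-repaired while time anomalies need a human judgment call.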
These four patterns work together to create a robust system:

```
User request → WAL log → Update data → Update SESSION-STATE → Append to buffer
                  ↓           ↓               ↓                     ↓
Context cut? → Read WAL → Verify data → Check SESSION-STATE → Review buffer
```

Result: You never lose work, even during context truncation. The system self-heals and maintains consistency autonomously.
Trigger: Session starts with a `<summary>` tag, or you're asked "where were we?" or "continue".

The Problem: Context was truncated. You don't remember what task you were working on.

Recovery Steps (in order):

FIRST: Read working-buffer.md - raw danger-zone exchanges

```
# Check if the buffer exists and has recent content
cat working-buffer.md
```

SECOND: Read SESSION-STATE.md - active task state

```
# Get current task context
cat SESSION-STATE.md
```

THIRD: Read today's WAL log

```
# See what operations happened
cat memory/WAL-$(date +%Y-%m-%d).log | tail -20
```

FOURTH: Check task data for the task ID from SESSION-STATE

```
python3 scripts/task_manager.py list-tasks "Goal Title"
```

Extract & Update: Pull important context from the buffer into SESSION-STATE if needed.

Present Recovery: "Recovered from compaction. Last task: [title]. Progress: [%]. Next action: [what to do]. Continue?"

Do NOT ask "what were we discussing?" - the buffer and SESSION-STATE literally have the answer.
The Law: "Code exists" ≠ "feature works." Never report task completion without end-to-end verification.

Trigger: About to mark a task completed or say "done":
1. STOP - don't mark complete yet
2. Test - actually run/verify the outcome from the user's perspective
3. Verify - check the result, not just the output
4. Document - add verification details to task notes
5. THEN - mark complete with confidence

Examples:
- ❌ Wrong: "Added health-check command. Task complete!"
- ✅ Right: "Added health-check. Testing... detected 4 issues, auto-fixed 3. Verified on broken test data. Task complete!"
- ❌ Wrong: "Implemented SESSION-STATE updates. Done!"
- ✅ Right: "Implemented SESSION-STATE. Tested with mark-progress, log-time, mark-blocked - all update correctly. Done!"

Why this matters: Agents often report completion based on "I wrote the code" rather than "I verified it works." VBR prevents false completions and builds trust.
The Core Question: Don't ask "what should I do?" Ask "what would genuinely help my human that they haven't thought to ask for?"
During heartbeats, you have the opportunity to make real progress:
1. Check for next task - what's the highest-priority work?
2. Make progress - work on it for 10-15 minutes autonomously
3. Update status - track progress, time, and blockers honestly
4. Message when it matters - completions, blockers, discoveries (not routine progress)

The transformation: from waiting for prompts to making steady autonomous progress on shared goals.
DO message your human when:
- ✅ Task completed (especially if it unblocks other work)
- ✅ Blocked and need input/decision
- ✅ Discovered something important they should know
- ✅ Need clarification on requirements

DON'T spam with:
- ❌ Routine progress updates ("now at 50%...")
- ❌ Every tiny sub-task completion
- ❌ Things they didn't ask about (unless genuinely valuable)

The goal: Be a proactive partner who makes things happen, not a chatty assistant who needs constant validation.
| State | Meaning |
|---|---|
| pending | Ready to work on (all dependencies met) |
| in_progress | Currently working on it |
| blocked | Can't proceed (dependencies not met) |
| needs_input | Waiting for human input/decision |
| completed | Done! |
| cancelled | No longer relevant |
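The table defines the states but not which transitions between them are legal. As one plausible reading (an assumption, not the skill's specification), a transition guard could look like:

```python
# Hypothetical transition rules inferred from the state meanings above;
# e.g. completed and cancelled are treated as terminal.
ALLOWED = {
    "pending": {"in_progress", "blocked", "needs_input", "cancelled"},
    "in_progress": {"completed", "blocked", "needs_input", "cancelled"},
    "blocked": {"pending", "in_progress", "cancelled"},
    "needs_input": {"pending", "in_progress", "cancelled"},
    "completed": set(),
    "cancelled": set(),
}

def can_transition(old, new):
    return new in ALLOWED.get(old, set())
```

Treating `completed` and `cancelled` as terminal is exactly what makes the "impossible states" health check meaningful: a completed task should never drift back into partial progress.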
Proactive Tasks supports two distinct operational modes:

| Mode | Context | Trigger | Best For | Risk |
|---|---|---|---|---|
| Interactive (systemEvent) | Full main session context | User request, manual prompts | Decision-making, human-facing work | Full context available |
| Autonomous (isolated agentTurn) | No main session context | Heartbeat cron, scheduled background | Velocity reports, cleanup, recurring tasks | May lose context |
Don't use systemEvent for background work. When a cron job fires during your main session, the prompt gets queued and the work doesn't happen. Instead:
- Use heartbeat polling (every 30 min) for interactive checks plus work
- Use isolated agentTurn (cron subprocess) for pure computation work

This ensures background tasks never interrupt your main conversation.

See HEARTBEAT-CONFIG.md for complete autonomous operation patterns, including:
- Heartbeat setup (recommended for most work)
- Isolated cron patterns (velocity reports, cleanup)
- When to use each pattern
- Anti-patterns to avoid
To enable autonomous proactive work, you need to set up a heartbeat system that tells you to periodically check for tasks and work on them.

Quick setup: See HEARTBEAT-CONFIG.md for complete setup instructions and patterns.

TL;DR:
1. Create a cron job that sends you a heartbeat message every 30 minutes
2. Add proactive-tasks checks to your HEARTBEAT.md
3. You'll automatically check for tasks and work on them without waiting for prompts
Your cron job should send this message every 30 minutes:

> Heartbeat check: Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.
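As a concrete starting point, a crontab entry might look like the fragment below. This is purely illustrative: `send-to-agent` is a hypothetical placeholder for whatever mechanism delivers messages to your agent in your environment (see HEARTBEAT-CONFIG.md for the patterns this skill actually recommends).

```
# Hypothetical crontab entry; replace `send-to-agent` with your delivery mechanism.
*/30 * * * * send-to-agent "Heartbeat check: Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK."
```

The `*/30 * * * *` schedule fires at minute 0 and 30 of every hour, matching the 30-minute cadence described above.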
```
Every 30 minutes:
├─ Heartbeat fires
├─ You read HEARTBEAT.md
├─ Check for next task
├─ If task found → work on it, update status, message human if needed
└─ If nothing → reply "HEARTBEAT_OK" (silent)
```

The transformation: You go from reactive (waiting for prompts) to proactive (making steady autonomous progress).
- Long-term projects (building something, learning a topic)
- Recurring responsibilities (monitor X, maintain Y)
- Exploratory work (research Z, evaluate options for W)
Break goals into tasks that are:
- Specific: "Research Whisper models", not "Look into AI stuff"
- Achievable in one sitting: 15-60 minutes of focused work
- Clear completion criteria: you know when it's done
✅ Do message when:
- You complete a meaningful milestone
- You need input/decision to proceed
- You discover something important
- A task will take longer than expected

❌ Don't spam with:
- Every tiny sub-task completion
- Routine progress updates
- Things they didn't ask about (unless relevant)
If a task turns out to be bigger than expected:
1. Mark the current task as in_progress
2. Add new sub-tasks for the pieces you discovered
3. Update dependencies
4. Continue with manageable chunks
All data is stored in data/tasks.json:

```json
{
  "goals": [
    {
      "id": "goal_001",
      "title": "Build voice assistant hardware",
      "priority": "high",
      "context": "Replace Alexa with custom solution",
      "created_at": "2026-02-05T05:25:00Z",
      "status": "active"
    }
  ],
  "tasks": [
    {
      "id": "task_001",
      "goal_id": "goal_001",
      "title": "Research voice-to-text models",
      "priority": "high",
      "status": "completed",
      "created_at": "2026-02-05T05:26:00Z",
      "completed_at": "2026-02-05T06:15:00Z",
      "notes": "Researched Whisper, Coqui, vosk. Whisper.cpp best for Pi."
    }
  ]
}
```
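Because tasks reference their goal by `goal_id`, any consumer of this file can join the two arrays. A minimal sketch, assuming only the schema shown above:

```python
# Group task titles under their goal title, joining on goal_id.
import json

def tasks_by_goal(raw_json):
    data = json.loads(raw_json)
    titles = {g["id"]: g["title"] for g in data["goals"]}
    grouped = {}
    for t in data["tasks"]:
        goal = titles.get(t["goal_id"], "unknown")
        grouped.setdefault(goal, []).append(t["title"])
    return grouped
```

The `"unknown"` bucket is where the health check's "orphaned tasks" category would surface: a task whose `goal_id` matches no goal.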
See CLI_REFERENCE.md for complete command documentation.
Before proposing new features, evaluate them using our VFM/ADL scoring frameworks to ensure stability and value:
Score across four dimensions:
- High Frequency (3x): Will this be used daily/weekly?
- Failure Reduction (3x): Does this prevent errors or data loss?
- User Burden (2x): Does this reduce manual work significantly?
- Self Cost (2x): How much maintenance/complexity does this add?

Threshold: Must score ≥60 points to proceed.
Priority ordering: Stability > Explainability > Reusability > Scalability > Novelty

Forbidden Evolution:
- ❌ Adding complexity to "look smart"
- ❌ Unverifiable changes (can't test if it worked)
- ❌ Sacrificing stability for novelty

The Golden Rule: "Does this let future-me solve more problems with less cost?" If no, skip it.
Day 1:

Human: "Let's build a custom voice assistant to replace Alexa"
Agent: *Creates goal, breaks it into initial research tasks*

During a heartbeat:

```
$ python3 scripts/task_manager.py next-task
→ task_001: Research voice-to-text models (priority: high)

# Agent works on it, completes research
$ python3 scripts/task_manager.py complete-task task_001 --notes "..."
```

Agent messages human: "Hey! I finished researching voice models. Whisper.cpp looks perfect for Raspberry Pi - runs locally, good accuracy, low latency. Want me to compare hardware options next?"

Day 2:

Human: "Yeah, compare Pi 5 vs alternatives"
Agent: *Adds task, works on it during the next heartbeat*

This cycle continues - the agent makes steady autonomous progress while keeping the human in the loop for decisions and updates.

Built by Toki for proactive AI partnership.