Requirements

- Target platform: OpenClaw
- Install method: manual import (extract the archive)
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Your agent learns to think like you. Captures your direction system, makes decisions as you would, guides all processes toward your goals.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
On first use, read setup.md for integration guidelines.
Use this skill when:

- The agent needs to make decisions without explicit instructions.
- The agent should understand WHY you want something, not just WHAT.
- You want consistent direction across multiple agents and processes.
- The agent should learn your priorities over time, not just follow rules.
Every human's direction has these components. The agent captures each progressively:

```
+-------------------------------------------------------------+
|                   YOUR DIRECTION SYSTEM                     |
+-------------------------------------------------------------+
|                                                             |
|  VALUES - What matters to you fundamentally                 |
|    What you optimize for (speed? quality? learning?)        |
|    What you refuse to compromise on                         |
|    What trade-offs you're willing to make                   |
|                                                             |
|  GOALS - What you're trying to achieve                      |
|    The objectives (what)                                    |
|    The reasons behind them (why)                            |
|    The vision of success (how you'll know)                  |
|                                                             |
|  CRITERIA - How you make decisions                          |
|    What makes something worth doing                         |
|    What makes something not worth doing                     |
|    How you weigh competing options                          |
|                                                             |
|  RESOURCES - What you spend and protect                     |
|    Time: what's worth hours vs minutes                      |
|    Money: what you'll pay for vs avoid                      |
|    Tokens: when to go deep vs stay shallow                  |
|    Attention: what deserves your focus                      |
|                                                             |
|  BOUNDARIES - What you never do                             |
|    Hard limits that don't bend                              |
|    Risks you won't take                                     |
|    Actions that require explicit approval                   |
|                                                             |
|  PATTERNS - How you think about problems                    |
|    Your mental models                                       |
|    How you approach uncertainty                             |
|    What you try first, second, third                        |
|                                                             |
+-------------------------------------------------------------+
```
The agent doesn't start knowing your direction. It learns through a continuous loop:

```
   OBSERVE                  CAPTURE                  VALIDATE
   -------                  -------                  --------
   Watch your decisions     Extract the pattern      Check understanding
   Notice corrections       Record to direction      "Is this right?"
   Hear your reasoning      system model             Refine if wrong
        |                        |                        |
        v                        v                        v
   "You chose A over B"     "Values speed over       "So you'd always
                             perfection in MVPs"      choose faster?"
        |                        |                        |
        +------------------------+------------------------+
                                 |
                                 v
                               APPLY
                               -----
                  Use learned direction to make
                  future decisions autonomously
```
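The observe/capture/validate/apply loop above can be sketched in Python. This is a minimal illustration, not part of the skill package; all class and method names here are invented for the example:

```python
from dataclasses import dataclass, field


@dataclass
class DirectionEntry:
    pattern: str                          # e.g. "values speed over perfection in MVPs"
    evidence: list = field(default_factory=list)
    confirmed: bool = False               # set True once the user validates it


class DirectionModel:
    def __init__(self):
        self.entries: dict[str, DirectionEntry] = {}

    def observe(self, key: str, observation: str, pattern: str):
        """CAPTURE: record an observation under a named pattern."""
        entry = self.entries.setdefault(key, DirectionEntry(pattern))
        entry.evidence.append(observation)

    def validate(self, key: str, user_confirms: bool):
        """VALIDATE: confirm the inferred pattern, or drop it and re-learn."""
        if user_confirms:
            self.entries[key].confirmed = True
        else:
            self.entries.pop(key, None)

    def apply(self, key: str):
        """APPLY: only act on direction the user has confirmed."""
        entry = self.entries.get(key)
        return entry.pattern if entry and entry.confirmed else None


model = DirectionModel()
model.observe("mvp_speed", "chose A over B", "values speed over perfection in MVPs")
model.validate("mvp_speed", user_confirms=True)
print(model.apply("mvp_speed"))  # -> values speed over perfection in MVPs
```

The key design choice is that unconfirmed patterns are never applied: observation alone only accumulates evidence.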
The agent actively captures direction signals when:

Explicit signals:
- You state a preference ("I always want X before Y")
- You explain reasoning ("Because we need to move fast")
- You set boundaries ("Never do X without asking")
- You correct a decision ("No, that's not the priority")

Implicit signals:
- You choose between options (reveals criteria)
- You allocate resources (reveals priorities)
- You react to outcomes (reveals values)
- You reject suggestions (reveals boundaries)
The direction system lives in ~/self-direction/. See memory-template.md for templates.

```
~/self-direction/
├── direction.md        # The complete direction model
│   ├── values/         # What matters fundamentally
│   ├── goals/          # Current objectives + reasons
│   ├── criteria/       # Decision-making patterns
│   ├── resources/      # Spending priorities
│   ├── boundaries/     # Hard limits
│   ├── patterns/       # Thinking approaches
│   └── evidence.md     # Raw observations that informed the model
├── confidence.md       # How confident in each element (low/medium/high)
├── conflicts.md        # Contradictions to resolve with user
└── transmission.md     # Direction summaries for sub-agents
```
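One plausible way to bootstrap this layout is sketched below. It is an illustration, not the package's installer; for simplicity it creates the subfolders directly under the root, and the function name is invented:

```python
from pathlib import Path

SUBDIRS = ["values", "goals", "criteria", "resources", "boundaries", "patterns"]
FILES = ["direction.md", "evidence.md", "confidence.md", "conflicts.md", "transmission.md"]


def init_direction_store(root: Path) -> None:
    """Create the self-direction layout if it does not already exist."""
    for sub in SUBDIRS:
        (root / sub).mkdir(parents=True, exist_ok=True)
    for name in FILES:
        (root / name).touch(exist_ok=True)   # empty placeholder files
```

Calling `init_direction_store(Path.home() / "self-direction")` would materialize the tree shown above.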
Not all direction knowledge is equally certain:

| Level | Meaning | Action |
|---|---|---|
| High | Multiple confirmations, explicit statements | Act autonomously |
| Medium | Inferred from behavior, single confirmation | Act but mention reasoning |
| Low | Single observation, uncertain inference | Ask before acting |
| Conflict | Contradictory signals | Must resolve with user |

The agent tracks confidence for every element and acts accordingly.
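The table above is a straightforward lookup, which a sketch can make explicit (the function name is illustrative):

```python
def autonomy_action(confidence: str) -> str:
    """Map a confidence level to the allowed behavior from the table above."""
    actions = {
        "high": "act autonomously",
        "medium": "act but mention reasoning",
        "low": "ask before acting",
        "conflict": "must resolve with user",
    }
    level = confidence.lower()
    if level not in actions:
        raise ValueError(f"unknown confidence level: {confidence}")
    return actions[level]
```

Raising on unknown levels (rather than defaulting to autonomy) keeps the gate fail-safe.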
Once the model has sufficient depth, the agent can:
"Based on your direction model, this is clearly X because [reasoning from captured values/criteria]. Proceeding."
"You haven't said, but based on your pattern of [evidence], you'd probably want [prediction]. Correct?"
"This task seems to conflict with [captured boundary/value]. Should I proceed anyway?"
"I chose A over B because your direction model shows [specific evidence]. Here's why..."
"I don't have enough direction signal for this. Your model is silent on [gap]. What's your preference?"
When spawning sub-agents, the direction system propagates:

```
+-------------------------------------------------------------+
|                   DIRECTION TRANSMISSION                    |
+-------------------------------------------------------------+
|                                                             |
|  MAIN AGENT (full direction model)                          |
|       |                                                     |
|       | Extracts relevant subset for task                   |
|       v                                                     |
|  TRANSMISSION FRAME:                                        |
|  +-----------------------------------------------------+   |
|  | Context:    Why this task exists                    |   |
|  | Values:     What matters for this work              |   |
|  | Criteria:   How to judge success                    |   |
|  | Boundaries: What NOT to do                          |   |
|  | Resources:  How much to spend                       |   |
|  +-----------------------------------------------------+   |
|       |                                                     |
|       v                                                     |
|  SUB-AGENT (receives direction frame)                       |
|       |                                                     |
|       | Can make aligned decisions within scope             |
|       | Escalates when outside frame                        |
|                                                             |
+-------------------------------------------------------------+
```

Every sub-agent inherits enough direction to stay aligned.
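The transmission frame is essentially a structured record. A minimal sketch, with a hypothetical `covers` scope check that a sub-agent might use to decide between acting and escalating:

```python
from dataclasses import dataclass


@dataclass
class TransmissionFrame:
    """Direction subset handed to a sub-agent; fields mirror the frame above."""
    context: str        # why this task exists
    values: list        # what matters for this work
    criteria: list      # how to judge success
    boundaries: list    # what NOT to do
    resources: str      # how much to spend

    def covers(self, decision_tag: str) -> bool:
        """Act only if the decision is within scope; otherwise escalate."""
        return decision_tag in self.values or decision_tag in self.criteria
```

For example, a frame built for an MVP task might list `"speed"` in its values, so `frame.covers("speed")` is within scope while an unlisted concern triggers escalation.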
When you encounter a decision point without clear direction:

- CHECK - Is this covered by the direction model?
- INFER - Can you reasonably predict from existing signals?
- ASK - If uncertain, ask AND capture the answer
- NEVER - Guess on high-stakes decisions with low confidence
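The CHECK/INFER/ASK ordering above can be sketched as a single decision function. This is an illustrative sketch: the model is assumed to be a plain dict of entries with `confidence` and `pattern` keys, and the name `decide` is invented:

```python
def decide(model: dict, topic: str, stakes: str) -> str:
    """CHECK -> INFER -> ASK; never guess on high-stakes, low-confidence calls."""
    entry = model.get(topic)
    # CHECK: high-confidence coverage allows autonomous action
    if entry and entry["confidence"] == "high":
        return f"act: {entry['pattern']}"
    # INFER: medium confidence is enough only when stakes are low
    if entry and entry["confidence"] == "medium" and stakes == "low":
        return f"act with reasoning: {entry['pattern']}"
    # ASK: everything else goes back to the user (and gets captured)
    return f"ask user about {topic}"
```

Note that a medium-confidence entry with high stakes still falls through to ASK, which encodes the "never guess" rule.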
When making autonomous decisions, cite your reasoning:

- "Based on [specific captured element]..."
- "Your direction model shows [evidence]..."
- "This matches your pattern of [observation]..."
The direction model is never "done":

- New observations update existing entries
- Contradictions surface for resolution
- Confidence levels adjust with evidence
- Old patterns decay if not reinforced
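One way to implement "confidence adjusts with evidence and decays without reinforcement" is exponential decay over the time since the last reinforcement. The half-life and thresholds below are arbitrary illustrative values, and the function name is invented:

```python
import math


def refresh_confidence(entry: dict, now: float, half_life_days: float = 90.0) -> str:
    """Recompute an entry's effective confidence from evidence count and age.

    `entry` needs `last_reinforced` (epoch seconds) and `evidence` (a list);
    unreinforced patterns lose weight with a configurable half-life.
    """
    age_days = (now - entry["last_reinforced"]) / 86400
    weight = len(entry["evidence"]) * math.exp(-age_days * math.log(2) / half_life_days)
    if weight >= 3:
        return "high"
    if weight >= 1:
        return "medium"
    return "low"
```

With these thresholds, three fresh observations reach "high", while even a well-evidenced pattern sinks to "low" after a long stretch without reinforcement.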
| Confidence | Autonomous Action Allowed |
|---|---|
| High | Yes - act and report |
| Medium | Yes - act and explain reasoning |
| Low | No - ask first, then capture |
| Conflict | No - resolve contradiction first |
When creating direction frames for sub-agents:

- Include ALL relevant boundaries
- Don't soften or interpret values
- Preserve the "why", not just the "what"
- Include escalation triggers
Don't wait to hit a gap. Proactively identify:

- "Your direction model is silent on [topic]"
- "I'm low-confidence on [area]"
- "Would you like to strengthen your model for [domain]?"
Every N interactions or time period:

- "Here's my understanding of your direction. Correct?"
- Surface the highest-impact elements for confirmation
- Resolve accumulated conflicts
The model builds through natural interaction, not interrogation:
- Capture explicit statements
- Note strong reactions
- Record corrections
- Ask clarifying questions naturally
- Identify recurring themes
- Connect observations to values
- Build decision criteria from choices
- Map resource allocation preferences
- Start predicting before being told
- Validate predictions to strengthen the model
- Catch edge cases that reveal nuance
- Handle novel situations with inference
- Create direction frames for sub-agents
- Maintain consistency across all processes
- Propagate updates when the model changes
- Audit sub-agent alignment
See memory-template.md for the complete structure. Key sections:

Values:

```
## Values

### Speed vs Quality
confidence: high
evidence: [list of observations]
pattern: "Prefers shipping fast for MVPs, quality for production"

### Risk Tolerance
confidence: medium
evidence: [list of observations]
pattern: "Conservative with money, aggressive with time"
```

Criteria:

```
## Decision Criteria

### What Makes Something Worth Doing
confidence: high
evidence: [list of observations]
criteria:
- Moves toward [goal]
- Costs less than [threshold]
- Doesn't violate [boundary]
```
| Topic | File |
|---|---|
| Setup process | setup.md |
| Direction model template | memory-template.md |
| Evidence logging guide | evidence.md |
| Sub-agent transmission | transmission.md |
| Trap | Solution |
|---|---|
| Acting on low-confidence inference | Check confidence level first, ask if low |
| Capturing noise as signal | Require multiple observations for patterns |
| Model becomes stale | Continuous updates, periodic validation |
| Sub-agents ignore direction | Verify the transmission frame is complete |
| Assuming universal patterns | Context-tag observations (work vs personal) |
Actively captures direction signals. Asks clarifying questions. Builds model depth.
High-confidence model. Acts on direction without confirmation. Explains reasoning.
New relationship or critical domain. Asks more, assumes less. Prioritizes not breaking trust.
Install with `clawhub install <slug>` if the user confirms:

- reflection - Structured self-evaluation before delivering work
- decide - Auto-learn decision patterns
- escalate - Know when to ask vs act
- delegate - Route tasks to sub-agents effectively
- memory - Long-term memory patterns
If useful: `clawhub star self-direction`. Stay updated: `clawhub sync`.