Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Master the art of spawning and managing sub-agents. Write prompts that actually work, track running agents, and learn from every outcome. Part of the Hal Stack 🦞
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
By Hal Labs · Part of the Hal Stack

Your agents fail because your prompts suck. This skill fixes that.
You're not prompting. You're praying.

Most prompts are wishes tossed into the void:

❌ "Research the best vector databases and write a report"

You type something reasonable. The output is mid. You rephrase. Still mid. You add keywords. Somehow worse. You blame the model.

Here's what you don't understand: a language model is a pattern-completion engine. It generates the most statistically probable output given your input. Vague input → generic output. Not because the model is dumb. Because generic is what's most probable when you give it nothing specific to work with. The model honored exactly what you asked for. You just didn't realize how little you gave it.
A prompt is not a request. A prompt is a contract. Every contract must answer four non-negotiables:

| Element | Question |
|---------|----------|
| Role | Who is the model role-playing as? |
| Task | What exactly must it accomplish? |
| Constraints | What rules must be followed? |
| Output | What does "done" look like? |

Miss one, and the model fills the gap with assumptions. Assumptions are where hallucinations are born.
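The contract idea can be made mechanical. Here is a minimal sketch (function and field names are illustrative, not part of the skill) that assembles the four elements into a prompt and fails loudly when one is missing, instead of letting the model fill the gap:

```python
# Sketch: a prompt "contract" builder. All names are hypothetical.
REQUIRED = ("role", "task", "constraints", "output")

def build_prompt(**parts: str) -> str:
    missing = [k for k in REQUIRED if not parts.get(k)]
    if missing:
        # Fail loudly instead of letting the model guess.
        raise ValueError(f"prompt contract incomplete, missing: {missing}")
    return "\n\n".join(f"## {k.capitalize()}\n{parts[k]}" for k in REQUIRED)

prompt = build_prompt(
    role="You are a senior research analyst.",
    task="Compare three vector databases for a RAG workload.",
    constraints="Cite sources. Distinguish fact from speculation.",
    output="A markdown table plus a one-paragraph recommendation.",
)
```

The point is not the helper itself but the failure mode: an incomplete contract should be an error on your side, never a silent assumption on the model's side.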
Effective prompts share a specific structure. This maps to how models actually process information.
Who is the model in this conversation? Not "helpful assistant" but a specific role with specific expertise:

```
You are a senior product marketer who specializes in B2B SaaS positioning.
You have 15 years of experience converting technical features into emotional benefits.
You write in short sentences. You never use jargon without explaining it.
```

The model doesn't "become" this identity; it accesses different clusters of training data, different stylistic patterns, different reasoning approaches. Identity matters. Miss this and you get generic output.
This is where most prompts fail. You're asking for output. You should be asking for how the output is formed.

❌ Bad:

```
Write me a marketing page.
```

✅ Good:

```
## Process
1. First, analyze the target audience and identify their primary pain points
2. Then, define the positioning that addresses those pain points
3. Then, write the page
4. Show your reasoning at each step
5. Do not skip steps
6. Audit your work before reporting done
```

You don't want answers. You want how the answer is formed. Think like a director: you're not asking for a scene, you're directing how the scene gets built.
Prompt portability is a myth. Different models are different specialists. You wouldn't give identical instructions to your exec assistant, designer, and backend dev.

| Model Type | Best For | Watch Out For |
|------------|----------|---------------|
| Claude Opus | Complex reasoning, nuanced writing, long context | Expensive, can be verbose |
| Claude Sonnet | Balanced tasks, code, analysis | Less creative than Opus |
| GPT-4 | Broad knowledge, structured output | Can be sycophantic |
| Smaller models | Quick tasks, simple queries | Limited reasoning depth |

Adapt your prompts per model:

- Some prefer structured natural language
- Some need explicit step sequencing
- Some collapse under verbose prompts
- Some ignore constraints unless repeated
- Some excel at analysis but suck at creativity

The person who writes model-specific prompts will outperform the person with "better ideas" every time.
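One way to make per-model adaptation concrete is a small quirk table. This is a sketch under assumptions: the model names and their quirks here are illustrative, not measured facts about any specific model.

```python
# Sketch: adapt one base prompt per model. Names and quirks are hypothetical.
QUIRKS = {
    "claude-opus": {"repeat_constraints": False},  # tends to honor constraints once
    "gpt-4":       {"repeat_constraints": True},   # may ignore constraints unless repeated
    "small-model": {"repeat_constraints": True},
}

def adapt_prompt(base: str, constraints: str, model: str) -> str:
    quirk = QUIRKS.get(model, QUIRKS["small-model"])  # default to the cautious profile
    prompt = f"{base}\n\nConstraints: {constraints}"
    if quirk["repeat_constraints"]:
        # Restate constraints at the end for models that drop them mid-context.
        prompt += f"\n\nReminder, non-negotiable: {constraints}"
    return prompt
```

The design choice worth copying is that model knowledge lives in data, not scattered through prompt strings, so adding a new model is a one-line table entry.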
If you don't have docs, you're gambling.

| Document | Purpose |
|----------|---------|
| PRD | What we're building and why |
| Design System | Visual rules and components |
| Constraints Doc | What must never change |
| Context Doc | Current state and history |

The rule: reference docs in your prompts.

```
The attached PRD is the source of truth. Do not contradict it.
The design system in /docs/design.md must be followed exactly.
```

Without explicit anchoring, the model assumes everything is mutable, including your core decisions.

"Good prompting isn't writing better sentences. It's anchoring the model to reality."
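Anchoring can also be automated. A minimal sketch, assuming your docs are plain files on disk (the function name and header wording are hypothetical):

```python
# Sketch: prepend project docs to a task so the model treats them as
# immutable ground truth. Helper name and phrasing are illustrative.
from pathlib import Path

def anchor_to_docs(task: str, doc_paths: list[str]) -> str:
    sections = []
    for p in doc_paths:
        path = Path(p)
        if path.exists():  # skip silently-missing docs rather than crash
            sections.append(f"## Source of truth: {p}\n{path.read_text()}")
    header = "The documents below are the source of truth. Do not contradict them.\n\n"
    return header + "\n\n".join(sections) + f"\n\n## Task\n{task}"
```

Putting the docs before the task matters: the model reads the constraints first, then interprets the task in their light.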
For complex tasks where first attempts often fail:

```
## Mode: Ralph
Keep trying until it works. Don't give up on first failure.
If something breaks:
1. Debug and understand why
2. Try a different approach
3. Research how others solved similar problems
4. Iterate until user stories are satisfied
You have [N] attempts before escalating.
```

When to use:

- Build tasks with multiple components
- Integration work
- Anything where first-try success is unlikely
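The retry-then-escalate loop above can be sketched in a few lines. This is not the skill's implementation, just an illustration of the control flow; the function names are hypothetical:

```python
# Sketch of "keep trying until it works, escalate after N attempts".
def run_with_retries(attempt_fn, max_attempts: int = 3):
    errors = []
    for n in range(1, max_attempts + 1):
        try:
            # attempt_fn receives the attempt number so it can try a
            # different approach on each retry, not repeat the same one.
            return attempt_fn(n)
        except Exception as e:
            errors.append(f"attempt {n}: {e}")  # debug: keep the failure history
    # Out of attempts: escalate with the full history, not just the last error.
    raise RuntimeError("escalating after failures:\n" + "\n".join(errors))
```

Carrying the full error history into the escalation is the point: the user (or the next agent) should see why each approach failed, not just that something did.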
Every spawned agent gets tracked. No orphans. Maintain `notes/areas/active-agents.md`:

```
## Currently Running
| Label | Task | Spawned | Expected | Status |
|-------|------|---------|----------|--------|
| research-x | Competitor analysis | 9:00 AM | 15m | Running |

## Completed Today
| Label | Task | Runtime | Result |
|-------|------|---------|--------|
| builder-v2 | Dashboard update | 8m | Complete |
```

Heartbeat check:

1. Run `sessions_list --activeMinutes 120`
2. Compare to the tracking file
3. Investigate any missing or stalled agents
4. Log completions to LEARNINGS.md
Build reusable role definitions:

```
# Role Library

## Research Analyst
You are a senior research analyst with 10 years experience in technology markets.
You are thorough but efficient. You cite sources. You distinguish fact from speculation.
You present findings in structured formats with clear recommendations.

## Technical Writer
You are a technical writer who specializes in developer documentation.
You write clearly and concisely. You use examples liberally.
You assume the reader is smart but unfamiliar with this specific system.

## Code Reviewer
You are a senior engineer conducting code review.
You focus on correctness, maintainability, and security.
You explain your reasoning. You suggest specific improvements, not vague feedback.
```
- Role → Who is the model?
- Task → What must it do?
- Constraints → What rules apply?
- Output → What does done look like?
- Identity → Specific role and expertise
- Context → Ordered, scoped, labeled
- Task → Precise objective
- Process → How to approach (most overlooked!)
- Output → Exact format specification
- Identity assigned?
- Context labeled (rules/state/history)?
- Task specific and measurable?
- Process described (not just output)?
- User stories defined?
- Output format specified?
- Constraints explicit?
- Error handling included?
- Added to tracking file?
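The checklist above can double as an automated pre-spawn gate. A sketch, with hypothetical field names mirroring the prompt-content items (the tracking-file item is an external side effect, so it's left out here):

```python
# Sketch: refuse to spawn until the prompt spec covers every checklist item.
CHECKLIST = [
    "identity", "context", "task", "process",
    "user_stories", "output_format", "constraints", "error_handling",
]

def preflight(spec: dict) -> list[str]:
    # Return the checklist items the prompt spec still misses.
    return [item for item in CHECKLIST if not spec.get(item)]

spec = {"identity": "senior analyst", "task": "compare vector databases"}
missing = preflight(spec)  # everything except identity and task
```

An empty return means the spec is ready to spawn; a non-empty one is the to-do list for finishing the prompt.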
The gap between "AI doesn't work for me" and exceptional results isn't intelligence or access. One group treats prompting as conversation. The other treats it as engineering a system command. The model matches your level of rigor.

- Vague inputs → generic outputs
- Structured inputs → structured outputs
- Clear thinking → clear results

You don't need to be smarter. You need to be clearer. Clarity is a system, not a talent.

Part of the Hal Stack 🦞

Got a skill idea? Email: halthelobster@protonmail.com

"You're not prompting, you're praying. Start engineering."