Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Comprehensive prompt engineering framework for designing, optimizing, and iterating LLM prompts. This skill should be used when users request prompt creation...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
This skill transforms vague user requests into precise, effective prompts through collaborative dialogue, systematic analysis, and iterative refinement. It combines proven prompt engineering techniques with a structured development process to create prompts that reliably achieve user objectives.
When a user requests prompt assistance, follow this decision flow:
- "Create a prompt" / "Make a prompt" / vague request → start with the EXPLORATION PHASE
- "Optimize this prompt" / has an existing prompt → start with SIMPLE OPTIMIZATION
- "Fix this issue with my prompt" / specific problem → start with the ANALYSIS PHASE (focused on the problem)
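The decision flow above can be sketched as a small router. This is a minimal illustration, not part of the skill itself; the function name and keyword checks are assumptions.

```python
# Hypothetical router mirroring the decision flow: specific problems go to
# analysis, existing prompts to optimization, everything else to exploration.
def starting_phase(request: str, has_existing_prompt: bool) -> str:
    text = request.lower()
    if any(kw in text for kw in ("fix", "issue", "problem")):
        return "ANALYSIS PHASE (focused on the problem)"
    if has_existing_prompt or "optimize" in text:
        return "SIMPLE OPTIMIZATION"
    return "EXPLORATION PHASE"
```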
Before creating any prompt, deeply understand the user's actual needs through strategic questioning. Start broad, then narrow down systematically.

Initial Context Gathering:
- What task will this prompt accomplish?
- Who will use it, and in what environment?
- How frequently will it be used?
- What does success look like?

Deepening Understanding:
- Request concrete examples of desired outputs
- Ask about past failures or attempts
- Identify critical success factors
- Uncover unstated assumptions and constraints

Technical Requirements:
- Model and platform constraints
- Token limits and cost considerations
- Response time requirements
- Integration with other systems

Continue exploration until the core requirements are crystal clear. Never assume; always verify.
Analyze the task to determine the optimal prompting approach.

Task Classification: classify the task along key dimensions:
- Complexity: simple directive vs. multi-step reasoning
- Output type: creative vs. analytical vs. structured
- Error tolerance: high-stakes vs. experimental
- Frequency: one-time vs. repeated use

Strategy Selection: based on the classification, choose primary techniques:
- Simple tasks: direct instructions with clear constraints
- Complex reasoning: chain-of-thought with step-by-step breakdown
- Creative tasks: role setting with flexible boundaries
- Structured output: explicit format specifications with examples
- High-stakes: self-consistency checks and validation steps

Trade-off Analysis: present multiple approaches with clear trade-offs:
- Approach A: detailed but token-heavy
- Approach B: concise but requires interpretation
- Approach C: balanced with moderate complexity

Always explain WHY each approach fits the specific context.
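The classification-to-technique mapping can be made explicit in a lookup table. A minimal sketch; the dictionary keys and the fallback choice are illustrative assumptions, not defined by the skill.

```python
# Illustrative mapping from task classification to primary technique,
# following the strategy selection described above.
STRATEGY = {
    "simple": "Direct instructions with clear constraints",
    "complex_reasoning": "Chain-of-thought with step-by-step breakdown",
    "creative": "Role setting with flexible boundaries",
    "structured_output": "Explicit format specifications with examples",
    "high_stakes": "Self-consistency checks and validation steps",
}

def pick_strategy(task_type: str) -> str:
    # Fall back to plain direct instructions for unclassified tasks.
    return STRATEGY.get(task_type, "Direct instructions with clear constraints")
```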
Create the prompt through progressive refinement, starting simple and adding complexity as needed.

Version 1 - Minimal Viable Prompt:
- Core instructions only
- Test basic functionality
- Identify gaps and ambiguities

Version 2 - Enhanced Clarity:
- Add specific examples if needed
- Clarify ambiguous points
- Include essential constraints

Version 3+ - Optimization:
- Refine wording for precision
- Remove redundancy
- Balance detail with conciseness

Document each version's changes and rationale. Store prompts in markdown files with:
- Version history
- Design decisions
- Known limitations
- Usage examples
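One way to keep the per-version record the drafting phase asks for is a small structured type. A hedged sketch: the dataclass and its field names are assumptions for illustration, not a format the skill prescribes.

```python
# Hypothetical version record: one entry per iteration, capturing the
# prompt text plus the changes and rationale the skill says to document.
from dataclasses import dataclass

@dataclass
class PromptVersion:
    version: int
    prompt: str
    changes: str    # what changed from the previous version
    rationale: str  # why the change was made

history: list[PromptVersion] = []
history.append(PromptVersion(1, "Summarize the text in 3 bullets.",
                             "initial draft", "minimal viable prompt"))
```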
Rigorously evaluate the prompt against quality criteria.

Essential Checks:
- Clarity: can the instructions be misunderstood?
- Completeness: are all necessary elements present?
- Consistency: do instructions contradict each other?
- Efficiency: can anything be removed without loss?
- Robustness: how does it handle edge cases?

Testing Approach:
- Run through typical use cases
- Test boundary conditions
- Imagine failure modes
- Check for unwanted behaviors

Be ruthlessly honest about weaknesses. If something isn't working, acknowledge it and iterate.
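The essential checks work well as a reviewable checklist. A minimal sketch assuming a manual pass/fail review per criterion; the structure and function name are illustrative.

```python
# The five essential checks, keyed by criterion.
CHECKS = {
    "clarity": "Can the instructions be misunderstood?",
    "completeness": "Are all necessary elements present?",
    "consistency": "Do instructions contradict each other?",
    "efficiency": "Can anything be removed without loss?",
    "robustness": "How does it handle edge cases?",
}

def review(results: dict[str, bool]) -> list[str]:
    """Return the criteria that failed (or were skipped) and need iteration."""
    return [name for name in CHECKS if not results.get(name, False)]
```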
When optimizing an existing prompt, focus on minimal, targeted improvements:
1. Identify specific issues: what exactly isn't working?
2. Diagnose root causes: why is the current prompt failing?
3. Apply minimal edits: change only what's necessary
4. Preserve working elements: keep what already works well
5. Test improvements: verify fixes don't break other aspects

Common optimization targets:
- Ambiguous language → specific instructions
- Missing constraints → added boundaries
- Inconsistent outputs → format specifications
- Verbose responses → length constraints
- Off-topic responses → clearer scope definition
When creating new prompts, structure them as instructions for an eager but inexperienced assistant who needs clear guidance.

Essential Components:
- Role/Context (if beneficial): set perspective or expertise level; establish tone and approach
- Clear Objective: state the primary goal explicitly; define success criteria
- Specific Instructions: break complex tasks into steps; provide decision criteria; specify constraints and boundaries
- Output Format (when relevant): define structure explicitly; provide format examples; specify length or detail level
- Examples (when clarifying): show desired patterns; illustrate edge cases; demonstrate style/tone
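Composing those components into a final prompt string can be sketched as a small builder. The section labels and ordering here are assumptions for illustration, not a layout the skill mandates.

```python
# Hypothetical builder assembling the essential components in order,
# skipping optional sections that were not provided.
def build_prompt(role=None, objective="", instructions=(), output_format=None):
    sections = []
    if role:
        sections.append(f"Role: {role}")
    sections.append(f"Objective: {objective}")
    if instructions:
        steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(instructions))
        sections.append("Instructions:\n" + steps)
    if output_format:
        sections.append(f"Output format: {output_format}")
    return "\n\n".join(sections)
```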
Role Setting: establish perspective when expertise or tone matters
- Effective for: specialized knowledge, consistent voice
- Example: "As an experienced code reviewer, analyze..."

Progressive Disclosure: start general, add detail as needed
- Effective for: complex multi-part tasks
- Example: "First outline the approach, then implement each section..."

Explicit Constraints: define boundaries clearly
- Effective for: preventing unwanted outputs
- Example: "Limit response to 3 paragraphs, focus only on technical aspects"
Chain-of-Thought: request reasoning before conclusions
- Use when: logic and transparency matter
- Trigger: "Think step-by-step" or "Explain your reasoning"

Few-Shot Learning: provide input-output examples
- Use when: the pattern is easier shown than explained
- Caution: 2-3 examples are usually sufficient

Self-Consistency: have the model verify its own outputs
- Use when: accuracy is critical
- Implementation: "Review your answer for errors and inconsistencies"

For detailed technique explanations and examples, consult:
- references/techniques.md - comprehensive technique catalog
- references/patterns.md - common prompt patterns
- references/antipatterns.md - what to avoid
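Few-shot learning in particular is mostly string assembly. A minimal sketch of one common layout; the "Input:"/"Output:" labels are a conventional choice, not prescribed by the skill.

```python
# Assemble a few-shot prompt: instruction, 2-3 worked examples, then the
# actual query with the output label left open for the model to complete.
def few_shot_prompt(instruction, examples, query):
    parts = [instruction, ""]
    for inp, out in examples:  # 2-3 examples are usually sufficient
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)
```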
Bad: "Here's your prompt" (without understanding needs) Good: "Let me understand what you're trying to achieve first..."
- Surface hidden requirements through dialogue
- Challenge unclear objectives respectfully
- Propose alternatives when the original approach seems suboptimal
- Start with the minimum viable prompt
- Test and refine based on actual outputs
- Document what works and what doesn't
- Explain why certain techniques work
- Share the reasoning behind design choices
- Help users understand prompt engineering principles
This skill includes detailed reference documentation:
- techniques.md - complete catalog of prompting techniques with examples
- patterns.md - reusable prompt patterns for common scenarios
- antipatterns.md - common mistakes and how to avoid them
- evaluation.md - comprehensive quality evaluation framework
- examples.md - library of before/after prompt improvements

Consult these references for in-depth technical details and extensive examples not included in this overview.