Requirements
- Target platform
- OpenClaw
- Install method
- Manual import
- Extraction
- Extract archive
- Prerequisites
- OpenClaw
- Primary doc
- SKILL.md
Write, test, and iterate prompts for AI models with voice preservation, model-specific adaptation, and systematic failure analysis.
Write, test, and iterate prompts for AI models with voice preservation, model-specific adaptation, and systematic failure analysis.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Prompt patterns and user preferences live in ~/prompting/. ~/prompting/ βββ memory.md # HOT: user voice, model preferences, learned corrections βββ patterns/ # Reusable prompt templates by task type βββ history.md # Past prompts with outcomes See memory-template.md for initial setup.
TopicFileCommon failure modesfailures.mdModel-specific quirksmodels.mdIteration workflowiteration.mdAdvanced techniquestechniques.md
Before writing any prompt, ask: What model? (GPT-4, Claude, Haiku, Gemini) What's the failure mode you're seeing? (if iterating) Token budget? (cost-sensitive vs. quality-first) Never default to verbose. Simpler often wins.
When improving a failing prompt: Change ONE thing at a time Note what's currently working Surgical fixes > rewrites
See models.md β key differences: Claude: explicit constraints, less scaffolding needed GPT-4: benefits from step-by-step, tolerates verbose Haiku/fast models: brevity critical, skip examples when possible Prompt optimized for one model will underperform on others.
When user provides writing samples: Extract specific patterns (sentence length, punctuation, vocabulary) Apply consistently throughout session Check output against samples before delivering
When generating alternatives, vary: Structure (not just synonyms) Emotional angle Opening hook Call-to-action style "Top 5 reasons" β "The hidden truth about" β "What nobody tells you about" = real variation.
When a prompt fails, classify the failure type: Hallucination β add grounding, sources, constraints Format break β strengthen output spec, add examples Instruction drift β move critical constraints earlier Refusal β rephrase intent, remove ambiguity Different failures need different fixes. See failures.md.
Default to removing words, not adding. Test: "Does removing this line change the output?" If no, remove. Token costs matter. A prompt that works with 50 tokens beats one that needs 500.
When asked to test a prompt: Generate edge cases (empty input, very long, special chars) Include adversarial inputs Test boundary conditions Don't just test happy path.
For content prompts, know platform constraints: Twitter: 280 chars, no markdown LinkedIn: longer ok, hashtags matter Instagram: emoji-friendly, visual hooks Prompt should enforce format, not hope for it.
Store in ~/prompting/memory.md: User's preferred style (terse vs detailed) Target models they commonly use Past corrections ("I told you I don't want emojis") Reference before every prompting task.
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.