Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Transform messy prompts into well-structured, effective prompts — single or multi-agent. Use when: "reprompt", "reprompt this", "clean up this prompt", "stru...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Your prompt sucks. Let's fix that. Single prompts or full agent teams — one skill, two modes.
| Mode | Trigger | What happens |
|------|---------|--------------|
| Single | "reprompt this", "clean up this prompt" | Interview → structured prompt → score |
| Repromptception | "reprompter teams", "repromptception", "run with quality", "smart run", "smart agents" | Plan team → reprompt each agent → tmux Agent Teams → evaluate → retry |

Auto-detection: if the task mentions 2+ systems, "audit", or "parallel" → ask: "This looks like a multi-agent task. Want to use Repromptception mode?"

Definition — 2+ systems means at least two distinct technical domains that can be worked independently. Examples: frontend + backend, API + database, mobile app + backend, infrastructure + application code, security audit + cost audit.
Do not use when:
- User wants a simple direct answer (no prompt generation needed)
- User wants casual chat/conversation
- Task is immediate execution-only with no reprompting step
- Scope does not involve prompt design, structure, or orchestration

Clarification: RePrompter does support code-related tasks (feature, bugfix, API, refactor) by generating better prompts. It does not directly apply code changes in Single mode. Direct code execution belongs to coding-agent unless Repromptception execution mode is explicitly requested.
1. Receive raw input
2. Input guard — if input is empty, a single word with no verb, or clearly not a task → ask the user to describe what they want to accomplish
   - Reject examples: "hi", "thanks", "lol", "what's up", "good morning", random emoji-only input
   - Accept examples: "fix login bug", "write API tests", "improve this prompt"
3. Quick Mode gate — under 20 words, single action, no complexity indicators → generate immediately
4. Smart Interview — use AskUserQuestion with clickable options (2-5 questions max)
5. Generate + Score — apply template, show before/after quality metrics
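A minimal sketch of the input guard, assuming a plain keyword heuristic. The reject phrases and verb list below are illustrative, not the skill's canonical lists:

```python
import re

# Illustrative lists; the skill's actual guard may use different phrases and verbs.
REJECT_PHRASES = {"hi", "thanks", "lol", "what's up", "good morning"}
ACTION_VERBS = {"fix", "write", "improve", "add", "build", "create", "update", "refactor", "test", "run"}

def passes_input_guard(raw: str) -> bool:
    """True if the input looks like a task; False means ask what they want to accomplish."""
    text = raw.strip().lower()
    words = re.findall(r"[a-z']+", text)
    if not words or text in REJECT_PHRASES:
        return False                      # empty, emoji-only, or small talk
    if len(words) == 1 and words[0] not in ACTION_VERBS:
        return False                      # single word with no verb
    return True

# passes_input_guard("fix login bug")  -> True
# passes_input_guard("hi")             -> False
```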
After the interview completes, IMMEDIATELY:
1. Select template based on task type
2. Generate the full polished prompt
3. Show quality score (before/after table)
4. Ask if user wants to execute or copy

❌ WRONG: Ask interview questions → stop
✅ RIGHT: Ask interview questions → generate prompt → show score → offer to execute
Ask via AskUserQuestion. Max 5 questions total.

Standard questions (priority order — drop lower ones if task-specific questions are needed):
- Task type: Build Feature / Fix Bug / Refactor / Write Tests / API Work / UI / Security / Docs / Content / Research / Multi-Agent
  - If the user selects Multi-Agent while currently in Single mode, immediately transition to Repromptception Phase 1 (Team Plan) and confirm team execution mode (Parallel vs Sequential).
- Execution mode: Single Agent / Team (Parallel) / Team (Sequential) / Let RePrompter decide
- Motivation: User-facing / Internal tooling / Bug fix / Exploration / Skip (drop first if space needed)
- Output format: XML Tags / Markdown / Plain Text / JSON (drop first if space needed)

Task-specific questions (MANDATORY for compound prompts — replace lower-priority standard questions):
- Extract keywords from the prompt → generate relevant follow-up options
- Example: prompt mentions "telegram" → ask about alert type, interactivity, delivery
- Vague prompt fallback: if the input has no extractable keywords (e.g., "make it better"), ask open-ended: "What are you working on?" and "What's the goal?" before proceeding
| Signal | Suggested mode |
|--------|----------------|
| 2+ distinct systems (e.g., frontend + backend, API + DB, mobile + backend) | Team (Parallel) |
| Pipeline (fetch → transform → deploy) | Team (Sequential) |
| Single file/component | Single Agent |
| "audit", "review", "analyze" across areas | Team (Parallel) |
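One way this mapping could be applied; a rough sketch with illustrative keyword lists, not the skill's exact detection logic:

```python
def suggest_execution_mode(task: str) -> str:
    """Map rough task signals to a suggested execution mode (illustrative heuristics)."""
    t = task.lower()
    systems = ["frontend", "backend", "api", "database", "db", "mobile", "infrastructure"]
    audit_words = ["audit", "review", "analyze"]
    pipeline_words = ["fetch", "transform", "deploy", "pipeline"]

    if sum(s in t for s in systems) >= 2 or any(w in t for w in audit_words):
        return "Team (Parallel)"
    if sum(w in t for w in pipeline_words) >= 2:
        return "Team (Sequential)"
    return "Single Agent"

# suggest_execution_mode("audit the API and database layer")  -> "Team (Parallel)"
# suggest_execution_mode("rename the Button component")       -> "Single Agent"
```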
Enable when ALL true:
- < 20 words (excluding code blocks)
- Exactly 1 action verb from: add, fix, remove, rename, move, delete, update, create, implement, write, change, configure, test, run
- Single target (one file, component, or identifier)
- No conjunctions (and, or, plus, also)
- No vague modifiers (better, improved, some, maybe, kind of)

Force interview if ANY present: compound tasks ("and", "plus"), state management ("track", "sync"), vague modifiers ("better", "improved"), integration work ("connect", "combine", "sync"), broad scope nouns after any action verb, ambiguous pronouns ("it", "this", "that" without clear referent).
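A minimal sketch of the gate, assuming simple word matching; the single-target and broad-scope-noun checks are omitted here for brevity:

```python
import re

QUICK_VERBS = {"add", "fix", "remove", "rename", "move", "delete", "update", "create",
               "implement", "write", "change", "configure", "test", "run"}
CONJUNCTIONS = {"and", "or", "plus", "also"}
VAGUE = {"better", "improved", "some", "maybe"}           # "kind of" handled as a phrase below
FORCE_INTERVIEW = {"track", "sync", "connect", "combine"}

def quick_mode(prompt: str) -> bool:
    """True when the prompt is simple enough to skip the interview (illustrative gate)."""
    text = re.sub(r"```.*?```", "", prompt, flags=re.S).lower()   # exclude code blocks
    words = re.findall(r"[a-z']+", text)
    if len(words) >= 20 or "kind of" in text:
        return False
    if sum(w in QUICK_VERBS for w in words) != 1:
        return False                                              # exactly one action verb
    if any(w in CONJUNCTIONS or w in VAGUE or w in FORCE_INTERVIEW for w in words):
        return False
    return True

# quick_mode("fix the login redirect in auth.ts")    -> True
# quick_mode("fix login and improve the dashboard")  -> False
```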
Detect task type from input. Each type has a dedicated template in docs/references/:

| Type | Template | Use when |
|------|----------|----------|
| Feature | feature-template.md | New functionality (default fallback) |
| Bugfix | bugfix-template.md | Debug + fix |
| Refactor | refactor-template.md | Structural cleanup |
| Testing | testing-template.md | Test writing |
| API | api-template.md | Endpoint/API work |
| UI | ui-template.md | UI components |
| Security | security-template.md | Security audit/hardening |
| Docs | docs-template.md | Documentation |
| Content | content-template.md | Blog posts, articles, marketing copy |
| Research | research-template.md | Analysis/exploration |
| Multi-Agent | swarm-template.md | Multi-agent coordination |
| Team Brief | team-brief-template.md | Team orchestration brief |

Priority (most specific wins): api > security > ui > testing > bugfix > refactor > content > docs > research > feature. For multi-agent tasks, use swarm-template for the team brief and the type-specific template for each agent's sub-prompt.

How it works: Read the matching template from docs/references/{type}-template.md, then fill it with task-specific context. Templates are NOT loaded into context by default — only read on demand when generating a prompt. If the template file is not found, fall back to the Base XML Structure below.

To add a new task type: create docs/references/{type}-template.md following the XML structure below, then add it to the table above.
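A sketch of how the priority order could resolve to a template path. The keyword hints are illustrative assumptions; only the priority list and file naming come from the table above:

```python
# Most specific type wins; order mirrors the priority list above.
PRIORITY = ["api", "security", "ui", "testing", "bugfix", "refactor",
            "content", "docs", "research", "feature"]

# Illustrative keyword hints per type; real detection reads the whole prompt.
HINTS = {
    "api": ["endpoint", "api", "rest"],
    "security": ["security", "vulnerability", "hardening"],
    "ui": ["component", "ui", "css"],
    "testing": ["test", "coverage"],
    "bugfix": ["bug", "error", "crash"],
    "refactor": ["refactor", "cleanup"],
    "content": ["blog", "article", "copy"],
    "docs": ["docs", "readme", "documentation"],
    "research": ["research", "investigate", "compare"],
}

def pick_template(task: str) -> str:
    t = task.lower()
    for task_type in PRIORITY:
        if any(h in t for h in HINTS.get(task_type, [])):
            return f"docs/references/{task_type}-template.md"
    return "docs/references/feature-template.md"   # default fallback

# pick_template("add rate limiting to the /login endpoint")
#   -> "docs/references/api-template.md"
```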
Auto-detect tech stack from the current working directory ONLY:
- Scan package.json, tsconfig.json, prisma/schema.prisma, etc.
- Session-scoped — different directory = fresh context
- Opt out with "no context", "generic", or "manual context"
- Never scan parent directories or carry context between sessions
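A minimal sketch of the detection, assuming a marker-file lookup confined to the current directory; the extra markers beyond those named above are illustrative:

```python
from pathlib import Path

# Marker files checked in the current working directory ONLY (no parent traversal).
MARKERS = {
    "package.json": "Node.js",
    "tsconfig.json": "TypeScript",
    "prisma/schema.prisma": "Prisma",
    "pyproject.toml": "Python",      # illustrative extra marker
    "Cargo.toml": "Rust",            # illustrative extra marker
}

def detect_stack(cwd: str = ".") -> list[str]:
    root = Path(cwd)
    return [name for marker, name in MARKERS.items() if (root / marker).exists()]

# detect_stack()  -> e.g. ["Node.js", "TypeScript", "Prisma"]
```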
Raw task in → quality output out. Every agent gets a reprompted prompt.

- Phase 1: Score raw prompt, plan team, define roles (YOU do this, ~30s)
- Phase 2: Write XML-structured prompt per agent (YOU do this, ~2min)
- Phase 3: Launch tmux Agent Teams (AUTOMATED)
- Phase 4: Read results, score, retry if needed (YOU do this)

Key insight: The reprompt phase costs ZERO extra tokens — YOU write the prompts, not another AI.
- Score raw prompt (1-10): Clarity, Specificity, Structure, Constraints, Decomposition. Phase 1 uses 5 quick-assessment dimensions; the full 6-dimension scoring (adding Verifiability) is used in Phase 4 evaluation.
- Pick mode: parallel (independent agents) or sequential (pipeline with dependencies)
- Define team: 2-5 agents max, each owns ONE domain, no overlap
- Write team brief to /tmp/rpt-brief-{taskname}.md (use unique tasknames to avoid collisions between concurrent runs)
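One possible collision-avoidance scheme for the brief path; the slug-plus-timestamp naming below is an assumption, not the skill's prescribed format:

```python
import re
import time
from pathlib import Path

def brief_path(task: str) -> Path:
    """Build a collision-resistant team brief path (illustrative naming scheme)."""
    slug = re.sub(r"[^a-z0-9]+", "-", task.lower()).strip("-")[:40]
    return Path(f"/tmp/rpt-brief-{slug}-{int(time.time())}.md")

# brief_path("Audit API + DB layer")
#   -> PosixPath('/tmp/rpt-brief-audit-api-db-layer-1718000000.md')
```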
For EACH agent:
1. Pick the best-matching template from docs/references/ (or use the base XML structure)
2. Read it, then apply these per-agent adaptations:
   - `<role>`: Specific expert title for THIS agent's domain
   - `<context>`: Add exact file paths (verified with ls), what OTHER agents handle (boundary awareness)
   - `<requirements>`: At least 5 specific, independently verifiable requirements
   - `<constraints>`: Scope boundary with other agents, read-only vs write, file/directory boundaries
   - `<output_format>`: Exact path /tmp/rpt-{taskname}-{agent-domain}.md, required sections
   - `<success_criteria>`: Minimum N findings, file:line references, no hallucinated paths
3. Score each prompt — target 8+/10. If under 8, add more context/constraints.
4. Write all prompts to /tmp/rpt-agent-prompts-{taskname}.md
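A sketch of assembling one agent's prompt from these sections. Only the adapted tags listed above are shown; the full base structure has 8 required tags, and the helper name is hypothetical:

```python
def agent_prompt(taskname: str, domain: str, role: str, context: str,
                 requirements: list[str], constraints: list[str],
                 success_criteria: list[str]) -> str:
    """Assemble one agent's XML-structured prompt (section names follow the list above)."""
    reqs = "\n".join(f"- {r}" for r in requirements)
    cons = "\n".join(f"- {c}" for c in constraints)
    crit = "\n".join(f"- {s}" for s in success_criteria)
    out_path = f"/tmp/rpt-{taskname}-{domain}.md"
    return (
        f"<role>{role}</role>\n"
        f"<context>{context}</context>\n"
        f"<requirements>\n{reqs}\n</requirements>\n"
        f"<constraints>\n{cons}\n</constraints>\n"
        f"<output_format>Write your report to {out_path} with the required sections.</output_format>\n"
        f"<success_criteria>\n{crit}\n</success_criteria>"
    )
```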
1. Read each agent's report
2. Score against success criteria from Phase 2:
   - 8+/10 → ACCEPT
   - 4-6/10 → RETRY with delta prompt (tell them what's missing)
   - < 4/10 → RETRY with full rewrite
3. Accept checklist (use alongside score — all must pass):
   - All required output sections present
   - Requirements from Phase 2 independently verifiable
   - No hallucinated file paths or line numbers
   - Scope boundaries respected (no overlap with other agents)
4. Max 2 retries (3 total attempts)
5. Deliver final report to user

Delta prompt pattern:

Previous attempt scored 5/10.
✅ Good: Sections 1-3 complete
❌ Missing: Section 4 empty, line references wrong
This retry: Focus on gaps. Verify all line numbers.
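A minimal sketch of the accept/retry decision. The thresholds and retry cap come from the list above; the handling of a score of 7, a failing checklist at 8+, and the give-up action name are assumptions:

```python
def next_action(score: float, checklist_passed: bool, attempt: int, max_retries: int = 2) -> str:
    """Decide what to do with an agent report (thresholds from the list above)."""
    if score >= 8 and checklist_passed:
        return "ACCEPT"
    if attempt > max_retries:                  # 3 total attempts
        return "DELIVER_WITH_CAVEATS"          # assumed name for the give-up path
    if score < 4:
        return "RETRY_WITH_FULL_REWRITE"
    return "RETRY_WITH_DELTA_PROMPT"           # 4-6 range, a 7, or 8+ failing the checklist

# next_action(score=5, checklist_passed=False, attempt=1) -> "RETRY_WITH_DELTA_PROMPT"
```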
| Team size | Time | Cost |
|-----------|------|------|
| 2 agents | ~5-8 min | ~$1-2 |
| 3 agents | ~8-12 min | ~$2-3 |
| 4 agents | ~10-15 min | ~$2-4 |

Estimates cover Phase 3 (execution) only. Add ~3 minutes for Phases 1-2 and ~5-8 minutes per retry. Each agent uses ~25-70% of their 200K token context window.
When tmux/Claude Code Agent Teams are unavailable but you are running inside OpenClaw:

`sessions_spawn(task: "<per-agent prompt>", model: "opus", label: "rpt-{role}")`

Note: sessions_spawn is an OpenClaw-specific tool. Not available in standalone Claude Code.

No tmux or OpenClaw? Run agents sequentially: execute each agent's prompt one at a time in the same Claude Code session. Slower but works everywhere.
Always show before/after metrics:

| Dimension | Weight | Criteria |
|-----------|--------|----------|
| Clarity | 20% | Task unambiguous? |
| Specificity | 20% | Requirements concrete? |
| Structure | 15% | Proper sections, logical flow? |
| Constraints | 15% | Boundaries defined? |
| Verifiability | 15% | Success measurable? |
| Decomposition | 15% | Work split cleanly? (Score 10 if task is correctly atomic) |

Example before/after:

| Dimension | Before | After | Change |
|-----------|--------|-------|--------|
| Clarity | 3/10 | 9/10 | +200% |
| Specificity | 2/10 | 8/10 | +300% |
| Structure | 1/10 | 10/10 | +900% |
| Constraints | 0/10 | 7/10 | new |
| Verifiability | 2/10 | 8/10 | +300% |
| Decomposition | 0/10 | 8/10 | new |
| **Overall** | **1.45/10** | **8.35/10** | **+476%** |

Bias note: Scores are self-assessed. Treat as directional indicators, not absolutes.
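The overall score is the weight-averaged sum of the six dimensions. A worked check of the example table:

```python
WEIGHTS = {"clarity": 0.20, "specificity": 0.20, "structure": 0.15,
           "constraints": 0.15, "verifiability": 0.15, "decomposition": 0.15}

def overall(scores: dict[str, float]) -> float:
    """Weighted overall score on the same 0-10 scale as the per-dimension scores."""
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)

before = {"clarity": 3, "specificity": 2, "structure": 1,
          "constraints": 0, "verifiability": 2, "decomposition": 0}
after = {"clarity": 9, "specificity": 8, "structure": 10,
         "constraints": 7, "verifiability": 8, "decomposition": 8}

print(overall(before))   # 1.45
print(overall(after))    # 8.35  -> (8.35 - 1.45) / 1.45 is roughly +476%
```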
For both modes, RePrompter supports post-execution evaluation:

- IMPROVE — Score raw → generate structured prompt
- EXECUTE — Repromptception mode only: route to agent(s), collect output. Single mode does not execute code/commands; it only generates prompts.
- EVALUATE — Score output/prompt against success criteria (0-10)
- RETRY — Thresholds: Single mode retry if score < 7; Repromptception retry if score < 8. Max 2 retries.
Prompts should be less prescriptive about HOW. Focus on WHAT — clear task, requirements, constraints, success criteria. Let the model's own reasoning handle execution strategy. Example: Instead of "Step 1: read the file, Step 2: extract the function" → "Extract the authentication logic from auth.ts into a reusable middleware. Requirements: ..."
Prefill the assistant response start to enforce a format:

- { → forces JSON output
- ## Analysis → skips preamble, starts with content
- | Column | → forces table format
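A minimal sketch of prefilling with the Anthropic Python SDK; the model id is a placeholder, and runtimes like Claude Code or OpenClaw may expose prefilling differently:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",        # placeholder model id
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "List this repo's top-level modules as JSON."},
        # Prefilled assistant turn: the reply must continue this "{", forcing JSON output.
        {"role": "assistant", "content": "{"},
    ],
)
print("{" + response.content[0].text)  # re-attach the prefilled character
```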
Generated prompts should COMPLEMENT runtime context (CLAUDE.md, skills, MCP tools), not duplicate it. Before generating:

- Check what context is already loaded (project files, skills, MCP servers)
- Reference existing context: "Using the project structure from CLAUDE.md..."
- Add ONLY what's missing — avoid restating what the model already knows
Keep generated prompts under ~2K tokens for single mode, ~1K per agent for Repromptception. Longer prompts waste context window without improving quality. If a prompt exceeds budget, split into phases or move detail into constraints.
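A rough budget check, assuming the common ~4-characters-per-token approximation; use a real tokenizer if you need accuracy:

```python
def rough_tokens(prompt: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return len(prompt) // 4

def within_budget(prompt: str, repromptception: bool = False) -> bool:
    """~2K-token budget for single mode, ~1K per agent for Repromptception."""
    budget = 1_000 if repromptception else 2_000
    return rough_tokens(prompt) <= budget
```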
Always include explicit permission for the model to express uncertainty rather than fabricate:

- Add to constraints: "If unsure about any requirement, ask for clarification rather than assuming"
- For research tasks: "Clearly label confidence levels (high/medium/low) for each finding"
- For code tasks: "Flag any assumptions about the codebase with TODO comments"
Note: CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS is an experimental flag that may change in future Claude Code versions. Check Claude Code docs for current status.

In ~/.claude/settings.json:

```json
{
  "env": { "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1" },
  "preferences": { "teammateMode": "tmux", "model": "opus" }
}
```

| Setting | Values | Effect |
|---------|--------|--------|
| CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS | "1" | Enables agent team spawning |
| teammateMode | "tmux" / "default" | tmux: each teammate gets a visible split pane. default: teammates run in background |
| model | "opus" / "sonnet" | Teammates default to Haiku. Always set model: opus explicitly in your prompt — do not rely on runtime defaults. |
Rough crypto dashboard prompt: 1.6/10 → 9.0/10 (+462%)
3 Opus agents, sequential pipeline (PromptAnalyzer → PromptEngineer → QualityAuditor):

| Metric | Value |
|--------|-------|
| Original score | 2.15/10 |
| After Repromptception | 9.15/10 (+326%) |
| Quality audit | PASS (99.1%) |
| Weaknesses found → fixed | 24/24 (100%) |
| Cost | $1.39 |
| Time | ~8 minutes |
Same audit task, 4 Opus agents:

| Metric | Raw | Repromptception | Delta |
|--------|-----|-----------------|-------|
| CRITICAL findings | 7 | 14 | +100% |
| Total findings | ~40 | 104 | +160% |
| Cost savings identified | $377/mo | $490/mo | +30% |
| Token bloat found | 45K | 113K | +151% |
| Cross-validated findings | 0 | 5 | — |
- More context = fewer questions — mention tech stack, files
- "expand" — if Quick Mode gave too simple a result, re-run with full interview
- "quick" — skip interview for simple tasks
- "no context" — skip auto-detection
- Context is per-project — switching directories = fresh detection
See TESTING.md for 13 verification scenarios + anti-pattern examples.
Templates may add domain-specific tags beyond the 8 required base tags. Always include all base tags first.

| Extended Tag | Used In | Purpose |
|--------------|---------|---------|
| `<symptoms>` | bugfix | What the user sees, error messages |
| `<investigation_steps>` | bugfix | Systematic debugging steps |
| `<endpoints>` | api | Endpoint specifications |
| `<component_spec>` | ui | Component props, states, layout |
| `<agents>` | swarm | Agent role definitions |
| `<task_decomposition>` | swarm | Work split per agent |
| `<coordination>` | swarm | Inter-agent handoff rules |
| `<research_questions>` | research | Specific questions to answer |
| `<methodology>` | research | Research approach and methods |
| `<reasoning>` | research | Reasoning notes space (non-sensitive, concise) |
| `<current_state>` | refactor | Before state of the code |
| `<target_state>` | refactor | Desired after state |
| `<coverage_requirements>` | testing | What needs test coverage |
| `<threat_model>` | security | Threat landscape and vectors |
| `<structure>` | docs | Document organization |
| `<reference>` | docs | Source material to reference |