{
  "schemaVersion": "1.0",
  "item": {
    "slug": "reprompter",
    "name": "RePrompter",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/AytuncYildizli/reprompter",
    "canonicalUrl": "https://clawhub.ai/AytuncYildizli/reprompter",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/reprompter",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=reprompter",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "CHANGELOG.md",
      "CONTRIBUTING.md",
      "README.md",
      "SKILL.md",
      "TESTING.md",
      "assets/demo.svg"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/reprompter"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/reprompter",
    "agentPageUrl": "https://openagent3.xyz/skills/reprompter/agent",
    "manifestUrl": "https://openagent3.xyz/skills/reprompter/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/reprompter/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "RePrompter v7.0",
        "body": "Your prompt sucks. Let's fix that. Single prompts or full agent teams — one skill, two modes."
      },
      {
        "title": "Two Modes",
        "body": "ModeTriggerWhat happensSingle\"reprompt this\", \"clean up this prompt\"Interview → structured prompt → scoreRepromptception\"reprompter teams\", \"repromptception\", \"run with quality\", \"smart run\", \"smart agents\"Plan team → reprompt each agent → tmux Agent Teams → evaluate → retry\n\nAuto-detection: if task mentions 2+ systems, \"audit\", or \"parallel\" → ask: \"This looks like a multi-agent task. Want to use Repromptception mode?\"\n\nDefinition — 2+ systems means at least two distinct technical domains that can be worked independently. Examples: frontend + backend, API + database, mobile app + backend, infrastructure + application code, security audit + cost audit."
      },
      {
        "title": "Don't Use When",
        "body": "User wants a simple direct answer (no prompt generation needed)\nUser wants casual chat/conversation\nTask is immediate execution-only with no reprompting step\nScope does not involve prompt design, structure, or orchestration\n\nClarification: RePrompter does support code-related tasks (feature, bugfix, API, refactor) by generating better prompts. It does not directly apply code changes in Single mode. Direct code execution belongs to coding-agent unless Repromptception execution mode is explicitly requested."
      },
      {
        "title": "Process",
        "body": "Receive raw input\nInput guard — if input is empty, a single word with no verb, or clearly not a task → ask the user to describe what they want to accomplish\n\nReject examples: \"hi\", \"thanks\", \"lol\", \"what's up\", \"good morning\", random emoji-only input\nAccept examples: \"fix login bug\", \"write API tests\", \"improve this prompt\"\n\n\nQuick Mode gate — under 20 words, single action, no complexity indicators → generate immediately\nSmart Interview — use AskUserQuestion with clickable options (2-5 questions max)\nGenerate + Score — apply template, show before/after quality metrics"
      },
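      {
        "title": "Sketch: Input Guard (illustrative)",
        "body": "A minimal Python sketch of the input guard from Process above. The reject/accept examples come from this skill; the function name and the one-word heuristic are illustrative assumptions, not part of the package.\n\nimport re\n\n# Small-talk inputs the guard should reject (examples from this skill)\nREJECT_EXACT = {\"hi\", \"thanks\", \"lol\", \"what's up\", \"good morning\"}\n\ndef is_task_input(raw: str) -> bool:\n    \"\"\"Return False for empty, small-talk, or emoji-only input.\"\"\"\n    text = raw.strip().lower()\n    if not text or text in REJECT_EXACT:\n        return False\n    if not re.search(r\"[a-z]\", text):\n        return False  # emoji-only or punctuation-only input\n    # A single word with no verb is not actionable; this sketch rejects all\n    # one-word inputs as an approximation of the verb check.\n    if len(text.split()) == 1:\n        return False\n    return True\n\nassert is_task_input(\"fix login bug\")\nassert not is_task_input(\"hi\")"
      },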
      {
        "title": "⚠️ MUST GENERATE AFTER INTERVIEW",
        "body": "After interview completes, IMMEDIATELY:\n\nSelect template based on task type\nGenerate the full polished prompt\nShow quality score (before/after table)\nAsk if user wants to execute or copy\n\n❌ WRONG: Ask interview questions → stop\n✅ RIGHT: Ask interview questions → generate prompt → show score → offer to execute"
      },
      {
        "title": "Interview Questions",
        "body": "Ask via AskUserQuestion. Max 5 questions total.\n\nStandard questions (priority order — drop lower ones if task-specific questions are needed):\n\nTask type: Build Feature / Fix Bug / Refactor / Write Tests / API Work / UI / Security / Docs / Content / Research / Multi-Agent\n\nIf user selects Multi-Agent while currently in Single mode, immediately transition to Repromptception Phase 1 (Team Plan) and confirm team execution mode (Parallel vs Sequential).\n\n\nExecution mode: Single Agent / Team (Parallel) / Team (Sequential) / Let RePrompter decide\nMotivation: User-facing / Internal tooling / Bug fix / Exploration / Skip (drop first if space needed)\nOutput format: XML Tags / Markdown / Plain Text / JSON (drop first if space needed)\n\nTask-specific questions (MANDATORY for compound prompts — replace lower-priority standard questions):\n\nExtract keywords from prompt → generate relevant follow-up options\nExample: prompt mentions \"telegram\" → ask about alert type, interactivity, delivery\nVague prompt fallback: if input has no extractable keywords (e.g., \"make it better\"), ask open-ended: \"What are you working on?\" and \"What's the goal?\" before proceeding"
      },
      {
        "title": "Auto-Detect Complexity",
        "body": "SignalSuggested mode2+ distinct systems (e.g., frontend + backend, API + DB, mobile + backend)Team (Parallel)Pipeline (fetch → transform → deploy)Team (Sequential)Single file/componentSingle Agent\"audit\", \"review\", \"analyze\" across areasTeam (Parallel)"
      },
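      {
        "title": "Sketch: Complexity Auto-Detection (illustrative)",
        "body": "A minimal Python sketch of the signal table above. The keyword lists are illustrative assumptions, not the skill's exact matcher.\n\ndef suggest_mode(task: str) -> str:\n    \"\"\"Map complexity signals to a suggested execution mode.\"\"\"\n    t = task.lower()\n    if any(w in t for w in (\"audit\", \"review\", \"analyze\")):\n        return \"Team (Parallel)\"\n    systems = (\"frontend\", \"backend\", \"api\", \"database\", \"mobile\", \"infrastructure\")\n    if sum(w in t for w in systems) >= 2:\n        return \"Team (Parallel)\"  # 2+ distinct systems\n    if \"pipeline\" in t or \"→\" in t:\n        return \"Team (Sequential)\"\n    return \"Single Agent\"\n\nprint(suggest_mode(\"audit the frontend and backend\"))  # Team (Parallel)"
      },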
      {
        "title": "Quick Mode",
        "body": "Enable when ALL true:\n\n< 20 words (excluding code blocks)\nExactly 1 action verb from: add, fix, remove, rename, move, delete, update, create, implement, write, change, configure, test, run\nSingle target (one file, component, or identifier)\nNo conjunctions (and, or, plus, also)\nNo vague modifiers (better, improved, some, maybe, kind of)\n\nForce interview if ANY present: compound tasks (\"and\", \"plus\"), state management (\"track\", \"sync\"), vague modifiers (\"better\", \"improved\"), integration work (\"connect\", \"combine\", \"sync\"), broad scope nouns after any action verb, ambiguous pronouns (\"it\", \"this\", \"that\" without clear referent)."
      },
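      {
        "title": "Sketch: Quick Mode Gate (illustrative)",
        "body": "A minimal Python sketch of the Quick Mode conditions above. The whitespace tokenizer is an assumption, and the single-target and pronoun checks are omitted to keep it short.\n\nACTION_VERBS = {\"add\", \"fix\", \"remove\", \"rename\", \"move\", \"delete\", \"update\",\n                \"create\", \"implement\", \"write\", \"change\", \"configure\", \"test\", \"run\"}\nCONJUNCTIONS = {\"and\", \"or\", \"plus\", \"also\"}\nVAGUE = {\"better\", \"improved\", \"some\", \"maybe\"}\n\ndef quick_mode_ok(task: str) -> bool:\n    \"\"\"True only when ALL Quick Mode conditions hold (simplified).\"\"\"\n    words = task.lower().split()\n    if len(words) >= 20:\n        return False\n    if sum(w in ACTION_VERBS for w in words) != 1:\n        return False  # exactly one action verb required\n    if any(w in CONJUNCTIONS or w in VAGUE for w in words):\n        return False\n    return True\n\nassert quick_mode_ok(\"fix the login redirect in auth.ts\")\nassert not quick_mode_ok(\"fix login and add tests\")"
      },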
      {
        "title": "Task Types & Templates",
        "body": "Detect task type from input. Each type has a dedicated template in docs/references/:\n\nTypeTemplateUse whenFeaturefeature-template.mdNew functionality (default fallback)Bugfixbugfix-template.mdDebug + fixRefactorrefactor-template.mdStructural cleanupTestingtesting-template.mdTest writingAPIapi-template.mdEndpoint/API workUIui-template.mdUI componentsSecuritysecurity-template.mdSecurity audit/hardeningDocsdocs-template.mdDocumentationContentcontent-template.mdBlog posts, articles, marketing copyResearchresearch-template.mdAnalysis/explorationMulti-Agentswarm-template.mdMulti-agent coordinationTeam Briefteam-brief-template.mdTeam orchestration brief\n\nPriority (most specific wins): api > security > ui > testing > bugfix > refactor > content > docs > research > feature. For multi-agent tasks, use swarm-template for the team brief and the type-specific template for each agent's sub-prompt.\n\nHow it works: Read the matching template from docs/references/{type}-template.md, then fill it with task-specific context. Templates are NOT loaded into context by default — only read on demand when generating a prompt. If the template file is not found, fall back to the Base XML Structure below.\n\nTo add a new task type: create docs/references/{type}-template.md following the XML structure below, then add it to the table above."
      },
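      {
        "title": "Sketch: Template Priority Resolution (illustrative)",
        "body": "A minimal Python sketch of the \"most specific wins\" priority above. Detection itself (input → type set) is out of scope; the function name is an assumption.\n\n# Priority order from the section above; \"feature\" is the fallback\nPRIORITY = [\"api\", \"security\", \"ui\", \"testing\", \"bugfix\", \"refactor\",\n            \"content\", \"docs\", \"research\", \"feature\"]\n\ndef pick_template(detected: set) -> str:\n    \"\"\"Return the template path for the most specific detected type.\"\"\"\n    for t in PRIORITY:\n        if t in detected:\n            return f\"docs/references/{t}-template.md\"\n    return \"docs/references/feature-template.md\"\n\nprint(pick_template({\"docs\", \"api\"}))  # docs/references/api-template.md"
      },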
      {
        "title": "Base XML Structure",
        "body": "All templates follow this core structure (8 required tags). Use as fallback if no specific template matches:\n\nException: team-brief-template.md uses Markdown format for orchestration briefs. This is intentional — see template header for rationale.\n\n<role>{Expert role matching task type and domain}</role>\n\n<context>\n- Working environment, frameworks, tools\n- Available resources, current state\n</context>\n\n<task>{Clear, unambiguous single-sentence task}</task>\n\n<motivation>{Why this matters — priority, impact}</motivation>\n\n<requirements>\n- {Specific, measurable requirement 1}\n- {At least 3-5 requirements}\n</requirements>\n\n<constraints>\n- {What NOT to do}\n- {Boundaries and limits}\n</constraints>\n\n<output_format>{Expected format, structure, length}</output_format>\n\n<success_criteria>\n- {Testable condition 1}\n- {Measurable outcome 2}\n</success_criteria>"
      },
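      {
        "title": "Sketch: Filling the Base Structure (illustrative)",
        "body": "A minimal Python sketch of filling the base XML structure above; only three of the eight tags are shown to stay short, and all names here are assumptions.\n\nBASE = (\"<role>{role}</role>\\n\\n\"\n        \"<task>{task}</task>\\n\\n\"\n        \"<success_criteria>\\n{criteria}\\n</success_criteria>\")\n\ndef fill_base(role: str, task: str, criteria: list) -> str:\n    \"\"\"Fill placeholders; criteria render as one '- ' bullet per line.\"\"\"\n    return BASE.format(role=role, task=task,\n                       criteria=\"\\n\".join(f\"- {c}\" for c in criteria))\n\nprint(fill_base(\"Senior backend engineer\",\n                \"Extract the authentication logic from auth.ts into middleware\",\n                [\"All existing tests pass\", \"No new dependencies\"]))"
      },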
      {
        "title": "Project Context Detection",
        "body": "Auto-detect tech stack from current working directory ONLY:\n\nScan package.json, tsconfig.json, prisma/schema.prisma, etc.\nSession-scoped — different directory = fresh context\nOpt out with \"no context\", \"generic\", or \"manual context\"\nNever scan parent directories or carry context between sessions"
      },
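      {
        "title": "Sketch: Marker-File Scan (illustrative)",
        "body": "A minimal Python sketch of the current-directory scan above. The marker-to-label mapping is an illustrative assumption; note it checks the working directory only, never parents.\n\nfrom pathlib import Path\n\nMARKERS = {\n    \"package.json\": \"Node.js project\",\n    \"tsconfig.json\": \"TypeScript\",\n    \"prisma/schema.prisma\": \"Prisma ORM\",\n}\n\ndef detect_context(cwd: str = \".\") -> list:\n    \"\"\"Scan ONLY cwd for known marker files (session-scoped).\"\"\"\n    root = Path(cwd)\n    return [label for rel, label in MARKERS.items() if (root / rel).exists()]\n\nprint(detect_context())"
      },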
      {
        "title": "TL;DR",
        "body": "Raw task in → quality output out. Every agent gets a reprompted prompt.\n\nPhase 1: Score raw prompt, plan team, define roles (YOU do this, ~30s)\nPhase 2: Write XML-structured prompt per agent (YOU do this, ~2min)\nPhase 3: Launch tmux Agent Teams (AUTOMATED)\nPhase 4: Read results, score, retry if needed (YOU do this)\n\nKey insight: The reprompt phase costs ZERO extra tokens — YOU write the prompts, not another AI."
      },
      {
        "title": "Phase 1: Team Plan (~30 seconds)",
        "body": "Score raw prompt (1-10): Clarity, Specificity, Structure, Constraints, Decomposition\n\nPhase 1 uses 5 quick-assessment dimensions. The full 6-dimension scoring (adding Verifiability) is used in Phase 4 evaluation.\n\n\nPick mode: parallel (independent agents) or sequential (pipeline with dependencies)\nDefine team: 2-5 agents max, each owns ONE domain, no overlap\nWrite team brief to /tmp/rpt-brief-{taskname}.md (use unique tasknames to avoid collisions between concurrent runs)"
      },
      {
        "title": "Phase 2: Repromptception (~2 minutes)",
        "body": "For EACH agent:\n\nPick the best-matching template from docs/references/ (or use base XML structure)\nRead it, then apply these per-agent adaptations:\n\n<role>: Specific expert title for THIS agent's domain\n<context>: Add exact file paths (verified with ls), what OTHER agents handle (boundary awareness)\n<requirements>: At least 5 specific, independently verifiable requirements\n<constraints>: Scope boundary with other agents, read-only vs write, file/directory boundaries\n<output_format>: Exact path /tmp/rpt-{taskname}-{agent-domain}.md, required sections\n<success_criteria>: Minimum N findings, file:line references, no hallucinated paths\n\nScore each prompt — target 8+/10. If under 8, add more context/constraints.\n\nWrite all to /tmp/rpt-agent-prompts-{taskname}.md"
      },
      {
        "title": "Phase 3: Execute (tmux Agent Teams)",
        "body": "# 1. Start Claude Code with Agent Teams\ntmux new-session -d -s {session} \"cd /path/to/workdir && CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude --model opus\"\n# placeholders:\n# - {session}: unique tmux session name (example: rpt-auth-audit)\n# - /path/to/workdir: absolute repository path for the target project (example: /tmp/reprompter-check)\n\n# 2. Wait for startup\nsleep 12\n\n# 3. Send prompt — MUST use -l (literal), Enter SEPARATE\n# IMPORTANT: Include POLLING RULES to prevent lead TaskList loop bug\ntmux send-keys -t {session} -l 'Create an agent team with N teammates. CRITICAL: Use model opus for ALL tasks.\n\nPOLLING RULES — YOU MUST FOLLOW THESE:\n- After sending tasks, poll TaskList at most 10 times\n- If ALL tasks show \"done\" status, IMMEDIATELY stop polling\n- After 3 consecutive TaskList calls showing the same status, STOP polling regardless\n- Once you stop polling: read the output files, then write synthesis\n- DO NOT call TaskList more than 20 times total under any circumstances\n\nTeammate 1 (ROLE): TASK. Write output to /tmp/rpt-{taskname}-{domain}.md. ... After all complete, synthesize into /tmp/rpt-{taskname}-final.md'\nsleep 0.5\ntmux send-keys -t {session} Enter\n\n# 4. Monitor (poll every 15-30s)\ntmux capture-pane -t {session} -p -S -100\n\n# 5. Verify outputs\nls -la /tmp/rpt-{taskname}-*.md\n\n# 6. Cleanup\ntmux kill-session -t {session}\n\nCritical tmux Rules\n\n⚠️ WARNING: Default teammate model is HAIKU unless explicitly overridden. Always set --model opus in both CLI launch command and team prompt.\n\nRuleWhyAlways send-keys -l (literal flag)Without it, special chars breakEnter sent SEPARATELYCombined fails for multilinesleep 0.5 between text and EnterBuffer processing timesleep 12 after session startClaude Code init time--model opus in CLI AND promptDefault teammate = HAIKUEach agent writes own filePrevents file conflictsUnique taskname per runPrevents collisions between concurrent sessions"
      },
      {
        "title": "Phase 4: Evaluate + Retry",
        "body": "Read each agent's report\n\n\nScore against success criteria from Phase 2:\n\n8+/10 → ACCEPT\n4-6/10 → RETRY with delta prompt (tell them what's missing)\n< 4/10 → RETRY with full rewrite\n\nAccept checklist (use alongside score — all must pass):\n\n All required output sections present\n Requirements from Phase 2 independently verifiable\n No hallucinated file paths or line numbers\n Scope boundaries respected (no overlap with other agents)\n\n\n\nMax 2 retries (3 total attempts)\n\n\nDeliver final report to user\n\nDelta prompt pattern:\n\nPrevious attempt scored 5/10.\n✅ Good: Sections 1-3 complete\n❌ Missing: Section 4 empty, line references wrong\nThis retry: Focus on gaps. Verify all line numbers."
      },
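      {
        "title": "Sketch: Retry Decision (illustrative)",
        "body": "A minimal Python sketch of the Phase 4 thresholds above. The 6-7/10 band is not specified in this skill; this sketch treats it as a delta retry, which is an assumption.\n\ndef retry_decision(score: float, attempt: int, max_retries: int = 2) -> str:\n    \"\"\"8+ accept; 4-6 delta retry; under 4 full rewrite; max 2 retries.\"\"\"\n    if score >= 8:\n        return \"ACCEPT\"\n    if attempt > max_retries:\n        return \"STOP: deliver best attempt with caveats\"\n    return \"RETRY: delta prompt\" if score >= 4 else \"RETRY: full rewrite\"\n\nprint(retry_decision(5, attempt=1))  # RETRY: delta prompt"
      },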
      {
        "title": "Expected Cost & Time",
        "body": "Team sizeTimeCost2 agents~5-8 min~$1-23 agents~8-12 min~$2-34 agents~10-15 min~$2-4\n\nEstimates cover Phase 3 (execution) only. Add ~3 minutes for Phases 1-2 and ~5-8 minutes per retry. Each agent uses ~25-70% of their 200K token context window."
      },
      {
        "title": "Fallback: sessions_spawn (OpenClaw only)",
        "body": "When tmux/Claude Code is unavailable but running inside OpenClaw:\n\nsessions_spawn(task: \"<per-agent prompt>\", model: \"opus\", label: \"rpt-{role}\")\n\nNote: sessions_spawn is an OpenClaw-specific tool. Not available in standalone Claude Code.\n\nNo tmux or OpenClaw? Run agents sequentially: execute each agent's prompt one at a time in the same Claude Code session. Slower but works everywhere."
      },
      {
        "title": "Quality Scoring",
        "body": "Always show before/after metrics:\n\nDimensionWeightCriteriaClarity20%Task unambiguous?Specificity20%Requirements concrete?Structure15%Proper sections, logical flow?Constraints15%Boundaries defined?Verifiability15%Success measurable?Decomposition15%Work split cleanly? (Score 10 if task is correctly atomic)\n\n| Dimension | Before | After | Change |\n|-----------|--------|-------|--------|\n| Clarity | 3/10 | 9/10 | +200% |\n| Specificity | 2/10 | 8/10 | +300% |\n| Structure | 1/10 | 10/10 | +900% |\n| Constraints | 0/10 | 7/10 | new |\n| Verifiability | 2/10 | 8/10 | +300% |\n| Decomposition | 0/10 | 8/10 | new |\n| **Overall** | **1.45/10** | **8.35/10** | **+476%** |\n\nBias note: Scores are self-assessed. Treat as directional indicators, not absolutes."
      },
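      {
        "title": "Sketch: Weighted Overall Score (illustrative)",
        "body": "A minimal Python sketch reproducing the overall scores in the example table above (1.45 and 8.35). The weights come from the scoring table; the function name is an assumption.\n\n# Dimension weights from the table above (sum to 1.0)\nWEIGHTS = {\"clarity\": 0.20, \"specificity\": 0.20, \"structure\": 0.15,\n           \"constraints\": 0.15, \"verifiability\": 0.15, \"decomposition\": 0.15}\n\ndef overall(scores: dict) -> float:\n    \"\"\"Weighted overall on the 0-10 scale.\"\"\"\n    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)\n\nbefore = {\"clarity\": 3, \"specificity\": 2, \"structure\": 1,\n          \"constraints\": 0, \"verifiability\": 2, \"decomposition\": 0}\nafter = {\"clarity\": 9, \"specificity\": 8, \"structure\": 10,\n         \"constraints\": 7, \"verifiability\": 8, \"decomposition\": 8}\nprint(round(overall(before), 2), round(overall(after), 2))  # 1.45 8.35, matching the example"
      },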
      {
        "title": "Closed-Loop Quality (v6.0+)",
        "body": "For both modes, RePrompter supports post-execution evaluation:\n\nIMPROVE — Score raw → generate structured prompt\nEXECUTE — Repromptception mode only: route to agent(s), collect output. Single mode does not execute code/commands; it only generates prompts.\nEVALUATE — Score output/prompt against success criteria (0-10)\nRETRY — Thresholds: Single mode retry if score < 7; Repromptception retry if score < 8. Max 2 retries."
      },
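      {
        "title": "Sketch: Closed-Loop Skeleton (illustrative)",
        "body": "A minimal Python skeleton of the IMPROVE → EXECUTE → EVALUATE → RETRY loop above. The callable-based scaffolding is an assumption; the thresholds and retry cap come from this skill.\n\ndef retry_threshold(mode: str) -> int:\n    \"\"\"Single mode retries below 7; Repromptception below 8.\"\"\"\n    return 7 if mode == \"single\" else 8\n\ndef closed_loop(run, evaluate, mode: str, max_retries: int = 2):\n    \"\"\"run() produces output; evaluate(output) returns a 0-10 score.\"\"\"\n    output = run()\n    for _ in range(max_retries):\n        if evaluate(output) >= retry_threshold(mode):\n            break\n        output = run()  # a real retry would pass a delta prompt here\n    return output"
      },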
      {
        "title": "Reasoning-Friendly Prompting (Claude 4.x)",
        "body": "Prompts should be less prescriptive about HOW. Focus on WHAT — clear task, requirements, constraints, success criteria. Let the model's own reasoning handle execution strategy.\n\nExample: Instead of \"Step 1: read the file, Step 2: extract the function\" → \"Extract the authentication logic from auth.ts into a reusable middleware. Requirements: ...\""
      },
      {
        "title": "Response Prefilling (API only)",
        "body": "Prefill assistant response start to enforce format:\n\n{ → forces JSON output\n## Analysis → skips preamble, starts with content\n| Column | → forces table format"
      },
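      {
        "title": "Sketch: Prefilling via the Messages API (illustrative)",
        "body": "A hedged Python sketch of response prefilling with the anthropic SDK: ending the messages list with a partial assistant turn makes the model continue from it. The model id and prompt are placeholders, not from this package.\n\nimport anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env\n\nclient = anthropic.Anthropic()\nresp = client.messages.create(\n    model=\"claude-sonnet-4-5\",  # placeholder; use any current model id\n    max_tokens=512,\n    messages=[\n        {\"role\": \"user\", \"content\": \"Summarize this repo's health as JSON.\"},\n        {\"role\": \"assistant\", \"content\": \"{\"},  # the prefill: forces JSON output\n    ],\n)\nprint(\"{\" + resp.content[0].text)  # re-attach the prefill when reassembling"
      },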
      {
        "title": "Context Engineering",
        "body": "Generated prompts should COMPLEMENT runtime context (CLAUDE.md, skills, MCP tools), not duplicate it. Before generating:\n\nCheck what context is already loaded (project files, skills, MCP servers)\nReference existing context: \"Using the project structure from CLAUDE.md...\"\nAdd ONLY what's missing — avoid restating what the model already knows"
      },
      {
        "title": "Token Budget",
        "body": "Keep generated prompts under ~2K tokens for single mode, ~1K per agent for Repromptception. Longer prompts waste context window without improving quality. If a prompt exceeds budget, split into phases or move detail into constraints."
      },
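      {
        "title": "Sketch: Budget Check (illustrative)",
        "body": "A minimal Python sketch of the budget rule above. The ~4 characters-per-token ratio is a rough assumption, not a real tokenizer; use an actual token counter for hard limits.\n\ndef approx_tokens(text: str) -> int:\n    \"\"\"Rough English-prose heuristic: ~4 characters per token.\"\"\"\n    return len(text) // 4\n\ndef within_budget(prompt: str, mode: str) -> bool:\n    \"\"\"~2K tokens for single mode, ~1K per agent for Repromptception.\"\"\"\n    limit = 2000 if mode == \"single\" else 1000\n    return approx_tokens(prompt) <= limit\n\nprint(within_budget(\"Fix the login bug in auth.ts\", \"single\"))  # True"
      },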
      {
        "title": "Uncertainty Handling",
        "body": "Always include explicit permission for the model to express uncertainty rather than fabricate:\n\nAdd to constraints: \"If unsure about any requirement, ask for clarification rather than assuming\"\nFor research tasks: \"Clearly label confidence levels (high/medium/low) for each finding\"\nFor code tasks: \"Flag any assumptions about the codebase with TODO comments\""
      },
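      {
        "title": "Sketch: Injecting Uncertainty Clauses (illustrative)",
        "body": "A minimal Python sketch that appends the uncertainty clauses above to a prompt's constraints list. The task-type keys and function name are assumptions.\n\nCLAUSES = {\n    \"research\": \"Clearly label confidence levels (high/medium/low) for each finding\",\n    \"code\": \"Flag any assumptions about the codebase with TODO comments\",\n    \"default\": \"If unsure about any requirement, ask for clarification rather than assuming\",\n}\n\ndef add_uncertainty(constraints: list, task_type: str) -> list:\n    \"\"\"Return constraints plus the matching uncertainty clause.\"\"\"\n    return constraints + [CLAUSES.get(task_type, CLAUSES[\"default\"])]\n\nprint(add_uncertainty([\"Do not touch CI config\"], \"code\"))"
      },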
      {
        "title": "Settings (for Repromptception mode)",
        "body": "Note: CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS is an experimental flag that may change in future Claude Code versions. Check Claude Code docs for current status.\n\nIn ~/.claude/settings.json:\n\n{\n  \"env\": {\n    \"CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS\": \"1\"\n  },\n  \"preferences\": {\n    \"teammateMode\": \"tmux\",\n    \"model\": \"opus\"\n  }\n}\n\nSettingValuesEffectCLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS\"1\"Enables agent team spawningteammateMode\"tmux\" / \"default\"tmux: each teammate gets a visible split pane. default: teammates run in backgroundmodel\"opus\" / \"sonnet\"Teammates default to Haiku. Always set model: opus explicitly in your prompt — do not rely on runtime defaults."
      },
      {
        "title": "Single Prompt (v6.0)",
        "body": "Rough crypto dashboard prompt: 1.6/10 → 9.0/10 (+462%)"
      },
      {
        "title": "Repromptception E2E (v6.1)",
        "body": "3 Opus agents, sequential pipeline (PromptAnalyzer → PromptEngineer → QualityAuditor):\n\nMetricValueOriginal score2.15/10After Repromptception9.15/10 (+326%)Quality auditPASS (99.1%)Weaknesses found → fixed24/24 (100%)Cost$1.39Time~8 minutes"
      },
      {
        "title": "Repromptception vs Raw Agent Teams (v7.0)",
        "body": "Same audit task, 4 Opus agents:\n\nMetricRawRepromptceptionDeltaCRITICAL findings714+100%Total findings~40104+160%Cost savings identified$377/mo$490/mo+30%Token bloat found45K113K+151%Cross-validated findings05—"
      },
      {
        "title": "Tips",
        "body": "More context = fewer questions — mention tech stack, files\n\"expand\" — if Quick Mode gave too simple a result, re-run with full interview\n\"quick\" — skip interview for simple tasks\n\"no context\" — skip auto-detection\nContext is per-project — switching directories = fresh detection"
      },
      {
        "title": "Test Scenarios",
        "body": "See TESTING.md for 13 verification scenarios + anti-pattern examples."
      },
      {
        "title": "Appendix: Extended XML Tags",
        "body": "Templates may add domain-specific tags beyond the 8 required base tags. Always include all base tags first.\n\nExtended TagUsed InPurpose<symptoms>bugfixWhat the user sees, error messages<investigation_steps>bugfixSystematic debugging steps<endpoints>apiEndpoint specifications<component_spec>uiComponent props, states, layout<agents>swarmAgent role definitions<task_decomposition>swarmWork split per agent<coordination>swarmInter-agent handoff rules<research_questions>researchSpecific questions to answer<methodology>researchResearch approach and methods<reasoning>researchReasoning notes space (non-sensitive, concise)<current_state>refactorBefore state of the code<target_state>refactorDesired after state<coverage_requirements>testingWhat needs test coverage<threat_model>securityThreat landscape and vectors<structure>docsDocument organization<reference>docsSource material to reference"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/AytuncYildizli/reprompter",
    "publisherUrl": "https://clawhub.ai/AytuncYildizli/reprompter",
    "owner": "AytuncYildizli",
    "version": "7.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/reprompter",
    "downloadUrl": "https://openagent3.xyz/downloads/reprompter",
    "agentUrl": "https://openagent3.xyz/skills/reprompter/agent",
    "manifestUrl": "https://openagent3.xyz/skills/reprompter/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/reprompter/agent.md"
  }
}