{
  "schemaVersion": "1.0",
  "item": {
    "slug": "solo-build",
    "name": "Build",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/fortunto2/solo-build",
    "canonicalUrl": "https://clawhub.ai/fortunto2/solo-build",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/solo-build",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=solo-build",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/solo-build"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/solo-build",
    "agentPageUrl": "https://openagent3.xyz/skills/solo-build/agent",
    "manifestUrl": "https://openagent3.xyz/skills/solo-build/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/solo-build/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "/build",
        "body": "This skill is self-contained — follow the task loop, TDD rules, and completion flow below instead of delegating to external build/execution skills (superpowers, etc.).\n\nExecute tasks from an implementation plan. Finds plan.md (in docs/plan/), picks the next unchecked task, implements it with TDD workflow, commits, and updates progress."
      },
      {
        "title": "When to use",
        "body": "After /plan has created a track with spec.md + plan.md. This is the execution engine.\n\nPipeline: /plan → /build → /deploy → /review"
      },
      {
        "title": "MCP Tools (use if available)",
        "body": "session_search(query) — find how similar problems were solved before\nproject_code_search(query, project) — find reusable code across projects\ncodegraph_query(query) — check file dependencies, imports, callers\n\nIf MCP tools are not available, fall back to Glob + Grep + Read."
      },
      {
        "title": "Pre-flight Checks",
        "body": "Detect context — find where plan files live:\n\nCheck docs/plan/*/plan.md — standard location\nUse whichever exists.\nDO NOT search for conductor/ or any other directory — only docs/plan/.\n\n\n\nLoad workflow config from docs/workflow.md (if exists):\n\nTDD strictness (strict / moderate / none)\nCommit strategy (conventional commits format)\nVerification checkpoint rules\nIntegration Testing section — if present, run the specified CLI commands after completing tasks that touch the listed paths\nIf docs/workflow.md missing: use defaults (moderate TDD, conventional commits).\n\n\n\nVerify git hooks are installed:\nRead the stack YAML (templates/stacks/{stack}.yaml) — the pre_commit field tells you which system and what it runs:\n\nhusky + lint-staged → JS/TS stacks (eslint + prettier + tsc)\npre-commit → Python stacks (ruff + ruff-format + ty)\nlefthook → mobile stacks (swiftlint/detekt + formatter)\n\nThen verify the hook system is active:\n# husky\n[ -f .husky/pre-commit ] && git config core.hooksPath | grep -q husky && echo \"OK\" || echo \"NOT ACTIVE\"\n# pre-commit (Python)\n[ -f .pre-commit-config.yaml ] && [ -f .git/hooks/pre-commit ] && echo \"OK\" || echo \"NOT ACTIVE\"\n# lefthook\n[ -f lefthook.yml ] && lefthook version >/dev/null 2>&1 && echo \"OK\" || echo \"NOT ACTIVE\"\n\nIf not active — install before first commit:\n\nhusky: pnpm prepare (or npm run prepare)\npre-commit: uv run pre-commit install\nlefthook: lefthook install\n\nDon't use --no-verify on commits — if hooks fail, fix the issue and commit again."
      },
      {
        "title": "If $ARGUMENTS contains a track ID:",
        "body": "Validate: {plan_root}/{argument}/plan.md exists (check docs/plan/).\nIf not found: search docs/plan/*/plan.md for partial matches, suggest corrections."
      },
      {
        "title": "If $ARGUMENTS contains --task X.Y:",
        "body": "Jump directly to that task in the active track."
      },
      {
        "title": "If no argument:",
        "body": "Search for plan.md files in docs/plan/.\nRead each plan.md, find tracks with uncompleted tasks.\nIf multiple, ask via AskUserQuestion.\nIf zero tracks: \"No plans found. Run /plan first.\""
      },
      {
        "title": "Step 1 — Architecture overview (if MCP available)",
        "body": "codegraph_explain(project=\"{project name}\")\n\nReturns: stack, languages, directory layers, key patterns, top dependencies, hub files — one call instead of exploring the tree manually."
      },
      {
        "title": "Step 2 — Essential docs (parallel reads)",
        "body": "docs/plan/{trackId}/plan.md — task list (REQUIRED)\ndocs/plan/{trackId}/spec.md — acceptance criteria (REQUIRED)\ndocs/workflow.md — TDD policy, commit strategy (if exists)\nCLAUDE.md — architecture, Do/Don't\n.solo/pipelines/progress.md — running docs from previous iterations (if exists, pipeline-specific). Contains what was done in prior pipeline sessions: stages completed, commit SHAs, last output lines. Use this to avoid repeating completed work.\n\nDo NOT read source code files at this stage. Only docs. Source files are loaded per-task in the execution loop (step 3 below)."
      },
      {
        "title": "Resumption",
        "body": "If a task is marked [~] in plan.md:\n\nResuming: {track title}\nLast task: Task {X.Y}: {description} [in progress]\n\n1. Continue from where we left off\n2. Restart current task\n3. Show progress summary first\n\nAsk via AskUserQuestion, then proceed."
      },
      {
        "title": "Task Execution Loop",
        "body": "Makefile convention: If Makefile exists in project root, always prefer make targets over raw commands. Use make test instead of pnpm test, make lint instead of pnpm lint, make build instead of pnpm build, etc. Run make help (or read Makefile) to discover available targets. If a make integration or similar target exists, use it for integration testing after pipeline-related tasks.\n\nIMPORTANT — All-done check: Before entering the loop, scan plan.md for ANY - [ ] or - [~] tasks. If ALL tasks are [x] — skip the loop entirely and jump to Completion section below to run final verification and output <solo:done/>.\n\nFor each incomplete task in plan.md (marked [ ]), in order:"
      },
      {
        "title": "1. Find Next Task",
        "body": "Parse plan.md for first line matching - [ ] Task X.Y: (or - [~] Task X.Y: if resuming)."
      },
      {
        "title": "2. Start Task",
        "body": "Update plan.md: [ ] → [~] for current task.\nAnnounce: \"Starting Task X.Y: {description}\""
      },
      {
        "title": "3. Research (smart, before coding)",
        "body": "Do NOT grep the entire project or read all source files. Load only what this specific task needs.\n\nIf MCP available (preferred):\n\nproject_code_search(query=\"{task keywords}\", project=\"{name}\") — find relevant code in the project. Read only the top 2-3 results.\nsession_search(\"{task keywords}\") — check if you solved this before.\ncodegraph_query(\"MATCH (f:File {project: '{name}'})-[:IMPORTS]->(dep) WHERE f.path CONTAINS '{module}' RETURN dep.path\") — check imports/dependencies of files you'll modify.\n\nIf MCP unavailable (fallback):\n\nRead ONLY the files explicitly mentioned in the task description (file paths).\nGlob for the specific module directory the task targets (e.g., src/auth/**/*.ts), not the entire project.\nIf the task doesn't mention files, use Grep with a narrow pattern on src/ or app/ — never **/*.\n\nNever do: Grep \"keyword\" . across the whole project. This dumps hundreds of lines into context for no reason. Be surgical."
      },
      {
        "title": "Python-Specific Quality Tools",
        "body": "When the project uses a Python stack (detected by pyproject.toml or stack YAML), run the full Astral toolchain:\n\nRuff — linting + formatting (always):\nuv run ruff check --fix .\nuv run ruff format .\n\n\n\nty — type-checking (if ty in dev dependencies or stack YAML):\nuv run ty check .\n\nty is Astral's type-checker (extremely fast, replaces mypy/pyright). Fix type errors before committing.\n\n\nHypothesis — property-based testing (if hypothesis in dependencies):\n\nUse @given(st.from_type(MyModel)) to auto-generate Pydantic model inputs.\nUse @given(st.text(), st.integers()) for edge-case coverage on parsers/validators.\nHypothesis tests go in the same test files alongside regular pytest tests.\n\n\n\nPre-commit — run all hooks before committing:\nuv run pre-commit run --all-files\n\nRun these checks after each task implementation, before git commit. If any fail, fix before proceeding."
      },
      {
        "title": "JS/TS-Specific Quality Tools",
        "body": "When the project uses a JS/TS stack (detected by package.json or stack YAML):\n\nESLint — linting (always):\npnpm lint --fix\n\n\n\nPrettier — formatting (always):\npnpm format\n\n\n\ntsc --noEmit — type-checking (strict mode):\npnpm tsc --noEmit\n\nFix type errors before committing. Strict mode should be on in tsconfig.json.\n\n\nKnip — dead code detection (if in devDependencies, run periodically):\npnpm knip\n\nFinds unused files, exports, and dependencies. Run after significant refactors.\n\n\nPre-commit — husky + lint-staged runs ESLint + Prettier + tsc on staged files."
      },
      {
        "title": "iOS/Android-Specific Quality Tools",
        "body": "When the project uses a mobile stack:\n\niOS (Swift):\n\nswiftlint lint --strict\nswift-format format --in-place --recursive Sources/\n\nAndroid (Kotlin):\n\n./gradlew detekt\n./gradlew ktlintCheck\n\nBoth use lefthook for pre-commit hooks (language-agnostic, no Node.js required)."
      },
      {
        "title": "4. TDD Workflow (if TDD enabled in workflow.md)",
        "body": "Red — write failing test:\n\nCreate/update test file for the task functionality.\nRun tests to confirm they fail.\n\nGreen — implement:\n\nWrite minimum code to make the test pass.\nRun tests to confirm pass.\n\nRefactor:\n\nClean up while tests stay green.\nRun tests one final time."
      },
      {
        "title": "5. Non-TDD Workflow (if TDD is \"none\" or \"moderate\" and task is simple)",
        "body": "Implement the task directly.\nRun existing tests to check nothing broke.\nFor \"moderate\": write tests for business logic and API routes, skip for UI/config."
      },
      {
        "title": "5.5. Integration Testing (CLI-First)",
        "body": "If the task touches core business logic (pipeline, algorithms, agent tools), run make integration (or the integration command from docs/workflow.md). The CLI exercises the same code paths as the UI without requiring a browser. If make integration fails, fix before committing."
      },
      {
        "title": "5.6. Visual Verification (if browser/simulator/emulator available)",
        "body": "After implementation, run a quick visual smoke test if tools are available:\n\nWeb projects (Playwright MCP or browser tools):\nIf you have Playwright MCP tools or browser tools available:\n\nStart the dev server if not already running (check stack YAML for dev_server.command)\nNavigate to the page affected by the current task\nCheck the browser console for errors (hydration mismatches, uncaught exceptions, 404s)\nTake a screenshot to verify the visual output matches expectations\nIf the task affects responsive layout, resize to mobile viewport (375px) and check\n\niOS projects (simulator):\nIf instructed to use iOS Simulator in the pipeline prompt:\n\nBuild for simulator: xcodebuild -scheme {Name} -sdk iphonesimulator build\nInstall on booted simulator: xcrun simctl install booted {app-path}\nLaunch and take screenshot: xcrun simctl io booted screenshot /tmp/sim-screenshot.png\nCheck simulator logs: xcrun simctl spawn booted log stream --style compact --timeout 10\n\nAndroid projects (emulator):\nIf instructed to use Android Emulator in the pipeline prompt:\n\nBuild debug APK: ./gradlew assembleDebug\nInstall: adb install -r app/build/outputs/apk/debug/app-debug.apk\nTake screenshot: adb exec-out screencap -p > /tmp/emu-screenshot.png\nCheck logcat: adb logcat '*:E' --format=time -d 2>&1 | tail -20\n\nGraceful degradation: If browser/simulator/emulator tools are not available or fail — skip visual checks entirely. Visual testing is a bonus, never a blocker. Log that it was skipped and continue with the task."
      },
      {
        "title": "6. Complete Task",
        "body": "Commit (following commit strategy):\n\ngit add {specific files changed}\ngit commit -m \"<type>(<scope>): <description>\"\n\nTypes: feat, fix, refactor, test, docs, chore, perf, style\n\nCapture SHA after commit:\n\ngit rev-parse --short HEAD\n\nSHA annotation in plan.md. After every task commit:\n\nMark task done: [~] → [x]\nAppend commit SHA inline: - [x] Task X.Y: description <!-- sha:abc1234 -->\n\nWithout a SHA, there's no traceability and no revert capability. If a task required multiple commits, record the last one."
      },
      {
        "title": "7. Phase Completion Check",
        "body": "After each task, check if all tasks in current phase are [x].\n\nIf phase complete:\n\nSHA audit — scan all [x] tasks in this phase. If any are missing <!-- sha:... -->, capture their SHA now from git log and add it. Every [x] task MUST have a SHA.\nRun verification steps listed under ### Verification for the phase.\nRun full test suite.\nRun linter.\nMark verification checkboxes in plan.md: - [ ] → - [x].\nCommit plan.md progress: git commit -m \"chore(plan): complete phase {N}\".\nCapture checkpoint SHA and append to phase heading in plan.md:\n## Phase N: Title <!-- checkpoint:abc1234 -->.\nReport results and continue:\n\nPhase {N} complete! <!-- checkpoint:abc1234 -->\n\n  Tasks:  {M}/{M}\n  Tests:  {pass/fail}\n  Linter: {pass/fail}\n  Verification:\n    - [x] {check 1}\n    - [x] {check 2}\n\n  Revert this phase: git revert abc1234..HEAD\n\nProceed to the next phase automatically. No approval needed."
      },
      {
        "title": "Test Failure",
        "body": "Tests failing after Task X.Y:\n  {failure details}\n\n1. Attempt to fix\n2. Rollback task changes (git checkout)\n3. Pause for manual intervention\n\nAsk via AskUserQuestion. Do NOT automatically continue past failures."
      },
      {
        "title": "Track Completion",
        "body": "When all phases and tasks are [x]:"
      },
      {
        "title": "1. Final Verification",
        "body": "Run local build — must pass before deploy:\n\nNext.js: pnpm build\nPython: uv build or uv run python -m py_compile src/**/*.py\nAstro: pnpm build\nCloudflare: pnpm build\niOS: xcodebuild -scheme {Name} -sdk iphonesimulator build\nAndroid: ./gradlew assembleDebug\n\n\nRun full test suite.\nRun linter + type-checker.\nVisual smoke test (if tools available):\n\nWeb: start dev server, navigate to main page, check console for errors, take screenshot\niOS: build + install on simulator, launch, take screenshot, check logs\nAndroid: build APK + install on emulator, launch, take screenshot, check logcat\nSkip if tools unavailable — not a blocker for completion\n\n\nCheck acceptance criteria from spec.md."
      },
      {
        "title": "2. Update plan.md header",
        "body": "Change **Status:** [ ] Not Started → **Status:** [x] Complete at the top of plan.md."
      },
      {
        "title": "3. Signal completion",
        "body": "Output pipeline signal ONLY if pipeline state directory (.solo/states/) exists:\n\n<solo:done/>\n\nDo NOT repeat the signal tag elsewhere in the response. One occurrence only."
      },
      {
        "title": "4. Summary",
        "body": "Track complete: {title} ({trackId})\n\n  Phases: {N}/{N}\n  Tasks:  {M}/{M}\n  Tests:  All passing\n\n  Phase checkpoints:\n    Phase 1: abc1234\n    Phase 2: def5678\n    Phase 3: ghi9012\n\n  Revert entire track: git revert abc1234..HEAD\n\nNext:\n  /build {next-track-id}  — continue with next track\n  /plan \"next feature\"    — plan something new"
      },
      {
        "title": "Reverting Work",
        "body": "SHA comments in plan.md enable surgical reverts:\n\nRevert a single task:\n\n# Find SHA from plan.md: - [x] Task 2.3: ... <!-- sha:abc1234 -->\ngit revert abc1234\n\nThen update plan.md: [x] → [ ] for that task.\n\nRevert an entire phase:\n\n# Find checkpoint from phase heading: ## Phase 2: ... <!-- checkpoint:def5678 -->\n# Find previous checkpoint: ## Phase 1: ... <!-- checkpoint:abc1234 -->\ngit revert abc1234..def5678\n\nThen update plan.md: all tasks in that phase [x] → [ ].\n\nNever use git reset --hard — always git revert to preserve history."
      },
      {
        "title": "Progress Tracking (TodoWrite)",
        "body": "At the start of a build session, create a task list from plan.md so progress is visible:\n\nOn session start: Read plan.md, find all incomplete tasks ([ ] and [~]).\nCreate TaskCreate for each phase with its tasks as description.\nTaskUpdate as you work: in_progress when starting a task, completed when done.\nThis gives the user (and pipeline) real-time visibility into progress."
      },
      {
        "title": "Rationalizations Catalog",
        "body": "These thoughts mean STOP — you're about to cut corners:\n\nThoughtReality\"This is too simple to test\"Simple code breaks too. Write the test.\"I'll add tests later\"Tests written after pass immediately — they prove nothing.\"I already tested it manually\"Manual tests don't persist. Automated tests do.\"The test framework isn't set up\"Set it up. That's part of the task.\"This is just a config change\"Config changes break builds. Verify.\"I'm confident this works\"Confidence without evidence is guessing. Run the command.\"Let me just try changing X\"Stop. Investigate root cause first.\"Tests are passing, ship it\"Tests passing ≠ acceptance criteria met. Check spec.md.\"I'll fix the lint later\"Fix it now. Tech debt compounds.\"It works on my machine\"Run the build. Verify in the actual environment."
      },
      {
        "title": "Critical Rules",
        "body": "Run phase checkpoints — verify tests + linter pass before moving to next phase.\nSTOP on failure — do not continue past test failures or errors.\nKeep plan.md updated — task status must reflect actual progress at all times.\nCommit after each task — atomic commits with conventional format.\nResearch before coding — 30 seconds of search saves 30 minutes of reimplementation.\nOne task at a time — finish current task before starting next.\nKeep test output concise — when running tests, pipe through head -50 or use --reporter=dot / -q flag. Thousands of test lines pollute context. Only show failures in detail.\nVerify before claiming done — run the actual command, read the full output, confirm success BEFORE marking a task complete. Never say \"should work now\"."
      },
      {
        "title": "\"No plans found\"",
        "body": "Cause: No plan.md exists in docs/plan/.\nFix: Run /plan \"your feature\" first to create a track."
      },
      {
        "title": "Tests failing after task",
        "body": "Cause: Implementation broke existing functionality.\nFix: Use the error handling flow — attempt fix, rollback if needed, pause for user input. Never skip failing tests."
      },
      {
        "title": "Phase checkpoint failed",
        "body": "Cause: Tests or linter failed at phase boundary.\nFix: Fix failures before proceeding. Re-run verification for that phase."
      }
    ],
    "body": "/build\n\nThis skill is self-contained — follow the task loop, TDD rules, and completion flow below instead of delegating to external build/execution skills (superpowers, etc.).\n\nExecute tasks from an implementation plan. Finds plan.md (in docs/plan/), picks the next unchecked task, implements it with TDD workflow, commits, and updates progress.\n\nWhen to use\n\nAfter /plan has created a track with spec.md + plan.md. This is the execution engine.\n\nPipeline: /plan → /build → /deploy → /review\n\nMCP Tools (use if available)\nsession_search(query) — find how similar problems were solved before\nproject_code_search(query, project) — find reusable code across projects\ncodegraph_query(query) — check file dependencies, imports, callers\n\nIf MCP tools are not available, fall back to Glob + Grep + Read.\n\nPre-flight Checks\n\nDetect context — find where plan files live:\n\nCheck docs/plan/*/plan.md — standard location\nUse whichever exists.\nDO NOT search for conductor/ or any other directory — only docs/plan/.\n\nLoad workflow config from docs/workflow.md (if exists):\n\nTDD strictness (strict / moderate / none)\nCommit strategy (conventional commits format)\nVerification checkpoint rules\nIntegration Testing section — if present, run the specified CLI commands after completing tasks that touch the listed paths If docs/workflow.md missing: use defaults (moderate TDD, conventional commits).\n\nVerify git hooks are installed:\n\nRead the stack YAML (templates/stacks/{stack}.yaml) — the pre_commit field tells you which system and what it runs:\n\nhusky + lint-staged → JS/TS stacks (eslint + prettier + tsc)\npre-commit → Python stacks (ruff + ruff-format + ty)\nlefthook → mobile stacks (swiftlint/detekt + formatter)\n\nThen verify the hook system is active:\n\n# husky\n[ -f .husky/pre-commit ] && git config core.hooksPath | grep -q husky && echo \"OK\" || echo \"NOT ACTIVE\"\n# pre-commit (Python)\n[ -f .pre-commit-config.yaml ] && [ -f 
.git/hooks/pre-commit ] && echo \"OK\" || echo \"NOT ACTIVE\"\n# lefthook\n[ -f lefthook.yml ] && lefthook version >/dev/null 2>&1 && echo \"OK\" || echo \"NOT ACTIVE\"\n\n\nIf not active — install before first commit:\n\nhusky: pnpm prepare (or npm run prepare)\npre-commit: uv run pre-commit install\nlefthook: lefthook install\n\nDon't use --no-verify on commits — if hooks fail, fix the issue and commit again.\n\nTrack Selection\nIf $ARGUMENTS contains a track ID:\nValidate: {plan_root}/{argument}/plan.md exists (check docs/plan/).\nIf not found: search docs/plan/*/plan.md for partial matches, suggest corrections.\nIf $ARGUMENTS contains --task X.Y:\nJump directly to that task in the active track.\nIf no argument:\nSearch for plan.md files in docs/plan/.\nRead each plan.md, find tracks with uncompleted tasks.\nIf multiple, ask via AskUserQuestion.\nIf zero tracks: \"No plans found. Run /plan first.\"\nContext Loading\nStep 1 — Architecture overview (if MCP available)\ncodegraph_explain(project=\"{project name}\")\n\n\nReturns: stack, languages, directory layers, key patterns, top dependencies, hub files — one call instead of exploring the tree manually.\n\nStep 2 — Essential docs (parallel reads)\ndocs/plan/{trackId}/plan.md — task list (REQUIRED)\ndocs/plan/{trackId}/spec.md — acceptance criteria (REQUIRED)\ndocs/workflow.md — TDD policy, commit strategy (if exists)\nCLAUDE.md — architecture, Do/Don't\n.solo/pipelines/progress.md — running docs from previous iterations (if exists, pipeline-specific). Contains what was done in prior pipeline sessions: stages completed, commit SHAs, last output lines. Use this to avoid repeating completed work.\n\nDo NOT read source code files at this stage. Only docs. Source files are loaded per-task in the execution loop (step 3 below).\n\nResumption\n\nIf a task is marked [~] in plan.md:\n\nResuming: {track title}\nLast task: Task {X.Y}: {description} [in progress]\n\n1. Continue from where we left off\n2. 
Restart current task\n3. Show progress summary first\n\n\nAsk via AskUserQuestion, then proceed.\n\nTask Execution Loop\n\nMakefile convention: If Makefile exists in project root, always prefer make targets over raw commands. Use make test instead of pnpm test, make lint instead of pnpm lint, make build instead of pnpm build, etc. Run make help (or read Makefile) to discover available targets. If a make integration or similar target exists, use it for integration testing after pipeline-related tasks.\n\nIMPORTANT — All-done check: Before entering the loop, scan plan.md for ANY - [ ] or - [~] tasks. If ALL tasks are [x] — skip the loop entirely and jump to Completion section below to run final verification and output <solo:done/>.\n\nFor each incomplete task in plan.md (marked [ ]), in order:\n\n1. Find Next Task\n\nParse plan.md for first line matching - [ ] Task X.Y: (or - [~] Task X.Y: if resuming).\n\n2. Start Task\nUpdate plan.md: [ ] → [~] for current task.\nAnnounce: \"Starting Task X.Y: {description}\"\n3. Research (smart, before coding)\n\nDo NOT grep the entire project or read all source files. Load only what this specific task needs.\n\nIf MCP available (preferred):\n\nproject_code_search(query=\"{task keywords}\", project=\"{name}\") — find relevant code in the project. Read only the top 2-3 results.\nsession_search(\"{task keywords}\") — check if you solved this before.\ncodegraph_query(\"MATCH (f:File {project: '{name}'})-[:IMPORTS]->(dep) WHERE f.path CONTAINS '{module}' RETURN dep.path\") — check imports/dependencies of files you'll modify.\n\nIf MCP unavailable (fallback):\n\nRead ONLY the files explicitly mentioned in the task description (file paths).\nGlob for the specific module directory the task targets (e.g., src/auth/**/*.ts), not the entire project.\nIf the task doesn't mention files, use Grep with a narrow pattern on src/ or app/ — never **/*.\n\nNever do: Grep \"keyword\" . across the whole project. 
This dumps hundreds of lines into context for no reason. Be surgical.\n\nPython-Specific Quality Tools\n\nWhen the project uses a Python stack (detected by pyproject.toml or stack YAML), run the full Astral toolchain:\n\nRuff — linting + formatting (always):\n\nuv run ruff check --fix .\nuv run ruff format .\n\n\nty — type-checking (if ty in dev dependencies or stack YAML):\n\nuv run ty check .\n\n\nty is Astral's type-checker (extremely fast, replaces mypy/pyright). Fix type errors before committing.\n\nHypothesis — property-based testing (if hypothesis in dependencies):\n\nUse @given(st.from_type(MyModel)) to auto-generate Pydantic model inputs.\nUse @given(st.text(), st.integers()) for edge-case coverage on parsers/validators.\nHypothesis tests go in the same test files alongside regular pytest tests.\n\nPre-commit — run all hooks before committing:\n\nuv run pre-commit run --all-files\n\n\nRun these checks after each task implementation, before git commit. If any fail, fix before proceeding.\n\nJS/TS-Specific Quality Tools\n\nWhen the project uses a JS/TS stack (detected by package.json or stack YAML):\n\nESLint — linting (always):\n\npnpm lint --fix\n\n\nPrettier — formatting (always):\n\npnpm format\n\n\ntsc --noEmit — type-checking (strict mode):\n\npnpm tsc --noEmit\n\n\nFix type errors before committing. Strict mode should be on in tsconfig.json.\n\nKnip — dead code detection (if in devDependencies, run periodically):\n\npnpm knip\n\n\nFinds unused files, exports, and dependencies. Run after significant refactors.\n\nPre-commit — husky + lint-staged runs ESLint + Prettier + tsc on staged files.\n\niOS/Android-Specific Quality Tools\n\nWhen the project uses a mobile stack:\n\niOS (Swift):\n\nswiftlint lint --strict\nswift-format format --in-place --recursive Sources/\n\n\nAndroid (Kotlin):\n\n./gradlew detekt\n./gradlew ktlintCheck\n\n\nBoth use lefthook for pre-commit hooks (language-agnostic, no Node.js required).\n\n4. 
TDD Workflow (if TDD enabled in workflow.md)\n\nRed — write failing test:\n\nCreate/update test file for the task functionality.\nRun tests to confirm they fail.\n\nGreen — implement:\n\nWrite minimum code to make the test pass.\nRun tests to confirm pass.\n\nRefactor:\n\nClean up while tests stay green.\nRun tests one final time.\n5. Non-TDD Workflow (if TDD is \"none\" or \"moderate\" and task is simple)\nImplement the task directly.\nRun existing tests to check nothing broke.\nFor \"moderate\": write tests for business logic and API routes, skip for UI/config.\n5.5. Integration Testing (CLI-First)\n\nIf the task touches core business logic (pipeline, algorithms, agent tools), run make integration (or the integration command from docs/workflow.md). The CLI exercises the same code paths as the UI without requiring a browser. If make integration fails, fix before committing.\n\n5.6. Visual Verification (if browser/simulator/emulator available)\n\nAfter implementation, run a quick visual smoke test if tools are available:\n\nWeb projects (Playwright MCP or browser tools): If you have Playwright MCP tools or browser tools available:\n\nStart the dev server if not already running (check stack YAML for dev_server.command)\nNavigate to the page affected by the current task\nCheck the browser console for errors (hydration mismatches, uncaught exceptions, 404s)\nTake a screenshot to verify the visual output matches expectations\nIf the task affects responsive layout, resize to mobile viewport (375px) and check\n\niOS projects (simulator): If instructed to use iOS Simulator in the pipeline prompt:\n\nBuild for simulator: xcodebuild -scheme {Name} -sdk iphonesimulator build\nInstall on booted simulator: xcrun simctl install booted {app-path}\nLaunch and take screenshot: xcrun simctl io booted screenshot /tmp/sim-screenshot.png\nCheck simulator logs: xcrun simctl spawn booted log stream --style compact --timeout 10\n\nAndroid projects (emulator): If instructed to use Android 
Emulator in the pipeline prompt:\n\nBuild debug APK: ./gradlew assembleDebug\nInstall: adb install -r app/build/outputs/apk/debug/app-debug.apk\nTake screenshot: adb exec-out screencap -p > /tmp/emu-screenshot.png\nCheck logcat: adb logcat '*:E' --format=time -d 2>&1 | tail -20\n\nGraceful degradation: If browser/simulator/emulator tools are not available or fail — skip visual checks entirely. Visual testing is a bonus, never a blocker. Log that it was skipped and continue with the task.\n\n6. Complete Task\n\nCommit (following commit strategy):\n\ngit add {specific files changed}\ngit commit -m \"<type>(<scope>): <description>\"\n\n\nTypes: feat, fix, refactor, test, docs, chore, perf, style\n\nCapture SHA after commit:\n\ngit rev-parse --short HEAD\n\n\nSHA annotation in plan.md. After every task commit:\n\nMark task done: [~] → [x]\nAppend commit SHA inline: - [x] Task X.Y: description <!-- sha:abc1234 -->\n\nWithout a SHA, there's no traceability and no revert capability. If a task required multiple commits, record the last one.\n\n7. Phase Completion Check\n\nAfter each task, check if all tasks in the current phase are [x].\n\nIf the phase is complete:\n\nSHA audit — scan all [x] tasks in this phase. If any are missing <!-- sha:... -->, capture their SHA now from git log and add it. Every [x] task MUST have a SHA.\nRun verification steps listed under ### Verification for the phase.\nRun full test suite.\nRun linter.\nMark verification checkboxes in plan.md: - [ ] → - [x].\nCommit plan.md progress: git commit -m \"chore(plan): complete phase {N}\".\nCapture checkpoint SHA and append to phase heading in plan.md: ## Phase N: Title <!-- checkpoint:abc1234 -->.\nReport results and continue:\nPhase {N} complete! <!-- checkpoint:abc1234 -->\n\n  Tasks:  {M}/{M}\n  Tests:  {pass/fail}\n  Linter: {pass/fail}\n  Verification:\n    - [x] {check 1}\n    - [x] {check 2}\n\n  Revert this phase: git revert {prev-checkpoint}..abc1234\n\n\nProceed to the next phase automatically. 
No approval needed.\n\nError Handling\nTest Failure\nTests failing after Task X.Y:\n  {failure details}\n\n1. Attempt to fix\n2. Roll back task changes (git checkout)\n3. Pause for manual intervention\n\n\nAsk via AskUserQuestion. Do NOT automatically continue past failures.\n\nTrack Completion\n\nWhen all phases and tasks are [x]:\n\n1. Final Verification\nRun local build — must pass before deploy:\nNext.js: pnpm build\nPython: uv build or uv run python -m py_compile src/**/*.py\nAstro: pnpm build\nCloudflare: pnpm build\niOS: xcodebuild -scheme {Name} -sdk iphonesimulator build\nAndroid: ./gradlew assembleDebug\nRun full test suite.\nRun linter + type-checker.\nVisual smoke test (if tools available):\nWeb: start dev server, navigate to main page, check console for errors, take screenshot\niOS: build + install on simulator, launch, take screenshot, check logs\nAndroid: build APK + install on emulator, launch, take screenshot, check logcat\nSkip if tools unavailable — not a blocker for completion\nCheck acceptance criteria from spec.md.\n2. Update plan.md header\n\nChange **Status:** [ ] Not Started → **Status:** [x] Complete at the top of plan.md.\n\n3. Signal completion\n\nOutput the pipeline signal ONLY if the pipeline state directory (.solo/states/) exists:\n\n<solo:done/>\n\n\nDo NOT repeat the signal tag elsewhere in the response. One occurrence only.\n\n4. Summary\nTrack complete: {title} ({trackId})\n\n  Phases: {N}/{N}\n  Tasks:  {M}/{M}\n  Tests:  All passing\n\n  Phase checkpoints:\n    Phase 1: abc1234\n    Phase 2: def5678\n    Phase 3: ghi9012\n\n  Revert entire track: git revert abc1234..HEAD\n\nNext:\n  /build {next-track-id}  — continue with next track\n  /plan \"next feature\"    — plan something new\n\nReverting Work\n\nSHA comments in plan.md enable surgical reverts:\n\nRevert a single task:\n\n# Find SHA from plan.md: - [x] Task 2.3: ... 
<!-- sha:abc1234 -->\ngit revert abc1234\n\n\nThen update plan.md: [x] → [ ] for that task.\n\nRevert an entire phase:\n\n# Find checkpoint from phase heading: ## Phase 2: ... <!-- checkpoint:def5678 -->\n# Find previous checkpoint: ## Phase 1: ... <!-- checkpoint:abc1234 -->\ngit revert abc1234..def5678\n\n\nThen update plan.md: all tasks in that phase [x] → [ ].\n\nNever use git reset --hard — always git revert to preserve history.\n\nProgress Tracking (TodoWrite)\n\nAt the start of a build session, create a task list from plan.md so progress is visible:\n\nOn session start: Read plan.md, find all incomplete tasks ([ ] and [~]).\nCall TaskCreate for each phase with its tasks as the description.\nCall TaskUpdate as you work: in_progress when starting a task, completed when done.\nThis gives the user (and pipeline) real-time visibility into progress.\nRationalizations Catalog\n\nThese thoughts mean STOP — you're about to cut corners:\n\n| Thought | Reality |\n| --- | --- |\n| \"This is too simple to test\" | Simple code breaks too. Write the test. |\n| \"I'll add tests later\" | Tests written after the fact pass immediately — they prove nothing. |\n| \"I already tested it manually\" | Manual tests don't persist. Automated tests do. |\n| \"The test framework isn't set up\" | Set it up. That's part of the task. |\n| \"This is just a config change\" | Config changes break builds. Verify. |\n| \"I'm confident this works\" | Confidence without evidence is guessing. Run the command. |\n| \"Let me just try changing X\" | Stop. Investigate the root cause first. |\n| \"Tests are passing, ship it\" | Tests passing ≠ acceptance criteria met. Check spec.md. |\n| \"I'll fix the lint later\" | Fix it now. Tech debt compounds. |\n| \"It works on my machine\" | Run the build. 
Verify in the actual environment. |\nCritical Rules\nRun phase checkpoints — verify tests + linter pass before moving to next phase.\nSTOP on failure — do not continue past test failures or errors.\nKeep plan.md updated — task status must reflect actual progress at all times.\nCommit after each task — atomic commits with conventional format.\nResearch before coding — 30 seconds of search saves 30 minutes of reimplementation.\nOne task at a time — finish current task before starting next.\nKeep test output concise — when running tests, pipe through head -50 or use --reporter=dot / -q flag. Thousands of test lines pollute context. Only show failures in detail.\nVerify before claiming done — run the actual command, read the full output, confirm success BEFORE marking a task complete. Never say \"should work now\".\nCommon Issues\n\"No plans found\"\n\nCause: No plan.md exists in docs/plan/. Fix: Run /plan \"your feature\" first to create a track.\n\nTests failing after task\n\nCause: Implementation broke existing functionality. Fix: Use the error handling flow — attempt a fix, roll back if needed, pause for user input. Never skip failing tests.\n\nPhase checkpoint failed\n\nCause: Tests or linter failed at phase boundary. Fix: Resolve the failures before proceeding. Re-run verification for that phase."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/fortunto2/solo-build",
    "publisherUrl": "https://clawhub.ai/fortunto2/solo-build",
    "owner": "fortunto2",
    "version": "2.2.1",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/solo-build",
    "downloadUrl": "https://openagent3.xyz/downloads/solo-build",
    "agentUrl": "https://openagent3.xyz/skills/solo-build/agent",
    "manifestUrl": "https://openagent3.xyz/skills/solo-build/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/solo-build/agent.md"
  }
}