Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Complete Claude Code productivity system — project setup, prompting patterns, sub-agent orchestration, context management, debugging, refactoring, TDD, and s...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
The complete methodology for shipping production code with Claude Code at 10X speed. Not installation scripts — actual patterns, workflows, and techniques that compound your output.
| Signal | Healthy | Fix |
|---|---|---|
| CLAUDE.md exists at project root | ✅ | Create one (see §1) |
| .claudeignore configured | ✅ | Add noise directories |
| Session context under 60% | ✅ | /compact or start fresh |
| Clear task scope before prompting | ✅ | Write task brief first |
| Tests exist for target code | ✅ | Write tests first (§7) |
| Git clean before big changes | ✅ | Commit or stash |
| Sub-agents for parallel work | ✅ | Use /new or Task tool |
| Verifying output, not trusting blindly | ✅ | Always review diffs |

Score: /8. Below 6 = slow, buggy sessions. Fix before coding.
CLAUDE.md is your project's brain. Claude reads it at session start. A good one saves thousands of tokens per session.
- Be specific — "TypeScript strict" not "use types." Stack versions, not just names.
- Include commands — Claude needs to know how to run things. Exact commands, not descriptions.
- Architecture decisions — document WHY, not just what. "Errors as values because we use Result type" tells Claude the pattern.
- Keep it under 200 lines — CLAUDE.md is read every session. Bloat wastes tokens.
- Update when patterns change — stale CLAUDE.md causes Claude to fight your codebase.
- Nested CLAUDE.md — subdirectories can have their own. Claude merges them. Use for monorepo packages.
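A minimal CLAUDE.md sketch for a hypothetical Next.js + TypeScript project (every name, version, and command below is illustrative; substitute your own):

```markdown
# Project: Acme Dashboard

## Stack
- Next.js 14 (App Router), TypeScript 5.4 strict, pnpm

## Commands
- `pnpm dev` — start dev server
- `pnpm test` — run unit tests (vitest)
- `pnpm lint && tsc --noEmit` — lint + typecheck

## Architecture decisions
- Errors as values: services return a `Result` type, never throw across layers.
- All user input is validated at the route boundary with zod.
```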
    node_modules/
    .next/
    dist/
    coverage/
    *.lock
    .git/
    *.min.js
    *.map
    public/assets/

Rule: if Claude doesn't need to read it, ignore it. Large lock files and build artifacts waste context.
Bad: "Make the API more robust"

Good: "Add input validation to POST /api/users — validate email format, name 1-100 chars, reject extra fields. Return 422 with field-level errors matching this shape: `{ errors: { field: string, message: string }[] }`"

Rule: if you can't describe the exact output shape, you don't know what you want yet. Think first.
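A sketch of what that brief pins down, in plain TypeScript (the `validateUser` helper and its messages are illustrative; only the error payload shape comes from the brief):

```typescript
type FieldError = { field: string; message: string };

const ALLOWED_FIELDS = new Set(["email", "name"]);
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Returns null when the body is valid, otherwise the 422 payload
// shaped exactly as the brief specifies.
function validateUser(body: Record<string, unknown>): { errors: FieldError[] } | null {
  const errors: FieldError[] = [];
  if (typeof body.email !== "string" || !EMAIL_RE.test(body.email)) {
    errors.push({ field: "email", message: "invalid email format" });
  }
  if (typeof body.name !== "string" || body.name.length < 1 || body.name.length > 100) {
    errors.push({ field: "name", message: "name must be 1-100 characters" });
  }
  // Reject extra fields, per the brief.
  for (const key of Object.keys(body)) {
    if (!ALLOWED_FIELDS.has(key)) {
      errors.push({ field: key, message: "unexpected field" });
    }
  }
  return errors.length > 0 ? { errors } : null;
}
```

Because the output shape was specified up front, the route handler reduces to: validate, return 422 with the payload on failure, proceed on success.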
- Step 1: "Create the database schema for a todo app with projects and tasks" [review output]
- Step 2: "Now add the CRUD service layer for tasks — pure functions, no framework imports" [review output]
- Step 3: "Now the API routes that call those services — input validation with zod"

Why: Claude produces better code in focused steps than in one massive prompt. Each step builds verified context.
Bad: "It's broken"

Good: "Running `pnpm test` gives this error:

    TypeError: Cannot read properties of undefined (reading 'id')
        at getUserById (src/lib/users.ts:23:15)

The function expects a User object but receives undefined when the database query returns no rows. Add a null check and return a Result type."

Rule: paste the actual error. Claude is excellent at fixing bugs when it can see the stack trace.
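A minimal sketch of the requested fix, assuming a simple Result type (the `findRow` stand-in replaces the real database query; all names here are illustrative):

```typescript
type User = { id: string; name: string };

type Result<T, E = string> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Stand-in for the real database query, which may return no rows.
function findRow(id: string): User | undefined {
  const rows: User[] = [{ id: "u1", name: "Ada" }];
  return rows.find((u) => u.id === id);
}

function getUserById(id: string): Result<User> {
  const row = findRow(id);
  if (row === undefined) {
    // Previously this fell through and callers crashed on `row.id`.
    return { ok: false, error: `user ${id} not found` };
  }
  return { ok: true, value: row };
}
```

Callers now have to branch on `ok`, so the missing-row case is handled at compile time instead of surfacing as a TypeError at runtime.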
I'm deciding between these approaches for real-time updates:

- A) Server-Sent Events from Next.js API routes
- B) WebSocket via separate service
- C) Polling every 5 seconds

Context: 500 concurrent users, updates every 30 seconds on average, deployed on Vercel. What are the tradeoffs? Recommend one with reasoning.

Use this for decisions, not implementation. Get the answer, THEN switch to Task Brief for building.
| Context % | Action |
|---|---|
| 0-30% | Fresh. Do complex work here. |
| 30-60% | Good. Continue current task. |
| 60-80% | Getting stale. Finish current unit, then compact. |
| 80%+ | Dangerous. /compact immediately or start new session. |
Start a fresh session (/new) when:

- Switching to unrelated task
- Context above 70%
- Claude starts repeating itself or making mistakes it didn't make earlier
- After shipping a feature (clean slate for next one)
Use /compact when:

- Mid-task but context bloating from exploration
- After a debugging session (lots of error output consumed context)
- Before a complex implementation step
- Don't paste entire files — reference by path. Claude can read them.
- Don't re-explain — if it's in CLAUDE.md, don't repeat it in prompts.
- Use specific file paths — "Look at src/lib/auth.ts line 45" not "look at the auth code."
- Close tangents — if Claude goes down a rabbit hole, redirect immediately. Don't let bad output consume context.
- One concern per message — "Fix the auth bug AND refactor the database layer AND add tests" = context explosion. Sequential > parallel in a single session.
| Scenario | Pattern |
|---|---|
| Independent features | Spawn sub-agent per feature |
| Tests + implementation | One agent writes tests, main writes code |
| Research + build | Sub-agent researches API docs, main builds |
| Refactor + maintain | Sub-agent refactors module A, main works on B |
| Code review | Sub-agent reviews your PR with fresh eyes |
Use the Task tool to:
1. Research the Stripe API for subscription billing
2. Return: webhook event types we need, API calls for create/update/cancel, error codes to handle

The Task tool spawns a sub-agent with its own context. Results come back summarized. Perfect for research that would bloat your main context.
1. Describe the symptom (what you see vs. what you expect)
2. Error output (paste full stack trace, not summary)
3. Bisect (when did it last work? what changed?)
4. Unit isolate (can you reproduce in a test?)
5. Generate hypothesis (ask Claude for 3 possible causes, ranked)
- Add constraints — "The fix must not change the API contract" narrows the search space.
- Share what you've tried — "I already tried revalidatePath('/profile') and it didn't work because..."
- Ask for alternatives — "Give me 3 different approaches to solve this, with tradeoffs."
- Fresh session — sometimes the context is poisoned. Start clean with just the bug description.
Before any refactoring session:

1. Run the test suite and confirm it passes: `pnpm test`
2. Commit current state: `git add -A && git commit -m "chore: pre-refactor checkpoint"`
3. Create a branch: `git checkout -b refactor/[description]`
- One type of change at a time — don't rename AND restructure AND optimize simultaneously.
- Run tests after each step — catch breaks early, not after 20 files changed.
- Preserve exports — use index.ts re-exports so consumers don't break.
- Commit incrementally — one commit per logical change, not one giant commit.
- Step 1: "Write a failing test for: creating a user with valid email stores them in the database"
- Step 2: [verify test fails for the right reason]
- Step 3: "Now write the minimum code to make this test pass"
- Step 4: [verify test passes]
- Step 5: "Refactor the implementation — the test must still pass"
- Tests are acceptance criteria — Claude knows exactly what "done" means.
- Immediate verification — no ambiguity about whether the code works.
- Prevents gold-plating — "minimum code to pass" stops Claude from over-engineering.
- Regression safety — future changes can't silently break working features.
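The loop above, collapsed into one self-contained sketch (the in-memory `db`, `createUser`, and the test are all illustrative stand-ins for real persistence):

```typescript
// Minimal in-memory stand-in for the database (illustrative).
const db = new Map<string, { email: string }>();

// Step 3: the minimum code to make the test pass. No extra features.
function createUser(email: string): { id: string; email: string } {
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error("invalid email");
  }
  const id = `u${db.size + 1}`;
  db.set(id, { email });
  return { id, email };
}

// Step 1: the failing test, written before createUser existed.
function testCreateUserStoresRecord(): void {
  const user = createUser("ada@example.com");
  const stored = db.get(user.id);
  if (!stored || stored.email !== "ada@example.com") {
    throw new Error("user was not stored");
  }
}

testCreateUserStoresRecord();
```

Run the test before writing `createUser` to see it fail for the right reason (function missing), then again after to see it pass; refactors in step 5 must keep it green.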
    git status                          # Clean working tree?
    git pull                            # Up to date?
    git checkout -b feat/[description]  # New branch
| Scope | Commit Pattern |
|---|---|
| Single function added | `feat: add calculateShipping function` |
| Bug fixed | `fix: handle null user in profile fetch` |
| Tests added | `test: add shipping calculation edge cases` |
| Refactor (no behavior change) | `refactor: extract validation into shared module` |
| Multiple related changes | Commit each logical unit separately |
| Large feature | Multiple commits on feature branch, squash on merge |
Review this code for:
1. Correctness — does it do what it claims?
2. Security — any injection, auth bypass, data leak risks?
3. Performance — N+1 queries, unnecessary re-renders, missing indexes?
4. Maintainability — clear naming, reasonable complexity, documented edge cases?
5. Testing — are the tests sufficient? Any missing cases?

Be specific. For each issue, cite the file:line and suggest a fix. Skip style nits unless they affect readability.
Before deploying any Claude-generated code:
- All tests pass (`pnpm test && pnpm test:e2e`)
- Linting clean (`pnpm lint`)
- Type checking clean (`tsc --noEmit`)
- No hardcoded secrets in code (grep for API keys, tokens, passwords)
- Error handling exists for all external calls (DB, API, file I/O)
- Input validation on all user-facing endpoints
- Database migrations reviewed (no data loss, backward compatible)
- Performance: no N+1 queries, no unbounded lists, pagination exists
- Logging: structured logs at appropriate levels (not console.log)
- Auth: all new endpoints have proper authorization checks
- Rate limiting on public endpoints
- Rollback plan documented (how to revert if broken)
- Load test with expected traffic
- Monitoring alerts for new endpoints
- Documentation updated (API docs, README, CHANGELOG)
- Accessibility checked (if UI changes)
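The secrets line of the checklist can be partially automated; a crude sketch, assuming sources live under `src/` (the pattern list is illustrative and no substitute for review):

```shell
# scan_secrets: exit nonzero if a likely hardcoded secret appears under a directory.
# Crude by design: reviews, not regexes, are the real gate.
scan_secrets() {
  dir="${1:-src}"
  if grep -rEni '(api[_-]?key|secret|password|token)[[:space:]]*[:=]' "$dir" 2>/dev/null; then
    echo "possible hardcoded secret in $dir" >&2
    return 1
  fi
  return 0
}
```

Wire it into the gate alongside the other checks, e.g. `pnpm test && pnpm lint && tsc --noEmit && scan_secrets src`.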
| Anti-Pattern | Why It's Bad | Fix |
|---|---|---|
| "Build me a full app" in one prompt | Context explosion, mediocre everything | Break into 5-10 focused tasks |
| Accepting code without reading it | Bugs compound, technical debt grows | Review every diff. Question anything unclear. |
| Re-prompting the same thing hoping for different results | Wastes tokens and context | Change your prompt. Add constraints. Try a different approach. |
| Ignoring test failures | "It mostly works" → production incidents | Fix tests immediately. Green before moving on. |
| Never using /compact | Context degrades, Claude gets confused | Compact every 30-45 minutes of active work |
| Pasting entire codebases | Context full of noise | Reference files by path. Let Claude read what it needs. |
| Using Claude for tasks you should think through | Outsourcing architecture decisions to AI | Discuss with Claude, but YOU decide. |
| Not committing between tasks | Can't revert, can't bisect | Commit after every working state |
| Prompting in vague language | "Make it better" → random changes | Specific inputs, specific outputs, specific constraints |
| Fighting Claude's suggestions | If Claude keeps suggesting something different, maybe it's right | Consider the suggestion. Explain why your way is better if you disagree. |
| Command | When to Use |
|---|---|
| /new | New task, fresh context |
| /compact | Context getting heavy, mid-task |
| /clear | Nuclear option — wipe everything |
| /cost | Check token spend this session |
| /model | Switch models (Sonnet for speed, Opus for complexity) |
| /vim | Enter vim mode for file editing |
| Task Type | Best Model | Why |
|---|---|---|
| Simple bug fix | Sonnet | Fast, cheap, sufficient |
| New feature implementation | Sonnet | Good balance of speed + quality |
| Complex architecture | Opus | Deeper reasoning, better tradeoff analysis |
| Code review | Opus | Catches subtle issues |
| Refactoring | Sonnet | Mechanical changes, speed matters |
| Debugging race conditions | Opus | Needs to reason about state |
- Esc → interrupt Claude (stop generation if going wrong direction)
- Up arrow → edit last prompt (fix typo without re-typing)
- Tab → accept file edit suggestion
For maximum throughput on a large feature:

- Terminal 1: main implementation (Claude Code)
- Terminal 2: tests for what Terminal 1 builds (Claude Code with /new)
- Terminal 3: manual testing / running the app
1. "Create branch feat/[name]"
2. "Write failing tests for [feature spec]"
3. "Implement minimum code to pass tests"
4. "Refactor — tests must stay green"
5. "Run full test suite + lint + typecheck"
6. "Commit with conventional commit message"
7. "Self-review: check diff for security, performance, missing edge cases"
8. "Create PR with description"
1. "Create branch fix/[description]"
2. "Write a test that reproduces this bug: [paste error]"
3. "Fix the bug — test must pass"
4. "Run full test suite to check for regressions"
5. "Commit: fix: [description of what was wrong]"
1. "Create branch refactor/[description]"
2. "Run tests — confirm all green"
3. "Commit pre-refactor state"
4. "[Specific refactoring instruction]"
5. "Run tests — must still be green"
6. "Commit this step"
7. [Repeat 4-6 for each refactoring step]
1. "Use Task tool to research [topic]. Return: key findings, API surface, gotchas, recommended approach."
2. [Read Task output]
3. "Based on the research, build a minimal proof of concept in /tmp/spike-[name]/"
4. [Evaluate POC]
5. "Spike looks good. Now implement properly in the main codebase."
Track these weekly:

| Metric | Target | How to Measure |
|---|---|---|
| Features shipped | 3-5/week | Git commits tagged `feat:` |
| Bugs introduced | <1/week | Post-deploy incidents |
| Test coverage trend | ↑ or stable | Coverage reports |
| Token cost / feature | Decreasing | /cost per session |
| Time to first working code | <30 min | Stopwatch from task start |
| Context compacts per session | 1-2 | Count your /compact usage |
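The first row can be measured straight from git history; a sketch assuming conventional `feat:` prefixes (adjust the window and pattern to your conventions):

```shell
# Count conventional "feat:" commits shipped in the last 7 days.
count_features() {
  git log --since="7 days ago" --oneline --grep='^feat:' | wc -l
}
```

Run it inside the repo; swapping the `--grep` pattern gives the same weekly count for `fix:` or `test:` commits.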
| Say This | Does This |
|---|---|
| "Set up my project for Claude Code" | Creates CLAUDE.md + .claudeignore from your stack |
| "Review this code" | Runs 5-dimension review on changed files |
| "Help me debug [error]" | Walks through DEBUG protocol |
| "Refactor [file/module]" | Safe refactoring with test verification |
| "Write tests for [function]" | TDD-style test generation |
| "Ship this feature" | Runs production checklist |
| "Start a new task" | Clean context + branch setup |
| "How's my productivity?" | Reviews git log and suggests improvements |
| "Optimize my CLAUDE.md" | Reviews and improves your project config |
| "What model should I use for [task]?" | Model selection recommendation |
| "Help me with my PR" | PR description + self-review |
| "Estimate this task" | Breaks down into steps with time estimates |

Built by AfrexAI — AI-native business tools that ship.