Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Provides a comprehensive methodology to optimize productivity and tool use across multiple AI coding assistants through assessment, selection, context engine...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
The complete methodology for 10X productivity with AI-assisted development. Covers Cursor, Windsurf, Cline, Aider, Claude Code, GitHub Copilot, and more: tool-agnostic principles that work everywhere.
Rate yourself 1-5 on each:

| Dimension | 1 (Beginner) | 5 (Expert) |
|---|---|---|
| Prompt quality | "Fix this bug" | Structured context + constraints + examples |
| Context management | Paste entire files | Curated context windows, .cursorrules, AGENTS.md |
| Workflow integration | Ad-hoc usage | Systematic agent-first development |
| Output verification | Accept everything | Review, test, iterate before committing |
| Tool selection | One tool for everything | Right tool for right task |

Score interpretation:
- 5-10: Read everything (you'll 10X your output)
- 11-18: Skip to Phase 4+ for advanced techniques
- 19-25: Focus on Phase 8-10 for mastery patterns
| Tool | Best For | Context Window | Autonomy Level | Cost |
|---|---|---|---|---|
| GitHub Copilot | Line/function completion, inline suggestions | Current file + neighbors | Low (autocomplete) | $10-19/mo |
| Cursor | Full-file editing, multi-file refactors, chat | Project-aware (indexing) | Medium (tab/chat/composer) | $20/mo |
| Windsurf (Cascade) | Autonomous multi-step tasks, flows | Project-aware + flows | High (agentic flows) | $15/mo |
| Cline | VS Code extension, model-agnostic, transparent | Manual context + auto | High (tool use, browser) | API costs |
| Aider | Terminal-based, git-native, pair programming | Repo map + selected files | Medium-High (git commits) | API costs |
| Claude Code | CLI agent, complex multi-file tasks | Workspace-aware | High (full agent) | API costs |
| OpenClaw | Persistent agent, cron, multi-surface | Workspace + memory + tools | Very High (autonomous) | API costs |
- Need autocomplete while typing? → GitHub Copilot (layer it with any other tool)
- Working in VS Code/IDE?
  - Want integrated editor experience? → Cursor or Windsurf
  - Want model flexibility + transparency? → Cline
  - Want minimal config, just works? → Cursor
- Working in terminal?
  - Want git-native pair programming? → Aider
  - Want full agent with tools? → Claude Code
  - Want persistent autonomous agent? → OpenClaw
- Building complex multi-file features? → Cursor Composer or Windsurf Cascade or Claude Code
- Need autonomous background work? → OpenClaw (cron, heartbeats, multi-session)
Solo developer:
1. GitHub Copilot (always-on autocomplete)
2. Cursor OR Windsurf (primary IDE)
3. Claude Code OR Aider (terminal agent for complex tasks)

Team:
1. GitHub Copilot (org-wide)
2. Cursor (primary IDE, .cursorrules in repo)
3. CI/CD AI review (automated PR review)
Context is everything. The quality of AI output is directly proportional to the quality of context you provide.
1. System instructions (.cursorrules, AGENTS.md, CLAUDE.md, .windsurfrules — a minimal sketch follows below)
2. Explicit context (files you @mention or add to chat)
3. Implicit context (open tabs, recent edits, project index)
4. Model knowledge (training data → least reliable for your codebase)
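To make the first layer concrete, here is a minimal rules-file sketch. The filename and every line in it are illustrative assumptions, not defaults shipped with this package; adapt it to your project's actual conventions.

```
# .cursorrules (illustrative sketch -- contents are assumptions, not package defaults)
You are working in a TypeScript monorepo.
- Use strict TypeScript; no `any`.
- Prefer small, pure functions; colocate tests as *.test.ts.
- Never add dependencies without asking.
- Follow existing error-handling patterns; do not throw raw strings.
```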
The 80/20 Rule: 80% of your context should be the specific files/functions relevant to the task; 20% is project conventions and standards.

Context compression techniques:
- Summarize, don't dump → instead of pasting a 500-line file, describe what it does and paste only the relevant section
- Use @mentions → `@file.ts` instead of copy-paste (tool-specific)
- Create reference docs → one-page architecture summaries the AI can reference
- Prune conversation → start new chats for new tasks; stale context = hallucinations
- Tree command → give the AI your project structure: `tree -I node_modules -L 3`
Every 5-10 messages, check: is the AI still tracking correctly? If it starts hallucinating file names or functions, or making wrong assumptions, start a new chat with fresh context. Context is milk. It spoils.
1. Write the test first (yourself or with AI help)
2. Ask the AI to implement the code that passes the test
3. Run the tests → verify green
4. Ask the AI to refactor while keeping tests green
5. Review the final code yourself

Why this works: tests are specifications. The AI writes better code when it has a concrete target, and you catch hallucinations immediately. A test-first sketch follows below.
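A minimal test-first sketch in TypeScript, assuming vitest and a hypothetical `slugify` function (both are illustrative choices, not part of this package). The test file is what you hand the AI as the target in step 2.

```typescript
// slugify.test.ts -- written first (step 1); assumes vitest and a
// hypothetical slugify() that the AI will implement in step 2.
import { describe, expect, it } from "vitest";
import { slugify } from "./slugify";

describe("slugify", () => {
  it("lowercases and hyphenates", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });
  it("strips characters that are not alphanumeric", () => {
    expect(slugify("A/B: Test!")).toBe("a-b-test");
  });
  it("collapses repeated separators", () => {
    expect(slugify("a  --  b")).toBe("a-b");
  });
});

// slugify.ts -- one implementation the AI might produce (step 2).
export function slugify(input: string): string {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // non-alphanumeric runs -> single hyphen
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}
```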
1. Ask the AI to scaffold the architecture (file structure, interfaces, types)
2. Review and approve the scaffold
3. Ask the AI to fill in the implementation file by file
4. Review each file individually
5. Integration-test the full feature

Why this works: you maintain architectural control, the AI handles the grunt work, and errors are caught at each layer. An example step-1 scaffold follows below.
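What a step-1 scaffold might look like for a hypothetical notifications feature: types and interfaces only, no implementation. All names here are illustrative assumptions.

```typescript
// Step-1 scaffold for a hypothetical notifications feature: interfaces and
// types only. You review and approve this shape before any implementation.
export type Channel = "email" | "sms" | "push";

export interface Notification {
  id: string;
  channel: Channel;
  recipient: string;
  body: string;
  sentAt?: Date;
}

export interface NotificationStore {
  save(n: Notification): Promise<void>;
  pending(): Promise<Notification[]>;
}

export interface NotificationSender {
  send(n: Notification): Promise<void>;
}

// Step 3 fills these in file by file: store.ts, sender.ts, dispatcher.ts ...
```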
- Chat 1: architecture discussion → decisions documented
- Chat 2: implementation of Component A (reference the architecture doc)
- Chat 3: implementation of Component B (reference the architecture doc)
- Chat 4: integration + testing

Why this works: fresh context per component prevents drift; the architecture doc provides continuity.
1. Start the session with repo context
2. Describe the task in natural language
3. The AI proposes changes as git diffs
4. Review each diff before accepting
5. The AI commits with meaningful messages
6. You handle edge cases and integration
1. Define the task in a structured format (acceptance criteria, constraints)
2. The agent plans → executes → verifies (reads files, runs tests)
3. The agent creates a PR/branch with the changes
4. You review the complete changeset
5. Iterate on feedback

A sample task brief for step 1 follows below.
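One possible shape for the step-1 task brief. The headings and contents are illustrative assumptions, not a format this package prescribes.

```markdown
## Task: add rate limiting to the public API

### Acceptance criteria
- 429 returned after 100 requests/minute per API key
- Limit state survives process restarts
- Existing tests still pass; new tests cover the limit boundary

### Constraints
- No new external services; use the existing Redis instance
- Do not modify the auth middleware
- Keep the change under ~200 lines across at most 4 files
```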
| Feature | Power Move |
|---|---|
| Tab completion | Let it complete 3-5 tokens before accepting → catches wrong predictions early |
| Cmd+K (inline edit) | Select ONLY the exact lines to change → less context = more accurate |
| Chat | @file to add context, @codebase for project-wide questions |
| Composer | Multi-file changes → describe the full feature, let it edit across files |
| .cursorrules | Project-specific AI instructions → commit to repo for team alignment |
| Notepads | Reusable context (API docs, design docs) → attach to any chat |

Cursor Pro Tips:
- Use @git to reference recent changes
- Use @docs to reference official library documentation
- Create a .cursor/rules/ directory for multiple rule files by domain
- Use the "Apply" button to accept chat suggestions directly into code
| Feature | Power Move |
|---|---|
| Cascade flows | Multi-step autonomous tasks → it can read, write, run the terminal |
| Write mode | Direct file editing with AI |
| Chat mode | Discussion without editing |
| .windsurfrules | Project context file |
| Turbo mode | Faster, less accurate → good for simple tasks |

Windsurf Pro Tips:
- Cascade excels at multi-file refactors → give it the full scope
- Use "undo flow" to revert entire multi-step changes
- Pin important files in context
- Let it read error output from the terminal to self-fix
| Feature | Power Move |
|---|---|
| Model selection | Switch models per task (cheap for simple, expensive for complex) |
| Tool use | Reads files, runs commands, opens a browser → a full agent |
| Transparency | Shows every action before executing → audit everything |
| Custom instructions | Per-project system prompts |
| Auto-approve | Configure which actions need approval |

Cline Pro Tips:
- Set spending limits to prevent runaway API costs
- Use cheaper models (Haiku/GPT-4o mini) for simple tasks
- Enable "diff mode" to see exact changes before applying
- Create task-specific instruction files
| Feature | Power Move |
|---|---|
| /add files | Explicitly control which files the AI can see/edit |
| /read files | Read-only context (reference files) |
| /architect | Two-model approach → architect plans, editor implements |
| Repo map | Auto-generates a codebase summary for context |
| Git integration | Every change is a commit → easy rollback |

Aider Pro Tips:
- Use the --architect flag for complex features (planner + implementer)
- /drop files you don't need to free the context window
- Use --map-tokens to control repo map size
- Run `aider --model claude-sonnet-4-20250514` for best code quality
| Feature | Power Move |
|---|---|
| Full agent | Reads files, writes code, runs tests, git operations |
| CLAUDE.md | Project instructions file → auto-loaded |
| Sub-agents | Spawn parallel workers for complex tasks |
| Memory | Persistent across sessions (project-level) |

Claude Code Pro Tips:
- Write a comprehensive CLAUDE.md → it's your biggest leverage
- Use "plan mode" first for complex tasks, then implement
- Let it run tests and self-correct → don't interrupt the loop
- Use /compact when the context gets long
After every AI-generated change:
1. Read every line → don't blindly accept; AI hallucinates plausible-looking code
2. Check imports → AI often imports non-existent modules or wrong versions
3. Verify function signatures → parameter names, types, return types
4. Test edge cases → AI optimizes for the happy path
5. Check for security → hardcoded secrets, missing auth checks, SQL injection
6. Run the tests → if tests pass, good; if no tests exist, write them first
7. Check for drift → did it change files you didn't ask it to change?
8. Verify dependencies → did it add packages? Are they real? Are they secure?
| Failure | Detection | Fix |
|---|---|---|
| Hallucinated API | Code uses functions that don't exist | Check library docs before accepting |
| Outdated patterns | Uses deprecated APIs (React class components) | Specify versions in context |
| Missing error handling | Happy path only, no try/catch | Ask specifically for error cases |
| Security holes | Inline secrets, missing auth, XSS | Security review as a separate step |
| Over-engineering | 5 files for a 20-line solution | Ask for the simplest possible solution |
| Wrong abstractions | Premature generalization | Specify "don't abstract, keep it concrete" |
| Test theater | Tests that pass but test nothing | Review test assertions specifically |
| Copy-paste bugs | Duplicated logic with subtle differences | Check for patterns, extract helpers |
1. Skim read → does the structure make sense? Right files, right approach?
2. Logic read → does each function do what it claims? Edge cases handled?
3. Integration read → does it work with the rest of the codebase? Breaking changes?
| Model | Input $/1M tokens | Output $/1M tokens | Best For |
|---|---|---|---|
| GPT-4o mini | $0.15 | $0.60 | Simple completions, formatting |
| Claude Haiku | $0.25 | $1.25 | Quick edits, simple questions |
| GPT-4o | $2.50 | $10.00 | Complex code generation |
| Claude Sonnet | $3.00 | $15.00 | Complex code, long context |
| Claude Opus | $15.00 | $75.00 | Architecture, hardest problems |
| o3 | $10.00 | $40.00 | Complex reasoning, algorithms |
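A quick way to sanity-check spend against this table: a small estimator sketch. The rates are copied from the table above and will drift over time, so treat them as placeholders.

```typescript
// Cost estimator sketch. Rates ($ per 1M tokens) are copied from the table
// above; they change often, so treat them as placeholders.
const RATES: Record<string, { input: number; output: number }> = {
  "gpt-4o-mini":   { input: 0.15, output: 0.60 },
  "claude-haiku":  { input: 0.25, output: 1.25 },
  "claude-sonnet": { input: 3.00, output: 15.00 },
  "claude-opus":   { input: 15.00, output: 75.00 },
};

function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const r = RATES[model];
  if (!r) throw new Error(`unknown model: ${model}`);
  return (inputTokens / 1e6) * r.input + (outputTokens / 1e6) * r.output;
}

// A feature built on Sonnet with ~400k tokens in, ~80k out:
// 0.4 * $3.00 + 0.08 * $15.00 = $1.20 + $1.20 = $2.40
console.log(estimateCost("claude-sonnet", 400_000, 80_000).toFixed(2)); // "2.40"
```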
- Tier your usage → simple tasks get a cheap model; complex tasks get an expensive one
- Reduce context → every unnecessary file in context costs money
- Start new chats → long conversations accumulate expensive history
- Use autocomplete for simple stuff → Copilot is flat-rate, much cheaper per completion
- Cache project context → use rules files instead of re-explaining every chat
- Batch related tasks → handle related changes in one conversation
| Usage Level | Estimated Monthly Cost |
|---|---|
| Light (Copilot + occasional chat) | $20-40 |
| Medium (Cursor Pro + daily chat) | $40-80 |
| Heavy (API-based agents, complex tasks) | $80-200 |
| Power user (autonomous agents, all day) | $200-500+ |
Week 1-2: Foundation
- Choose a primary tool (Cursor or Windsurf recommended for teams)
- Create .cursorrules / .windsurfrules committed to the repo
- Run a 1-hour workshop: basics, prompt techniques, verification
- Set team guidelines (review requirements, security rules)

Week 3-4: Practice
- Daily 15-min "AI wins" standup share
- Pair sessions: experienced + new user
- Collect common prompts into a team prompt library
- Monitor and address concerns (quality, dependency)

Month 2: Optimization
- Measure: time-to-PR, bugs-per-feature, developer satisfaction
- Iterate on .cursorrules based on team feedback
- Create task-specific prompt templates in shared docs
- Address skill gaps: who's using it well, who needs help?

Month 3: Systemization
- AI-assisted PR review as a CI step
- Automated test generation for new features
- Custom slash commands / snippets for team workflows
- Quarterly review: ROI, quality metrics, tooling updates
Task: build feature X
- Agent 1 (Architect): plans the approach, defines interfaces
- Agent 2 (Implementer): writes the code
- Agent 3 (Tester): writes and runs tests
- Agent 4 (Reviewer): reviews for quality, security, patterns
- Orchestrator: coordinates, resolves conflicts, maintains context

A pipeline sketch follows below.
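A minimal orchestration sketch in TypeScript, assuming each role is a call into whatever model/agent backend you use. `callModel` is a placeholder, not a real API, and the sequential pipeline is one of several possible coordination strategies.

```typescript
// Orchestration sketch: four roles run as a pipeline over a shared context.
// callModel() is a placeholder for your actual model/agent backend.
type Role = "architect" | "implementer" | "tester" | "reviewer";

interface TaskContext {
  task: string;
  artifacts: Partial<Record<Role, string>>; // each role's output, kept for the next role
}

async function callModel(role: Role, prompt: string): Promise<string> {
  throw new Error("wire this to your model backend"); // placeholder
}

async function runPipeline(task: string): Promise<TaskContext> {
  const ctx: TaskContext = { task, artifacts: {} };
  const roles: Role[] = ["architect", "implementer", "tester", "reviewer"];
  for (const role of roles) {
    // The orchestrator's job: hand each role the task plus all prior artifacts.
    const prompt = `${task}\n\nPrior work:\n${JSON.stringify(ctx.artifacts)}`;
    ctx.artifacts[role] = await callModel(role, prompt);
  }
  return ctx; // reviewer output is the final gate before a human look
}
```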
1. The agent writes code
2. The agent runs tests
3. Tests fail → the agent reads the error and fixes the code
4. Repeat until tests pass
5. The agent runs the linter
6. Lint fails → the agent fixes it
7. All green → create the PR

A driver sketch follows below.
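A driver sketch for the test half of that loop, assuming Node.js and tests run via `npm test`; `requestFix` is a placeholder for your agent call, and the attempt cap guards against runaway loops.

```typescript
// Self-healing loop driver sketch (assumptions: Node.js, `npm test` runs the
// suite; requestFix() is a placeholder for your model/agent call).
import { execSync } from "node:child_process";

function runTests(): { ok: boolean; output: string } {
  try {
    return { ok: true, output: execSync("npm test", { encoding: "utf8" }) };
  } catch (err: any) {
    // execSync throws on a non-zero exit; the failure log is on the error.
    return { ok: false, output: String(err.stdout ?? err) };
  }
}

async function requestFix(errorOutput: string): Promise<void> {
  // Placeholder: send errorOutput to your agent and apply its patch here.
  throw new Error("wire this to your agent");
}

async function selfHeal(maxAttempts = 5): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = runTests();
    if (result.ok) return true;      // all green -> ready for lint, then PR
    await requestFix(result.output); // agent reads the failure and edits code
  }
  return false; // cap attempts to avoid a runaway loop
}
```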
Maintain a prompts/ directory in your project:

```
prompts/
  feature-implementation.md
  bug-fix.md
  refactoring.md
  code-review.md
  test-generation.md
  migration.md
  documentation.md
```

Each file is a reusable prompt template. Reference them: "Follow the template in prompts/feature-implementation.md"
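What one of these templates might contain. The structure below is an illustrative assumption, not the package's own bug-fix.md.

```markdown
<!-- prompts/bug-fix.md (illustrative sketch; not the package's own template) -->
## Bug fix request

**Symptom:** <what happens, with the exact error message>
**Expected:** <what should happen>
**Scope:** <files/functions allowed to change; everything else is off-limits>
**Constraints:** no new dependencies; keep the public API unchanged
**Verification:** add or update a test that fails before the fix and passes after
```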
```yaml
task_routing:
  autocomplete: copilot      # Always-on, flat rate
  simple_edit: haiku         # Quick, cheap
  feature_impl: sonnet       # Good balance
  architecture: opus         # When it matters
  debugging: sonnet          # Needs to reason about code
  documentation: haiku       # Simple transformation
  security_review: opus      # Can't afford mistakes
  test_generation: sonnet    # Needs understanding of code logic
```
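How a thin router might consume that mapping: a sketch whose routing table mirrors the YAML above; the function and type names are assumptions, not part of the package.

```typescript
// Model router sketch: route each task type to the model tier from the
// task_routing config above. The mapping here mirrors that YAML.
type TaskType =
  | "autocomplete" | "simple_edit" | "feature_impl" | "architecture"
  | "debugging" | "documentation" | "security_review" | "test_generation";

const TASK_ROUTING: Record<TaskType, string> = {
  autocomplete: "copilot",
  simple_edit: "haiku",
  feature_impl: "sonnet",
  architecture: "opus",
  debugging: "sonnet",
  documentation: "haiku",
  security_review: "opus",
  test_generation: "sonnet",
};

function modelFor(task: TaskType): string {
  return TASK_ROUTING[task];
}

// Usage: pick the model tier before building the request.
console.log(modelFor("security_review")); // "opus" -- can't afford mistakes
```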
| Anti-Pattern | Why It Fails | Do This Instead |
|---|---|---|
| Prompt and pray | No verification = bugs in production | Always review, always test |
| Paste the whole codebase | Overwhelms context, increases cost | Curate relevant files only |
| Never start new chats | Stale context → hallucinations | New task = new chat |
| Trust without reading | AI generates plausible but wrong code | Read every line |
| Skip tests because AI wrote it | AI code has bugs too | Test AI code MORE, not less |
| Use one model for everything | Waste money on simple tasks | Tier models by complexity |
| No project rules file | AI guesses your conventions | Write .cursorrules / CLAUDE.md |
| Vague prompts | Garbage in, garbage out | Use the SPEC framework |
| Over-reliance | Skill atrophy, can't debug AI output | Understand what the AI generates |
| Ignoring security | AI doesn't prioritize security | Explicit security review step |
| Dimension | Weight | Criteria |
|---|---|---|
| Context engineering | 20% | Rules files, curated context, fresh chats |
| Prompt quality | 15% | SPEC framework, task-appropriate templates |
| Verification rigor | 20% | Review checklist, test coverage, security review |
| Tool selection | 10% | Right tool for the task, model routing |
| Cost efficiency | 10% | Tiered usage, context management, batch tasks |
| Output quality | 15% | Code correctness, maintainability, no drift |
| Workflow integration | 10% | Systematic process, team alignment |
- What was my best AI-assisted output this week? What made it good?
- Where did the AI waste my time? What went wrong with context/prompts?
- Am I reviewing thoroughly enough, or rubber-stamping?
- What prompt patterns worked well? Add them to the prompt library.
- Am I over-relying on AI for things I should understand deeply?
- Acceleration factor: tasks completed per day vs. the pre-AI baseline
- Bug rate: bugs in AI-assisted code vs. manual code
- Cost per feature: API spend / features shipped
- Context efficiency: average conversation length before drift
- Coverage: % of codebase with AI-assisted tests
"Set up AI coding for [project]" โ Generate rules file + tool recommendations "Write a prompt for [task type]" โ Generate SPEC-formatted prompt template "Review this AI output" โ Run the Trust-But-Verify checklist "Compare [tool A] vs [tool B] for [use case]" โ Tool selection analysis "Optimize my AI coding costs" โ Analyze usage and suggest model routing "Create a team AI coding guide" โ Generate team guidelines document "Debug why AI keeps [hallucinating X]" โ Context diagnosis "Set up test-driven AI workflow for [feature]" โ TDD-AI pattern guide "Create prompt library for [project type]" โ Generate prompt templates "Score my AI coding maturity" โ Run the quality assessment "Onboard [person] to AI coding" โ Generate training plan "Audit AI coding security practices" โ Security review checklist