Tencent SkillHub · Developer Tools

Estimation Patterns

Practical estimation techniques for software tasks — methods comparison, decomposition, complexity multipliers, buffer calculation, bias awareness, and communication strategies. Use when estimating features, sprint planning, or presenting timelines to stakeholders.



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
README.md, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (13 sections)

Estimation Patterns (Meta-Skill)

Systematic approaches for producing accurate, defensible software estimates.

OpenClaw / Moltbot / Clawbot

npx clawhub@latest install estimation-patterns

When to Use

  • Estimating a feature, bug fix, or project timeline
  • Breaking down work for sprint planning or roadmap forecasting
  • Presenting estimates to stakeholders or product managers
  • Reviewing historical accuracy to calibrate future estimates
  • Noticing a pattern of missed deadlines or blown budgets

Estimation Methods

Choose the method that matches your context and audience.

| Method | Best For | Granularity | Pros | Cons |
|---|---|---|---|---|
| T-Shirt Sizing | Roadmap planning, backlog grooming | XS, S, M, L, XL | Fast, low-friction, good for relative ranking | Not actionable for scheduling |
| Story Points | Sprint planning, team velocity | Fibonacci (1-21) | Abstracts away individual speed, tracks velocity | Meaningless outside the team, gaming risk |
| Time-Based | Client quotes, contractor work | Hours / days | Universally understood, maps to budgets | Anchoring bias, implies false precision |
| Three-Point | High-uncertainty tasks | Min / likely / max | Captures uncertainty range, enables PERT | Requires discipline to set honest bounds |
| Reference Comparison | Recurring task types | Relative to past | Grounded in real data, hard to argue with | Requires historical records, breaks on novelty |

Three-point formula (PERT):

  Expected = (Optimistic + 4 × Likely + Pessimistic) / 6
  Standard Deviation = (Pessimistic − Optimistic) / 6

Use the standard deviation to express confidence ranges (e.g., "3-5 days at 68% confidence, 2-6 days at 95%").
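As a sketch, the PERT formulas above translate directly into code (values are in days; the sample numbers are illustrative, not from the skill package):

```python
def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> tuple[float, float]:
    """Return (expected, standard deviation) per the PERT formulas."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Illustrative task: 2 days optimistic, 4 likely, 8 pessimistic
expected, sd = pert_estimate(2, 4, 8)              # expected ≈ 4.33 days, sd = 1.0
range_68 = (expected - sd, expected + sd)          # ~68% confidence band
range_95 = (expected - 2 * sd, expected + 2 * sd)  # ~95% confidence band
```

Quoting the wider band to stakeholders when the spread is large is usually the safer call.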

Task Decomposition

Break work down until every sub-task is < 4 hours of effort. Anything larger hides unknowns.

| Level | Example | Target Size |
|---|---|---|
| Epic | User authentication system | 2-6 weeks |
| Feature | OAuth2 login with Google | 3-10 days |
| Task | Implement callback handler | 1-3 days |
| Sub-task | Parse and validate OAuth token | 1-4 hours |
| Atomic step | Write token expiry check function | 30-90 minutes |

Decomposition checklist:
  • Can I describe what "done" looks like in one sentence?
  • Is there exactly one unknown, or zero?
  • Could a teammate pick this up without a walkthrough?
  • Is it under 4 hours?

If no — split again. If you cannot decompose a task, that signals a spike is needed. Timebox the spike (2-4 hours), then re-estimate.

Complexity Multipliers

Apply these multipliers to your base estimate when complexity factors are present. Multipliers stack multiplicatively.

| Factor | Multiplier | Rationale |
|---|---|---|
| New technology / stack | 1.5x | Learning curve, unexpected gotchas, doc-hunting |
| Unclear requirements | 2.0x | Discovery work, rework cycles, stakeholder alignment |
| Legacy code | 1.5x | Undocumented behavior, fragile tests, hidden coupling |
| Cross-team dependency | 1.5x | Coordination overhead, blocking, API negotiation |
| First-time task | 2.0x | No reference point, unknown unknowns dominate |
| Regulatory / compliance | 1.5x | Audit trails, review gates, documentation overhead |

Example: A 2-day base estimate on legacy code (1.5x) with unclear requirements (2.0x) becomes 2 × 1.5 × 2.0 = 6 days.

Rule: Never apply more than 3 multipliers — if that many factors converge, the task needs a spike or a scope reduction, not a bigger number.
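A minimal sketch of the stacking rule, including a guard for the never-more-than-3 limit (function name and structure are illustrative):

```python
def apply_multipliers(base_days: float, multipliers: list[float]) -> float:
    """Stack complexity multipliers multiplicatively on a base estimate."""
    if len(multipliers) > 3:
        # Per the rule above: that many converging factors call for a
        # spike or a scope reduction, not a bigger number.
        raise ValueError("more than 3 multipliers: spike or reduce scope instead")
    result = base_days
    for m in multipliers:
        result *= m
    return result

# The worked example: 2-day base, legacy code (1.5x), unclear requirements (2.0x)
estimate = apply_multipliers(2, [1.5, 2.0])  # → 6.0 days
```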

Buffer Calculation

Raw estimates are point predictions. Reality is a distribution.

| Buffer Type | Rule of Thumb | When to Apply |
|---|---|---|
| Known unknowns | +20% of total estimate | Integration points, third-party APIs, minor gaps |
| Unknown unknowns | +50% of total estimate | New domain, first release, greenfield system |
| Team velocity factor | ÷ focus ratio (e.g., 0.7) | Account for meetings, reviews, context switching |
| Sequential dependency | +10% per handoff | Each team/person boundary adds coordination drag |

Effective estimate formula:

  Effective = (Base Estimate × Multipliers) / Focus Ratio + Buffer

Focus ratio guidelines:

| Scenario | Typical Focus Ratio |
|---|---|
| Dedicated to one project | 0.75-0.85 |
| Split across 2 projects | 0.50-0.60 |
| On-call rotation active | 0.60-0.70 |
| Heavy meeting load (> 3h/day) | 0.45-0.55 |
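Putting the pieces together, a sketch of the effective-estimate formula, assuming the buffer is expressed as a fraction of the multiplied, focus-adjusted total (names and defaults are illustrative):

```python
def effective_estimate(base_days: float, multipliers=(), focus_ratio=1.0, buffer_pct=0.0) -> float:
    """Effective = (Base Estimate x Multipliers) / Focus Ratio + Buffer."""
    adjusted = base_days
    for m in multipliers:
        adjusted *= m                  # stack complexity multipliers
    working = adjusted / focus_ratio   # account for meetings, reviews, switching
    return working + working * buffer_pct  # buffer as a fraction of the total

# 2-day base, legacy code + unclear requirements, dedicated engineer, known unknowns
estimate = effective_estimate(2, multipliers=(1.5, 2.0), focus_ratio=0.75, buffer_pct=0.20)
# 2 x 1.5 x 2.0 = 6 → / 0.75 = 8 → +20% = 9.6 days
```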

Historical Calibration

Track actual vs estimated to improve over time. This is the single most effective way to get better at estimation.

Tracking table:

| Task | Estimated | Actual | Ratio (A/E) | Notes |
|---|---|---|---|---|
| Auth flow | 3 days | 5 days | 1.67 | OAuth docs were outdated |
| Dashboard charts | 5 days | 4 days | 0.80 | Reused existing component |
| DB migration | 2 days | 6 days | 3.00 | Discovered data quality issues |

Accuracy ratio: Calculate your rolling average of Actual / Estimated over the last 10-20 tasks.
  • Ratio < 0.8 — you're overestimating (sandbagging or excessive buffers)
  • Ratio 0.8-1.2 — well calibrated
  • Ratio > 1.2 — you're underestimating (apply the ratio as a correction factor)

Calibration action: Multiply future estimates by your rolling accuracy ratio until it converges toward 1.0.
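A minimal sketch of the rolling accuracy ratio over a tracking log like the one above (the data and names are illustrative):

```python
def accuracy_ratio(history: list[tuple[float, float]], window: int = 20) -> float:
    """Rolling average of Actual / Estimated over the most recent tasks."""
    recent = history[-window:]
    return sum(actual / estimated for estimated, actual in recent) / len(recent)

# (estimated, actual) day pairs from the tracking table
history = [(3, 5), (5, 4), (2, 6)]
ratio = accuracy_ratio(history)  # ≈ 1.82, i.e. > 1.2: underestimating
corrected = 4 * ratio            # apply as a correction factor to a 4-day estimate
```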

Common Estimation Biases

Recognize these cognitive traps — awareness alone reduces their effect.

| Bias | Description | Mitigation |
|---|---|---|
| Planning Fallacy | Assuming the best-case scenario despite past evidence | Use historical data, not intuition |
| Anchoring | First number heard dominates all subsequent estimates | Estimate independently before discussing |
| Optimism Bias | "It'll be simpler than last time" | Apply the three-point method, honor the pessimistic bound |
| Scope Creep | Estimate stays fixed while scope grows | Re-estimate when scope changes, always |
| Hofstadter's Law | "It always takes longer, even when you account for it" | Add buffer, then add more buffer for novel work |
| Dunning-Kruger | Novices underestimate; experts sometimes overestimate | Cross-check with a second estimator |
| Sunk Cost Pressure | Refusing to re-estimate because the original was "approved" | Treat estimates as living artifacts, update often |

Estimation by Task Type

Use these ranges as starting heuristics, then adjust with multipliers and historical data.

| Task Type | Typical Range | Key Variables |
|---|---|---|
| Bug fix (isolated) | 2-8 hours | Reproducibility, code familiarity, test coverage |
| Bug fix (systemic) | 1-3 days | Root cause depth, blast radius, regression risk |
| Small feature | 1-3 days | Spec clarity, UI complexity, number of endpoints |
| Medium feature | 3-10 days | Cross-cutting concerns, data model changes |
| Large feature | 2-4 weeks | Architecture decisions, team coordination |
| Refactor (local) | 1-3 days | Test coverage, coupling, blast radius |
| Refactor (systemic) | 1-4 weeks | Number of callers, migration strategy needed |
| Spike / research | 2-8 hours (timeboxed) | Always timebox — output is knowledge, not code |
| DevOps / infra | 1-5 days | Provider docs quality, IAM complexity, testing |

Communication

How you present an estimate matters as much as the number itself.

Always present as a range, never a single number:
  • Bad: "It'll take 5 days."
  • Good: "3-7 days, most likely 5. The range depends on the payment API response format — I'll know more after the spike."

Confidence levels:

| Confidence | What It Means | When to Use |
|---|---|---|
| High (±15%) | Well-understood scope, done similar before | Familiar task, clear spec |
| Medium (±30%) | Some unknowns, reasonable decomposition | Most sprint-level estimates |
| Low (±50%+) | Significant unknowns, rough order of magnitude | Roadmap forecasts, presale quotes |

Stakeholder communication rules:
  • State the range and the confidence level together
  • Name the top 1-3 risks that could push toward the upper bound
  • Offer to de-risk with a timeboxed spike before committing
  • Explicitly state what is not included (e.g., "does not include QA, deployment, or docs")
  • Update estimates proactively when new information surfaces — don't wait until the deadline
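One way to keep ranges consistent is to derive them mechanically from the confidence bands in the table; this helper is an illustrative sketch, not part of the skill package:

```python
def present_range(likely_days: float, confidence: str) -> str:
    """Format an estimate as a range using the ± bands from the confidence table."""
    bands = {"high": 0.15, "medium": 0.30, "low": 0.50}
    pct = bands[confidence]
    low, high = likely_days * (1 - pct), likely_days * (1 + pct)
    return f"{low:.1f}-{high:.1f} days, most likely {likely_days:g} ({confidence} confidence)"

print(present_range(5, "medium"))  # → 3.5-6.5 days, most likely 5 (medium confidence)
```

Pair the formatted range with the top risks that could push toward the upper bound, as the rules above require.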

Anti-Patterns

| Anti-Pattern | Why It's Harmful | Better Approach |
|---|---|---|
| Padding silently | Erodes trust when discovered; hides real uncertainty | Use explicit buffers with stated rationale |
| Sandbagging | Destroys velocity data; breeds complacency | Track accuracy ratio, aim for calibration |
| Not decomposing | Large estimates hide unknowns and compound errors | Break to < 4-hour sub-tasks, estimate bottom-up |
| Single-point estimates | Implies false certainty, no room for variance | Always give a range with confidence level |
| Estimating under pressure | Anchoring to what the stakeholder wants to hear | Ask for time to decompose; never estimate on the spot |
| Copy-paste estimates | Every task has different context and risk profile | Estimate fresh, use references as starting points only |
| Ignoring rework cycles | First pass is rarely final — reviews, feedback, QA | Factor in at least one review-and-revise loop |

NEVER Do

  • NEVER give a single-number estimate without a range — it communicates false precision and sets you up for failure
  • NEVER estimate a task you haven't decomposed — large estimates are guesses wearing a suit
  • NEVER let an old estimate stand after scope changes — estimates are invalidated the moment requirements shift
  • NEVER estimate in someone else's units — your days are not their days; clarify assumptions about focus time and interrupts
  • NEVER skip recording actuals — estimation without feedback is astrology, not engineering
  • NEVER commit to an estimate made under pressure — say "let me break this down and get back to you in an hour"
  • NEVER treat an estimate as a promise or a deadline — estimates are probabilistic forecasts, not contracts

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 Docs
  • SKILL.md Primary doc
  • README.md Docs