Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Complete system for writing, reviewing, and approving product requirement documents (PRDs) from idea validation through shipped features.
Hand the extracted package to your coding agent with a concrete install brief instead of working it out manually.

Fresh install:

"I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."

Upgrade:

"I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
Complete product requirements methodology: from idea to spec to shipped feature. Not just a JSON template, but a full system for writing PRDs that developers actually follow and stakeholders actually approve.
Use this skill when:
- Turning a vague idea into a buildable specification
- Writing PRDs for new features, products, or major refactors
- Reviewing/improving existing PRDs before sprint planning
- Breaking epics into right-sized user stories
- Creating technical design documents alongside product specs
- Preparing specs for AI coding agents (Claude Code, Cursor, Copilot)
Before writing a single requirement, answer these questions. Skip this and you'll rewrite the PRD 3 times.
```yaml
discovery_brief:
  problem:
    statement: ""            # One sentence. If you need two, you don't understand it yet.
    who_has_it: ""           # Specific persona, not "users"
    frequency: ""            # Daily? Weekly? Once? (daily problems > occasional ones)
    current_workaround: ""   # What do they do today? (no workaround = maybe not a real problem)
  evidence:
    - type: ""               # support_ticket | user_interview | analytics | churned_user | sales_objection
      detail: ""
      date: ""
  impact:
    users_affected: ""       # Number or percentage
    revenue_impact: ""       # $ at risk or $ opportunity
    strategic_alignment: ""  # Which company goal does this serve?
  constraints:
    deadline: ""             # Hard date or flexible?
    budget: ""               # Engineering weeks available
    dependencies: ""         # What must exist first?
    regulatory: ""           # Any compliance requirements?
  success_metrics:
    primary: ""              # ONE metric that defines success
    secondary: []            # 2-3 supporting metrics
    measurement_method: ""   # How will you actually measure this?
    target: ""               # Specific number, not "improve"
    timeframe: ""            # When do you expect to see results?
```
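For concreteness, here is a sketch of the brief filled in with the onboarding example used below; the date, budget, deadline, and alignment values are illustrative, not from the source:

```yaml
discovery_brief:
  problem:
    statement: "New free-trial users take 34 minutes to reach first value; 73% who miss the 10-minute mark churn within 48 hours."
    who_has_it: "New free-trial users (~500/month)"
    frequency: "Every new signup"
    current_workaround: "Users grind through a 12-step setup wizard or abandon the trial"
  evidence:
    - type: "analytics"
      detail: "Average time-to-value is 34 minutes; 73% of users who exceed 10 minutes churn within 48 hours"
      date: "2025-01-10"   # illustrative
  impact:
    users_affected: "~500 new trials/month"
    revenue_impact: "~$18K/month in lost conversions"
    strategic_alignment: "Quarterly activation goal"   # illustrative
  constraints:
    deadline: "Flexible"             # illustrative
    budget: "3 engineering weeks"    # illustrative
    dependencies: "None"
    regulatory: "None"
  success_metrics:
    primary: "Median time-to-first-value"
    secondary: ["48-hour trial churn rate", "setup wizard completion rate"]
    measurement_method: "Product analytics funnel from signup to first 'aha' event"
    target: "Under 10 minutes"
    timeframe: "Within 4 weeks of launch"   # illustrative
```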
Use this formula for the problem statement:

[Persona] needs [capability] because [reason], but currently [blocker], which causes [measurable impact].

Examples:
- Bad: "Users need better onboarding" (vague, unmeasurable)
- Good: "New free-trial users (500/month) need to reach their first 'aha moment' within 10 minutes because 73% who don't will churn within 48 hours, but currently the average time-to-value is 34 minutes due to a 12-step setup wizard, which costs us ~$18K/month in lost conversions."
Before proceeding, check these. If any are true, STOP and push back:

| Signal | Action |
| --- | --- |
| No evidence of the problem (just someone's opinion) | Demand evidence. Opinions aren't requirements. |
| Solution already decided ("just build X") | Rewind to the problem. Solutions without problems = features nobody uses. |
| Success metric is unmeasurable | Define how you'll measure it or don't build it. |
| Affects <1% of users with no revenue impact | Deprioritize. Small problems with small impact = small returns. |
| Scope keeps expanding during discovery | Scope lock. If everything is in scope, nothing is. |
| Dimension | 0-2 (Weak) | 3-4 (Adequate) | 5 (Strong) | Weight |
| --- | --- | --- | --- | --- |
| Problem clarity | Vague, no data | Clear but thin evidence | Sharp statement + multiple evidence points | x4 |
| Scope discipline | Everything in scope | Some boundaries | Explicit in/out + "what this is NOT" | x3 |
| Story quality | Vague tasks | Stories with some criteria | INVEST stories + verifiable acceptance criteria | x4 |
| Edge cases | None listed | Happy path + 1-2 edges | Comprehensive: empty, error, slow, permissions, concurrent | x3 |
| Success metrics | "Improve X" | Metric + target | Metric + baseline + target + timeframe + measurement method | x3 |
| Technical feasibility | No tech section | Architecture notes | Dependencies, performance, security, migration plan | x2 |
| Release plan | None | "Ship it" | Feature flag + rollout % + rollback + launch checklist | x1 |

Scoring: sum of (score × weight). Max = 100.
- 80-100: Ship-ready. Get approvals.
- 60-79: Solid but missing pieces. Fill gaps before review.
- 40-59: Needs work. Major sections incomplete.
- <40: Start over or go back to discovery.
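As a sanity check on the arithmetic, here is a hypothetical scored PRD; the per-dimension scores are illustrative, only the weights come from the rubric:

```yaml
# Hypothetical rubric scoring (scores illustrative; weights from the rubric above)
rubric_scores:
  problem_clarity:       {score: 4, weight: 4}  # 16
  scope_discipline:      {score: 3, weight: 3}  #  9
  story_quality:         {score: 4, weight: 4}  # 16
  edge_cases:            {score: 3, weight: 3}  #  9
  success_metrics:       {score: 4, weight: 3}  # 12
  technical_feasibility: {score: 3, weight: 2}  #  6
  release_plan:          {score: 2, weight: 1}  #  2
total: 70  # sum of score x weight; max = 5 x (4+3+4+3+3+2+1) = 100 -> 60-79: fill gaps before review
```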
```yaml
story:
  id: "US-001"
  title: ""              # Action-oriented: "Add priority field to tasks table"
  persona: ""            # Who benefits
  narrative: "As a [persona], I want [capability] so that [benefit]"
  acceptance_criteria:
    - criterion: ""      # Verifiable statement
      type: "functional" # functional | performance | security | ux
  priority: 1            # Execution order (dependencies first)
  size: ""               # XS | S | M | L | XL
  status: "todo"         # todo | in_progress | review | done
  notes: ""              # Runtime observations
  depends_on: []         # Story IDs this depends on
  blocked_by: []         # External blockers
```
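Filled in, a story might look like the following sketch; the persona, narrative, and criteria wording are illustrative, based on the agent-ready example later in this document:

```yaml
# Hypothetical filled-in story (wording illustrative)
story:
  id: "US-001"
  title: "Add priority field to tasks table"
  persona: "Task list user"
  narrative: "As a task list user, I want to set a priority on tasks so that I can work on the most important ones first"
  acceptance_criteria:
    - criterion: "Tasks accept a priority of high, medium, or low, defaulting to medium"
      type: "functional"
    - criterion: "Existing tests still pass after the schema change"
      type: "functional"
  priority: 1
  size: "S"
  status: "todo"
  notes: ""
  depends_on: []
  blocked_by: []
```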
| Letter | Criterion | Test |
| --- | --- | --- |
| I - Independent | Can be built without other incomplete stories | No circular dependencies |
| N - Negotiable | Details can flex (the "what" is fixed, the "how" is flexible) | Multiple implementation approaches exist |
| V - Valuable | Delivers user or business value on its own | Would a user/stakeholder care if only this shipped? |
| E - Estimable | Team can size it | No major unknowns (if unknowns exist, add a spike first) |
| S - Small | Completable in one sprint (or one context window for AI agents) | 1-3 days of work max |
| T - Testable | Has verifiable acceptance criteria | Can write a test for each criterion |
Good criteria are:
- Binary (pass/fail, not subjective)
- Specific (numbers, not adjectives)
- Independent (testable in isolation)

| Bad | Good |
| --- | --- |
| "Works correctly" | "Returns 200 with JSON body containing id, name, status fields" |
| "Fast enough" | "API responds in <200ms at p95 with 100 concurrent users" |
| "User-friendly" | "Form shows inline validation errors within 100ms of field blur" |
| "Secure" | "Endpoint returns 403 for users without admin role" |
| "Handles errors" | "On network timeout, shows retry button + cached data if available" |

Always include these universal criteria:
- Typecheck passes (`tsc --noEmit --strict`) (for TypeScript projects)
- All existing tests still pass
- New functionality has test coverage
| Size | Scope | Time | Example |
| --- | --- | --- | --- |
| XS | Config change, copy update, env var | <2 hours | "Update error message text" |
| S | Single component/function, no new deps | 2-4 hours | "Add date picker to form" |
| M | Feature slice: DB + API + UI | 1-2 days | "Add task priority with filter" |
| L | Multi-component feature, new patterns | 2-3 days | "Add real-time notifications" |
| XL | Too big. Split it. | - | - |
Always order stories bottom-up:

- Level 1: Schema & Data (migrations, models, seed data)
- Level 2: Backend Logic (services, APIs, business rules)
- Level 3: Integration (API routes, auth, middleware)
- Level 4: UI Components (forms, tables, modals)
- Level 5: UX Polish (animations, empty states, loading)
- Level 6: Analytics & Monitoring (events, dashboards)

Each level depends ONLY on levels below it. Never build UI before the API exists (see the sketch below).
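A minimal sketch of a pyramid-ordered backlog, reusing the task-priority feature from the sizing table; the story IDs, titles, and level assignments are illustrative:

```yaml
# Hypothetical backlog ordered bottom-up (IDs and titles illustrative)
stories:
  - {id: "US-001", level: 1, title: "Add priority column to tasks schema",    depends_on: []}
  - {id: "US-002", level: 2, title: "Handle priority in task service logic",  depends_on: ["US-001"]}
  - {id: "US-003", level: 3, title: "Accept priority in task API routes",     depends_on: ["US-002"]}
  - {id: "US-004", level: 4, title: "Add priority selector to task form",     depends_on: ["US-003"]}
  - {id: "US-005", level: 5, title: "Add empty/loading states for filter",    depends_on: ["US-004"]}
  - {id: "US-006", level: 6, title: "Track priority-filter usage events",     depends_on: ["US-004"]}
```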
When a story is too big, split using one of these patterns:

| Strategy | When to Use | Example |
| --- | --- | --- |
| By layer | Full-stack feature | "Add schema" → "Add API" → "Add UI" |
| By operation | CRUD feature | "Create task" → "Read/list tasks" → "Update task" → "Delete task" |
| By persona | Multi-role feature | "Admin creates template" → "User fills template" → "Viewer sees results" |
| By happy/sad path | Complex flows | "Successful payment" → "Payment declined" → "Payment timeout" |
| By platform | Cross-platform | "iOS support" → "Android support" → "Web support" |
| Spike + implement | High uncertainty | "Spike: evaluate auth libraries (2h)" → "Implement auth with chosen library" |
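Applied to an oversized story, the by-operation split might come out like this; the IDs, original title, and sizes are hypothetical:

```yaml
# Hypothetical by-operation split of an XL story
original: {id: "US-010", title: "Manage tasks", size: "XL"}
split_into:
  - {id: "US-010a", title: "Create task",     size: "S"}
  - {id: "US-010b", title: "Read/list tasks", size: "S"}
  - {id: "US-010c", title: "Update task",     size: "S"}
  - {id: "US-010d", title: "Delete task",     size: "XS"}
```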
When the PRD will be executed by AI agents (Claude Code, Cursor, Copilot Workspace, etc.), add these adaptations:
```yaml
agent_story:
  id: "US-001"
  title: "Add priority field to tasks table"
  context: |
    The tasks table is in src/db/schema.ts using Drizzle ORM.
    Priority values should be: high, medium, low (default: medium).
    See existing fields for naming conventions.
  acceptance_criteria:
    - "Add `priority` column to `tasks` table in src/db/schema.ts"
    - "Type: enum('high', 'medium', 'low'), default 'medium', not null"
    - "Generate migration: `npx drizzle-kit generate`"
    - "Run migration: `npx drizzle-kit push`"
    - "Verify: `tsc --noEmit --strict` passes"
    - "Verify: existing tests pass (`npm test`)"
  files_to_touch:
    - src/db/schema.ts
    - drizzle/   # generated migration
  commands_to_run:
    - "npx drizzle-kit generate"
    - "npx drizzle-kit push"
    - "tsc --noEmit --strict"
    - "npm test"
  done_when: "All verify commands pass with exit code 0"
```
- Be explicit about file paths. Agents can't guess your project structure.
- Include verification commands. Agents need a "definition of done" they can check.
- One context window per story. If a story needs the agent to remember more than ~50 files, it's too big.
- List files to touch. Reduces agent exploration time and prevents hallucination.
- Order matters even more. Agents execute sequentially, so wrong order = compounding errors.
- Include the commands. Don't say "run the migration"; say `npx drizzle-kit push`.
Completeness:
- Problem statement has evidence (not just opinion)
- "What this is NOT" section exists and is specific
- Every story has ≥3 acceptance criteria
- Edge cases table covers: empty state, error state, permissions, concurrent access
- Success metrics have baseline + target + timeframe
- Technical section addresses: performance, security, dependencies

Quality:
- No story larger than "L" (split XL stories)
- All acceptance criteria are binary (pass/fail)
- No circular dependencies between stories
- Dependency pyramid ordering is correct
- Release plan includes rollback strategy

Readability:
- Executive summary is <3 sentences
- Non-engineers can understand the problem section
- Engineers can start building from the stories section alone
- No jargon without definition
1. Author writes PRD
2. Self-review (score with rubric; must be ≥60)
3. Peer review (another PM or tech lead)
4. Engineering review (feasibility + sizing)
5. Stakeholder approval (PM lead or product director)
6. Status → Approved
7. Sprint planning (stories → backlog)
| Feedback | Fix |
| --- | --- |
| "What problem does this solve?" | Your problem statement is weak. Add evidence. |
| "This is too big" | Split into phases. Ship the smallest valuable slice first (MVP). |
| "How do we know it worked?" | Your success metrics are vague. Add numbers + timeframe. |
| "What about [edge case]?" | Your edge case table is incomplete. Add it. |
| "When does this ship?" | Add timeline with milestones, not just a deadline. |
| "Who approved this?" | Add approvers field and get explicit sign-offs. |
Draft → In Review (→ Rejected) → Approved → In Progress → Shipped → Post-Launch Review (→ Iterate / Kill)
Track story completion in the PRD itself or a linked tracker:

```yaml
progress:
  total_stories: 12
  done: 7
  in_progress: 2
  blocked: 1
  todo: 2
  completion: "58%"
  blocked_items:
    - story: "US-008"
      blocker: "Waiting for payments API access from finance team"
      since: "2025-01-15"
      escalation: "Pinged finance lead, follow up Friday"
  velocity:
    stories_per_week: 3.5
    estimated_completion: "2025-02-01"
```
```yaml
post_launch:
  shipped_date: ""
  review_date: ""            # 2-4 weeks after ship
  metrics:
    primary:
      metric: ""
      baseline: ""
      target: ""
      actual: ""
      verdict: ""            # hit | miss | exceeded
    secondary:
      - metric: ""
        actual: ""
        verdict: ""
  qualitative:
    user_feedback: []
    support_tickets: ""      # count related to this feature
    unexpected_outcomes: []
  process_retro:
    what_went_well: []
    what_didnt: []
    estimation_accuracy: ""  # actual vs estimated effort
    scope_changes: ""        # what changed after approval
  decision: ""               # iterate | maintain | deprecate | expand
  next_actions: []
```
| Command | What It Does |
| --- | --- |
| "Write a PRD for [feature]" | Full PRD from discovery through stories |
| "Break this into stories" | Takes a feature description → user stories |
| "Review this PRD" | Scores against quality rubric + gives specific feedback |
| "Make this agent-ready" | Converts PRD stories to agent-optimized format |
| "What's missing from this PRD?" | Gap analysis against the template |
| "Split this story" | Takes a large story → smaller INVEST-compliant stories |
| "Score this PRD" | Quality rubric scoring with dimension breakdown |
| "Create project context for [project]" | Generates PROJECT_CONTEXT.md for AI agent execution |
| "Post-launch review for [feature]" | Generates review template with metrics |
| "Track progress" | Updates completion stats from story statuses |