Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Meta-skill that orchestrates comprehensive testing across a project by coordinating testing-patterns, e2e-testing, and testing agents. Use when setting up testing for a new project, improving coverage for an existing project, establishing a testing strategy, or verifying quality before a release.
Hand the extracted package to your coding agent with a concrete install brief instead of working through the installation manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Orchestrate comprehensive testing across a project by coordinating the testing-patterns skill, e2e-testing skill, and testing agents. This meta-skill does not define test patterns itself; it routes to the right skill or agent at each stage and ensures nothing is missed.
Use this skill when:
- Setting up testing for a new project from scratch
- Improving coverage for an existing project with gaps
- Establishing or revising a testing strategy
- Before a major release to verify quality gates are met
- After a large refactor to confirm nothing broke
- During code review when test adequacy is in question
- Onboarding a team to a testing workflow
Follow these steps in order. Each step routes to a specific skill or agent; read and apply that resource before moving to the next step.
Phase 1: Discovery

Scan the project to understand existing test infrastructure, measure current coverage, and identify gaps before making changes. Without a baseline, you cannot demonstrate improvement.

1. Identify test infrastructure: Determine the test runner, assertion library, coverage tool, and CI configuration already in use. If none exist, flag that setup is needed.
2. Measure current coverage: Run the existing test suite and record statement, branch, and function coverage. This is the baseline (see the config sketch after this list).
3. Map untested code: Identify modules, functions, and code paths with no test coverage. Prioritize by risk: business-critical logic first, utilities last.
4. Catalog existing tests: Categorize existing tests as unit, integration, or E2E. Check for skipped tests, flaky tests, and tests that don't assert anything meaningful.
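One way to record the baseline is to enable coverage reporters in the test runner. A minimal sketch assuming a Jest + ts-jest setup; the preset and glob paths are assumptions, not requirements of this skill:

```ts
// jest.config.ts — minimal coverage-baseline setup (illustrative).
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest', // assumed TypeScript project
  collectCoverage: true,
  // 'json-summary' writes coverage/coverage-summary.json, which the
  // ratchet script in Phase 5 can read to compare runs.
  coverageReporters: ['text', 'json-summary'],
  collectCoverageFrom: ['src/**/*.ts', '!src/**/*.test.ts'],
};

export default config;
```

Record the totals from the first run as the baseline before writing any new tests.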
Phase 2: Strategy

Based on the discovery results, select the appropriate testing approach for this project.

1. Determine project type: Use the Coverage Targets table below to set appropriate thresholds for the project type.
2. Select test patterns: Read ai/skills/testing/testing-patterns/SKILL.md and choose the unit/integration test patterns that match the project's architecture, language, and framework.
3. Identify critical user journeys: List the 3-10 most important user workflows that require E2E coverage. These are flows where a failure would directly impact revenue, user trust, or safety.
4. Document the strategy: Fill in the Testing Strategy Template (below) and commit it to the repository.
Phase 3: Implementation

Generate tests following the patterns selected in Phase 2.

1. Unit tests first: Write unit tests for uncovered business logic, starting with the highest-risk modules (a behavior-focused sketch follows this list). Follow the testing pyramid: ~70% of your tests should be unit tests.
2. Integration tests next: Write integration tests for module boundaries, API endpoints, and database queries. Focus on the seams where components interact.
3. E2E tests for critical journeys: Read ai/skills/testing/e2e-testing/SKILL.md and write E2E tests for each critical user journey identified in Phase 2.
4. Edge case coverage: After the happy paths are covered, add tests for error conditions, boundary values, null/empty inputs, and concurrency scenarios.
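For item 1, a behavior-focused unit test might look like this sketch. `calculateLateFee` and its contract are hypothetical; the point is that each assertion checks an externally observable rule, not internal implementation:

```ts
// invoice.test.ts — behavior-focused unit test (hypothetical module).
import { calculateLateFee } from './invoice';

describe('calculateLateFee', () => {
  it('charges nothing when the invoice is not overdue', () => {
    expect(calculateLateFee({ daysOverdue: 0, amountCents: 10_000 })).toBe(0);
  });

  it('caps the fee at 25% of the invoice amount', () => {
    expect(calculateLateFee({ daysOverdue: 365, amountCents: 10_000 })).toBe(2_500);
  });
});
```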
Phase 4: Validation

Verify that the new tests meet quality standards and coverage targets.

1. Run the full test suite: Every test must pass. Fix failures before proceeding.
2. Measure coverage against targets: Compare new coverage against the thresholds for the project type. If targets are not met, return to Phase 3.
3. Check test quality: Review tests for the anti-patterns listed in testing-patterns (assert-free tests, overmocking, flaky tests, test pollution). Fix any found (see the example after this list).
4. Verify CI integration: Confirm that tests run automatically on every push/PR and that coverage thresholds are enforced in CI.
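The most common anti-pattern to catch in step 3 is the assert-free test. A sketch of the difference, with a hypothetical `createUser`:

```ts
import { createUser } from './users'; // hypothetical module

// Anti-pattern: exercises the code but asserts nothing, so it passes
// even when createUser returns garbage.
it('creates a user (assert-free; flag in review)', async () => {
  await createUser({ email: 'a@example.com' });
});

// Better: assert the behavior callers actually depend on.
it('creates a user and returns its id and email', async () => {
  const user = await createUser({ email: 'a@example.com' });
  expect(user.id).toBeDefined();
  expect(user.email).toBe('a@example.com');
});
```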
Phase 5: Maintenance

Establish ongoing practices to keep the test suite healthy.

1. Set up coverage ratcheting: Configure CI to fail if coverage drops below the current level. Coverage should only go up (a ratchet-script sketch follows this list).
2. Establish flaky test policy: Any test that fails intermittently must be fixed within one sprint or removed with a justification.
3. Define test review standards: Every PR that adds or changes logic must include corresponding test changes. Reviewers check for this.
4. Schedule test health audits: Quarterly, review test execution time, flaky test rate, skipped test count, and coverage trends.
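Coverage ratcheting (item 1) can be a small CI script. This sketch assumes the json-summary reporter from the Phase 1 config; the baseline file name is illustrative:

```ts
// ratchet.ts — fail CI when statement coverage drops; raise the bar otherwise.
import * as fs from 'node:fs';

const BASELINE_FILE = 'coverage-baseline.json'; // illustrative path

const summary = JSON.parse(
  fs.readFileSync('coverage/coverage-summary.json', 'utf8'),
);
const current: number = summary.total.statements.pct;

const baseline: number = fs.existsSync(BASELINE_FILE)
  ? JSON.parse(fs.readFileSync(BASELINE_FILE, 'utf8')).statements
  : 0;

if (current < baseline) {
  console.error(`Coverage dropped: ${current}% < baseline ${baseline}%`);
  process.exit(1); // block the merge
}

// Ratchet up: persist the new high-water mark (commit this file).
fs.writeFileSync(BASELINE_FILE, JSON.stringify({ statements: current }));
console.log(`Coverage OK: ${current}% (baseline ${baseline}%)`);
```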
Use this table to route specific needs to the correct resource:

| Need | Route To | Path |
| --- | --- | --- |
| Unit/integration test patterns | testing-patterns | ai/skills/testing/testing-patterns/SKILL.md |
| E2E test patterns | e2e-testing | ai/skills/testing/e2e-testing/SKILL.md |
| Code quality standards | clean-code | ai/skills/testing/clean-code/SKILL.md |
| Review checklist | code-review | ai/skills/testing/code-review/SKILL.md |
| CI/CD quality gates | quality-gates | ai/skills/testing/quality-gates/SKILL.md |
| Debugging test failures | debugging | ai/skills/testing/debugging/SKILL.md |

When a request falls clearly into one row, go directly to that resource. Use the full orchestration flow only when comprehensive coverage is the goal.
Targets vary by project type. Use the appropriate row to set expectations:

| Project Type | Statement | Branch | Function | E2E Journeys | Notes |
| --- | --- | --- | --- | --- | --- |
| Startup MVP | 60% | 50% | 60% | Top 3 flows | Focus on critical paths only |
| Production App | 80% | 70% | 80% | Top 10 flows | Balance speed with confidence |
| Library / Package | 90% | 85% | 95% | N/A | Public API must be fully covered |
| Critical Infrastructure | 95% | 90% | 95% | All flows | Zero tolerance for gaps |

These are minimums. Aim higher when time permits, but do not block releases on vanity metrics; prioritize meaningful coverage over percentage points.
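If the project uses Jest, these thresholds map directly onto `coverageThreshold`. A sketch encoding the Production App row (the numbers come from the table; any runner with threshold enforcement works the same way):

```ts
// jest.config.ts excerpt — enforce the Production App row in CI.
import type { Config } from 'jest';

const config: Config = {
  coverageThreshold: {
    global: {
      statements: 80, // Statement column
      branches: 70,   // Branch column
      functions: 80,  // Function column
    },
  },
};

export default config;
```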
All of the following must be satisfied before marking testing complete:

| Gate | Requirement | Why |
| --- | --- | --- |
| All tests pass | Zero failures, zero errors | Flaky tests count as failures |
| Coverage targets met | Statement, branch, and function coverage meet project-type thresholds | Untested code is unverified code |
| Critical journeys covered | Every critical user journey has a passing E2E test | Revenue and trust depend on these flows |
| No unjustified skips | Every skip, xit, or xdescribe has a comment and linked issue | Skipped tests rot into permanent gaps |
| Execution time budget | Unit < 60s, E2E < 10min | Slow suites get skipped by developers |
| No test pollution | Running any test file alone produces same results as full suite | Shared state masks failures |
| Mocks are justified | Every mock has a comment explaining why the real impl cannot be used | Over-mocking hides real bugs |
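For the "no unjustified skips" gate, a skip that would pass review looks like this sketch (`TEST-412` and `retryWebhook` are hypothetical):

```ts
import { retryWebhook } from './webhooks'; // hypothetical helper

// Skipped: intermittent timeout from the upstream rate limiter.
// Tracked in TEST-412; remove the skip once the limiter is stubbed.
it.skip('retries the payment webhook after a 429 response', async () => {
  const result = await retryWebhook({ status: 429 });
  expect(result.attempts).toBeGreaterThan(1);
});
```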
- NEVER write tests that test implementation details instead of behavior; tests must verify what the code does, not how it does it
- NEVER skip the discovery phase; always measure the baseline before writing new tests, or you cannot demonstrate improvement
- NEVER merge tests that depend on execution order; each test must be independent and idempotent
- NEVER mock what you do not own; wrap third-party dependencies in your own adapters and mock the adapters instead (see the sketch after this list)
- NEVER treat coverage percentage as the sole quality metric; 100% coverage with weak assertions is worse than 70% coverage with strong assertions
- NEVER leave the test suite in a failing state; if a test fails, fix it or remove it with a justification before moving on
- NEVER skip E2E tests for critical user journeys; unit tests alone cannot catch integration failures in flows that matter most
- NEVER deploy without running the full test suite; partial test runs create false confidence
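The "mock what you own" rule usually means a thin adapter. A sketch with a hypothetical payment SDK; the vendor client is typed structurally, so nothing here depends on a real SDK's API:

```ts
// payment-gateway.ts — an adapter you own around a third-party SDK.
export interface PaymentGateway {
  charge(amountCents: number, token: string): Promise<{ id: string }>;
}

// Production implementation delegates to the vendor client (illustrative shape).
export class VendorGateway implements PaymentGateway {
  constructor(
    private client: {
      charges: { create(opts: object): Promise<{ id: string }> };
    },
  ) {}

  charge(amountCents: number, token: string) {
    return this.client.charges.create({ amount: amountCents, source: token });
  }
}

// In tests, mock the interface you own, never the SDK itself:
export const fakeGateway: PaymentGateway = {
  charge: async () => ({ id: 'ch_test_123' }),
};
```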