# Send Testing Workflow to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of working out the install steps manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "testing-workflow",
    "name": "Testing Workflow",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/wpank/testing-workflow",
    "canonicalUrl": "https://clawhub.ai/wpank/testing-workflow",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/testing-workflow",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=testing-workflow",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "README.md",
      "SKILL.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "testing-workflow",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-09T22:39:27.823Z",
      "expiresAt": "2026-05-16T22:39:27.823Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=testing-workflow",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=testing-workflow",
        "contentDisposition": "attachment; filename=\"testing-workflow-0.1.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "testing-workflow"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/testing-workflow"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/testing-workflow",
    "downloadUrl": "https://openagent3.xyz/downloads/testing-workflow",
    "agentUrl": "https://openagent3.xyz/skills/testing-workflow/agent",
    "manifestUrl": "https://openagent3.xyz/skills/testing-workflow/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/testing-workflow/agent.md"
  }
}
```
## Documentation

### Testing Workflow

Orchestrate comprehensive testing across a project by coordinating the testing-patterns skill, e2e-testing skill, and testing agents. This meta-skill does not define test patterns itself — it routes to the right skill or agent at each stage and ensures nothing is missed.

### When to Use

- Setting up testing for a new project from scratch
- Improving coverage for an existing project with gaps
- Establishing or revising a testing strategy
- Before a major release to verify quality gates are met
- After a large refactor to confirm nothing broke
- During code review when test adequacy is in question
- Onboarding a team to a testing workflow

### Orchestration Flow

Follow these steps in order. Each step routes to a specific skill or agent — read and apply that resource before moving to the next step.

### Phase 1: Discovery and Baseline

Scan the project to understand existing test infrastructure, measure current coverage, and identify gaps before making changes. Without a baseline, you cannot demonstrate improvement.

1. **Identify test infrastructure** — Determine the test runner, assertion library, coverage tool, and CI configuration already in use. If none exist, flag that setup is needed.
2. **Measure current coverage** — Run the existing test suite and record statement, branch, and function coverage. This is the baseline.
3. **Map untested code** — Identify modules, functions, and code paths with no test coverage. Prioritize by risk: business-critical logic first, utilities last.
4. **Catalog existing tests** — Categorize existing tests as unit, integration, or E2E. Check for skipped tests, flaky tests, and tests that don't assert anything meaningful.
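Recording the baseline can be scripted. A minimal sketch, assuming your coverage tool emits coverage.py's JSON report format (`coverage json`); the field names below match that tool's output, but verify them against your own report:

```python
def record_baseline(report: dict) -> dict:
    """Extract the headline numbers from a parsed coverage.py JSON
    report (the structure produced by `coverage json`)."""
    totals = report["totals"]
    return {
        # Overall statement coverage percentage, rounded for reporting.
        "statement_pct": round(totals["percent_covered"], 1),
        "covered_lines": totals["covered_lines"],
        "missing_lines": totals["missing_lines"],
    }
```

Store the returned dict somewhere in the repo so Phase 4 has a concrete number to compare against.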

### Phase 2: Strategy Selection

Based on the discovery results, select the appropriate testing approach for this project.

1. **Determine project type** — Use the Coverage Targets table below to set appropriate thresholds for the project type.
2. **Select test patterns** — Read ai/skills/testing/testing-patterns/SKILL.md and choose the unit/integration test patterns that match the project's architecture, language, and framework.
3. **Identify critical user journeys** — List the 3-10 most important user workflows that require E2E coverage. These are flows where a failure would directly impact revenue, user trust, or safety.
4. **Document the strategy** — Fill in the Testing Strategy Template (below) and commit it to the repository.

### Phase 3: Implementation

Generate tests following the patterns selected in Phase 2.

1. **Unit tests first** — Write unit tests for uncovered business logic, starting with the highest-risk modules. Follow the testing pyramid: ~70% of your tests should be unit tests.
2. **Integration tests next** — Write integration tests for module boundaries, API endpoints, and database queries. Focus on the seams where components interact.
3. **E2E tests for critical journeys** — Read ai/skills/testing/e2e-testing/SKILL.md and write E2E tests for each critical user journey identified in Phase 2.
4. **Edge case coverage** — After the happy paths are covered, add tests for error conditions, boundary values, null/empty inputs, and concurrency scenarios.
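The ordering above (happy path first, then boundaries, then error conditions) can be illustrated with a small, entirely hypothetical pure function; `apply_discount` is invented for illustration and is not part of the skill package:

```python
def apply_discount(price: float, pct: float) -> float:
    """Hypothetical business logic used only to show test structure."""
    if price < 0 or not 0 <= pct <= 100:
        raise ValueError("price must be >= 0 and pct must be in [0, 100]")
    return round(price * (1 - pct / 100), 2)

# 1. Happy path: the common, expected usage.
def test_happy_path():
    assert apply_discount(100.0, 20) == 80.0

# 2. Boundary values: the edges of the valid input range.
def test_boundaries():
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0

# 3. Error conditions: invalid input must be rejected, not silently accepted.
def test_rejects_invalid_input():
    try:
        apply_discount(-1.0, 10)
    except ValueError:
        return
    raise AssertionError("negative price should be rejected")
```

With pytest (or any runner) the three functions would be discovered and run automatically; the structure is what matters, not the framework.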

### Phase 4: Validation

Verify that the new tests meet quality standards and coverage targets.

1. **Run the full test suite** — Every test must pass. Fix failures before proceeding.
2. **Measure coverage against targets** — Compare new coverage against the thresholds for the project type. If targets are not met, return to Phase 3.
3. **Check test quality** — Review tests for the anti-patterns listed in testing-patterns (assert-free tests, overmocking, flaky tests, test pollution). Fix any found.
4. **Verify CI integration** — Confirm that tests run automatically on every push/PR and that coverage thresholds are enforced in CI.

### Phase 5: Maintenance

Establish ongoing practices to keep the test suite healthy.

1. **Set up coverage ratcheting** — Configure CI to fail if coverage drops below the current level. Coverage should only go up.
2. **Establish flaky test policy** — Any test that fails intermittently must be fixed within one sprint or removed with a justification.
3. **Define test review standards** — Every PR that adds or changes logic must include corresponding test changes. Reviewers check for this.
4. **Schedule test health audits** — Quarterly, review test execution time, flaky test rate, skipped test count, and coverage trends.
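Coverage ratcheting amounts to a few lines of CI glue. A sketch under stated assumptions: the current percentage comes from your coverage tool, and the floor lives in a JSON file whose name and schema are our own invention:

```python
import json
from pathlib import Path

def ratchet(current_pct: float, baseline_file: Path) -> float:
    """Fail the build if coverage drops below the recorded floor;
    raise the floor whenever coverage improves."""
    floor = 0.0
    if baseline_file.exists():
        floor = json.loads(baseline_file.read_text())["statement_pct"]
    if current_pct < floor:
        # Non-zero exit fails the CI job.
        raise SystemExit(
            f"coverage {current_pct:.1f}% fell below baseline {floor:.1f}%"
        )
    if current_pct > floor:
        # Coverage improved: commit the new floor so it can never regress.
        baseline_file.write_text(json.dumps({"statement_pct": current_pct}))
    return max(current_pct, floor)
```

Commit the baseline file to the repository so every branch is compared against the floor established on trunk.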

### Skill Routing Table

Use this table to route specific needs to the correct resource:

| Need | Route To | Path |
| --- | --- | --- |
| Unit/integration test patterns | testing-patterns | ai/skills/testing/testing-patterns/SKILL.md |
| E2E test patterns | e2e-testing | ai/skills/testing/e2e-testing/SKILL.md |
| Code quality standards | clean-code | ai/skills/testing/clean-code/SKILL.md |
| Review checklist | code-review | ai/skills/testing/code-review/SKILL.md |
| CI/CD quality gates | quality-gates | ai/skills/testing/quality-gates/SKILL.md |
| Debugging test failures | debugging | ai/skills/testing/debugging/SKILL.md |

When a request falls clearly into one row, go directly to that resource. Use the full orchestration flow only when comprehensive coverage is the goal.

### Coverage Targets

Targets vary by project type. Use the appropriate row to set expectations:

| Project Type | Statement | Branch | Function | E2E Journeys | Notes |
| --- | --- | --- | --- | --- | --- |
| Startup MVP | 60% | 50% | 60% | Top 3 flows | Focus on critical paths only |
| Production App | 80% | 70% | 80% | Top 10 flows | Balance speed with confidence |
| Library / Package | 90% | 85% | 95% | N/A | Public API must be fully covered |
| Critical Infrastructure | 95% | 90% | 95% | All flows | Zero tolerance for gaps |

These are minimums. Aim higher when time permits, but do not block releases on vanity metrics — prioritize meaningful coverage over percentage points.
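For CI enforcement, the coverage targets can be expressed as data. A sketch; the dictionary keys are our own naming, not a standard schema, and the numbers mirror the table above:

```python
# Minimum coverage percentages per project type (from the targets table).
TARGETS = {
    "startup_mvp":    {"statement": 60, "branch": 50, "function": 60},
    "production_app": {"statement": 80, "branch": 70, "function": 80},
    "library":        {"statement": 90, "branch": 85, "function": 95},
    "critical_infra": {"statement": 95, "branch": 90, "function": 95},
}

def meets_targets(project_type: str, measured: dict) -> list[str]:
    """Return the metrics that fall below the minimum for the given
    project type; an empty list means all targets are met."""
    required = TARGETS[project_type]
    return [m for m, floor in required.items() if measured.get(m, 0) < floor]
```

A CI step would fail the build whenever `meets_targets(...)` returns a non-empty list.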

### Testing Strategy Template

Use this template to document the testing strategy for a project. Fill it in during the orchestration flow and keep it in the repo.

```markdown
# Testing Strategy

## Project Overview
- **Project**: [name]
- **Type**: [startup MVP | production app | library | critical infrastructure]
- **Primary Language**: [language]
- **Framework**: [framework]
- **Test Runner**: [runner]
- **Coverage Tool**: [tool]

## Coverage Baseline
- **Statement**: [X%]
- **Branch**: [X%]
- **Function**: [X%]
- **E2E Journeys Covered**: [N of M]
- **Date Measured**: [YYYY-MM-DD]

## Coverage Targets
- **Statement**: [target%]
- **Branch**: [target%]
- **Function**: [target%]
- **E2E Journeys**: [target count]

## Test Patterns Selected
- [ ] [Pattern 1 — reason for selection]
- [ ] [Pattern 2 — reason for selection]
- [ ] [Pattern 3 — reason for selection]

## Critical User Journeys (E2E)
1. [Journey 1 — e.g., signup -> onboarding -> first action]
2. [Journey 2 — e.g., login -> dashboard -> export]
3. [Journey 3 — e.g., checkout -> payment -> confirmation]

## Gaps and Risks
- [Untested area 1 — risk level, mitigation plan]
- [Untested area 2 — risk level, mitigation plan]

## Quality Gate Status
- [ ] All tests pass
- [ ] Coverage targets met
- [ ] Critical journeys covered with E2E
- [ ] No skipped tests without justification
- [ ] Test execution time within budget
- [ ] CI enforces coverage thresholds
```

### Quality Gates for Testing Completion

All of the following must be satisfied before marking testing complete:

| Gate | Requirement | Why |
| --- | --- | --- |
| All tests pass | Zero failures, zero errors | Flaky tests count as failures |
| Coverage targets met | Statement, branch, and function coverage meet project-type thresholds | Untested code is unverified code |
| Critical journeys covered | Every critical user journey has a passing E2E test | Revenue and trust depend on these flows |
| No unjustified skips | Every skip, xit, or xdescribe has a comment and linked issue | Skipped tests rot into permanent gaps |
| Execution time budget | Unit < 60s, E2E < 10min | Slow suites get skipped by developers |
| No test pollution | Running any test file alone produces same results as full suite | Shared state masks failures |
| Mocks are justified | Every mock has a comment explaining why the real impl cannot be used | Over-mocking hides real bugs |
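The "no unjustified skips" gate lends itself to a simple line scan. A rough sketch; the marker list and the same-line-comment heuristic are assumptions that would need tuning for a real codebase:

```python
import re

# Common skip markers across JS/TS and Python test suites.
SKIP_PATTERN = re.compile(r"\b(xit|xdescribe|it\.skip|pytest\.mark\.skip)\b")

def unjustified_skips(source: str) -> list[int]:
    """Return line numbers of skip markers that carry no explanatory
    comment on the same line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SKIP_PATTERN.search(line) and "#" not in line and "//" not in line:
            hits.append(lineno)
    return hits
```

A pre-merge hook could run this over changed test files and block the PR when the list is non-empty.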

### NEVER Do

- NEVER write tests that test implementation details instead of behavior — tests must verify what the code does, not how it does it
- NEVER skip the discovery phase — always measure the baseline before writing new tests, or you cannot demonstrate improvement
- NEVER merge tests that depend on execution order — each test must be independent and idempotent
- NEVER mock what you do not own — wrap third-party dependencies in your own adapters and mock the adapters instead
- NEVER treat coverage percentage as the sole quality metric — 100% coverage with weak assertions is worse than 70% coverage with strong assertions
- NEVER leave the test suite in a failing state — if a test fails, fix it or remove it with a justification before moving on
- NEVER skip E2E tests for critical user journeys — unit tests alone cannot catch integration failures in flows that matter most
- NEVER deploy without running the full test suite — partial test runs create false confidence
- NEVER deploy without running the full test suite — partial test runs create false confidence
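The "never mock what you do not own" rule in practice: wrap the vendor SDK in an adapter you own, then substitute the adapter in tests. A sketch with invented names (`PaymentGateway`, the `post` call, and the response shape are all assumptions, not a real SDK):

```python
class PaymentGateway:
    """Adapter we own around a hypothetical vendor payments SDK."""

    def __init__(self, client):
        self._client = client  # vendor SDK instance, injected

    def charge(self, cents: int) -> bool:
        # The only place in the codebase that speaks the vendor's protocol.
        resp = self._client.post("/charge", {"amount": cents})
        return resp.get("status") == "ok"


class FakeGateway(PaymentGateway):
    """Test double for the adapter, not for the vendor SDK."""

    def __init__(self, succeed: bool = True):
        self._succeed = succeed

    def charge(self, cents: int) -> bool:
        return self._succeed
```

Tests then exercise application code against `FakeGateway`; only a thin contract-test layer ever touches the real SDK.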
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: wpank
- Version: 0.1.0
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-05-09T22:39:27.823Z
- Expires at: 2026-05-16T22:39:27.823Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/testing-workflow)
- [Send to Agent page](https://openagent3.xyz/skills/testing-workflow/agent)
- [JSON manifest](https://openagent3.xyz/skills/testing-workflow/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/testing-workflow/agent.md)
- [Download page](https://openagent3.xyz/downloads/testing-workflow)