Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Test-driven development workflow with test generation, coverage analysis, and multi-framework support
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Test-driven development skill for generating tests, analyzing coverage, and guiding red-green-refactor workflows across Jest, Pytest, JUnit, and Vitest.
1. Provide source code (TypeScript, JavaScript, Python, Java)
2. Specify the target framework (Jest, Pytest, JUnit, Vitest)
3. Run test_generator.py with your requirements
4. Review the generated test stubs

Validation: Tests compile and cover the happy path, error cases, and edge cases.
1. Generate a coverage report from your test runner (e.g. npm test -- --coverage)
2. Run coverage_analyzer.py on the LCOV/JSON/XML report
3. Review the prioritized gaps (P0/P1/P2)
4. Generate missing tests for uncovered paths

Validation: Coverage meets the target threshold (typically 80%+).
1. Write a failing test first (RED)
2. Run tdd_workflow.py --phase red to validate
3. Implement the minimal code to pass (GREEN)
4. Run tdd_workflow.py --phase green to validate
5. Refactor while keeping tests green (REFACTOR)

Validation: All tests pass after each cycle.
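The cycle above can be sketched in plain pytest. This is an illustrative example, not output of the skill's scripts; the `slugify` function is hypothetical:

```python
# RED: write the failing test first. At this point slugify() does not
# exist yet, so the test fails on import/NameError.
#
#   def test_slugify_lowercases_and_hyphenates():
#       assert slugify("Hello World") == "hello-world"

# GREEN: implement the minimal code that makes the test pass.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

# REFACTOR: improve the implementation while the test stays green,
# e.g. also strip punctuation, keeping only letters and digits.
import re

def slugify_refactored(text: str) -> str:
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Both versions still satisfy the original test.
assert slugify("Hello World") == "hello-world"
assert slugify_refactored("Hello, World!") == "hello-world"
```

The point of the phase checks is that each change is justified by a test: RED proves the test can fail, GREEN proves the implementation satisfies it, REFACTOR proves the cleanup broke nothing.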
Input source function (`math_utils.py`):

```python
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
```

Command:

```
python scripts/test_generator.py --input math_utils.py --framework pytest
```

Generated test output (`test_math_utils.py`):

```python
import pytest
from math_utils import divide

class TestDivide:
    def test_divide_positive_numbers(self):
        assert divide(10, 2) == 5.0

    def test_divide_negative_numerator(self):
        assert divide(-10, 2) == -5.0

    def test_divide_float_result(self):
        assert divide(1, 3) == pytest.approx(0.333, rel=1e-2)

    def test_divide_by_zero_raises_value_error(self):
        with pytest.raises(ValueError, match="Cannot divide by zero"):
            divide(10, 0)

    def test_divide_zero_numerator(self):
        assert divide(0, 5) == 0.0
```
Command:

```
python scripts/coverage_analyzer.py --report lcov.info --threshold 80
```

Sample output:

```
Coverage Report - Overall: 63% (threshold: 80%)

P0 - Critical gaps (uncovered error paths):
  auth/login.py:42-58        handle_expired_token()            0% covered
  payments/process.py:91-110 handle_payment_failure()          0% covered

P1 - High-value gaps (core logic branches):
  users/service.py:77        update_profile() else branch      0% covered
  orders/cart.py:134         apply_discount() zero-qty guard   0% covered

P2 - Low-risk gaps (utility / helper functions):
  utils/formatting.py:12     format_currency()                 0% covered

Recommended: Generate tests for P0 items first to reach 80% threshold.
```
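To see where an overall figure like "63%" comes from, here is a minimal sketch of computing line coverage from an LCOV report. It is illustrative only, not the actual logic of coverage_analyzer.py; it uses the standard LCOV `LF:` (lines found) and `LH:` (lines hit) record fields:

```python
def lcov_line_coverage(lcov_text: str) -> float:
    """Overall line coverage percentage from an LCOV tracefile.

    Sums LH (lines hit) and LF (lines found) across all per-file
    records and returns 100 * hit / found.
    """
    found = hit = 0
    for line in lcov_text.splitlines():
        if line.startswith("LF:"):
            found += int(line[3:])
        elif line.startswith("LH:"):
            hit += int(line[3:])
    return 100.0 * hit / found if found else 0.0

# Tiny two-file report: 6/10 and 7/10 lines hit -> 13/20 = 65%.
report = """SF:auth/login.py
DA:42,0
LF:10
LH:6
end_of_record
SF:utils/formatting.py
LF:10
LH:7
end_of_record
"""
print(lcov_line_coverage(report))  # 65.0
```

Real reports also carry branch (`BRF:`/`BRH:`) and function (`FNF:`/`FNH:`) counts, which a full analyzer would weigh separately.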
| Tool | Purpose | Usage |
| --- | --- | --- |
| test_generator.py | Generate test cases from code/requirements | `python scripts/test_generator.py --input source.py --framework pytest` |
| coverage_analyzer.py | Parse and analyze coverage reports | `python scripts/coverage_analyzer.py --report lcov.info --threshold 80` |
| tdd_workflow.py | Guide red-green-refactor cycles | `python scripts/tdd_workflow.py --phase red --test test_auth.py` |
| fixture_generator.py | Generate test data and mocks | `python scripts/fixture_generator.py --entity User --count 5` |

Additional scripts: framework_adapter.py (convert between frameworks), metrics_calculator.py (quality metrics), format_detector.py (detect language/framework), output_formatter.py (CLI/desktop/CI output).
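As a rough idea of what `--entity User --count 5` might produce, here is a hedged sketch of deterministic fixture generation. The schema (`id`, `username`, `email`, `active`) is assumed for illustration; the real fixture_generator.py may use a different one:

```python
import random
import string

def make_user_fixture(seed: int) -> dict:
    """Generate one User fixture, deterministic per seed so test
    runs are reproducible. Illustrative schema, not the skill's."""
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "id": seed,
        "username": name,
        "email": f"{name}@example.com",
        "active": rng.random() > 0.2,  # ~80% of fixtures are active
    }

users = [make_user_fixture(i) for i in range(5)]
assert len(users) == 5
assert all(u["email"].endswith("@example.com") for u in users)
```

Seeding each fixture individually means a single record can be regenerated in isolation, which keeps failing tests easy to reproduce.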
For test generation:
- Source code (file path or pasted content)
- Target framework (Jest, Pytest, JUnit, Vitest)
- Coverage scope (unit, integration, edge cases)

For coverage analysis:
- Coverage report file (LCOV, JSON, or XML format)
- Optional: source code for context
- Optional: target threshold percentage

For TDD workflow:
- Feature requirements or user story
- Current phase (RED, GREEN, REFACTOR)
- Test code and implementation status
| Scope | Details |
| --- | --- |
| Unit test focus | Integration and E2E tests require different patterns |
| Static analysis | Cannot execute tests or measure runtime behavior |
| Language support | Best for TypeScript, JavaScript, Python, Java |
| Report formats | LCOV, JSON, XML only; other formats need conversion |
| Generated tests | Provide scaffolding; require human review for complex logic |

When to use other tools:
- E2E testing: Playwright, Cypress, Selenium
- Performance testing: k6, JMeter, Locust
- Security testing: OWASP ZAP, Burp Suite