{
  "schemaVersion": "1.0",
  "item": {
    "slug": "test-specialist",
    "name": "Test Specialist",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/Veeramanikandanr48/test-specialist",
    "canonicalUrl": "https://clawhub.ai/Veeramanikandanr48/test-specialist",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/test-specialist",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=test-specialist",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "index.js",
      "package.json",
      "references/bug_analysis.md",
      "references/testing_patterns.md",
      "scripts/analyze_coverage.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
        "contentDisposition": "attachment; filename=\"afrexai-annual-report-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/test-specialist"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/test-specialist",
    "agentPageUrl": "https://openagent3.xyz/skills/test-specialist/agent",
    "manifestUrl": "https://openagent3.xyz/skills/test-specialist/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/test-specialist/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Overview",
        "body": "Apply systematic testing methodologies and debugging techniques to JavaScript/TypeScript applications. This skill provides comprehensive testing strategies, bug analysis frameworks, and automated tools for identifying coverage gaps and untested code."
      },
      {
        "title": "1. Writing Test Cases",
        "body": "Write comprehensive tests covering unit, integration, and end-to-end scenarios.\n\nUnit Testing Approach\n\nStructure tests using the AAA pattern (Arrange-Act-Assert):\n\ndescribe('ExpenseCalculator', () => {\n  describe('calculateTotal', () => {\n    test('sums expense amounts correctly', () => {\n      // Arrange\n      const expenses = [\n        { amount: 100, category: 'food' },\n        { amount: 50, category: 'transport' },\n        { amount: 25, category: 'entertainment' }\n      ];\n\n      // Act\n      const total = calculateTotal(expenses);\n\n      // Assert\n      expect(total).toBe(175);\n    });\n\n    test('handles empty expense list', () => {\n      expect(calculateTotal([])).toBe(0);\n    });\n\n    test('handles negative amounts', () => {\n      const expenses = [\n        { amount: 100, category: 'food' },\n        { amount: -50, category: 'refund' }\n      ];\n      expect(calculateTotal(expenses)).toBe(50);\n    });\n  });\n});\n\nKey principles:\n\nTest one behavior per test\nCover happy path, edge cases, and error conditions\nUse descriptive test names that explain the scenario\nKeep tests independent and isolated\n\nIntegration Testing Approach\n\nTest how components work together, including database, API, and service interactions:\n\ndescribe('ExpenseAPI Integration', () => {\n  beforeAll(async () => {\n    await database.connect(TEST_DB_URL);\n  });\n\n  afterAll(async () => {\n    await database.disconnect();\n  });\n\n  beforeEach(async () => {\n    await database.clear();\n    await seedTestData();\n  });\n\n  test('POST /expenses creates expense and updates total', async () => {\n    const response = await request(app)\n      .post('/api/expenses')\n      .send({\n        amount: 50,\n        category: 'food',\n        description: 'Lunch'\n      })\n      .expect(201);\n\n    expect(response.body).toMatchObject({\n      id: expect.any(Number),\n      amount: 50,\n      category: 'food'\n    });\n\n    // Verify 
database state\n    const total = await getTotalExpenses();\n    expect(total).toBe(50);\n  });\n});\n\nEnd-to-End Testing Approach\n\nTest complete user workflows using tools like Playwright or Cypress:\n\ntest('user can track expense from start to finish', async ({ page }) => {\n  // Navigate to app\n  await page.goto('/');\n\n  // Add new expense\n  await page.click('[data-testid=\"add-expense-btn\"]');\n  await page.fill('[data-testid=\"amount\"]', '50.00');\n  await page.selectOption('[data-testid=\"category\"]', 'food');\n  await page.fill('[data-testid=\"description\"]', 'Lunch');\n  await page.click('[data-testid=\"submit\"]');\n\n  // Verify expense appears in list\n  await expect(page.locator('[data-testid=\"expense-item\"]')).toContainText('Lunch');\n  await expect(page.locator('[data-testid=\"total\"]')).toContainText('$50.00');\n});"
      },
      {
        "title": "2. Systematic Bug Analysis",
        "body": "Apply structured debugging methodology to identify and fix issues.\n\nFive-Step Analysis Process\n\nReproduction: Reliably reproduce the bug\n\nDocument exact steps to trigger\nIdentify required environment/state\nNote expected vs actual behavior\n\n\n\nIsolation: Narrow down the problem\n\nBinary search through code path\nCreate minimal reproduction case\nRemove unrelated dependencies\n\n\n\nRoot Cause Analysis: Determine underlying cause\n\nTrace execution flow\nCheck assumptions and preconditions\nReview recent changes (git blame)\n\n\n\nFix Implementation: Implement solution\n\nWrite failing test first (TDD)\nImplement the fix\nVerify test passes\n\n\n\nValidation: Ensure completeness\n\nRun full test suite\nTest edge cases\nVerify no regressions\n\nCommon Bug Patterns\n\nRace Conditions:\n\n// Test concurrent operations\ntest('handles concurrent updates correctly', async () => {\n  const promises = Array.from({ length: 100 }, () =>\n    incrementExpenseCount()\n  );\n\n  await Promise.all(promises);\n  expect(getExpenseCount()).toBe(100);\n});\n\nNull/Undefined Errors:\n\n// Test null safety\ntest.each([null, undefined, '', 0, false])\n  ('handles invalid input: %p', (input) => {\n    expect(() => processExpense(input)).toThrow('Invalid expense');\n  });\n\nOff-by-One Errors:\n\n// Test boundaries explicitly\ndescribe('pagination', () => {\n  test('handles empty list', () => {\n    expect(paginate([], 1, 10)).toEqual([]);\n  });\n\n  test('handles single item', () => {\n    expect(paginate([item], 1, 10)).toEqual([item]);\n  });\n\n  test('handles last page with partial items', () => {\n    const items = Array.from({ length: 25 }, (_, i) => i);\n    expect(paginate(items, 3, 10)).toHaveLength(5);\n  });\n});"
      },
      {
        "title": "3. Identifying Potential Issues",
        "body": "Proactively identify issues before they become bugs.\n\nSecurity Vulnerabilities\n\nTest for common security issues:\n\ndescribe('security', () => {\n  test('prevents SQL injection', async () => {\n    const malicious = \"'; DROP TABLE expenses; --\";\n    await expect(\n      searchExpenses(malicious)\n    ).resolves.not.toThrow();\n  });\n\n  test('sanitizes XSS in descriptions', () => {\n    const xss = '<script>alert(\"xss\")</script>';\n    const expense = createExpense({ description: xss });\n    expect(expense.description).not.toContain('<script>');\n  });\n\n  test('requires authentication for expense operations', async () => {\n    await request(app)\n      .post('/api/expenses')\n      .send({ amount: 50 })\n      .expect(401);\n  });\n});\n\nPerformance Issues\n\nTest for performance problems:\n\ntest('processes large expense list efficiently', () => {\n  const largeList = Array.from({ length: 10000 }, (_, i) => ({\n    amount: i,\n    category: 'test'\n  }));\n\n  const start = performance.now();\n  const total = calculateTotal(largeList);\n  const duration = performance.now() - start;\n\n  expect(duration).toBeLessThan(100); // Should complete in <100ms\n  expect(total).toBe(49995000);\n});\n\nLogic Errors\n\nUse parameterized tests to catch edge cases:\n\ntest.each([\n  // [input, expected, description]\n  [[10, 20, 30], 60, 'normal positive values'],\n  [[0, 0, 0], 0, 'all zeros'],\n  [[-10, 20, -5], 5, 'mixed positive and negative'],\n  [[0.1, 0.2], 0.3, 'decimal precision'],\n  [[Number.MAX_SAFE_INTEGER], Number.MAX_SAFE_INTEGER, 'large numbers'],\n])('calculateTotal(%p) = %p (%s)', (amounts, expected, description) => {\n  const expenses = amounts.map(amount => ({ amount, category: 'test' }));\n  expect(calculateTotal(expenses)).toBeCloseTo(expected);\n});"
      },
      {
        "title": "4. Test Coverage Analysis",
        "body": "Use automated tools to identify gaps in test coverage.\n\nFinding Untested Code\n\nRun the provided script to identify source files without tests:\n\npython3 scripts/find_untested_code.py src\n\nThe script will:\n\nScan source directory for all code files\nIdentify which files lack corresponding test files\nCategorize untested files by type (components, services, utils, etc.)\nPrioritize files that need testing most\n\nInterpretation:\n\nAPI/Services: High priority - test business logic and data operations\nModels: High priority - test data validation and transformations\nHooks: Medium priority - test stateful behavior\nComponents: Medium priority - test complex UI logic\nUtils: Low priority - test as needed for complex functions\n\nAnalyzing Coverage Reports\n\nRun the coverage analysis script after generating coverage:\n\n# Generate coverage (using Jest example)\nnpm test -- --coverage\n\n# Analyze coverage gaps\npython3 scripts/analyze_coverage.py coverage/coverage-final.json\n\nThe script identifies:\n\nFiles below coverage threshold (default 80%)\nStatement, branch, and function coverage percentages\nPriority files to improve\n\nCoverage targets:\n\nCritical paths: 90%+ coverage\nBusiness logic: 85%+ coverage\nUI components: 75%+ coverage\nUtilities: 70%+ coverage"
      },
      {
        "title": "5. Test Maintenance and Quality",
        "body": "Ensure tests remain valuable and maintainable.\n\nTest Code Quality Principles\n\nDRY (Don't Repeat Yourself):\n\n// Extract common setup\nfunction createTestExpense(overrides = {}) {\n  return {\n    amount: 50,\n    category: 'food',\n    description: 'Test expense',\n    date: new Date('2024-01-01'),\n    ...overrides\n  };\n}\n\ntest('filters by category', () => {\n  const expenses = [\n    createTestExpense({ category: 'food' }),\n    createTestExpense({ category: 'transport' }),\n  ];\n  // ...\n});\n\nClear test data:\n\n// Bad: Magic numbers\nexpect(calculateDiscount(100, 0.15)).toBe(85);\n\n// Good: Named constants\nconst ORIGINAL_PRICE = 100;\nconst DISCOUNT_RATE = 0.15;\nconst EXPECTED_PRICE = 85;\nexpect(calculateDiscount(ORIGINAL_PRICE, DISCOUNT_RATE)).toBe(EXPECTED_PRICE);\n\nAvoid test interdependence:\n\n// Bad: Tests depend on execution order\nlet sharedState;\ntest('test 1', () => {\n  sharedState = { value: 1 };\n});\ntest('test 2', () => {\n  expect(sharedState.value).toBe(1); // Depends on test 1\n});\n\n// Good: Independent tests\ntest('test 1', () => {\n  const state = { value: 1 };\n  expect(state.value).toBe(1);\n});\ntest('test 2', () => {\n  const state = { value: 1 };\n  expect(state.value).toBe(1);\n});"
      },
      {
        "title": "Workflow Decision Tree",
        "body": "Follow this decision tree to determine the testing approach:\n\nAdding new functionality?\n\nYes → Write tests first (TDD)\n\nWrite failing test\nImplement feature\nVerify test passes\nRefactor\n\n\nNo → Go to step 2\n\n\n\nFixing a bug?\n\nYes → Apply bug analysis process\n\nReproduce the bug\nWrite failing test demonstrating bug\nFix the implementation\nVerify test passes\n\n\nNo → Go to step 3\n\n\n\nImproving test coverage?\n\nYes → Use coverage tools\n\nRun find_untested_code.py to identify gaps\nRun analyze_coverage.py on coverage reports\nPrioritize critical paths\nWrite tests for untested code\n\n\nNo → Go to step 4\n\n\n\nAnalyzing code quality?\n\nYes → Systematic review\n\nCheck for security vulnerabilities\nTest edge cases and error handling\nVerify performance characteristics\nReview error handling"
      },
      {
        "title": "Recommended Stack",
        "body": "Unit/Integration Testing:\n\nJest or Vitest for test runner\nTesting Library for React components\nSupertest for API testing\nMSW (Mock Service Worker) for API mocking\n\nE2E Testing:\n\nPlaywright or Cypress\nPage Object Model pattern\n\nCoverage:\n\nIstanbul (built into Jest/Vitest)\nCoverage reports in JSON format"
      },
      {
        "title": "Running Tests",
        "body": "# Run all tests\nnpm test\n\n# Run with coverage\nnpm test -- --coverage\n\n# Run specific test file\nnpm test -- ExpenseCalculator.test.ts\n\n# Run in watch mode\nnpm test -- --watch\n\n# Run E2E tests\nnpm run test:e2e"
      },
      {
        "title": "Reference Documentation",
        "body": "For detailed patterns and techniques, refer to:\n\nreferences/testing_patterns.md - Comprehensive testing patterns, best practices, and code examples\nreferences/bug_analysis.md - In-depth bug analysis framework, common bug patterns, and debugging techniques\n\nThese references contain extensive examples and advanced techniques. Load them when:\n\nDealing with complex testing scenarios\nNeed specific pattern implementations\nDebugging unusual issues\nSeeking best practices for specific situations"
      },
      {
        "title": "analyze_coverage.py",
        "body": "Analyze Jest/Istanbul coverage reports to identify gaps:\n\npython3 scripts/analyze_coverage.py [coverage-file]\n\nAutomatically finds common coverage file locations if not specified.\n\nOutput:\n\nFiles below coverage threshold\nStatement, branch, and function coverage percentages\nPriority files to improve"
      },
      {
        "title": "find_untested_code.py",
        "body": "Find source files without corresponding test files:\n\npython3 scripts/find_untested_code.py [src-dir] [--pattern test|spec]\n\nOutput:\n\nTotal source and test file counts\nTest file coverage percentage\nUntested files categorized by type (API, services, components, etc.)\nRecommendations for prioritization"
      },
      {
        "title": "Best Practices Summary",
        "body": "Write tests first (TDD) when adding new features\nTest behavior, not implementation - tests should survive refactoring\nKeep tests independent - no shared state between tests\nUse descriptive names - test names should explain the scenario\nCover edge cases - null, empty, boundary values, error conditions\nMock external dependencies - tests should be fast and reliable\nMaintain high coverage - 80%+ for critical code\nFix failing tests immediately - never commit broken tests\nRefactor tests - apply same quality standards as production code\nUse tools - automate coverage analysis and gap identification"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Veeramanikandanr48/test-specialist",
    "publisherUrl": "https://clawhub.ai/Veeramanikandanr48/test-specialist",
    "owner": "Veeramanikandanr48",
    "version": "0.1.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/test-specialist",
    "downloadUrl": "https://openagent3.xyz/downloads/test-specialist",
    "agentUrl": "https://openagent3.xyz/skills/test-specialist/agent",
    "manifestUrl": "https://openagent3.xyz/skills/test-specialist/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/test-specialist/agent.md"
  }
}