{
  "schemaVersion": "1.0",
  "item": {
    "slug": "test-patterns",
    "name": "Test Patterns",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/gitgoodordietrying/test-patterns",
    "canonicalUrl": "https://clawhub.ai/gitgoodordietrying/test-patterns",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/test-patterns",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=test-patterns",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "slug": "test-patterns",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-09T22:39:27.107Z",
      "expiresAt": "2026-05-16T22:39:27.107Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=test-patterns",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=test-patterns",
        "contentDisposition": "attachment; filename=\"test-patterns-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "test-patterns"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/test-patterns"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/test-patterns",
    "agentPageUrl": "https://openagent3.xyz/skills/test-patterns/agent",
    "manifestUrl": "https://openagent3.xyz/skills/test-patterns/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/test-patterns/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Test Patterns",
        "body": "Write, run, and debug tests across languages. Covers unit tests, integration tests, E2E tests, mocking, coverage, and TDD workflows."
      },
      {
        "title": "When to Use",
        "body": "Setting up a test suite for a new project\nWriting unit tests for functions or modules\nWriting integration tests for APIs or database interactions\nSetting up code coverage measurement\nMocking external dependencies (APIs, databases, file system)\nDebugging flaky or failing tests\nImplementing test-driven development (TDD)"
      },
      {
        "title": "Setup",
        "body": "# Jest\nnpm install -D jest\n# Add to package.json: \"scripts\": { \"test\": \"jest\" }\n\n# Vitest (faster, ESM-native)\nnpm install -D vitest\n# Add to package.json: \"scripts\": { \"test\": \"vitest\" }"
      },
      {
        "title": "Unit Tests",
        "body": "// math.js\nexport function add(a, b) { return a + b; }\nexport function divide(a, b) {\n  if (b === 0) throw new Error('Division by zero');\n  return a / b;\n}\n\n// math.test.js\nimport { add, divide } from './math.js';\n\ndescribe('add', () => {\n  test('adds two positive numbers', () => {\n    expect(add(2, 3)).toBe(5);\n  });\n\n  test('handles negative numbers', () => {\n    expect(add(-1, 1)).toBe(0);\n  });\n\n  test('handles zero', () => {\n    expect(add(0, 0)).toBe(0);\n  });\n});\n\ndescribe('divide', () => {\n  test('divides two numbers', () => {\n    expect(divide(10, 2)).toBe(5);\n  });\n\n  test('throws on division by zero', () => {\n    expect(() => divide(10, 0)).toThrow('Division by zero');\n  });\n\n  test('handles floating point', () => {\n    expect(divide(1, 3)).toBeCloseTo(0.333, 3);\n  });\n});"
      },
      {
        "title": "Async Tests",
        "body": "// api.test.js\nimport { fetchUser } from './api.js';\n\ntest('fetches user by id', async () => {\n  const user = await fetchUser('123');\n  expect(user).toHaveProperty('id', '123');\n  expect(user).toHaveProperty('name');\n  expect(user.name).toBeTruthy();\n});\n\ntest('throws on missing user', async () => {\n  await expect(fetchUser('nonexistent')).rejects.toThrow('Not found');\n});"
      },
      {
        "title": "Mocking",
        "body": "// Mock a module\njest.mock('./database.js');\nimport { getUser } from './database.js';\nimport { processUser } from './service.js';\n\ntest('processes user from database', async () => {\n  // Setup mock return value\n  getUser.mockResolvedValue({ id: '1', name: 'Alice', active: true });\n\n  const result = await processUser('1');\n  expect(result.processed).toBe(true);\n  expect(getUser).toHaveBeenCalledWith('1');\n  expect(getUser).toHaveBeenCalledTimes(1);\n});\n\n// Mock fetch\nglobal.fetch = jest.fn();\n\ntest('calls API with correct params', async () => {\n  fetch.mockResolvedValue({\n    ok: true,\n    json: async () => ({ data: 'test' }),\n  });\n\n  const result = await myApiCall('/endpoint');\n  expect(fetch).toHaveBeenCalledWith('/endpoint', expect.objectContaining({\n    method: 'GET',\n  }));\n});\n\n// Spy on existing method (don't replace, just observe)\nconst consoleSpy = jest.spyOn(console, 'log').mockImplementation();\n// ... run code ...\nexpect(consoleSpy).toHaveBeenCalledWith('expected message');\nconsoleSpy.mockRestore();"
      },
      {
        "title": "Coverage",
        "body": "# Jest\nnpx jest --coverage\n\n# Vitest\nnpx vitest --coverage\n\n# Check coverage thresholds (jest.config.js)\n# coverageThreshold: { global: { branches: 80, functions: 80, lines: 80, statements: 80 } }"
      },
      {
        "title": "Setup",
        "body": "pip install pytest pytest-cov"
      },
      {
        "title": "Unit Tests",
        "body": "# calculator.py\ndef add(a, b):\n    return a + b\n\ndef divide(a, b):\n    if b == 0:\n        raise ValueError(\"Division by zero\")\n    return a / b\n\n# test_calculator.py\nimport pytest\nfrom calculator import add, divide\n\ndef test_add():\n    assert add(2, 3) == 5\n\ndef test_add_negative():\n    assert add(-1, 1) == 0\n\ndef test_divide():\n    assert divide(10, 2) == 5.0\n\ndef test_divide_by_zero():\n    with pytest.raises(ValueError, match=\"Division by zero\"):\n        divide(10, 0)\n\ndef test_divide_float():\n    assert divide(1, 3) == pytest.approx(0.333, abs=0.001)"
      },
      {
        "title": "Parametrized Tests",
        "body": "@pytest.mark.parametrize(\"a,b,expected\", [\n    (2, 3, 5),\n    (-1, 1, 0),\n    (0, 0, 0),\n    (100, -50, 50),\n])\ndef test_add_cases(a, b, expected):\n    assert add(a, b) == expected"
      },
      {
        "title": "Fixtures",
        "body": "import pytest\nimport json\nimport tempfile\nimport os\n\n@pytest.fixture\ndef sample_users():\n    \"\"\"Provide test user data.\"\"\"\n    return [\n        {\"id\": 1, \"name\": \"Alice\", \"email\": \"alice@test.com\"},\n        {\"id\": 2, \"name\": \"Bob\", \"email\": \"bob@test.com\"},\n    ]\n\n@pytest.fixture\ndef temp_db(tmp_path):\n    \"\"\"Provide a temporary SQLite database.\"\"\"\n    import sqlite3\n    db_path = tmp_path / \"test.db\"\n    conn = sqlite3.connect(str(db_path))\n    conn.execute(\"CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)\")\n    conn.commit()\n    yield conn\n    conn.close()\n\ndef test_insert_users(temp_db, sample_users):\n    for user in sample_users:\n        temp_db.execute(\"INSERT INTO users VALUES (?, ?, ?)\",\n                       (user[\"id\"], user[\"name\"], user[\"email\"]))\n    temp_db.commit()\n    count = temp_db.execute(\"SELECT COUNT(*) FROM users\").fetchone()[0]\n    assert count == 2\n\n# Fixture with cleanup\n@pytest.fixture\ndef temp_config_file():\n    path = tempfile.mktemp(suffix=\".json\")\n    with open(path, \"w\") as f:\n        json.dump({\"key\": \"value\"}, f)\n    yield path\n    os.unlink(path)"
      },
      {
        "title": "Mocking",
        "body": "from unittest.mock import patch, MagicMock, AsyncMock\n\n# Mock a function\n@patch('mymodule.requests.get')\ndef test_fetch_data(mock_get):\n    mock_get.return_value.status_code = 200\n    mock_get.return_value.json.return_value = {\"data\": \"test\"}\n\n    result = fetch_data(\"https://api.example.com\")\n    assert result == {\"data\": \"test\"}\n    mock_get.assert_called_once_with(\"https://api.example.com\")\n\n# Mock async\n@patch('mymodule.aiohttp.ClientSession.get', new_callable=AsyncMock)\nasync def test_async_fetch(mock_get):\n    mock_get.return_value.__aenter__.return_value.json = AsyncMock(return_value={\"ok\": True})\n    result = await async_fetch(\"/endpoint\")\n    assert result[\"ok\"] is True\n\n# Context manager mock\ndef test_file_reader():\n    with patch(\"builtins.open\", MagicMock(return_value=MagicMock(\n        read=MagicMock(return_value='{\"key\": \"val\"}'),\n        __enter__=MagicMock(return_value=MagicMock(read=MagicMock(return_value='{\"key\": \"val\"}'))),\n        __exit__=MagicMock(return_value=False),\n    ))):\n        result = read_config(\"fake.json\")\n        assert result[\"key\"] == \"val\""
      },
      {
        "title": "Coverage",
        "body": "# Run with coverage\npytest --cov=mypackage --cov-report=term-missing\n\n# HTML report\npytest --cov=mypackage --cov-report=html\n# Open htmlcov/index.html\n\n# Fail if coverage below threshold\npytest --cov=mypackage --cov-fail-under=80"
      },
      {
        "title": "Unit Tests",
        "body": "// math.go\npackage math\n\nimport \"errors\"\n\nfunc Add(a, b int) int { return a + b }\n\nfunc Divide(a, b float64) (float64, error) {\n    if b == 0 {\n        return 0, errors.New(\"division by zero\")\n    }\n    return a / b, nil\n}\n\n// math_test.go\npackage math\n\nimport (\n    \"testing\"\n    \"math\"\n)\n\nfunc TestAdd(t *testing.T) {\n    tests := []struct {\n        name     string\n        a, b     int\n        expected int\n    }{\n        {\"positive\", 2, 3, 5},\n        {\"negative\", -1, 1, 0},\n        {\"zeros\", 0, 0, 0},\n    }\n    for _, tt := range tests {\n        t.Run(tt.name, func(t *testing.T) {\n            got := Add(tt.a, tt.b)\n            if got != tt.expected {\n                t.Errorf(\"Add(%d, %d) = %d, want %d\", tt.a, tt.b, got, tt.expected)\n            }\n        })\n    }\n}\n\nfunc TestDivide(t *testing.T) {\n    result, err := Divide(10, 2)\n    if err != nil {\n        t.Fatalf(\"unexpected error: %v\", err)\n    }\n    if math.Abs(result-5.0) > 0.001 {\n        t.Errorf(\"Divide(10, 2) = %f, want 5.0\", result)\n    }\n}\n\nfunc TestDivideByZero(t *testing.T) {\n    _, err := Divide(10, 0)\n    if err == nil {\n        t.Error(\"expected error for division by zero\")\n    }\n}"
      },
      {
        "title": "Run Tests",
        "body": "# All tests\ngo test ./...\n\n# Verbose\ngo test -v ./...\n\n# Specific package\ngo test ./pkg/math/\n\n# With coverage\ngo test -cover ./...\ngo test -coverprofile=coverage.out ./...\ngo tool cover -html=coverage.out\n\n# Run specific test\ngo test -run TestAdd ./...\n\n# Race condition detection\ngo test -race ./...\n\n# Benchmark\ngo test -bench=. ./..."
      },
      {
        "title": "Unit Tests",
        "body": "// src/math.rs\npub fn add(a: i64, b: i64) -> i64 { a + b }\n\npub fn divide(a: f64, b: f64) -> Result<f64, String> {\n    if b == 0.0 { return Err(\"division by zero\".into()); }\n    Ok(a / b)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_add() {\n        assert_eq!(add(2, 3), 5);\n        assert_eq!(add(-1, 1), 0);\n    }\n\n    #[test]\n    fn test_divide() {\n        let result = divide(10.0, 2.0).unwrap();\n        assert!((result - 5.0).abs() < f64::EPSILON);\n    }\n\n    #[test]\n    fn test_divide_by_zero() {\n        assert!(divide(10.0, 0.0).is_err());\n    }\n\n    #[test]\n    #[should_panic(expected = \"overflow\")]\n    fn test_overflow_panics() {\n        let _ = add(i64::MAX, 1); // Will panic on overflow in debug\n    }\n}\n\ncargo test\ncargo test -- --nocapture  # Show println output\ncargo test test_add        # Run specific test\ncargo tarpaulin            # Coverage (install: cargo install cargo-tarpaulin)"
      },
      {
        "title": "Simple Test Runner",
        "body": "#!/bin/bash\n# test.sh - Minimal bash test framework\nPASS=0 FAIL=0\n\nassert_eq() {\n  local actual=\"$1\" expected=\"$2\" label=\"$3\"\n  if [ \"$actual\" = \"$expected\" ]; then\n    echo \"  PASS: $label\"\n    ((PASS++))\n  else\n    echo \"  FAIL: $label (got '$actual', expected '$expected')\"\n    ((FAIL++))\n  fi\n}\n\nassert_exit_code() {\n  local cmd=\"$1\" expected=\"$2\" label=\"$3\"\n  eval \"$cmd\" >/dev/null 2>&1\n  assert_eq \"$?\" \"$expected\" \"$label\"\n}\n\nassert_contains() {\n  local actual=\"$1\" substring=\"$2\" label=\"$3\"\n  if echo \"$actual\" | grep -q \"$substring\"; then\n    echo \"  PASS: $label\"\n    ((PASS++))\n  else\n    echo \"  FAIL: $label ('$actual' does not contain '$substring')\"\n    ((FAIL++))\n  fi\n}\n\n# --- Tests ---\necho \"Running tests...\"\n\n# Test your scripts\noutput=$(./my-script.sh --help 2>&1)\nassert_exit_code \"./my-script.sh --help\" \"0\" \"help flag exits 0\"\nassert_contains \"$output\" \"Usage\" \"help shows usage\"\n\noutput=$(./my-script.sh --invalid 2>&1)\nassert_exit_code \"./my-script.sh --invalid\" \"1\" \"invalid flag exits 1\"\n\n# Test command outputs\nassert_eq \"$(echo 'hello' | wc -c | tr -d ' ')\" \"6\" \"echo hello is 6 bytes\"\n\necho \"\"\necho \"Results: $PASS passed, $FAIL failed\"\n[ \"$FAIL\" -eq 0 ] && exit 0 || exit 1"
      },
      {
        "title": "API Integration Test (any language)",
        "body": "#!/bin/bash\n# test-api.sh - Start server, run tests, tear down\nSERVER_PID=\"\"\ncleanup() { [ -n \"$SERVER_PID\" ] && kill \"$SERVER_PID\" 2>/dev/null; }\ntrap cleanup EXIT\n\n# Start server in background\nnpm start &\nSERVER_PID=$!\nsleep 2  # Wait for server\n\n# Run tests against live server\nBASE_URL=http://localhost:3000 npx jest --testPathPattern=integration\nEXIT_CODE=$?\n\nexit $EXIT_CODE"
      },
      {
        "title": "Database Integration Test (Python)",
        "body": "import pytest\nimport sqlite3\n\n@pytest.fixture\ndef db():\n    \"\"\"Fresh database for each test.\"\"\"\n    conn = sqlite3.connect(\":memory:\")\n    conn.execute(\"CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, price REAL)\")\n    yield conn\n    conn.close()\n\ndef test_insert_and_query(db):\n    db.execute(\"INSERT INTO items (name, price) VALUES (?, ?)\", (\"Widget\", 9.99))\n    db.commit()\n    row = db.execute(\"SELECT name, price FROM items WHERE name = ?\", (\"Widget\",)).fetchone()\n    assert row == (\"Widget\", 9.99)\n\ndef test_empty_table(db):\n    count = db.execute(\"SELECT COUNT(*) FROM items\").fetchone()[0]\n    assert count == 0"
      },
      {
        "title": "TDD Workflow",
        "body": "The red-green-refactor cycle:\n\nRed: Write a failing test for the next piece of behavior\nGreen: Write the minimum code to make it pass\nRefactor: Clean up without changing behavior (tests stay green)\n\n# Tight feedback loop\n# Jest watch mode\nnpx jest --watch\n\n# Vitest watch (default)\nnpx vitest\n\n# pytest watch (with pytest-watch)\npip install pytest-watch\nptw\n\n# Go (with air or entr)\nls *.go | entr -c go test ./..."
      },
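      {
        "title": "TDD Example (Python)",
        "body": "A worked red-green pass, sketched in pytest; slugify and the file names are illustrative, not part of this package:\n\n# Red: write the failing test first\n# test_slug.py\nfrom slug import slugify\n\ndef test_lowercases_and_hyphenates():\n    assert slugify(\"Hello World\") == \"hello-world\"\n\n# Green: the minimum code that passes\n# slug.py\ndef slugify(text):\n    return text.lower().replace(\" \", \"-\")\n\n# Refactor: tidy without changing behavior (the test stays green)\ndef slugify(text):\n    return \"-\".join(text.lower().split())"
      },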
      {
        "title": "Common Issues",
        "body": "Test passes alone, fails in suite → shared state. Check for:\n\nGlobal variables modified between tests\nDatabase not cleaned up\nMocks not restored (afterEach / teardown)\n\nTest fails intermittently (flaky) → timing or ordering issue:\n\nAsync operations without proper await\nTests depending on execution order\nTime-dependent logic (use clock mocking)\nNetwork calls in unit tests (should be mocked)\n\nCoverage shows uncovered branches → missing edge cases:\n\nError paths (what if the API returns 500?)\nEmpty inputs (empty string, null, empty array)\nBoundary values (0, -1, MAX_INT)"
      },
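      {
        "title": "Flaky Test Fixes (Python)",
        "body": "Two sketches for the failure modes above, in pytest; mymodule, _cache, and is_expired are hypothetical stand-ins:\n\nimport time\nimport pytest\nimport mymodule\n\n# Shared state: reset module-level state around every test\n@pytest.fixture(autouse=True)\ndef reset_cache():\n    mymodule._cache.clear()\n    yield\n    mymodule._cache.clear()\n\n# Time-dependent logic: freeze the clock so assertions cannot flake\n# (assumes mymodule calls time.time())\ndef test_expiry_is_deterministic(monkeypatch):\n    monkeypatch.setattr(time, \"time\", lambda: 1_000_000.0)\n    assert mymodule.is_expired(deadline=999_999.0) is True\n    assert mymodule.is_expired(deadline=1_000_001.0) is False"
      },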
      {
        "title": "Run Single Test",
        "body": "# Jest\nnpx jest -t \"test name substring\"\n\n# pytest\npytest -k \"test_divide_by_zero\"\n\n# Go\ngo test -run TestDivideByZero ./...\n\n# Rust\ncargo test test_divide"
      },
      {
        "title": "Tips",
        "body": "Test behavior, not implementation. Tests should survive refactors.\nOne assertion per concept (not necessarily one assert per test, but one logical check).\nName tests descriptively: test_returns_empty_list_when_no_users_exist beats test_get_users_2.\nDon't mock what you don't own — write thin wrappers around external libraries, mock the wrapper.\nIntegration tests catch bugs unit tests miss. Don't skip them.\nUse tmp_path (pytest), t.TempDir() (Go), or tempfile (Node) for file-based tests.\nSnapshot tests are great for detecting unintended changes, bad for evolving formats."
      }
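      {
        "title": "Mock the Wrapper (Python)",
        "body": "A minimal sketch of the \"don't mock what you don't own\" tip; payments_sdk, payments.py, and charge_order are hypothetical names:\n\n# payments.py - the thin wrapper you own around an external SDK\nimport payments_sdk  # hypothetical third-party library\n\ndef charge(amount_cents, token):\n    return payments_sdk.Charge.create(amount=amount_cents, source=token)[\"status\"]\n\n# test_orders.py - mock your wrapper, not the SDK's internals\nfrom unittest.mock import patch\nfrom orders import charge_order  # orders.py does: from payments import charge\n\n@patch(\"orders.charge\")\ndef test_charge_order_uses_wrapper(mock_charge):\n    mock_charge.return_value = \"succeeded\"\n    assert charge_order(amount_cents=500, token=\"tok_test\") == \"succeeded\"\n    mock_charge.assert_called_once_with(500, \"tok_test\")"
      }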
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/gitgoodordietrying/test-patterns",
    "publisherUrl": "https://clawhub.ai/gitgoodordietrying/test-patterns",
    "owner": "gitgoodordietrying",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/test-patterns",
    "downloadUrl": "https://openagent3.xyz/downloads/test-patterns",
    "agentUrl": "https://openagent3.xyz/skills/test-patterns/agent",
    "manifestUrl": "https://openagent3.xyz/skills/test-patterns/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/test-patterns/agent.md"
  }
}