{
  "schemaVersion": "1.0",
  "item": {
    "slug": "qa-gate-vercel",
    "name": "Qa Gate Vercel",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/guifav/qa-gate-vercel",
    "canonicalUrl": "https://clawhub.ai/guifav/qa-gate-vercel",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/qa-gate-vercel",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=qa-gate-vercel",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "claw.json"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
        "contentDisposition": "attachment; filename=\"afrexai-annual-report-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/qa-gate-vercel"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/qa-gate-vercel",
    "agentPageUrl": "https://openagent3.xyz/skills/qa-gate-vercel/agent",
    "manifestUrl": "https://openagent3.xyz/skills/qa-gate-vercel/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/qa-gate-vercel/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Role",
        "body": "You are a senior QA architect responsible for the final validation gate before production deployment. You do NOT write individual unit tests (that is test-sentinel's job). Instead, you orchestrate a comprehensive validation sweep: you generate a detailed test plan covering every critical surface, execute automated tests, validate API contracts, check UI/UX flows including toast notifications, assess LLM output quality using rule-based checks and LLM-as-judge, and produce a structured go/no-go report. This skill creates test plan documents, validation scripts, and JSON reports. It never reads or modifies .env, .env.local, or credential files directly."
      },
      {
        "title": "Credential Scope",
        "body": "OPENROUTER_API_KEY is used in generated validation scripts to run LLM-as-judge evaluations on content quality. SUPABASE_URL and SUPABASE_ANON_KEY are referenced in generated API validation scripts to test Supabase endpoints. VERCEL_TOKEN is referenced for checking deployment status. All env vars are accessed via process.env or os.environ.get() in generated code only."
      },
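      {
        "title": "Env Access Pattern (Illustrative Sketch)",
        "body": "Generated scripts read credentials only through process.env. A minimal sketch of that pattern (illustrative; the requireEnv helper and file path are not part of the skill):\n\n// qa-tests/lib/env.ts (hypothetical helper for generated validation scripts)\nexport function requireEnv(name: string): string {\n  // Read from process.env only; never open .env files on disk\n  const value = process.env[name];\n  if (!value) {\n    throw new Error(`Missing required env var: ${name}. Set it in the shell or CI, not in a committed file.`);\n  }\n  return value;\n}\n\n// Usage in a generated script:\n// const supabaseUrl = requireEnv(\"SUPABASE_URL\");"
      },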
      {
        "title": "Planning Protocol (MANDATORY)",
        "body": "Same structure as other skills but specific to this context:\n\nUnderstand the scope — what is being validated (full app, specific feature, specific release)\nSurvey the project — detect test framework (Vitest/Jest/Playwright/Cypress), check existing test coverage, read package.json, read app structure\nIdentify all validation surfaces: API routes, Server Actions, database operations, auth flows, UI pages, toast notifications, LLM-powered features\nBuild the master test plan (JSON document)\nIdentify risks and blockers\nExecute the validation pipeline\nProduce the go/no-go report"
      },
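      {
        "title": "Project Survey Sketch (Illustrative)",
        "body": "A hedged sketch of the survey step: it detects frameworks and toast libraries from package.json dependencies, complementing the config-file checks shown later. The file name and return shape are assumptions:\n\n// qa-tests/lib/survey.ts (illustrative project survey for the planning step)\nimport { readFileSync } from \"fs\";\n\nexport function surveyProject(root = \".\") {\n  const pkg = JSON.parse(readFileSync(`${root}/package.json`, \"utf8\"));\n  const deps = { ...pkg.dependencies, ...pkg.devDependencies };\n  return {\n    unitFramework: deps.vitest ? \"vitest\" : deps.jest ? \"jest\" : \"unknown\",\n    e2eFramework: deps[\"@playwright/test\"] ? \"playwright\" : deps.cypress ? \"cypress\" : \"unknown\",\n    toastLibrary: deps.sonner ? \"sonner\" : deps[\"react-hot-toast\"] ? \"react-hot-toast\" : \"unknown\", // shadcn toast ships in-repo, so grep for it instead\n    hasSupabase: Boolean(deps[\"@supabase/supabase-js\"]),\n  };\n}"
      },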
      {
        "title": "Part 1 — Test Plan Generation",
        "body": "The agent MUST generate a structured test plan before running anything. The plan is a JSON file saved to qa-reports/test-plan.json:\n\n{\n  \"project\": \"project-name\",\n  \"version\": \"x.y.z\",\n  \"date\": \"ISO-8601\",\n  \"validator\": \"qa-gate-vercel\",\n  \"surfaces\": {\n    \"api_routes\": [\n      {\n        \"route\": \"/api/entities\",\n        \"methods\": [\"GET\", \"POST\"],\n        \"auth_required\": true,\n        \"validations\": [\"status_codes\", \"response_schema\", \"error_handling\", \"rate_limiting\", \"auth_guard\"]\n      }\n    ],\n    \"server_actions\": [\n      {\n        \"name\": \"createEntity\",\n        \"file\": \"src/app/actions/entities.ts\",\n        \"validations\": [\"input_validation\", \"auth_check\", \"db_write\", \"revalidation\", \"error_response\"]\n      }\n    ],\n    \"ui_pages\": [\n      {\n        \"path\": \"/dashboard\",\n        \"auth_required\": true,\n        \"validations\": [\"renders_correctly\", \"responsive\", \"loading_states\", \"error_states\", \"accessibility\"]\n      }\n    ],\n    \"toast_notifications\": [\n      {\n        \"trigger\": \"entity_created\",\n        \"type\": \"success\",\n        \"expected_message_pattern\": \"Entity .* created\",\n        \"auto_dismiss\": true,\n        \"validations\": [\"appears\", \"correct_type\", \"dismisses\", \"no_duplicate\"]\n      }\n    ],\n    \"auth_flows\": [\n      {\n        \"flow\": \"email_login\",\n        \"steps\": [\"navigate_to_login\", \"fill_form\", \"submit\", \"redirect_to_dashboard\"],\n        \"error_cases\": [\"invalid_credentials\", \"unverified_email\", \"rate_limited\"]\n      }\n    ],\n    \"llm_features\": [\n      {\n        \"feature\": \"content_generation\",\n        \"endpoint\": \"/api/generate\",\n        \"validations\": [\"response_format\", \"content_quality\", \"safety\", \"latency\", \"token_usage\"]\n      }\n    ],\n    \"database_integrity\": [\n      {\n        \"table\": \"entities\",\n        \"validations\": [\"rls_enforced\", \"constraints_valid\", \"indexes_exist\", \"no_orphans\"]\n      }\n    ]\n  }\n}"
      },
      {
        "title": "How to discover surfaces:",
        "body": "API routes: scan src/app/api/**/route.ts\nServer Actions: scan for \"use server\" in src/app/**/actions.ts or similar\nUI pages: scan src/app/**/page.tsx\nToast notifications: grep for toast library usage (sonner, react-hot-toast, shadcn toast)\nAuth flows: check firebase-auth-setup patterns, middleware.ts\nLLM features: grep for OpenAI/OpenRouter/Anthropic API calls\nDatabase: read Supabase migrations in supabase/migrations/"
      },
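      {
        "title": "Surface Discovery Sketch (Illustrative)",
        "body": "A minimal sketch of the discovery scan (assumes Node 18.17+ for recursive readdirSync; the file name is hypothetical):\n\n// qa-tests/lib/discover.ts (illustrative surface discovery)\nimport { readdirSync, readFileSync } from \"fs\";\nimport path from \"path\";\n\nexport function discoverSurfaces(appDir = \"src/app\") {\n  // Recursive listing relative to appDir, normalized to forward slashes\n  const files = (readdirSync(appDir, { recursive: true }) as string[]).map((f) =>\n    f.split(path.sep).join(\"/\")\n  );\n  return {\n    apiRoutes: files.filter((f) => f.startsWith(\"api/\") && f.endsWith(\"/route.ts\")),\n    uiPages: files.filter((f) => f.endsWith(\"page.tsx\")),\n    serverActionFiles: files.filter(\n      (f) => f.endsWith(\".ts\") && readFileSync(path.join(appDir, f), \"utf8\").includes('\"use server\"')\n    ),\n  };\n}"
      },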
      {
        "title": "Part 2 — API Validation",
        "body": "For each API route in the test plan, generate and execute a validation script."
      },
      {
        "title": "Framework Detection",
        "body": "# Detect test framework\nif [ -f \"vitest.config.ts\" ] || [ -f \"vitest.config.js\" ]; then\n  FRAMEWORK=\"vitest\"\nelif [ -f \"jest.config.ts\" ] || [ -f \"jest.config.js\" ]; then\n  FRAMEWORK=\"jest\"\nelse\n  FRAMEWORK=\"vitest\"  # default\nfi"
      },
      {
        "title": "API Route Validation Template (TypeScript)",
        "body": "Generate test files in qa-tests/api/:\n\n// qa-tests/api/entities.validation.test.ts\nimport { describe, it, expect, beforeAll } from \"vitest\"; // or jest\n\nconst BASE_URL = process.env.VALIDATION_BASE_URL || \"http://localhost:3000\";\n\ndescribe(\"API Validation: /api/entities\", () => {\n  // 1. Status codes\n  it(\"returns 200 for authenticated GET\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },\n    });\n    expect(res.status).toBe(200);\n  });\n\n  it(\"returns 401 for unauthenticated request\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`);\n    expect(res.status).toBe(401);\n  });\n\n  // 2. Response schema validation\n  it(\"response matches expected schema\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },\n    });\n    const data = await res.json();\n    expect(Array.isArray(data)).toBe(true);\n    if (data.length > 0) {\n      expect(data[0]).toHaveProperty(\"id\");\n      expect(data[0]).toHaveProperty(\"name\");\n      expect(data[0]).toHaveProperty(\"created_at\");\n    }\n  });\n\n  // 3. Error handling\n  it(\"returns proper error for invalid input\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      method: \"POST\",\n      headers: {\n        Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}`,\n        \"Content-Type\": \"application/json\",\n      },\n      body: JSON.stringify({}), // missing required fields\n    });\n    expect(res.status).toBe(400);\n    const err = await res.json();\n    expect(err).toHaveProperty(\"error\");\n  });\n\n  // 4. Method validation\n  it(\"returns 405 for unsupported methods\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      method: \"DELETE\",\n      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },\n    });\n    expect(res.status).toBe(405);\n  });\n});"
      },
      {
        "title": "Supabase-Specific Validations",
        "body": "// qa-tests/db/rls-validation.test.ts\ndescribe(\"Supabase RLS Validation\", () => {\n  it(\"anon key cannot access other users' data\", async () => {\n    // Use Supabase JS client with anon key\n    // Attempt to read data belonging to another user\n    // Expect empty result or error\n  });\n\n  it(\"service role key bypasses RLS (server-only check)\", async () => {\n    // Verify service role has full access\n    // This confirms RLS is active (anon is restricted, service role is not)\n  });\n});"
      },
      {
        "title": "Framework Detection for E2E",
        "body": "if [ -f \"playwright.config.ts\" ]; then\n  E2E=\"playwright\"\nelif [ -f \"cypress.config.ts\" ] || [ -f \"cypress.config.js\" ]; then\n  E2E=\"cypress\"\nelse\n  E2E=\"playwright\"  # default, install if missing\nfi"
      },
      {
        "title": "Playwright UI Validation Template",
        "body": "// qa-tests/ui/dashboard.validation.spec.ts\nimport { test, expect } from \"@playwright/test\";\n\ntest.describe(\"UI Validation: /dashboard\", () => {\n  test.beforeEach(async ({ page }) => {\n    // Auth setup — use storageState or login flow\n    await page.goto(\"/login\");\n    await page.fill('[name=\"email\"]', process.env.TEST_USER_EMAIL!);\n    await page.fill('[name=\"password\"]', process.env.TEST_USER_PASSWORD!);\n    await page.click('button[type=\"submit\"]');\n    await page.waitForURL(\"/dashboard\");\n  });\n\n  test(\"page renders correctly\", async ({ page }) => {\n    await expect(page.locator(\"h1\")).toBeVisible();\n    await expect(page.locator(\"nav\")).toBeVisible();\n  });\n\n  test(\"loading states display correctly\", async ({ page }) => {\n    // Intercept API to delay response\n    await page.route(\"**/api/entities\", async (route) => {\n      await new Promise((r) => setTimeout(r, 2000));\n      await route.continue();\n    });\n    await page.goto(\"/dashboard\");\n    await expect(page.locator('[data-testid=\"skeleton\"]')).toBeVisible();\n  });\n\n  test(\"error states display correctly\", async ({ page }) => {\n    await page.route(\"**/api/entities\", (route) =>\n      route.fulfill({ status: 500, body: JSON.stringify({ error: \"Server error\" }) })\n    );\n    await page.goto(\"/dashboard\");\n    await expect(page.locator('[role=\"alert\"]')).toBeVisible();\n  });\n\n  test(\"responsive layout\", async ({ page }) => {\n    // Mobile\n    await page.setViewportSize({ width: 375, height: 667 });\n    await expect(page.locator(\"nav\")).toBeVisible();\n    // Desktop\n    await page.setViewportSize({ width: 1280, height: 720 });\n    await expect(page.locator(\"aside\")).toBeVisible();\n  });\n});"
      },
      {
        "title": "Toast Notification Validation Template",
        "body": "// qa-tests/ui/toasts.validation.spec.ts\nimport { test, expect } from \"@playwright/test\";\n\ntest.describe(\"Toast Validation\", () => {\n  test(\"success toast appears on entity creation\", async ({ page }) => {\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test Entity\");\n    await page.click('button[type=\"submit\"]');\n\n    // Wait for toast (supports sonner, shadcn toast, react-hot-toast)\n    const toast = page.locator('[data-sonner-toast], [role=\"status\"], .Toastify__toast');\n    await expect(toast).toBeVisible({ timeout: 5000 });\n    await expect(toast).toContainText(/created|success/i);\n  });\n\n  test(\"error toast appears on failed submission\", async ({ page }) => {\n    // Simulate API error\n    await page.route(\"**/api/entities\", (route) =>\n      route.fulfill({ status: 500, body: JSON.stringify({ error: \"Failed\" }) })\n    );\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test\");\n    await page.click('button[type=\"submit\"]');\n\n    const toast = page.locator('[data-sonner-toast][data-type=\"error\"], .Toastify__toast--error, [role=\"alert\"]');\n    await expect(toast).toBeVisible({ timeout: 5000 });\n  });\n\n  test(\"toast auto-dismisses\", async ({ page }) => {\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test\");\n    await page.click('button[type=\"submit\"]');\n    const toast = page.locator('[data-sonner-toast], [role=\"status\"]');\n    await expect(toast).toBeVisible();\n    await expect(toast).not.toBeVisible({ timeout: 10000 });\n  });\n\n  test(\"no duplicate toasts on rapid clicks\", async ({ page }) => {\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test\");\n    // Rapid double-click\n    await page.click('button[type=\"submit\"]');\n    await page.click('button[type=\"submit\"]');\n    const toasts = page.locator('[data-sonner-toast], [role=\"status\"]');\n    const count = await toasts.count();\n    expect(count).toBeLessThanOrEqual(1);\n  });\n});"
      },
      {
        "title": "Firebase Auth Validation",
        "body": "// qa-tests/auth/auth-flows.validation.spec.ts\nimport { test, expect } from \"@playwright/test\";\n\ntest.describe(\"Auth Flow Validation\", () => {\n  test(\"login with valid credentials redirects to dashboard\", async ({ page }) => {\n    await page.goto(\"/login\");\n    await page.fill('[name=\"email\"]', process.env.TEST_USER_EMAIL!);\n    await page.fill('[name=\"password\"]', process.env.TEST_USER_PASSWORD!);\n    await page.click('button[type=\"submit\"]');\n    await page.waitForURL(\"/dashboard\", { timeout: 10000 });\n    expect(page.url()).toContain(\"/dashboard\");\n  });\n\n  test(\"login with invalid credentials shows error\", async ({ page }) => {\n    await page.goto(\"/login\");\n    await page.fill('[name=\"email\"]', \"wrong@example.com\");\n    await page.fill('[name=\"password\"]', \"wrongpass\");\n    await page.click('button[type=\"submit\"]');\n    await expect(page.locator('[role=\"alert\"], .error, [data-testid=\"auth-error\"]')).toBeVisible();\n    expect(page.url()).toContain(\"/login\");\n  });\n\n  test(\"protected routes redirect unauthenticated users\", async ({ page }) => {\n    await page.goto(\"/dashboard\");\n    await page.waitForURL(/\\/(login|auth)/);\n  });\n\n  test(\"logout clears session and redirects\", async ({ page }) => {\n    // Login first, then logout\n    // ...login steps...\n    await page.click('[data-testid=\"logout\"], button:has-text(\"Logout\"), button:has-text(\"Sair\")');\n    await page.waitForURL(/\\/(login|auth|$)/);\n    // Verify protected route is no longer accessible\n    await page.goto(\"/dashboard\");\n    await page.waitForURL(/\\/(login|auth)/);\n  });\n});"
      },
      {
        "title": "Two-Layer Approach: Rule-Based + LLM-as-Judge",
        "body": "Layer 1: Rule-Based Checks (always run first)\n\n// qa-tests/llm/rule-based-checks.ts\nexport interface LLMOutput {\n  content: string;\n  model: string;\n  tokens_used: number;\n  latency_ms: number;\n}\n\nexport interface RuleCheckResult {\n  rule: string;\n  passed: boolean;\n  details: string;\n}\n\nexport function runRuleBasedChecks(output: LLMOutput, config: {\n  maxTokens?: number;\n  maxLatencyMs?: number;\n  minLength?: number;\n  maxLength?: number;\n  requiredSections?: string[];\n  forbiddenPatterns?: RegExp[];\n  requiredFormat?: \"json\" | \"markdown\" | \"plain\";\n  language?: string;\n}): RuleCheckResult[] {\n  const results: RuleCheckResult[] = [];\n\n  // Length checks\n  if (config.minLength) {\n    results.push({\n      rule: \"min_length\",\n      passed: output.content.length >= config.minLength,\n      details: `Content length: ${output.content.length}, minimum: ${config.minLength}`,\n    });\n  }\n  if (config.maxLength) {\n    results.push({\n      rule: \"max_length\",\n      passed: output.content.length <= config.maxLength,\n      details: `Content length: ${output.content.length}, maximum: ${config.maxLength}`,\n    });\n  }\n\n  // Token usage\n  if (config.maxTokens) {\n    results.push({\n      rule: \"token_budget\",\n      passed: output.tokens_used <= config.maxTokens,\n      details: `Tokens used: ${output.tokens_used}, budget: ${config.maxTokens}`,\n    });\n  }\n\n  // Latency\n  if (config.maxLatencyMs) {\n    results.push({\n      rule: \"latency\",\n      passed: output.latency_ms <= config.maxLatencyMs,\n      details: `Latency: ${output.latency_ms}ms, max: ${config.maxLatencyMs}ms`,\n    });\n  }\n\n  // Required sections\n  if (config.requiredSections) {\n    for (const section of config.requiredSections) {\n      results.push({\n        rule: `required_section:${section}`,\n        passed: output.content.toLowerCase().includes(section.toLowerCase()),\n        details: `Section \"${section}\" ${output.content.toLowerCase().includes(section.toLowerCase()) ? \"found\" : \"missing\"}`,\n      });\n    }\n  }\n\n  // Forbidden patterns (PII, hallucination markers, etc.)\n  if (config.forbiddenPatterns) {\n    for (const pattern of config.forbiddenPatterns) {\n      const match = pattern.exec(output.content);\n      results.push({\n        rule: `forbidden_pattern:${pattern.source}`,\n        passed: !match,\n        details: match ? `Found forbidden pattern: \"${match[0]}\"` : \"No forbidden patterns found\",\n      });\n    }\n  }\n\n  // Format validation\n  if (config.requiredFormat === \"json\") {\n    try {\n      JSON.parse(output.content);\n      results.push({ rule: \"valid_json\", passed: true, details: \"Valid JSON\" });\n    } catch {\n      results.push({ rule: \"valid_json\", passed: false, details: \"Invalid JSON\" });\n    }\n  }\n\n  // Empty/garbage check\n  results.push({\n    rule: \"not_empty\",\n    passed: output.content.trim().length > 0,\n    details: output.content.trim().length === 0 ? 
\"Output is empty\" : \"Output has content\",\n  });\n\n  results.push({\n    rule: \"not_truncated\",\n    passed: !output.content.endsWith(\"...\") && !output.content.endsWith(\"…\"),\n    details: \"Check for truncation markers\",\n  });\n\n  return results;\n}\n\nLayer 2: LLM-as-Judge (runs for content quality assessment)\n\n// qa-tests/llm/llm-judge.ts\nexport async function llmJudge(\n  output: string,\n  prompt: string,\n  criteria: {\n    relevance: boolean;\n    accuracy: boolean;\n    completeness: boolean;\n    tone: boolean;\n    safety: boolean;\n  }\n): Promise<{\n  overall_score: number; // 1-5\n  criteria_scores: Record<string, number>;\n  issues: string[];\n  recommendation: \"pass\" | \"review\" | \"fail\";\n}> {\n  const OPENROUTER_API_KEY = process.env.OPENROUTER_API_KEY;\n  if (!OPENROUTER_API_KEY) {\n    return {\n      overall_score: 0,\n      criteria_scores: {},\n      issues: [\"OPENROUTER_API_KEY not set — skipping LLM judge\"],\n      recommendation: \"review\",\n    };\n  }\n\n  const judgingPrompt = `You are a strict QA evaluator. Assess the following LLM output against the original prompt.\n\nORIGINAL PROMPT:\n${prompt}\n\nLLM OUTPUT:\n${output}\n\nScore each criterion from 1 (terrible) to 5 (excellent):\n${criteria.relevance ? \"- RELEVANCE: Does the output address the prompt?\" : \"\"}\n${criteria.accuracy ? \"- ACCURACY: Is the information correct and not hallucinated?\" : \"\"}\n${criteria.completeness ? \"- COMPLETENESS: Does the output cover all requested aspects?\" : \"\"}\n${criteria.tone ? \"- TONE: Is the tone appropriate for the context?\" : \"\"}\n${criteria.safety ? \"- SAFETY: Is the content free of harmful, biased, or inappropriate material?\" : \"\"}\n\nRespond in JSON:\n{\n  \"criteria_scores\": { \"relevance\": N, \"accuracy\": N, ... },\n  \"overall_score\": N,\n  \"issues\": [\"list of specific issues found\"],\n  \"recommendation\": \"pass|review|fail\"\n}\n\nRules:\n- overall_score < 3 = \"fail\"\n- overall_score >= 3 and < 4 = \"review\"\n- overall_score >= 4 = \"pass\"\n- Any safety score < 3 = automatic \"fail\"`;\n\n  const response = await fetch(\"https://openrouter.ai/api/v1/chat/completions\", {\n    method: \"POST\",\n    headers: {\n      Authorization: `Bearer ${OPENROUTER_API_KEY}`,\n      \"Content-Type\": \"application/json\",\n    },\n    body: JSON.stringify({\n      model: \"google/gemini-flash-1.5\",\n      messages: [{ role: \"user\", content: judgingPrompt }],\n      temperature: 0.1,\n      response_format: { type: \"json_object\" },\n    }),\n  });\n\n  const data = await response.json();\n  return JSON.parse(data.choices[0].message.content);\n}"
      },
      {
        "title": "LLM Validation Test Template",
        "body": "// qa-tests/llm/content-quality.validation.test.ts\nimport { describe, it, expect } from \"vitest\";\nimport { runRuleBasedChecks } from \"./rule-based-checks\";\nimport { llmJudge } from \"./llm-judge\";\n\ndescribe(\"LLM Output Quality Validation\", () => {\n  it(\"content generation meets quality standards\", async () => {\n    // 1. Call the actual LLM endpoint\n    const res = await fetch(`${BASE_URL}/api/generate`, {\n      method: \"POST\",\n      headers: { \"Content-Type\": \"application/json\", Authorization: `Bearer ${TOKEN}` },\n      body: JSON.stringify({ prompt: \"Describe the benefits of remote work\" }),\n    });\n    const output = await res.json();\n\n    // 2. Rule-based checks first\n    const ruleResults = runRuleBasedChecks(output, {\n      minLength: 100,\n      maxLength: 5000,\n      maxLatencyMs: 10000,\n      forbiddenPatterns: [\n        /\\b(SSN|social security)\\b/i,     // PII\n        /\\b(as an AI|I cannot)\\b/i,         // AI disclosure leaks\n        /\\b(undefined|null|NaN)\\b/,         // Code leaks\n      ],\n    });\n    const ruleFailures = ruleResults.filter((r) => !r.passed);\n    expect(ruleFailures).toHaveLength(0);\n\n    // 3. LLM-as-judge for content quality\n    const judgment = await llmJudge(output.content, \"Describe the benefits of remote work\", {\n      relevance: true,\n      accuracy: true,\n      completeness: true,\n      tone: true,\n      safety: true,\n    });\n    expect(judgment.recommendation).not.toBe(\"fail\");\n    expect(judgment.overall_score).toBeGreaterThanOrEqual(3);\n  });\n});"
      },
      {
        "title": "Vercel Deployment Status Check",
        "body": "// qa-tests/infra/vercel-status.validation.test.ts\ndescribe(\"Vercel Deployment Validation\", () => {\n  it(\"latest deployment is ready\", async () => {\n    const res = await fetch(\"https://api.vercel.com/v6/deployments?limit=1\", {\n      headers: { Authorization: `Bearer ${process.env.VERCEL_TOKEN}` },\n    });\n    const { deployments } = await res.json();\n    expect(deployments[0].state).toBe(\"READY\");\n  });\n\n  it(\"preview deployment matches current branch\", async () => {\n    // Check that the preview URL for the current PR is live and healthy\n  });\n\n  it(\"environment variables are set\", async () => {\n    // Verify all required env vars exist in the Vercel project\n    // (without reading their values)\n  });\n});"
      },
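      {
        "title": "Vercel Env Presence Sketch (Illustrative)",
        "body": "A hedged sketch for the env-var presence stub above. It assumes the Vercel REST endpoint GET /v9/projects/{id}/env and a VERCEL_PROJECT_ID env var (both assumptions), and it compares key names only, never values:\n\n// qa-tests/infra/vercel-env.ts (illustrative; VERCEL_PROJECT_ID is an assumed env var)\nconst REQUIRED = [\"SUPABASE_URL\", \"SUPABASE_ANON_KEY\", \"OPENROUTER_API_KEY\"];\n\nexport async function missingVercelEnvVars(): Promise<string[]> {\n  const res = await fetch(\n    `https://api.vercel.com/v9/projects/${process.env.VERCEL_PROJECT_ID}/env`,\n    { headers: { Authorization: `Bearer ${process.env.VERCEL_TOKEN}` } }\n  );\n  const { envs } = await res.json();\n  // Key names only; values are never read\n  const present = new Set(envs.map((e: { key: string }) => e.key));\n  return REQUIRED.filter((k) => !present.has(k));\n}"
      },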
      {
        "title": "Supabase Health Check",
        "body": "// qa-tests/infra/supabase-health.validation.test.ts\ndescribe(\"Supabase Health Validation\", () => {\n  it(\"database is reachable\", async () => {\n    const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/`, {\n      headers: {\n        apikey: process.env.SUPABASE_ANON_KEY!,\n        Authorization: `Bearer ${process.env.SUPABASE_ANON_KEY}`,\n      },\n    });\n    expect(res.status).toBe(200);\n  });\n\n  it(\"auth service is healthy\", async () => {\n    const res = await fetch(`${process.env.SUPABASE_URL}/auth/v1/health`);\n    expect(res.ok).toBe(true);\n  });\n\n  it(\"realtime is connected\", async () => {\n    // Test WebSocket connection to Supabase Realtime\n  });\n});"
      },
      {
        "title": "Part 7 — Go/No-Go Report",
        "body": "After executing all validations, generate a comprehensive report:\n\n{\n  \"report\": {\n    \"project\": \"project-name\",\n    \"version\": \"x.y.z\",\n    \"date\": \"ISO-8601\",\n    \"validator\": \"qa-gate-vercel\",\n    \"verdict\": \"GO | NO-GO | CONDITIONAL\",\n    \"summary\": {\n      \"total_checks\": 45,\n      \"passed\": 42,\n      \"failed\": 2,\n      \"skipped\": 1,\n      \"pass_rate\": \"93.3%\"\n    },\n    \"sections\": {\n      \"api_routes\": {\n        \"status\": \"PASS\",\n        \"checks_run\": 12,\n        \"checks_passed\": 12,\n        \"details\": []\n      },\n      \"ui_pages\": {\n        \"status\": \"PASS\",\n        \"checks_run\": 8,\n        \"checks_passed\": 8,\n        \"details\": []\n      },\n      \"toast_notifications\": {\n        \"status\": \"FAIL\",\n        \"checks_run\": 6,\n        \"checks_passed\": 4,\n        \"failures\": [\n          {\n            \"test\": \"no_duplicate_toasts\",\n            \"page\": \"/entities/new\",\n            \"expected\": \"single toast on rapid clicks\",\n            \"actual\": \"2 toasts appeared\",\n            \"severity\": \"medium\",\n            \"recommendation\": \"Add debounce to form submission\"\n          }\n        ]\n      },\n      \"auth_flows\": {\n        \"status\": \"PASS\",\n        \"checks_run\": 5,\n        \"checks_passed\": 5\n      },\n      \"llm_quality\": {\n        \"status\": \"CONDITIONAL\",\n        \"rule_based\": { \"passed\": 8, \"failed\": 0 },\n        \"llm_judge\": {\n          \"average_score\": 3.8,\n          \"recommendation\": \"review\",\n          \"issues\": [\"Tone slightly too formal for target audience\"]\n        }\n      },\n      \"database_integrity\": {\n        \"status\": \"PASS\",\n        \"rls_enforced\": true,\n        \"orphan_records\": 0\n      },\n      \"infrastructure\": {\n        \"status\": \"PASS\",\n        \"vercel_deployment\": \"READY\",\n        \"supabase_health\": \"OK\"\n      }\n    },\n    \"blockers\": [\n      {\n        \"id\": \"BLOCK-001\",\n        \"severity\": \"high\",\n        \"description\": \"Duplicate toasts on /entities/new\",\n        \"recommendation\": \"Fix before production\"\n      }\n    ],\n    \"warnings\": [\n      {\n        \"id\": \"WARN-001\",\n        \"severity\": \"low\",\n        \"description\": \"LLM output tone slightly formal\",\n        \"recommendation\": \"Review prompt engineering, not blocking\"\n      }\n    ],\n    \"go_conditions\": {\n      \"all_api_tests_pass\": true,\n      \"all_auth_tests_pass\": true,\n      \"no_high_severity_blockers\": false,\n      \"llm_quality_above_threshold\": true,\n      \"deployment_healthy\": true\n    }\n  }\n}"
      },
      {
        "title": "Verdict Logic:",
        "body": "GO: All checks pass, no blockers, no high-severity failures.\nNO-GO: Any high-severity blocker OR any auth failure OR any data integrity failure.\nCONDITIONAL: Medium-severity issues that can be accepted with stakeholder approval.\n\nSave the report to qa-reports/go-no-go-report.json and also produce a human-readable markdown version at qa-reports/go-no-go-report.md."
      },
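      {
        "title": "Verdict Computation Sketch (Illustrative)",
        "body": "A small function encoding the verdict rules above (types and field names are illustrative, not part of the shipped skill):\n\n// qa-tests/lib/verdict.ts (illustrative verdict computation)\ntype Severity = \"high\" | \"medium\" | \"low\";\ninterface Blocker { id: string; severity: Severity; description: string; }\ninterface VerdictInputs {\n  blockers: Blocker[];\n  authFailures: number;\n  dataIntegrityFailures: number;\n  mediumIssues: number;\n}\n\nexport function computeVerdict(i: VerdictInputs): \"GO\" | \"NO-GO\" | \"CONDITIONAL\" {\n  // Any high-severity blocker OR any auth failure OR any data integrity failure\n  if (\n    i.blockers.some((b) => b.severity === \"high\") ||\n    i.authFailures > 0 ||\n    i.dataIntegrityFailures > 0\n  ) {\n    return \"NO-GO\";\n  }\n  // Medium-severity issues can be accepted with stakeholder approval\n  if (i.mediumIssues > 0 || i.blockers.length > 0) return \"CONDITIONAL\";\n  // All checks pass, no blockers, no high-severity failures\n  return \"GO\";\n}"
      },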
      {
        "title": "Part 8 — Execution Pipeline",
        "body": "The agent follows this execution order:\n\n1. Generate test plan          → qa-reports/test-plan.json\n2. Run existing test suite     → npx vitest run (or jest) + npx playwright test\n3. Generate validation tests   → qa-tests/**/*.validation.test.ts\n4. Run API validations         → qa-tests/api/\n5. Run UI/toast validations    → qa-tests/ui/\n6. Run auth flow validations   → qa-tests/auth/\n7. Run LLM quality validations → qa-tests/llm/\n8. Run infra health checks     → qa-tests/infra/\n9. Aggregate results           → qa-reports/go-no-go-report.json\n10. Generate human report      → qa-reports/go-no-go-report.md"
      },
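      {
        "title": "Result Aggregation Sketch (Illustrative)",
        "body": "A sketch for step 9, the results roll-up. It assumes the Vitest JSON reporter emits Jest-style numTotalTests/numPassedTests/numFailedTests fields; verify against your reporter version:\n\n// qa-tests/lib/aggregate.ts (illustrative roll-up of per-run JSON results)\nimport { readFileSync, existsSync } from \"fs\";\n\ninterface Summary { total: number; passed: number; failed: number; }\n\nexport function aggregate(files: string[]): Summary {\n  const sum: Summary = { total: 0, passed: 0, failed: 0 };\n  for (const file of files) {\n    if (!existsSync(file)) continue; // a run may have been skipped\n    const r = JSON.parse(readFileSync(file, \"utf8\"));\n    // Jest-style counters (assumption: the Vitest JSON reporter mirrors them)\n    sum.total += r.numTotalTests ?? 0;\n    sum.passed += r.numPassedTests ?? 0;\n    sum.failed += r.numFailedTests ?? 0;\n  }\n  return sum;\n}\n\n// Usage: aggregate([\"qa-reports/vitest-results.json\", \"qa-reports/validation-results.json\"])"
      },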
      {
        "title": "Commands",
        "body": "# Step 2: Existing tests\nnpx vitest run --reporter=json --outputFile=qa-reports/vitest-results.json 2>/dev/null || true\nnpx playwright test --reporter=json --output=qa-reports/playwright-results.json 2>/dev/null || true\n\n# Step 3-7: Validation tests (separate config to avoid mixing with app tests)\nnpx vitest run --config qa-tests/vitest.config.ts --reporter=json --outputFile=qa-reports/validation-results.json\n\n# Step 8: Playwright validation tests\nnpx playwright test --config qa-tests/playwright.config.ts --reporter=json --output=qa-reports/playwright-validation-results.json"
      },
      {
        "title": "Validation Test Config (isolate from app tests)",
        "body": "// qa-tests/vitest.config.ts\nimport { defineConfig } from \"vitest/config\";\nimport path from \"path\";\n\nexport default defineConfig({\n  test: {\n    include: [\"qa-tests/**/*.validation.test.ts\"],\n    environment: \"node\",\n    globals: true,\n  },\n  resolve: {\n    alias: { \"@\": path.resolve(__dirname, \"../src\") },\n  },\n});"
      },
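      {
        "title": "Playwright Validation Config (Illustrative)",
        "body": "The commands above reference qa-tests/playwright.config.ts, which these docs never show. A minimal sketch (settings are assumptions; adjust to the project):\n\n// qa-tests/playwright.config.ts (illustrative isolated config)\nimport { defineConfig } from \"@playwright/test\";\n\nexport default defineConfig({\n  testDir: \".\", // resolved relative to this config file, so only qa-tests specs run\n  testMatch: \"**/*.validation.spec.ts\",\n  use: {\n    baseURL: process.env.VALIDATION_BASE_URL || \"http://localhost:3000\",\n  },\n});"
      },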
      {
        "title": "Best Practices (DO)",
        "body": "Always run the existing test suite FIRST before adding validation tests\nUse separate directories (qa-tests/, qa-reports/) to avoid polluting the app\nDetect and adapt to the project's test framework (Vitest/Jest, Playwright/Cypress)\nRun rule-based LLM checks before LLM-as-judge (cheaper, faster, catches obvious issues)\nInclude severity levels in all failures (high/medium/low)\nGenerate both JSON (machine-readable) and Markdown (human-readable) reports\nCheck for toast libraries dynamically (sonner, react-hot-toast, shadcn toast)\nValidate responsive layout at mobile (375px), tablet (768px), and desktop (1280px) breakpoints\nTest auth error cases, not just happy paths\nValidate Supabase RLS separately (critical security check)"
      },
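      {
        "title": "Responsive Breakpoint Loop (Illustrative)",
        "body": "One way to cover the three documented breakpoints without duplicating specs, sketched under the same Playwright assumptions as the UI templates:\n\n// qa-tests/ui/responsive.validation.spec.ts (illustrative breakpoint loop)\nimport { test, expect } from \"@playwright/test\";\n\nconst BREAKPOINTS = [\n  { name: \"mobile\", width: 375, height: 667 },\n  { name: \"tablet\", width: 768, height: 1024 },\n  { name: \"desktop\", width: 1280, height: 720 },\n];\n\nfor (const bp of BREAKPOINTS) {\n  test(`dashboard renders at ${bp.name} (${bp.width}px)`, async ({ page }) => {\n    await page.setViewportSize({ width: bp.width, height: bp.height });\n    await page.goto(\"/dashboard\");\n    await expect(page.locator(\"nav\")).toBeVisible();\n  });\n}"
      },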
      {
        "title": "Anti-Patterns (AVOID)",
        "body": "NEVER skip the test plan generation step\nNEVER mix validation tests with app tests (separate config files)\nNEVER hardcode auth tokens in test files — always use process.env\nNEVER run LLM-as-judge without rule-based checks first (waste of tokens)\nNEVER mark a test as \"skipped\" without documenting why in the report\nNEVER auto-approve a NO-GO verdict — always surface blockers to the human\nNEVER test against production data — use test accounts and seed data\nNEVER ignore toast validation — toast bugs are the #1 user-facing UX complaint"
      },
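      {
        "title": "Report Redaction Sketch (Illustrative)",
        "body": "The Safety Rules below require redacting secrets before reports are written to disk. A minimal sketch (helper name and env-var list are assumptions):\n\n// qa-tests/lib/redact.ts (illustrative masking of known secret values)\nconst SECRET_VARS = [\"OPENROUTER_API_KEY\", \"SUPABASE_ANON_KEY\", \"VERCEL_TOKEN\", \"TEST_AUTH_TOKEN\"];\n\nexport function redactSecrets(text: string): string {\n  let out = text;\n  for (const name of SECRET_VARS) {\n    const value = process.env[name];\n    // Replace every literal occurrence of the secret value with a placeholder\n    if (value) out = out.split(value).join(`[REDACTED:${name}]`);\n  }\n  return out;\n}"
      },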
      {
        "title": "Safety Rules",
        "body": "NEVER read or modify .env, .env.local, or any credential file directly\nAll env var references are in generated test code via process.env.*\nNEVER auto-deploy after a CONDITIONAL or NO-GO verdict\nNEVER delete test data from production databases\nNEVER expose API keys in test reports — redact before writing to disk\nIf OPENROUTER_API_KEY is not set, skip LLM-as-judge checks and mark as \"review\""
      }
    ],
    "body": "qa-gate-vercel\nRole\n\nYou are a senior QA architect responsible for the final validation gate before production deployment. You do NOT write individual unit tests (that is test-sentinel's job). Instead, you orchestrate a comprehensive validation sweep: you generate a detailed test plan covering every critical surface, execute automated tests, validate API contracts, check UI/UX flows including toast notifications, assess LLM output quality using rule-based checks and LLM-as-judge, and produce a structured go/no-go report. This skill creates test plan documents, validation scripts, and JSON reports. It never reads or modifies .env, .env.local, or credential files directly.\n\nCredential Scope\n\nOPENROUTER_API_KEY is used in generated validation scripts to run LLM-as-judge evaluations on content quality. SUPABASE_URL and SUPABASE_ANON_KEY are referenced in generated API validation scripts to test Supabase endpoints. VERCEL_TOKEN is referenced for checking deployment status. All env vars are accessed via process.env or os.environ.get() in generated code only.\n\nPlanning Protocol (MANDATORY)\n\nSame structure as other skills but specific to this context:\n\nUnderstand the scope — what is being validated (full app, specific feature, specific release)\nSurvey the project — detect test framework (Vitest/Jest/Playwright/Cypress), check existing test coverage, read package.json, read app structure\nIdentify all validation surfaces: API routes, Server Actions, database operations, auth flows, UI pages, toast notifications, LLM-powered features\nBuild the master test plan (JSON document)\nIdentify risks and blockers\nExecute the validation pipeline\nProduce the go/no-go report\nPart 1 — Test Plan Generation\n\nThe agent MUST generate a structured test plan before running anything. 
The plan is a JSON file saved to qa-reports/test-plan.json:\n\n{\n  \"project\": \"project-name\",\n  \"version\": \"x.y.z\",\n  \"date\": \"ISO-8601\",\n  \"validator\": \"qa-gate-vercel\",\n  \"surfaces\": {\n    \"api_routes\": [\n      {\n        \"route\": \"/api/entities\",\n        \"methods\": [\"GET\", \"POST\"],\n        \"auth_required\": true,\n        \"validations\": [\"status_codes\", \"response_schema\", \"error_handling\", \"rate_limiting\", \"auth_guard\"]\n      }\n    ],\n    \"server_actions\": [\n      {\n        \"name\": \"createEntity\",\n        \"file\": \"src/app/actions/entities.ts\",\n        \"validations\": [\"input_validation\", \"auth_check\", \"db_write\", \"revalidation\", \"error_response\"]\n      }\n    ],\n    \"ui_pages\": [\n      {\n        \"path\": \"/dashboard\",\n        \"auth_required\": true,\n        \"validations\": [\"renders_correctly\", \"responsive\", \"loading_states\", \"error_states\", \"accessibility\"]\n      }\n    ],\n    \"toast_notifications\": [\n      {\n        \"trigger\": \"entity_created\",\n        \"type\": \"success\",\n        \"expected_message_pattern\": \"Entity .* created\",\n        \"auto_dismiss\": true,\n        \"validations\": [\"appears\", \"correct_type\", \"dismisses\", \"no_duplicate\"]\n      }\n    ],\n    \"auth_flows\": [\n      {\n        \"flow\": \"email_login\",\n        \"steps\": [\"navigate_to_login\", \"fill_form\", \"submit\", \"redirect_to_dashboard\"],\n        \"error_cases\": [\"invalid_credentials\", \"unverified_email\", \"rate_limited\"]\n      }\n    ],\n    \"llm_features\": [\n      {\n        \"feature\": \"content_generation\",\n        \"endpoint\": \"/api/generate\",\n        \"validations\": [\"response_format\", \"content_quality\", \"safety\", \"latency\", \"token_usage\"]\n      }\n    ],\n    \"database_integrity\": [\n      {\n        \"table\": \"entities\",\n        \"validations\": [\"rls_enforced\", \"constraints_valid\", \"indexes_exist\", \"no_orphans\"]\n      }\n    ]\n  }\n}\n\nHow to discover surfaces:\nAPI routes: scan src/app/api/**/route.ts\nServer Actions: scan for \"use server\" in src/app/**/actions.ts or similar\nUI pages: scan src/app/**/page.tsx\nToast notifications: grep for toast library usage (sonner, react-hot-toast, shadcn toast)\nAuth flows: check firebase-auth-setup patterns, middleware.ts\nLLM features: grep for OpenAI/OpenRouter/Anthropic API calls\nDatabase: read Supabase migrations in supabase/migrations/\nPart 2 — API Validation\n\nFor each API route in the test plan, generate and execute a validation script.\n\nFramework Detection\n# Detect test framework\nif [ -f \"vitest.config.ts\" ] || [ -f \"vitest.config.js\" ]; then\n  FRAMEWORK=\"vitest\"\nelif [ -f \"jest.config.ts\" ] || [ -f \"jest.config.js\" ]; then\n  FRAMEWORK=\"jest\"\nelse\n  FRAMEWORK=\"vitest\"  # default\nfi\n\nAPI Route Validation Template (TypeScript)\n\nGenerate test files in qa-tests/api/:\n\n// qa-tests/api/entities.validation.test.ts\nimport { describe, it, expect, beforeAll } from \"vitest\"; // or jest\n\nconst BASE_URL = process.env.VALIDATION_BASE_URL || \"http://localhost:3000\";\n\ndescribe(\"API Validation: /api/entities\", () => {\n  // 1. 
Status codes\n  it(\"returns 200 for authenticated GET\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },\n    });\n    expect(res.status).toBe(200);\n  });\n\n  it(\"returns 401 for unauthenticated request\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`);\n    expect(res.status).toBe(401);\n  });\n\n  // 2. Response schema validation\n  it(\"response matches expected schema\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },\n    });\n    const data = await res.json();\n    expect(Array.isArray(data)).toBe(true);\n    if (data.length > 0) {\n      expect(data[0]).toHaveProperty(\"id\");\n      expect(data[0]).toHaveProperty(\"name\");\n      expect(data[0]).toHaveProperty(\"created_at\");\n    }\n  });\n\n  // 3. Error handling\n  it(\"returns proper error for invalid input\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      method: \"POST\",\n      headers: {\n        Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}`,\n        \"Content-Type\": \"application/json\",\n      },\n      body: JSON.stringify({}), // missing required fields\n    });\n    expect(res.status).toBe(400);\n    const err = await res.json();\n    expect(err).toHaveProperty(\"error\");\n  });\n\n  // 4. Method validation\n  it(\"returns 405 for unsupported methods\", async () => {\n    const res = await fetch(`${BASE_URL}/api/entities`, {\n      method: \"DELETE\",\n      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },\n    });\n    expect(res.status).toBe(405);\n  });\n});\n\nSupabase-Specific Validations\n// qa-tests/db/rls-validation.test.ts\ndescribe(\"Supabase RLS Validation\", () => {\n  it(\"anon key cannot access other users' data\", async () => {\n    // Use Supabase JS client with anon key\n    // Attempt to read data belonging to another user\n    // Expect empty result or error\n  });\n\n  it(\"service role key bypasses RLS (server-only check)\", async () => {\n    // Verify service role has full access\n    // This confirms RLS is active (anon is restricted, service role is not)\n  });\n});\n\nPart 3 — UI & Toast Validation\nFramework Detection for E2E\nif [ -f \"playwright.config.ts\" ]; then\n  E2E=\"playwright\"\nelif [ -f \"cypress.config.ts\" ] || [ -f \"cypress.config.js\" ]; then\n  E2E=\"cypress\"\nelse\n  E2E=\"playwright\"  # default, install if missing\nfi\n\nPlaywright UI Validation Template\n// qa-tests/ui/dashboard.validation.spec.ts\nimport { test, expect } from \"@playwright/test\";\n\ntest.describe(\"UI Validation: /dashboard\", () => {\n  test.beforeEach(async ({ page }) => {\n    // Auth setup — use storageState or login flow\n    await page.goto(\"/login\");\n    await page.fill('[name=\"email\"]', process.env.TEST_USER_EMAIL!);\n    await page.fill('[name=\"password\"]', process.env.TEST_USER_PASSWORD!);\n    await page.click('button[type=\"submit\"]');\n    await page.waitForURL(\"/dashboard\");\n  });\n\n  test(\"page renders correctly\", async ({ page }) => {\n    await expect(page.locator(\"h1\")).toBeVisible();\n    await expect(page.locator(\"nav\")).toBeVisible();\n  });\n\n  test(\"loading states display correctly\", async ({ page }) => {\n    // Intercept API to delay response\n    await page.route(\"**/api/entities\", async (route) => {\n      await new Promise((r) => setTimeout(r, 2000));\n      
await route.continue();\n    });\n    await page.goto(\"/dashboard\");\n    await expect(page.locator('[data-testid=\"skeleton\"]')).toBeVisible();\n  });\n\n  test(\"error states display correctly\", async ({ page }) => {\n    await page.route(\"**/api/entities\", (route) =>\n      route.fulfill({ status: 500, body: JSON.stringify({ error: \"Server error\" }) })\n    );\n    await page.goto(\"/dashboard\");\n    await expect(page.locator('[role=\"alert\"]')).toBeVisible();\n  });\n\n  test(\"responsive layout\", async ({ page }) => {\n    // Mobile\n    await page.setViewportSize({ width: 375, height: 667 });\n    await expect(page.locator(\"nav\")).toBeVisible();\n    // Desktop\n    await page.setViewportSize({ width: 1280, height: 720 });\n    await expect(page.locator(\"aside\")).toBeVisible();\n  });\n});\n\nToast Notification Validation Template\n// qa-tests/ui/toasts.validation.spec.ts\nimport { test, expect } from \"@playwright/test\";\n\ntest.describe(\"Toast Validation\", () => {\n  test(\"success toast appears on entity creation\", async ({ page }) => {\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test Entity\");\n    await page.click('button[type=\"submit\"]');\n\n    // Wait for toast (supports sonner, shadcn toast, react-hot-toast)\n    const toast = page.locator('[data-sonner-toast], [role=\"status\"], .Toastify__toast');\n    await expect(toast).toBeVisible({ timeout: 5000 });\n    await expect(toast).toContainText(/created|success/i);\n  });\n\n  test(\"error toast appears on failed submission\", async ({ page }) => {\n    // Simulate API error\n    await page.route(\"**/api/entities\", (route) =>\n      route.fulfill({ status: 500, body: JSON.stringify({ error: \"Failed\" }) })\n    );\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test\");\n    await page.click('button[type=\"submit\"]');\n\n    const toast = page.locator('[data-sonner-toast][data-type=\"error\"], .Toastify__toast--error, [role=\"alert\"]');\n    await expect(toast).toBeVisible({ timeout: 5000 });\n  });\n\n  test(\"toast auto-dismisses\", async ({ page }) => {\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test\");\n    await page.click('button[type=\"submit\"]');\n    const toast = page.locator('[data-sonner-toast], [role=\"status\"]');\n    await expect(toast).toBeVisible();\n    await expect(toast).not.toBeVisible({ timeout: 10000 });\n  });\n\n  test(\"no duplicate toasts on rapid clicks\", async ({ page }) => {\n    await page.goto(\"/entities/new\");\n    await page.fill('[name=\"name\"]', \"Test\");\n    // Rapid double-click\n    await page.click('button[type=\"submit\"]');\n    await page.click('button[type=\"submit\"]');\n    const toasts = page.locator('[data-sonner-toast], [role=\"status\"]');\n    const count = await toasts.count();\n    expect(count).toBeLessThanOrEqual(1);\n  });\n});\n\nPart 4 — Auth Flow Validation\nFirebase Auth Validation\n// qa-tests/auth/auth-flows.validation.spec.ts\nimport { test, expect } from \"@playwright/test\";\n\ntest.describe(\"Auth Flow Validation\", () => {\n  test(\"login with valid credentials redirects to dashboard\", async ({ page }) => {\n    await page.goto(\"/login\");\n    await page.fill('[name=\"email\"]', process.env.TEST_USER_EMAIL!);\n    await page.fill('[name=\"password\"]', process.env.TEST_USER_PASSWORD!);\n    await page.click('button[type=\"submit\"]');\n    await page.waitForURL(\"/dashboard\", { timeout: 10000 });\n    
expect(page.url()).toContain(\"/dashboard\");\n  });\n\n  test(\"login with invalid credentials shows error\", async ({ page }) => {\n    await page.goto(\"/login\");\n    await page.fill('[name=\"email\"]', \"wrong@example.com\");\n    await page.fill('[name=\"password\"]', \"wrongpass\");\n    await page.click('button[type=\"submit\"]');\n    await expect(page.locator('[role=\"alert\"], .error, [data-testid=\"auth-error\"]')).toBeVisible();\n    expect(page.url()).toContain(\"/login\");\n  });\n\n  test(\"protected routes redirect unauthenticated users\", async ({ page }) => {\n    await page.goto(\"/dashboard\");\n    await page.waitForURL(/\\/(login|auth)/);\n  });\n\n  test(\"logout clears session and redirects\", async ({ page }) => {\n    // Login first, then logout\n    // ...login steps...\n    await page.click('[data-testid=\"logout\"], button:has-text(\"Logout\"), button:has-text(\"Sair\")');\n    await page.waitForURL(/\\/(login|auth|$)/);\n    // Verify protected route is no longer accessible\n    await page.goto(\"/dashboard\");\n    await page.waitForURL(/\\/(login|auth)/);\n  });\n});\n\nPart 5 — LLM Output Quality Validation\nTwo-Layer Approach: Rule-Based + LLM-as-Judge\nLayer 1: Rule-Based Checks (always run first)\n// qa-tests/llm/rule-based-checks.ts\nexport interface LLMOutput {\n  content: string;\n  model: string;\n  tokens_used: number;\n  latency_ms: number;\n}\n\nexport interface RuleCheckResult {\n  rule: string;\n  passed: boolean;\n  details: string;\n}\n\nexport function runRuleBasedChecks(output: LLMOutput, config: {\n  maxTokens?: number;\n  maxLatencyMs?: number;\n  minLength?: number;\n  maxLength?: number;\n  requiredSections?: string[];\n  forbiddenPatterns?: RegExp[];\n  requiredFormat?: \"json\" | \"markdown\" | \"plain\";\n  language?: string;\n}): RuleCheckResult[] {\n  const results: RuleCheckResult[] = [];\n\n  // Length checks\n  if (config.minLength) {\n    results.push({\n      rule: \"min_length\",\n      passed: output.content.length >= config.minLength,\n      details: `Content length: ${output.content.length}, minimum: ${config.minLength}`,\n    });\n  }\n  if (config.maxLength) {\n    results.push({\n      rule: \"max_length\",\n      passed: output.content.length <= config.maxLength,\n      details: `Content length: ${output.content.length}, maximum: ${config.maxLength}`,\n    });\n  }\n\n  // Token usage\n  if (config.maxTokens) {\n    results.push({\n      rule: \"token_budget\",\n      passed: output.tokens_used <= config.maxTokens,\n      details: `Tokens used: ${output.tokens_used}, budget: ${config.maxTokens}`,\n    });\n  }\n\n  // Latency\n  if (config.maxLatencyMs) {\n    results.push({\n      rule: \"latency\",\n      passed: output.latency_ms <= config.maxLatencyMs,\n      details: `Latency: ${output.latency_ms}ms, max: ${config.maxLatencyMs}ms`,\n    });\n  }\n\n  // Required sections\n  if (config.requiredSections) {\n    for (const section of config.requiredSections) {\n      results.push({\n        rule: `required_section:${section}`,\n        passed: output.content.toLowerCase().includes(section.toLowerCase()),\n        details: `Section \"${section}\" ${output.content.toLowerCase().includes(section.toLowerCase()) ? 
\"found\" : \"missing\"}`,\n      });\n    }\n  }\n\n  // Forbidden patterns (PII, hallucination markers, etc.)\n  if (config.forbiddenPatterns) {\n    for (const pattern of config.forbiddenPatterns) {\n      const match = pattern.exec(output.content);\n      results.push({\n        rule: `forbidden_pattern:${pattern.source}`,\n        passed: !match,\n        details: match ? `Found forbidden pattern: \"${match[0]}\"` : \"No forbidden patterns found\",\n      });\n    }\n  }\n\n  // Format validation\n  if (config.requiredFormat === \"json\") {\n    try {\n      JSON.parse(output.content);\n      results.push({ rule: \"valid_json\", passed: true, details: \"Valid JSON\" });\n    } catch {\n      results.push({ rule: \"valid_json\", passed: false, details: \"Invalid JSON\" });\n    }\n  }\n\n  // Empty/garbage check\n  results.push({\n    rule: \"not_empty\",\n    passed: output.content.trim().length > 0,\n    details: output.content.trim().length === 0 ? \"Output is empty\" : \"Output has content\",\n  });\n\n  results.push({\n    rule: \"not_truncated\",\n    passed: !output.content.endsWith(\"...\") && !output.content.endsWith(\"…\"),\n    details: \"Check for truncation markers\",\n  });\n\n  return results;\n}\n\nLayer 2: LLM-as-Judge (runs for content quality assessment)\n// qa-tests/llm/llm-judge.ts\nexport async function llmJudge(\n  output: string,\n  prompt: string,\n  criteria: {\n    relevance: boolean;\n    accuracy: boolean;\n    completeness: boolean;\n    tone: boolean;\n    safety: boolean;\n  }\n): Promise<{\n  overall_score: number; // 1-5\n  criteria_scores: Record<string, number>;\n  issues: string[];\n  recommendation: \"pass\" | \"review\" | \"fail\";\n}> {\n  const OPENROUTER_API_KEY = process.env.OPENROUTER_API_KEY;\n  if (!OPENROUTER_API_KEY) {\n    return {\n      overall_score: 0,\n      criteria_scores: {},\n      issues: [\"OPENROUTER_API_KEY not set — skipping LLM judge\"],\n      recommendation: \"review\",\n    };\n  }\n\n  const judgingPrompt = `You are a strict QA evaluator. Assess the following LLM output against the original prompt.\n\nORIGINAL PROMPT:\n${prompt}\n\nLLM OUTPUT:\n${output}\n\nScore each criterion from 1 (terrible) to 5 (excellent):\n${criteria.relevance ? \"- RELEVANCE: Does the output address the prompt?\" : \"\"}\n${criteria.accuracy ? \"- ACCURACY: Is the information correct and not hallucinated?\" : \"\"}\n${criteria.completeness ? \"- COMPLETENESS: Does the output cover all requested aspects?\" : \"\"}\n${criteria.tone ? \"- TONE: Is the tone appropriate for the context?\" : \"\"}\n${criteria.safety ? \"- SAFETY: Is the content free of harmful, biased, or inappropriate material?\" : \"\"}\n\nRespond in JSON:\n{\n  \"criteria_scores\": { \"relevance\": N, \"accuracy\": N, ... 
},\n  \"overall_score\": N,\n  \"issues\": [\"list of specific issues found\"],\n  \"recommendation\": \"pass|review|fail\"\n}\n\nRules:\n- overall_score < 3 = \"fail\"\n- overall_score >= 3 and < 4 = \"review\"\n- overall_score >= 4 = \"pass\"\n- Any safety score < 3 = automatic \"fail\"`;\n\n  const response = await fetch(\"https://openrouter.ai/api/v1/chat/completions\", {\n    method: \"POST\",\n    headers: {\n      Authorization: `Bearer ${OPENROUTER_API_KEY}`,\n      \"Content-Type\": \"application/json\",\n    },\n    body: JSON.stringify({\n      model: \"google/gemini-flash-1.5\",\n      messages: [{ role: \"user\", content: judgingPrompt }],\n      temperature: 0.1,\n      response_format: { type: \"json_object\" },\n    }),\n  });\n\n  const data = await response.json();\n  return JSON.parse(data.choices[0].message.content);\n}\n\nLLM Validation Test Template\n// qa-tests/llm/content-quality.validation.test.ts\nimport { describe, it, expect } from \"vitest\";\nimport { runRuleBasedChecks } from \"./rule-based-checks\";\nimport { llmJudge } from \"./llm-judge\";\n\ndescribe(\"LLM Output Quality Validation\", () => {\n  it(\"content generation meets quality standards\", async () => {\n    // 1. Call the actual LLM endpoint\n    const res = await fetch(`${BASE_URL}/api/generate`, {\n      method: \"POST\",\n      headers: { \"Content-Type\": \"application/json\", Authorization: `Bearer ${TOKEN}` },\n      body: JSON.stringify({ prompt: \"Describe the benefits of remote work\" }),\n    });\n    const output = await res.json();\n\n    // 2. Rule-based checks first\n    const ruleResults = runRuleBasedChecks(output, {\n      minLength: 100,\n      maxLength: 5000,\n      maxLatencyMs: 10000,\n      forbiddenPatterns: [\n        /\\b(SSN|social security)\\b/i,     // PII\n        /\\b(as an AI|I cannot)\\b/i,         // AI disclosure leaks\n        /\\b(undefined|null|NaN)\\b/,         // Code leaks\n      ],\n    });\n    const ruleFailures = ruleResults.filter((r) => !r.passed);\n    expect(ruleFailures).toHaveLength(0);\n\n    // 3. 
LLM-as-judge for content quality\n    const judgment = await llmJudge(output.content, \"Describe the benefits of remote work\", {\n      relevance: true,\n      accuracy: true,\n      completeness: true,\n      tone: true,\n      safety: true,\n    });\n    expect(judgment.recommendation).not.toBe(\"fail\");\n    expect(judgment.overall_score).toBeGreaterThanOrEqual(3);\n  });\n});\n\nPart 6 — Integration & Workflow Validation\nVercel Deployment Status Check\n// qa-tests/infra/vercel-status.validation.test.ts\ndescribe(\"Vercel Deployment Validation\", () => {\n  it(\"latest deployment is ready\", async () => {\n    const res = await fetch(\"https://api.vercel.com/v6/deployments?limit=1\", {\n      headers: { Authorization: `Bearer ${process.env.VERCEL_TOKEN}` },\n    });\n    const { deployments } = await res.json();\n    expect(deployments[0].state).toBe(\"READY\");\n  });\n\n  it(\"preview deployment matches current branch\", async () => {\n    // Check that the preview URL for the current PR is live and healthy\n  });\n\n  it(\"environment variables are set\", async () => {\n    // Verify all required env vars exist in the Vercel project\n    // (without reading their values)\n  });\n});\n\nSupabase Health Check\n// qa-tests/infra/supabase-health.validation.test.ts\ndescribe(\"Supabase Health Validation\", () => {\n  it(\"database is reachable\", async () => {\n    const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/`, {\n      headers: {\n        apikey: process.env.SUPABASE_ANON_KEY!,\n        Authorization: `Bearer ${process.env.SUPABASE_ANON_KEY}`,\n      },\n    });\n    expect(res.status).toBe(200);\n  });\n\n  it(\"auth service is healthy\", async () => {\n    const res = await fetch(`${process.env.SUPABASE_URL}/auth/v1/health`);\n    expect(res.ok).toBe(true);\n  });\n\n  it(\"realtime is connected\", async () => {\n    // Test WebSocket connection to Supabase Realtime\n  });\n});\n\nPart 7 — Go/No-Go Report\n\nAfter executing all validations, generate a comprehensive report:\n\n{\n  \"report\": {\n    \"project\": \"project-name\",\n    \"version\": \"x.y.z\",\n    \"date\": \"ISO-8601\",\n    \"validator\": \"qa-gate-vercel\",\n    \"verdict\": \"GO | NO-GO | CONDITIONAL\",\n    \"summary\": {\n      \"total_checks\": 45,\n      \"passed\": 42,\n      \"failed\": 2,\n      \"skipped\": 1,\n      \"pass_rate\": \"93.3%\"\n    },\n    \"sections\": {\n      \"api_routes\": {\n        \"status\": \"PASS\",\n        \"checks_run\": 12,\n        \"checks_passed\": 12,\n        \"details\": []\n      },\n      \"ui_pages\": {\n        \"status\": \"PASS\",\n        \"checks_run\": 8,\n        \"checks_passed\": 8,\n        \"details\": []\n      },\n      \"toast_notifications\": {\n        \"status\": \"FAIL\",\n        \"checks_run\": 6,\n        \"checks_passed\": 4,\n        \"failures\": [\n          {\n            \"test\": \"no_duplicate_toasts\",\n            \"page\": \"/entities/new\",\n            \"expected\": \"single toast on rapid clicks\",\n            \"actual\": \"2 toasts appeared\",\n            \"severity\": \"medium\",\n            \"recommendation\": \"Add debounce to form submission\"\n          }\n        ]\n      },\n      \"auth_flows\": {\n        \"status\": \"PASS\",\n        \"checks_run\": 5,\n        \"checks_passed\": 5\n      },\n      \"llm_quality\": {\n        \"status\": \"CONDITIONAL\",\n        \"rule_based\": { \"passed\": 8, \"failed\": 0 },\n        \"llm_judge\": {\n          \"average_score\": 3.8,\n          
\"recommendation\": \"review\",\n          \"issues\": [\"Tone slightly too formal for target audience\"]\n        }\n      },\n      \"database_integrity\": {\n        \"status\": \"PASS\",\n        \"rls_enforced\": true,\n        \"orphan_records\": 0\n      },\n      \"infrastructure\": {\n        \"status\": \"PASS\",\n        \"vercel_deployment\": \"READY\",\n        \"supabase_health\": \"OK\"\n      }\n    },\n    \"blockers\": [\n      {\n        \"id\": \"BLOCK-001\",\n        \"severity\": \"high\",\n        \"description\": \"Duplicate toasts on /entities/new\",\n        \"recommendation\": \"Fix before production\"\n      }\n    ],\n    \"warnings\": [\n      {\n        \"id\": \"WARN-001\",\n        \"severity\": \"low\",\n        \"description\": \"LLM output tone slightly formal\",\n        \"recommendation\": \"Review prompt engineering, not blocking\"\n      }\n    ],\n    \"go_conditions\": {\n      \"all_api_tests_pass\": true,\n      \"all_auth_tests_pass\": true,\n      \"no_high_severity_blockers\": false,\n      \"llm_quality_above_threshold\": true,\n      \"deployment_healthy\": true\n    }\n  }\n}\n\nVerdict Logic:\nGO: All checks pass, no blockers, no high-severity failures.\nNO-GO: Any high-severity blocker OR any auth failure OR any data integrity failure.\nCONDITIONAL: Medium-severity issues that can be accepted with stakeholder approval.\n\nSave the report to qa-reports/go-no-go-report.json and also produce a human-readable markdown version at qa-reports/go-no-go-report.md.\n\nPart 8 — Execution Pipeline\n\nThe agent follows this execution order:\n\n1. Generate test plan          → qa-reports/test-plan.json\n2. Run existing test suite     → npx vitest run (or jest) + npx playwright test\n3. Generate validation tests   → qa-tests/**/*.validation.test.ts\n4. Run API validations         → qa-tests/api/\n5. Run UI/toast validations    → qa-tests/ui/\n6. Run auth flow validations   → qa-tests/auth/\n7. Run LLM quality validations → qa-tests/llm/\n8. Run infra health checks     → qa-tests/infra/\n9. Aggregate results           → qa-reports/go-no-go-report.json\n10. 
Generate human report      → qa-reports/go-no-go-report.md\n\nCommands\n# Step 2: Existing tests\nnpx vitest run --reporter=json --outputFile=qa-reports/vitest-results.json 2>/dev/null || true\n# Note: Playwright's --output flag sets the artifacts directory, so the JSON report path goes through the json reporter's env var instead\nPLAYWRIGHT_JSON_OUTPUT_NAME=qa-reports/playwright-results.json npx playwright test --reporter=json 2>/dev/null || true\n\n# Steps 3-7: Validation tests (separate config to avoid mixing with app tests)\nnpx vitest run --config qa-tests/vitest.config.ts --reporter=json --outputFile=qa-reports/validation-results.json\n\n# Step 8: Playwright validation tests\nPLAYWRIGHT_JSON_OUTPUT_NAME=qa-reports/playwright-validation-results.json npx playwright test --config qa-tests/playwright.config.ts --reporter=json\n\nValidation Test Config (isolate from app tests)\n// qa-tests/vitest.config.ts\nimport { defineConfig } from \"vitest/config\";\nimport path from \"path\";\n\nexport default defineConfig({\n  test: {\n    include: [\"qa-tests/**/*.validation.test.ts\"],\n    environment: \"node\",\n    globals: true,\n  },\n  resolve: {\n    alias: { \"@\": path.resolve(__dirname, \"../src\") },\n  },\n});\n\nBest Practices (DO)\n- Always run the existing test suite FIRST before adding validation tests\n- Use separate directories (qa-tests/, qa-reports/) to avoid polluting the app\n- Detect and adapt to the project's test framework (Vitest/Jest, Playwright/Cypress)\n- Run rule-based LLM checks before LLM-as-judge (cheaper, faster, catches obvious issues)\n- Include severity levels in all failures (high/medium/low)\n- Generate both JSON (machine-readable) and Markdown (human-readable) reports\n- Check for toast libraries dynamically (sonner, react-hot-toast, shadcn toast)\n- Validate responsive layout at mobile (375px), tablet (768px), and desktop (1280px) breakpoints\n- Test auth error cases, not just happy paths\n- Validate Supabase RLS separately (critical security check)\n\nAnti-Patterns (AVOID)\n- NEVER skip the test plan generation step\n- NEVER mix validation tests with app tests (separate config files)\n- NEVER hardcode auth tokens in test files — always use process.env\n- NEVER run LLM-as-judge without rule-based checks first (waste of tokens)\n- NEVER mark a test as \"skipped\" without documenting why in the report\n- NEVER auto-approve a NO-GO verdict — always surface blockers to the human\n- NEVER test against production data — use test accounts and seed data\n- NEVER ignore toast validation — toast bugs are among the most visible user-facing UX complaints\n\nSafety Rules\n- NEVER read or modify .env, .env.local, or any credential file directly\n- All env var references are in generated test code via process.env.*\n- NEVER auto-deploy after a CONDITIONAL or NO-GO verdict\n- NEVER delete test data from production databases\n- NEVER expose API keys in test reports — redact before writing to disk\n- If OPENROUTER_API_KEY is not set, skip LLM-as-judge checks and mark the llm_quality section as \"review\" (see the sketch below)
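\n\nLLM-as-Judge Fallback (sketch)\nA minimal sketch of the rule above, assuming the llmJudge helper and result shape shown earlier; the file name and the judgeOrReview helper are illustrative, not part of the generated suite.\n// qa-tests/llm/judge-fallback.example.ts (hypothetical)\nimport { llmJudge } from \"./llm-judge\";\n\ntype LlmSectionResult = {\n  status: \"PASS\" | \"FAIL\" | \"CONDITIONAL\";\n  llm_judge?: { average_score: number; recommendation: string; issues: string[] };\n  skipped_reason?: string;\n};\n\n// Run the judge only when the key is present; otherwise downgrade the section\n// to \"review\" so a human inspects it before the gate can pass.\nexport async function judgeOrReview(content: string, prompt: string): Promise<LlmSectionResult> {\n  if (!process.env.OPENROUTER_API_KEY) {\n    return {\n      status: \"CONDITIONAL\",\n      skipped_reason: \"OPENROUTER_API_KEY not set; LLM-as-judge skipped, marked as review\",\n    };\n  }\n  const judgment = await llmJudge(content, prompt, {\n    relevance: true,\n    accuracy: true,\n    completeness: true,\n    tone: true,\n    safety: true,\n  });\n  return {\n    status:\n      judgment.recommendation === \"fail\" ? \"FAIL\" : judgment.recommendation === \"review\" ? \"CONDITIONAL\" : \"PASS\",\n    llm_judge: {\n      average_score: judgment.overall_score,\n      recommendation: judgment.recommendation,\n      issues: judgment.issues,\n    },\n  };\n}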
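\n\nVerdict Computation (sketch)\nFor reference, a minimal sketch of the Verdict Logic from Part 7. The computeVerdict helper and its types are illustrative; the skill prescribes only the rules, not this implementation.\n// qa-reports aggregation helper (hypothetical)\ntype Severity = \"high\" | \"medium\" | \"low\";\ntype Verdict = \"GO\" | \"NO-GO\" | \"CONDITIONAL\";\n\ninterface SectionResult {\n  status: \"PASS\" | \"FAIL\" | \"CONDITIONAL\";\n}\n\ninterface Blocker {\n  id: string;\n  severity: Severity;\n  description: string;\n}\n\n// Any high-severity blocker, auth failure, or data-integrity failure forces NO-GO;\n// remaining non-passing sections or open blockers make the verdict CONDITIONAL.\nexport function computeVerdict(sections: Record<string, SectionResult>, blockers: Blocker[]): Verdict {\n  const hasHighBlocker = blockers.some((b) => b.severity === \"high\");\n  const authFailed = sections[\"auth_flows\"]?.status === \"FAIL\";\n  const dataFailed = sections[\"database_integrity\"]?.status === \"FAIL\";\n  if (hasHighBlocker || authFailed || dataFailed) return \"NO-GO\";\n\n  const anyIssue = blockers.length > 0 || Object.values(sections).some((s) => s.status !== \"PASS\");\n  return anyIssue ? \"CONDITIONAL\" : \"GO\";\n}"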
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/guifav/qa-gate-vercel",
    "publisherUrl": "https://clawhub.ai/guifav/qa-gate-vercel",
    "owner": "guifav",
    "version": "0.1.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/qa-gate-vercel",
    "downloadUrl": "https://openagent3.xyz/downloads/qa-gate-vercel",
    "agentUrl": "https://openagent3.xyz/skills/qa-gate-vercel/agent",
    "manifestUrl": "https://openagent3.xyz/skills/qa-gate-vercel/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/qa-gate-vercel/agent.md"
  }
}