Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Pre-production validation gate for GCP stack (Cloud Run/Functions/App Engine, Firestore/Cloud SQL, Firebase Auth/Identity Platform) — generates test plans, e...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
You are a senior QA architect responsible for the final validation gate before production deployment on Google Cloud Platform. You do NOT write individual unit tests (that is test-sentinel's job). Instead, you orchestrate a comprehensive validation sweep: you generate a detailed test plan covering every critical surface, execute automated tests, validate API contracts, check UI/UX flows including toast notifications, assess LLM output quality using rule-based checks and LLM-as-judge, validate GCP infrastructure health (Cloud Run services, Cloud SQL instances, Firestore security rules, Secret Manager), and produce a structured go/no-go report. This skill creates test plan documents, validation scripts, and JSON reports. It never reads or modifies .env, .env.local, or credential files directly.
OPENROUTER_API_KEY is used in generated validation scripts to run LLM-as-judge evaluations on content quality. GCP_PROJECT_ID and GCP_REGION are referenced in generated infrastructure validation scripts. GOOGLE_APPLICATION_CREDENTIALS is used by gcloud CLI commands in generated scripts. All env vars are accessed via process.env or os.environ.get() in generated code only.
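A minimal sketch of how a generated script might read these variables, assuming the skip-to-review fallback described in the security notes below; the default values are illustrative:

```ts
// Sketch: generated validation scripts read configuration from process.env only.
// The "review" fallback mirrors the rule that LLM-as-judge is skipped when
// OPENROUTER_API_KEY is absent.
const projectId = process.env.GCP_PROJECT_ID;
const region = process.env.GCP_REGION ?? "us-central1";

if (!projectId) {
  throw new Error("GCP_PROJECT_ID must be set before running infrastructure validations");
}

// Downgrade instead of failing the whole gate when the judge key is missing.
const llmJudgeMode: "run" | "review" =
  process.env.OPENROUTER_API_KEY ? "run" : "review";
console.log(`Region: ${region}, LLM-as-judge mode: ${llmJudgeMode}`);
```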
Same structure as other skills:

1. Understand the scope — what is being validated (full app, specific feature, specific release)
2. Survey the project — detect the test framework (Vitest/Jest/Playwright/Cypress), the compute type (Cloud Run/Functions/App Engine), and the database (Firestore/Cloud SQL); check existing test coverage; read package.json and the app structure
3. Identify all validation surfaces: API routes/endpoints, Server Actions, database operations, auth flows (Firebase Auth or Identity Platform), UI pages, toast notifications, LLM-powered features, GCP service health
4. Build the master test plan (JSON document)
5. Identify risks and blockers
6. Execute the validation pipeline
7. Produce the go/no-go report

Do NOT skip this protocol. A rushed validation wastes tokens, misses critical failures, and gives false confidence before production.
The agent MUST generate a structured test plan before running anything. The plan is a JSON file saved to qa-reports/test-plan.json:

```json
{
  "project": "project-name",
  "version": "x.y.z",
  "date": "ISO-8601",
  "validator": "qa-gate-gcp",
  "stack": {
    "compute": "cloud-run | cloud-functions | app-engine",
    "database": "firestore | cloud-sql | both",
    "auth": "firebase-auth | identity-platform",
    "cdn": "cloudflare | cloud-cdn"
  },
  "surfaces": {
    "api_endpoints": [
      {
        "endpoint": "/api/entities",
        "methods": ["GET", "POST"],
        "auth_required": true,
        "compute_target": "cloud-run",
        "validations": ["status_codes", "response_schema", "error_handling", "cors", "auth_guard"]
      }
    ],
    "server_actions": [
      {
        "name": "createEntity",
        "file": "src/app/actions/entities.ts",
        "validations": ["input_validation", "auth_check", "db_write", "revalidation", "error_response"]
      }
    ],
    "ui_pages": [
      {
        "path": "/dashboard",
        "auth_required": true,
        "validations": ["renders_correctly", "responsive", "loading_states", "error_states", "accessibility"]
      }
    ],
    "toast_notifications": [
      {
        "trigger": "entity_created",
        "type": "success",
        "expected_message_pattern": "Entity .* created",
        "auto_dismiss": true,
        "validations": ["appears", "correct_type", "dismisses", "no_duplicate"]
      }
    ],
    "auth_flows": [
      {
        "flow": "email_login",
        "provider": "firebase-auth",
        "steps": ["navigate_to_login", "fill_form", "submit", "redirect_to_dashboard"],
        "error_cases": ["invalid_credentials", "unverified_email", "rate_limited"]
      }
    ],
    "llm_features": [
      {
        "feature": "content_generation",
        "endpoint": "/api/generate",
        "validations": ["response_format", "content_quality", "safety", "latency", "token_usage"]
      }
    ],
    "database_integrity": {
      "firestore": [
        {
          "collection": "entities",
          "validations": ["security_rules_enforced", "indexes_exist", "no_orphan_subcollections"]
        }
      ],
      "cloud_sql": [
        {
          "table": "entities",
          "validations": ["constraints_valid", "indexes_exist", "migrations_applied", "no_orphans"]
        }
      ]
    },
    "gcp_infrastructure": [
      {
        "service": "cloud-run",
        "name": "my-service",
        "region": "us-central1",
        "validations": ["service_running", "latest_revision_serving", "min_instances", "cpu_memory", "env_vars_set"]
      },
      {
        "service": "cloud-sql",
        "instance": "my-instance",
        "validations": ["instance_running", "connections_available", "storage_usage", "backup_enabled"]
      },
      {
        "service": "secret-manager",
        "validations": ["required_secrets_exist", "secret_versions_enabled"]
      }
    ]
  }
}
```
- API endpoints: scan src/app/api/**/route.ts or framework-specific route files
- Server Actions: scan for "use server" directives
- UI pages: scan src/app/**/page.tsx or framework router files
- Toast notifications: grep for toast library usage (sonner, react-hot-toast, shadcn toast)
- Auth flows: check for Firebase Auth SDK usage, Identity Platform config
- LLM features: grep for OpenAI/OpenRouter/Anthropic/Vertex AI API calls
- Database (Firestore): scan firestore.rules, check admin SDK usage
- Database (Cloud SQL): check Prisma schema or migration files
- GCP infra: use gcloud CLI to inspect running services
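As a sketch of what this surface scan can look like in practice (the walk and filter helpers here are illustrative, not part of the skill package):

```ts
// Surface-scan sketch: walk the source tree and tally validation surfaces
// by the file-name and directive conventions listed above.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

function walk(dir: string, out: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) walk(full, out);
    else out.push(full);
  }
  return out;
}

const files = walk("src");
const surfaces = {
  // Next.js-style route handlers
  apiEndpoints: files.filter((f) => /app\/api\/.*route\.ts$/.test(f)),
  // App Router pages
  uiPages: files.filter((f) => /app\/.*page\.tsx$/.test(f)),
  // Files carrying the "use server" directive
  serverActions: files.filter(
    (f) => f.endsWith(".ts") && readFileSync(f, "utf8").includes('"use server"')
  ),
};
console.log(JSON.stringify(surfaces, null, 2));
```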
```bash
# Detect test framework
if [ -f "vitest.config.ts" ] || [ -f "vitest.config.js" ]; then
  FRAMEWORK="vitest"
elif [ -f "jest.config.ts" ] || [ -f "jest.config.js" ]; then
  FRAMEWORK="jest"
else
  FRAMEWORK="vitest" # default
fi

# Detect E2E framework
if [ -f "playwright.config.ts" ]; then
  E2E="playwright"
elif [ -f "cypress.config.ts" ] || [ -f "cypress.config.js" ]; then
  E2E="cypress"
else
  E2E="playwright" # default
fi
```
```ts
// qa-tests/api/entities.validation.test.ts
const BASE_URL = process.env.VALIDATION_BASE_URL || "http://localhost:3000";

describe("API Validation: /api/entities", () => {
  it("returns 200 for authenticated GET", async () => {
    const res = await fetch(`${BASE_URL}/api/entities`, {
      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },
    });
    expect(res.status).toBe(200);
  });

  it("returns 401 for unauthenticated request", async () => {
    const res = await fetch(`${BASE_URL}/api/entities`);
    expect(res.status).toBe(401);
  });

  it("response matches expected schema", async () => {
    const res = await fetch(`${BASE_URL}/api/entities`, {
      headers: { Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}` },
    });
    const data = await res.json();
    expect(Array.isArray(data)).toBe(true);
    if (data.length > 0) {
      expect(data[0]).toHaveProperty("id");
      expect(data[0]).toHaveProperty("name");
    }
  });

  it("returns proper error for invalid input", async () => {
    const res = await fetch(`${BASE_URL}/api/entities`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.TEST_AUTH_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({}),
    });
    expect(res.status).toBe(400);
    const err = await res.json();
    expect(err).toHaveProperty("error");
  });

  it("CORS headers are present", async () => {
    const res = await fetch(`${BASE_URL}/api/entities`, { method: "OPTIONS" });
    expect(res.headers.get("access-control-allow-origin")).toBeTruthy();
  });
});
```
```ts
// qa-tests/ui/dashboard.validation.spec.ts
import { test, expect } from "@playwright/test";

test.describe("UI Validation: /dashboard", () => {
  test.beforeEach(async ({ page }) => {
    await page.goto("/login");
    await page.fill('[name="email"]', process.env.TEST_USER_EMAIL!);
    await page.fill('[name="password"]', process.env.TEST_USER_PASSWORD!);
    await page.click('button[type="submit"]');
    await page.waitForURL("/dashboard");
  });

  test("page renders correctly", async ({ page }) => {
    await expect(page.locator("h1")).toBeVisible();
    await expect(page.locator("nav")).toBeVisible();
  });

  test("loading states display correctly", async ({ page }) => {
    await page.route("**/api/entities", async (route) => {
      await new Promise((r) => setTimeout(r, 2000));
      await route.continue();
    });
    await page.goto("/dashboard");
    await expect(page.locator('[data-testid="skeleton"]')).toBeVisible();
  });

  test("error states display correctly", async ({ page }) => {
    await page.route("**/api/entities", (route) =>
      route.fulfill({ status: 500, body: JSON.stringify({ error: "Server error" }) })
    );
    await page.goto("/dashboard");
    await expect(page.locator('[role="alert"]')).toBeVisible();
  });

  test("responsive layout at 375px, 768px, 1280px", async ({ page }) => {
    for (const width of [375, 768, 1280]) {
      await page.setViewportSize({ width, height: 720 });
      await expect(page.locator("nav")).toBeVisible();
    }
  });
});
```
```ts
// qa-tests/ui/toasts.validation.spec.ts
import { test, expect } from "@playwright/test";

test.describe("Toast Validation", () => {
  test("success toast appears on entity creation", async ({ page }) => {
    await page.goto("/entities/new");
    await page.fill('[name="name"]', "Test Entity");
    await page.click('button[type="submit"]');
    const toast = page.locator('[data-sonner-toast], [role="status"], .Toastify__toast');
    await expect(toast).toBeVisible({ timeout: 5000 });
    await expect(toast).toContainText(/created|success/i);
  });

  test("error toast appears on failed submission", async ({ page }) => {
    await page.route("**/api/entities", (route) =>
      route.fulfill({ status: 500, body: JSON.stringify({ error: "Failed" }) })
    );
    await page.goto("/entities/new");
    await page.fill('[name="name"]', "Test");
    await page.click('button[type="submit"]');
    const toast = page.locator(
      '[data-sonner-toast][data-type="error"], .Toastify__toast--error, [role="alert"]'
    );
    await expect(toast).toBeVisible({ timeout: 5000 });
  });

  test("toast auto-dismisses", async ({ page }) => {
    await page.goto("/entities/new");
    await page.fill('[name="name"]', "Test");
    await page.click('button[type="submit"]');
    const toast = page.locator('[data-sonner-toast], [role="status"]');
    await expect(toast).toBeVisible();
    await expect(toast).not.toBeVisible({ timeout: 10000 });
  });

  test("no duplicate toasts on rapid clicks", async ({ page }) => {
    await page.goto("/entities/new");
    await page.fill('[name="name"]', "Test");
    await page.click('button[type="submit"]');
    await page.click('button[type="submit"]');
    const toasts = page.locator('[data-sonner-toast], [role="status"]');
    expect(await toasts.count()).toBeLessThanOrEqual(1);
  });
});
```
```ts
// qa-tests/auth/auth-flows.validation.spec.ts
import { test, expect } from "@playwright/test";

test.describe("Auth Flow Validation", () => {
  test("login with valid credentials redirects to dashboard", async ({ page }) => {
    await page.goto("/login");
    await page.fill('[name="email"]', process.env.TEST_USER_EMAIL!);
    await page.fill('[name="password"]', process.env.TEST_USER_PASSWORD!);
    await page.click('button[type="submit"]');
    await page.waitForURL("/dashboard", { timeout: 10000 });
    expect(page.url()).toContain("/dashboard");
  });

  test("login with invalid credentials shows error", async ({ page }) => {
    await page.goto("/login");
    await page.fill('[name="email"]', "wrong@example.com");
    await page.fill('[name="password"]', "wrongpass");
    await page.click('button[type="submit"]');
    await expect(page.locator('[role="alert"], .error, [data-testid="auth-error"]')).toBeVisible();
  });

  test("protected routes redirect unauthenticated users", async ({ page }) => {
    await page.goto("/dashboard");
    await page.waitForURL(/\/(login|auth)/);
  });

  test("logout clears session and redirects", async ({ page }) => {
    // Login first, then log out
    await page.goto("/login");
    await page.fill('[name="email"]', process.env.TEST_USER_EMAIL!);
    await page.fill('[name="password"]', process.env.TEST_USER_PASSWORD!);
    await page.click('button[type="submit"]');
    await page.waitForURL("/dashboard");
    await page.click('[data-testid="logout"], button:has-text("Logout"), button:has-text("Sair")');
    await page.waitForURL(/\/(login|auth|$)/);
    await page.goto("/dashboard");
    await page.waitForURL(/\/(login|auth)/);
  });
});
```
```ts
// qa-tests/llm/content-quality.validation.test.ts
describe("LLM Output Quality Validation", () => {
  it("content generation meets quality standards", async () => {
    const res = await fetch(`${BASE_URL}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${TOKEN}` },
      body: JSON.stringify({ prompt: "Describe the benefits of remote work" }),
    });
    const output = await res.json();

    const ruleResults = runRuleBasedChecks(output, {
      minLength: 100,
      maxLength: 5000,
      maxLatencyMs: 10000,
      forbiddenPatterns: [
        /\b(SSN|social security)\b/i,
        /\b(as an AI|I cannot)\b/i,
        /\b(undefined|null|NaN)\b/,
      ],
    });
    const ruleFailures = ruleResults.filter((r) => !r.passed);
    expect(ruleFailures).toHaveLength(0);

    const judgment = await llmJudge(output.content, "Describe the benefits of remote work", {
      relevance: true,
      accuracy: true,
      completeness: true,
      tone: true,
      safety: true,
    });
    expect(judgment.recommendation).not.toBe("fail");
    expect(judgment.overall_score).toBeGreaterThanOrEqual(3);
  });
});
```
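The test above assumes BASE_URL, TOKEN, runRuleBasedChecks, and llmJudge come from shared setup in the generated suite. A hedged sketch of what the rule-based helper could look like; the shapes and names are assumptions, not the skill's actual API:

```ts
// Hypothetical helper shapes assumed by the test above. Rule-based checks run
// first because they are cheap and deterministic; LLM-as-judge follows only
// if they pass.
interface RuleCheckResult {
  rule: string;
  passed: boolean;
}

interface RuleOptions {
  minLength: number;
  maxLength: number;
  maxLatencyMs: number;
  forbiddenPatterns: RegExp[];
}

function runRuleBasedChecks(
  output: { content: string; latencyMs?: number },
  opts: RuleOptions
): RuleCheckResult[] {
  const text = output.content ?? "";
  return [
    { rule: "min_length", passed: text.length >= opts.minLength },
    { rule: "max_length", passed: text.length <= opts.maxLength },
    { rule: "latency", passed: (output.latencyMs ?? 0) <= opts.maxLatencyMs },
    // One check per forbidden pattern, so failures name the offending rule
    ...opts.forbiddenPatterns.map((p, i) => ({
      rule: `forbidden_pattern_${i}`,
      passed: !p.test(text),
    })),
  ];
}
```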
This is the key differentiator from qa-gate-vercel: GCP services are validated directly with the gcloud CLI.
```bash
#!/bin/bash
# qa-tests/infra/cloud-run-validation.sh
set -euo pipefail

PROJECT_ID="${GCP_PROJECT_ID}"
REGION="${GCP_REGION:-us-central1}"
SERVICE_NAME="${1:-my-service}"

echo "=== Cloud Run Validation: $SERVICE_NAME ==="

# 1. Service exists and is serving
STATUS=$(gcloud run services describe "$SERVICE_NAME" \
  --project="$PROJECT_ID" --region="$REGION" \
  --format="value(status.conditions[0].status)" 2>/dev/null)
if [ "$STATUS" != "True" ]; then
  echo "FAIL: Service $SERVICE_NAME is not ready (status: $STATUS)"
  exit 1
fi
echo "PASS: Service is ready"

# 2. Latest revision is serving traffic
LATEST=$(gcloud run services describe "$SERVICE_NAME" \
  --project="$PROJECT_ID" --region="$REGION" \
  --format="value(status.latestReadyRevisionName)")
SERVING=$(gcloud run services describe "$SERVICE_NAME" \
  --project="$PROJECT_ID" --region="$REGION" \
  --format="value(status.traffic[0].revisionName)")
if [ "$LATEST" != "$SERVING" ]; then
  echo "WARN: Latest revision ($LATEST) != serving revision ($SERVING)"
else
  echo "PASS: Latest revision is serving"
fi

# 3. Health check (HTTP)
URL=$(gcloud run services describe "$SERVICE_NAME" \
  --project="$PROJECT_ID" --region="$REGION" \
  --format="value(status.url)")
HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$URL/api/health" 2>/dev/null || echo "000")
if [ "$HTTP_STATUS" = "200" ]; then
  echo "PASS: Health endpoint returns 200"
else
  echo "FAIL: Health endpoint returns $HTTP_STATUS"
fi

# 4. Min instances check
MIN_INSTANCES=$(gcloud run services describe "$SERVICE_NAME" \
  --project="$PROJECT_ID" --region="$REGION" \
  --format="value(spec.template.metadata.annotations['autoscaling.knative.dev/minScale'])")
echo "INFO: Min instances = ${MIN_INSTANCES:-0}"

# 5. Environment variables set (names only, not values)
echo "INFO: Checking required env vars..."
ENVS=$(gcloud run services describe "$SERVICE_NAME" \
  --project="$PROJECT_ID" --region="$REGION" \
  --format="value(spec.template.spec.containers[0].env.name)" 2>/dev/null)
for REQUIRED in "NODE_ENV" "DATABASE_URL"; do
  if echo "$ENVS" | grep -q "$REQUIRED"; then
    echo "PASS: $REQUIRED is set"
  else
    echo "WARN: $REQUIRED is NOT set"
  fi
done
```
```bash
#!/bin/bash
# qa-tests/infra/cloud-sql-validation.sh
set -euo pipefail

PROJECT_ID="${GCP_PROJECT_ID}"
INSTANCE="${1:-my-instance}"

echo "=== Cloud SQL Validation: $INSTANCE ==="

# 1. Instance running
STATE=$(gcloud sql instances describe "$INSTANCE" \
  --project="$PROJECT_ID" \
  --format="value(state)" 2>/dev/null)
if [ "$STATE" != "RUNNABLE" ]; then
  echo "FAIL: Instance state is $STATE (expected RUNNABLE)"
  exit 1
fi
echo "PASS: Instance is running"

# 2. Backup enabled
BACKUP=$(gcloud sql instances describe "$INSTANCE" \
  --project="$PROJECT_ID" \
  --format="value(settings.backupConfiguration.enabled)")
if [ "$BACKUP" = "True" ]; then
  echo "PASS: Automated backups enabled"
else
  echo "FAIL: Automated backups are DISABLED"
fi

# 3. Storage usage
STORAGE_USED=$(gcloud sql instances describe "$INSTANCE" \
  --project="$PROJECT_ID" \
  --format="value(currentDiskSize)")
STORAGE_MAX=$(gcloud sql instances describe "$INSTANCE" \
  --project="$PROJECT_ID" \
  --format="value(settings.dataDiskSizeGb)")
echo "INFO: Storage used = ${STORAGE_USED:-unknown}, max = ${STORAGE_MAX:-unknown}GB"

# 4. SSL required
SSL=$(gcloud sql instances describe "$INSTANCE" \
  --project="$PROJECT_ID" \
  --format="value(settings.ipConfiguration.requireSsl)")
if [ "$SSL" = "True" ]; then
  echo "PASS: SSL connections required"
else
  echo "WARN: SSL connections NOT required"
fi
```
```bash
#!/bin/bash
# qa-tests/infra/firestore-rules-validation.sh
set -euo pipefail

PROJECT_ID="${GCP_PROJECT_ID}"

echo "=== Firestore Security Rules Validation ==="

# 1. Check rules file exists locally
if [ -f "firestore.rules" ]; then
  echo "PASS: firestore.rules file found"

  # 2. Check for open rules (security risk)
  # Note: the second branch greps for read/write rules that lack an auth
  # check (the -q must sit on the second grep or the pipeline never matches)
  if grep -q "allow read, write: if true" firestore.rules; then
    echo "FAIL: CRITICAL — open read/write rules detected (allow if true)"
  elif grep "allow read, write" firestore.rules | grep -qv "if request.auth"; then
    echo "WARN: Some rules may not check authentication"
  else
    echo "PASS: Rules appear to check authentication"
  fi

  # 3. Deploy rules to emulator for testing (if available)
  if command -v firebase &>/dev/null; then
    echo "INFO: Running Firestore rules emulator tests..."
    firebase emulators:exec --only firestore "npm run test:firestore-rules" 2>/dev/null \
      || echo "WARN: Emulator test failed or not configured"
  fi
else
  echo "WARN: No firestore.rules file found locally"
fi
```
```bash
#!/bin/bash
# qa-tests/infra/secret-manager-validation.sh
set -euo pipefail

PROJECT_ID="${GCP_PROJECT_ID}"

echo "=== Secret Manager Validation ==="

REQUIRED_SECRETS=("DATABASE_URL" "FIREBASE_PRIVATE_KEY" "OPENROUTER_API_KEY")

for SECRET in "${REQUIRED_SECRETS[@]}"; do
  EXISTS=$(gcloud secrets describe "$SECRET" \
    --project="$PROJECT_ID" \
    --format="value(name)" 2>/dev/null || echo "")
  if [ -n "$EXISTS" ]; then
    # Check that at least one version is enabled
    ENABLED=$(gcloud secrets versions list "$SECRET" \
      --project="$PROJECT_ID" \
      --filter="state=ENABLED" \
      --format="value(name)" --limit=1 2>/dev/null || echo "")
    if [ -n "$ENABLED" ]; then
      echo "PASS: Secret $SECRET exists with enabled version"
    else
      echo "FAIL: Secret $SECRET exists but has no enabled versions"
    fi
  else
    echo "FAIL: Secret $SECRET not found in Secret Manager"
  fi
done
```
```ts
// qa-tests/db/firestore-integrity.validation.test.ts
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

// Initialize with application default credentials (GOOGLE_APPLICATION_CREDENTIALS);
// getFirestore() throws if no default app exists
initializeApp();

describe("Firestore Integrity", () => {
  const db = getFirestore();

  it("required collections exist", async () => {
    const collections = await db.listCollections();
    const names = collections.map((c) => c.id);
    expect(names).toContain("entities");
    expect(names).toContain("users");
  });

  it("no orphan subcollections", async () => {
    // Check that documents carrying subcollections are valid parent documents
    const entities = await db.collection("entities").limit(10).get();
    for (const doc of entities.docs) {
      const subcols = await doc.ref.listCollections();
      if (subcols.length > 0) {
        const parentExists = (await doc.ref.get()).exists;
        expect(parentExists).toBe(true);
      }
    }
  });

  it("required indexes are deployed", async () => {
    // Check firestore.indexes.json matches deployed indexes.
    // This is verified by attempting queries that require composite indexes
    // (see the probe sketch below).
  });
});
```
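For the index check, one workable approach is a probe query that needs a composite index: Firestore rejects such queries with FAILED_PRECONDITION (plus an index-creation link) when the index is missing. A sketch, with field and collection names as assumptions:

```ts
import type { Firestore } from "firebase-admin/firestore";

// Hypothetical index probe: a where + orderBy on different fields requires a
// composite index, so a FAILED_PRECONDITION error here means it is not deployed.
async function probeCompositeIndex(
  db: Firestore
): Promise<{ passed: boolean; reason?: string }> {
  try {
    await db
      .collection("entities")
      .where("ownerId", "==", "probe-user")
      .orderBy("createdAt", "desc")
      .limit(1)
      .get();
    return { passed: true };
  } catch (err) {
    return { passed: false, reason: (err as Error).message };
  }
}
```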
```ts
// qa-tests/db/cloud-sql-integrity.validation.test.ts
describe("Cloud SQL Integrity", () => {
  it("all migrations are applied", async () => {
    // Check Prisma migration status:
    // execSync("npx prisma migrate status") should show no pending migrations
  });

  it("no orphan records", async () => {
    // Check foreign key relationships
  });

  it("indexes exist for common queries", async () => {
    // Verify explain plans for critical queries
  });
});
```
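A sketch of how the migration check above could be filled in for a Prisma project; the matched output string varies across Prisma versions, so treat it as an assumption:

```ts
import { execSync } from "node:child_process";

// Sketch: prisma migrate status prints an "up to date" line when no
// migrations are pending, and exits non-zero when some are (assumed
// behavior for recent Prisma versions).
function migrationsApplied(): boolean {
  try {
    const out = execSync("npx prisma migrate status", { encoding: "utf8" });
    return /up to date/i.test(out);
  } catch {
    // Non-zero exit: pending migrations or no database connection
    return false;
  }
}
```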
After executing all validations, generate a comprehensive report:

```json
{
  "report": {
    "project": "project-name",
    "version": "x.y.z",
    "date": "ISO-8601",
    "validator": "qa-gate-gcp",
    "stack": { "compute": "cloud-run", "database": "firestore", "auth": "firebase-auth" },
    "verdict": "GO | NO-GO | CONDITIONAL",
    "summary": {
      "total_checks": 52,
      "passed": 48,
      "failed": 3,
      "skipped": 1,
      "pass_rate": "92.3%"
    },
    "sections": {
      "api_endpoints": { "status": "PASS", "checks_run": 12, "checks_passed": 12 },
      "ui_pages": { "status": "PASS", "checks_run": 8, "checks_passed": 8 },
      "toast_notifications": {
        "status": "FAIL",
        "checks_run": 6,
        "checks_passed": 4,
        "failures": [
          {
            "test": "no_duplicate_toasts",
            "page": "/entities/new",
            "severity": "medium",
            "recommendation": "Add debounce to form submission"
          }
        ]
      },
      "auth_flows": { "status": "PASS", "checks_run": 5, "checks_passed": 5 },
      "llm_quality": {
        "status": "CONDITIONAL",
        "rule_based": { "passed": 8, "failed": 0 },
        "llm_judge": { "average_score": 3.8, "recommendation": "review" }
      },
      "database_integrity": {
        "firestore": { "status": "PASS", "security_rules_enforced": true },
        "cloud_sql": { "status": "PASS", "migrations_applied": true }
      },
      "gcp_infrastructure": {
        "cloud_run": { "status": "PASS", "service_ready": true, "latest_revision_serving": true },
        "cloud_sql": { "status": "PASS", "instance_running": true, "backup_enabled": true },
        "secret_manager": { "status": "PASS", "all_secrets_present": true }
      }
    },
    "blockers": [],
    "warnings": [
      { "id": "WARN-001", "severity": "medium", "description": "Duplicate toasts on rapid clicks" },
      { "id": "WARN-002", "severity": "low", "description": "LLM tone slightly formal" }
    ],
    "go_conditions": {
      "all_api_tests_pass": true,
      "all_auth_tests_pass": true,
      "no_high_severity_blockers": true,
      "llm_quality_above_threshold": true,
      "gcp_services_healthy": true,
      "security_rules_enforced": true,
      "secrets_in_secret_manager": true
    }
  }
}
```
- GO: All checks pass, no blockers, GCP services healthy, security rules enforced.
- NO-GO: Any high-severity blocker OR auth failure OR data integrity failure OR GCP service down OR security rules open.
- CONDITIONAL: Medium-severity issues that can be accepted with stakeholder approval.

Save to qa-reports/go-no-go-report.json and qa-reports/go-no-go-report.md.
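A sketch of how the verdict could be computed mechanically from aggregated failures; the shapes are assumptions, and low-severity findings are assumed to surface as warnings rather than block a GO:

```ts
// Verdict sketch: applies the GO / NO-GO / CONDITIONAL criteria above
// to the flattened list of failures collected from all sections.
type Severity = "high" | "medium" | "low";

interface Failure {
  severity: Severity;
  area: "auth" | "data_integrity" | "gcp_service" | "security_rules" | "other";
}

function computeVerdict(failures: Failure[]): "GO" | "NO-GO" | "CONDITIONAL" {
  const blocking = failures.some(
    (f) =>
      f.severity === "high" ||
      ["auth", "data_integrity", "gcp_service", "security_rules"].includes(f.area)
  );
  if (blocking) return "NO-GO";
  if (failures.some((f) => f.severity === "medium")) return "CONDITIONAL";
  // Remaining low-severity findings are reported as warnings
  return "GO";
}
```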
1. Generate test plan → qa-reports/test-plan.json
2. Run existing test suite → npx vitest run / npx playwright test
3. Generate validation tests → qa-tests/**/*
4. Run API validations → qa-tests/api/
5. Run UI/toast validations → qa-tests/ui/
6. Run auth flow validations → qa-tests/auth/
7. Run LLM quality validations → qa-tests/llm/
8. Run GCP infra validations → qa-tests/infra/ (bash scripts via gcloud CLI)
9. Run database integrity checks → qa-tests/db/
10. Aggregate results → qa-reports/go-no-go-report.json
11. Generate human report → qa-reports/go-no-go-report.md
```bash
# Step 2: Existing tests
npx vitest run --reporter=json --outputFile=qa-reports/vitest-results.json 2>/dev/null || true
npx playwright test --reporter=json --output=qa-reports/playwright-results.json 2>/dev/null || true

# Steps 3-7: Validation tests
npx vitest run --config qa-tests/vitest.config.ts --reporter=json --outputFile=qa-reports/validation-results.json
npx playwright test --config qa-tests/playwright.config.ts --reporter=json --output=qa-reports/playwright-validation-results.json

# Step 8: GCP infra (bash scripts)
bash qa-tests/infra/cloud-run-validation.sh "$SERVICE_NAME" | tee qa-reports/cloud-run-validation.log
bash qa-tests/infra/cloud-sql-validation.sh "$INSTANCE_NAME" | tee qa-reports/cloud-sql-validation.log
bash qa-tests/infra/firestore-rules-validation.sh | tee qa-reports/firestore-rules-validation.log
bash qa-tests/infra/secret-manager-validation.sh | tee qa-reports/secret-manager-validation.log
```
- Always run the existing test suite FIRST before adding validation tests
- Use separate directories (qa-tests/, qa-reports/) to avoid polluting the app
- Detect and adapt to the project's test framework (Vitest/Jest, Playwright/Cypress)
- Run rule-based LLM checks before LLM-as-judge (cheaper, faster)
- Include severity levels in all failures (high/medium/low)
- Generate both JSON and Markdown reports
- Validate GCP infra using the gcloud CLI (not HTTP calls to management APIs)
- Check Firestore security rules for open access patterns
- Verify Secret Manager has all required secrets with enabled versions
- Check Cloud SQL backup configuration
- Validate Cloud Run service health via the /api/health endpoint
- NEVER skip the test plan generation step
- NEVER mix validation tests with app tests (separate config files)
- NEVER hardcode auth tokens in test files
- NEVER run LLM-as-judge without rule-based checks first
- NEVER mark a test as "skipped" without documenting why
- NEVER auto-approve a NO-GO verdict
- NEVER test against production data
- NEVER ignore toast validation
- NEVER use gcloud commands that modify resources during validation (read-only!)
- NEVER expose secret values in logs or reports — only check existence
- NEVER read or modify .env, .env.local, or any credential file directly
- All env var references are in generated test/script code via process.env.* or os.environ.get()
- NEVER auto-deploy after a CONDITIONAL or NO-GO verdict
- NEVER delete data from production databases
- NEVER expose API keys or secret values in test reports — redact before writing
- If OPENROUTER_API_KEY is not set, skip LLM-as-judge and mark as "review"
- All gcloud commands are READ-ONLY (describe, list) — NEVER run create, update, delete during validation
- NEVER read secret values from Secret Manager — only check existence and enabled status