{
  "schemaVersion": "1.0",
  "item": {
    "slug": "database-operations",
    "name": "Database Operations",
    "source": "tencent",
    "type": "skill",
    "category": "Data Analysis",
    "sourceUrl": "https://clawhub.ai/jgarrison929/database-operations",
    "canonicalUrl": "https://clawhub.ai/jgarrison929/database-operations",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/database-operations",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=database-operations",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=database-operations",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=database-operations",
        "contentDisposition": "attachment; filename=\"database-operations-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/database-operations"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/database-operations",
    "agentPageUrl": "https://openagent3.xyz/skills/database-operations/agent",
    "manifestUrl": "https://openagent3.xyz/skills/database-operations/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/database-operations/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Database Operations",
        "body": "Comprehensive database design, migration, and optimization specialist. Adapted from buildwithclaude by Dave Poon (MIT)."
      },
      {
        "title": "Role Definition",
        "body": "You are a database optimization expert specializing in PostgreSQL, query performance, schema design, and EF Core migrations. You measure first, optimize second, and always plan rollback procedures."
      },
      {
        "title": "Core Principles",
        "body": "Measure first — always use EXPLAIN ANALYZE before optimizing\nIndex strategically — based on query patterns, not every column\nDenormalize selectively — only when justified by read patterns\nCache expensive computations — Redis/materialized views for hot paths\nPlan rollback — every migration has a reverse migration\nZero-downtime migrations — additive changes first, destructive later"
      },
      {
        "title": "User Management",
        "body": "CREATE TYPE user_status AS ENUM ('active', 'inactive', 'suspended', 'pending');\n\nCREATE TABLE users (\n  id BIGSERIAL PRIMARY KEY,\n  email VARCHAR(255) UNIQUE NOT NULL,\n  username VARCHAR(50) UNIQUE NOT NULL,\n  password_hash VARCHAR(255) NOT NULL,\n  first_name VARCHAR(100) NOT NULL,\n  last_name VARCHAR(100) NOT NULL,\n  status user_status DEFAULT 'active',\n  email_verified BOOLEAN DEFAULT FALSE,\n  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,\n  updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,\n  deleted_at TIMESTAMPTZ,  -- Soft delete\n\n  CONSTRAINT users_email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$'),\n  CONSTRAINT users_names_not_empty CHECK (LENGTH(TRIM(first_name)) > 0 AND LENGTH(TRIM(last_name)) > 0)\n);\n\n-- Strategic indexes\nCREATE INDEX idx_users_email ON users(email);\nCREATE INDEX idx_users_status ON users(status) WHERE status != 'active';\nCREATE INDEX idx_users_created_at ON users(created_at);\nCREATE INDEX idx_users_deleted_at ON users(deleted_at) WHERE deleted_at IS NULL;"
      },
      {
        "title": "Audit Trail",
        "body": "CREATE TYPE audit_operation AS ENUM ('INSERT', 'UPDATE', 'DELETE');\n\nCREATE TABLE audit_log (\n  id BIGSERIAL PRIMARY KEY,\n  table_name VARCHAR(255) NOT NULL,\n  record_id BIGINT NOT NULL,\n  operation audit_operation NOT NULL,\n  old_values JSONB,\n  new_values JSONB,\n  changed_fields TEXT[],\n  user_id BIGINT REFERENCES users(id),\n  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP\n);\n\nCREATE INDEX idx_audit_table_record ON audit_log(table_name, record_id);\nCREATE INDEX idx_audit_user_time ON audit_log(user_id, created_at);\n\n-- Trigger function\nCREATE OR REPLACE FUNCTION audit_trigger_function()\nRETURNS TRIGGER AS $$\nBEGIN\n  IF TG_OP = 'DELETE' THEN\n    INSERT INTO audit_log (table_name, record_id, operation, old_values)\n    VALUES (TG_TABLE_NAME, OLD.id, 'DELETE', to_jsonb(OLD));\n    RETURN OLD;\n  ELSIF TG_OP = 'UPDATE' THEN\n    INSERT INTO audit_log (table_name, record_id, operation, old_values, new_values)\n    VALUES (TG_TABLE_NAME, NEW.id, 'UPDATE', to_jsonb(OLD), to_jsonb(NEW));\n    RETURN NEW;\n  ELSIF TG_OP = 'INSERT' THEN\n    INSERT INTO audit_log (table_name, record_id, operation, new_values)\n    VALUES (TG_TABLE_NAME, NEW.id, 'INSERT', to_jsonb(NEW));\n    RETURN NEW;\n  END IF;\nEND;\n$$ LANGUAGE plpgsql;\n\n-- Apply to any table\nCREATE TRIGGER audit_users\nAFTER INSERT OR UPDATE OR DELETE ON users\nFOR EACH ROW EXECUTE FUNCTION audit_trigger_function();"
      },
      {
        "title": "Soft Delete Pattern",
        "body": "-- Query filter view\nCREATE VIEW active_users AS SELECT * FROM users WHERE deleted_at IS NULL;\n\n-- Soft delete function\nCREATE OR REPLACE FUNCTION soft_delete(p_table TEXT, p_id BIGINT)\nRETURNS VOID AS $$\nBEGIN\n  EXECUTE format('UPDATE %I SET deleted_at = CURRENT_TIMESTAMP WHERE id = $1 AND deleted_at IS NULL', p_table)\n  USING p_id;\nEND;\n$$ LANGUAGE plpgsql;"
      },
      {
        "title": "Full-Text Search",
        "body": "ALTER TABLE products ADD COLUMN search_vector tsvector\n  GENERATED ALWAYS AS (\n    to_tsvector('english', COALESCE(name, '') || ' ' || COALESCE(description, '') || ' ' || COALESCE(sku, ''))\n  ) STORED;\n\nCREATE INDEX idx_products_search ON products USING gin(search_vector);\n\n-- Query\nSELECT * FROM products\nWHERE search_vector @@ to_tsquery('english', 'laptop & gaming');"
      },
      {
        "title": "Analyze Before Optimizing",
        "body": "-- Always start here\nEXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)\nSELECT u.id, u.name, COUNT(o.id) as order_count\nFROM users u\nLEFT JOIN orders o ON u.id = o.user_id\nWHERE u.created_at > '2024-01-01'\nGROUP BY u.id, u.name\nORDER BY order_count DESC;"
      },
      {
        "title": "Indexing Strategy",
        "body": "-- Single column for exact lookups\nCREATE INDEX CONCURRENTLY idx_users_email ON users(email);\n\n-- Composite for multi-column queries (order matters!)\nCREATE INDEX CONCURRENTLY idx_orders_user_status ON orders(user_id, status, created_at);\n\n-- Partial index for filtered queries\nCREATE INDEX CONCURRENTLY idx_products_low_stock\nON products(inventory_quantity)\nWHERE inventory_tracking = true AND inventory_quantity <= 5;\n\n-- Covering index (includes extra columns to avoid table lookup)\nCREATE INDEX CONCURRENTLY idx_orders_covering\nON orders(user_id, status) INCLUDE (total, created_at);\n\n-- GIN index for JSONB\nCREATE INDEX CONCURRENTLY idx_products_attrs ON products USING gin(attributes);\n\n-- Expression index\nCREATE INDEX CONCURRENTLY idx_users_email_lower ON users(lower(email));"
      },
      {
        "title": "Find Unused Indexes",
        "body": "SELECT\n  schemaname, relname, indexrelname,\n  idx_scan as scans,\n  pg_size_pretty(pg_relation_size(indexrelid)) as size\nFROM pg_stat_user_indexes\nWHERE idx_scan = 0\nORDER BY pg_relation_size(indexrelid) DESC;"
      },
      {
        "title": "Find Missing Indexes (Slow Queries)",
        "body": "-- Enable pg_stat_statements first\nSELECT query, calls, total_exec_time, mean_exec_time, rows\nFROM pg_stat_statements\nWHERE mean_exec_time > 100  -- ms\nORDER BY total_exec_time DESC\nLIMIT 20;"
      },
      {
        "title": "N+1 Query Detection",
        "body": "-- Look for repeated similar queries in pg_stat_statements\nSELECT query, calls, mean_exec_time\nFROM pg_stat_statements\nWHERE calls > 100 AND query LIKE '%WHERE%id = $1%'\nORDER BY calls DESC;"
      },
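      {
        "title": "Trigram Indexes for LIKE Searches",
        "body": "The anti-patterns list below flags LIKE '%search%' and points to trigram indexes as one alternative; here is a minimal sketch (the index name is illustrative, and the pg_trgm extension is assumed to be available):\n\nCREATE EXTENSION IF NOT EXISTS pg_trgm;\n\nCREATE INDEX CONCURRENTLY idx_products_name_trgm\nON products USING gin(name gin_trgm_ops);\n\n-- Leading-wildcard searches can now use the index\nSELECT * FROM products WHERE name ILIKE '%laptop%';"
      },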
      {
        "title": "Safe Column Addition",
        "body": "-- +migrate Up\n-- Always use CONCURRENTLY for indexes in production\nALTER TABLE users ADD COLUMN phone VARCHAR(20);\nCREATE INDEX CONCURRENTLY idx_users_phone ON users(phone) WHERE phone IS NOT NULL;\n\n-- +migrate Down\nDROP INDEX IF EXISTS idx_users_phone;\nALTER TABLE users DROP COLUMN IF EXISTS phone;"
      },
      {
        "title": "Safe Column Rename (Zero-Downtime)",
        "body": "-- Step 1: Add new column\nALTER TABLE users ADD COLUMN display_name VARCHAR(100);\nUPDATE users SET display_name = name;\nALTER TABLE users ALTER COLUMN display_name SET NOT NULL;\n\n-- Step 2: Deploy code that writes to both columns\n-- Step 3: Deploy code that reads from new column\n-- Step 4: Drop old column\nALTER TABLE users DROP COLUMN name;"
      },
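      {
        "title": "Batched Backfill",
        "body": "Step 1 of the rename above backfills display_name in a single UPDATE, which holds row locks on every row of a large table. A sketch of a batched variant (the batch size of 10000 is illustrative):\n\n-- Run repeatedly from application code or a script,\n-- committing between batches, until 0 rows are updated\nUPDATE users\nSET display_name = name\nWHERE id IN (\n  SELECT id FROM users\n  WHERE display_name IS NULL\n  LIMIT 10000\n);"
      },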
      {
        "title": "Table Partitioning",
        "body": "-- Create partitioned table\nCREATE TABLE orders (\n  id BIGSERIAL,\n  user_id BIGINT NOT NULL,\n  total DECIMAL(10,2),\n  created_at TIMESTAMPTZ NOT NULL,\n  PRIMARY KEY (id, created_at)\n) PARTITION BY RANGE (created_at);\n\n-- Monthly partitions\nCREATE TABLE orders_2024_01 PARTITION OF orders\n  FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');\nCREATE TABLE orders_2024_02 PARTITION OF orders\n  FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');\n\n-- Auto-create partitions\nCREATE OR REPLACE FUNCTION create_monthly_partition(p_table TEXT, p_date DATE)\nRETURNS VOID AS $$\nDECLARE\n  partition_name TEXT := p_table || '_' || to_char(p_date, 'YYYY_MM');\n  next_date DATE := p_date + INTERVAL '1 month';\nBEGIN\n  EXECUTE format(\n    'CREATE TABLE IF NOT EXISTS %I PARTITION OF %I FOR VALUES FROM (%L) TO (%L)',\n    partition_name, p_table, p_date, next_date\n  );\nEND;\n$$ LANGUAGE plpgsql;"
      },
      {
        "title": "Create and Apply",
        "body": "# Add migration\ndotnet ef migrations add AddPhoneToUsers -p src/Infrastructure -s src/Api\n\n# Apply\ndotnet ef database update -p src/Infrastructure -s src/Api\n\n# Generate idempotent SQL script for production\ndotnet ef migrations script -p src/Infrastructure -s src/Api -o migration.sql --idempotent\n\n# Rollback\ndotnet ef database update PreviousMigrationName -p src/Infrastructure -s src/Api"
      },
      {
        "title": "EF Core Configuration Best Practices",
        "body": "// Use AsNoTracking for read queries\nvar users = await _db.Users\n    .AsNoTracking()\n    .Where(u => u.Status == UserStatus.Active)\n    .Select(u => new UserDto { Id = u.Id, Name = u.Name })\n    .ToListAsync(ct);\n\n// Avoid N+1 with Include\nvar orders = await _db.Orders\n    .Include(o => o.Items)\n    .ThenInclude(i => i.Product)\n    .Where(o => o.UserId == userId)\n    .ToListAsync(ct);\n\n// Better: Projection\nvar orders = await _db.Orders\n    .Where(o => o.UserId == userId)\n    .Select(o => new OrderDto\n    {\n        Id = o.Id,\n        Total = o.Total,\n        Items = o.Items.Select(i => new OrderItemDto\n        {\n            ProductName = i.Product.Name,\n            Quantity = i.Quantity,\n        }).ToList(),\n    })\n    .ToListAsync(ct);"
      },
      {
        "title": "Redis Query Cache",
        "body": "import Redis from 'ioredis'\n\nconst redis = new Redis(process.env.REDIS_URL)\n\nasync function cachedQuery<T>(\n  key: string,\n  queryFn: () => Promise<T>,\n  ttlSeconds: number = 300\n): Promise<T> {\n  const cached = await redis.get(key)\n  if (cached) return JSON.parse(cached)\n\n  const result = await queryFn()\n  await redis.setex(key, ttlSeconds, JSON.stringify(result))\n  return result\n}\n\n// Usage\nconst products = await cachedQuery(\n  `products:category:${categoryId}:page:${page}`,\n  () => db.product.findMany({ where: { categoryId }, skip, take }),\n  300 // 5 minutes\n)\n\n// Invalidation\nasync function invalidateProductCache(categoryId: string) {\n  const keys = await redis.keys(`products:category:${categoryId}:*`)\n  if (keys.length) await redis.del(...keys)\n}"
      },
      {
        "title": "Materialized Views",
        "body": "CREATE MATERIALIZED VIEW monthly_sales AS\nSELECT\n  DATE_TRUNC('month', created_at) as month,\n  category_id,\n  COUNT(*) as order_count,\n  SUM(total) as revenue,\n  AVG(total) as avg_order_value\nFROM orders\nWHERE created_at >= DATE_TRUNC('year', CURRENT_DATE)\nGROUP BY 1, 2;\n\nCREATE UNIQUE INDEX idx_monthly_sales ON monthly_sales(month, category_id);\n\n-- Refresh (can be scheduled via pg_cron)\nREFRESH MATERIALIZED VIEW CONCURRENTLY monthly_sales;"
      },
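      {
        "title": "Scheduling the Refresh",
        "body": "The refresh above mentions pg_cron; assuming that extension is installed, an hourly schedule might look like this (the job name is illustrative):\n\nSELECT cron.schedule(\n  'refresh-monthly-sales',\n  '0 * * * *',  -- top of every hour\n  'REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_sales'\n);"
      },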
      {
        "title": "Node.js (pg)",
        "body": "import { Pool } from 'pg'\n\nconst pool = new Pool({\n  max: 20,                      // Max connections\n  idleTimeoutMillis: 30000,     // Close idle connections after 30s\n  connectionTimeoutMillis: 2000, // Fail fast if can't connect in 2s\n  maxUses: 7500,                // Refresh connection after N uses\n})\n\n// Monitor pool health\nsetInterval(() => {\n  console.log({\n    total: pool.totalCount,\n    idle: pool.idleCount,\n    waiting: pool.waitingCount,\n  })\n}, 60000)"
      },
      {
        "title": "Active Connections",
        "body": "SELECT count(*), state\nFROM pg_stat_activity\nWHERE datname = current_database()\nGROUP BY state;"
      },
      {
        "title": "Long-Running Queries",
        "body": "SELECT pid, now() - query_start AS duration, query, state\nFROM pg_stat_activity\nWHERE (now() - query_start) > interval '5 minutes'\nAND state = 'active';"
      },
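      {
        "title": "Cancelling a Runaway Query",
        "body": "Once the query above surfaces an offending pid, PostgreSQL's built-in functions can stop it (12345 is a placeholder; use the pid from pg_stat_activity):\n\n-- Cancel the current query but keep the connection\nSELECT pg_cancel_backend(12345);\n\n-- Terminate the whole connection if cancel is not enough\nSELECT pg_terminate_backend(12345);"
      },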
      {
        "title": "Table Sizes",
        "body": "SELECT\n  relname AS table_name,\n  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,\n  pg_size_pretty(pg_relation_size(relid)) AS data_size,\n  pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) AS index_size\nFROM pg_catalog.pg_statio_user_tables\nORDER BY pg_total_relation_size(relid) DESC\nLIMIT 20;"
      },
      {
        "title": "Table Bloat",
        "body": "SELECT\n  relname,\n  pg_size_pretty(pg_total_relation_size(relid)) as size,\n  n_dead_tup,\n  n_live_tup,\n  CASE WHEN n_live_tup > 0\n    THEN round(n_dead_tup::numeric / n_live_tup, 2)\n    ELSE 0\n  END as dead_ratio\nFROM pg_stat_user_tables\nWHERE n_dead_tup > 1000\nORDER BY dead_ratio DESC;"
      },
      {
        "title": "Anti-Patterns",
        "body": "❌ SELECT * — always specify needed columns\n❌ Missing indexes on foreign keys — always index FK columns\n❌ LIKE '%search%' — use full-text search or trigram indexes instead\n❌ Large IN clauses — use ANY(ARRAY[...]) or join a values list\n❌ No LIMIT on unbounded queries — always paginate\n❌ Creating indexes without CONCURRENTLY in production\n❌ Running migrations without testing rollback\n❌ Ignoring EXPLAIN ANALYZE output — always verify execution plans\n❌ Storing money as FLOAT — use DECIMAL(10,2) or integer cents\n❌ Missing NOT NULL constraints — be explicit about nullability"
      }
    ],
    "body": "Database Operations\n\nComprehensive database design, migration, and optimization specialist. Adapted from buildwithclaude by Dave Poon (MIT).\n\nRole Definition\n\nYou are a database optimization expert specializing in PostgreSQL, query performance, schema design, and EF Core migrations. You measure first, optimize second, and always plan rollback procedures.\n\nCore Principles\nMeasure first — always use EXPLAIN ANALYZE before optimizing\nIndex strategically — based on query patterns, not every column\nDenormalize selectively — only when justified by read patterns\nCache expensive computations — Redis/materialized views for hot paths\nPlan rollback — every migration has a reverse migration\nZero-downtime migrations — additive changes first, destructive later\nSchema Design Patterns\nUser Management\nCREATE TYPE user_status AS ENUM ('active', 'inactive', 'suspended', 'pending');\n\nCREATE TABLE users (\n  id BIGSERIAL PRIMARY KEY,\n  email VARCHAR(255) UNIQUE NOT NULL,\n  username VARCHAR(50) UNIQUE NOT NULL,\n  password_hash VARCHAR(255) NOT NULL,\n  first_name VARCHAR(100) NOT NULL,\n  last_name VARCHAR(100) NOT NULL,\n  status user_status DEFAULT 'active',\n  email_verified BOOLEAN DEFAULT FALSE,\n  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,\n  updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,\n  deleted_at TIMESTAMPTZ,  -- Soft delete\n\n  CONSTRAINT users_email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$'),\n  CONSTRAINT users_names_not_empty CHECK (LENGTH(TRIM(first_name)) > 0 AND LENGTH(TRIM(last_name)) > 0)\n);\n\n-- Strategic indexes\nCREATE INDEX idx_users_email ON users(email);\nCREATE INDEX idx_users_status ON users(status) WHERE status != 'active';\nCREATE INDEX idx_users_created_at ON users(created_at);\nCREATE INDEX idx_users_deleted_at ON users(deleted_at) WHERE deleted_at IS NULL;\n\nAudit Trail\nCREATE TYPE audit_operation AS ENUM ('INSERT', 'UPDATE', 'DELETE');\n\nCREATE TABLE audit_log (\n  id 
BIGSERIAL PRIMARY KEY,\n  table_name VARCHAR(255) NOT NULL,\n  record_id BIGINT NOT NULL,\n  operation audit_operation NOT NULL,\n  old_values JSONB,\n  new_values JSONB,\n  changed_fields TEXT[],\n  user_id BIGINT REFERENCES users(id),\n  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP\n);\n\nCREATE INDEX idx_audit_table_record ON audit_log(table_name, record_id);\nCREATE INDEX idx_audit_user_time ON audit_log(user_id, created_at);\n\n-- Trigger function\nCREATE OR REPLACE FUNCTION audit_trigger_function()\nRETURNS TRIGGER AS $$\nBEGIN\n  IF TG_OP = 'DELETE' THEN\n    INSERT INTO audit_log (table_name, record_id, operation, old_values)\n    VALUES (TG_TABLE_NAME, OLD.id, 'DELETE', to_jsonb(OLD));\n    RETURN OLD;\n  ELSIF TG_OP = 'UPDATE' THEN\n    INSERT INTO audit_log (table_name, record_id, operation, old_values, new_values)\n    VALUES (TG_TABLE_NAME, NEW.id, 'UPDATE', to_jsonb(OLD), to_jsonb(NEW));\n    RETURN NEW;\n  ELSIF TG_OP = 'INSERT' THEN\n    INSERT INTO audit_log (table_name, record_id, operation, new_values)\n    VALUES (TG_TABLE_NAME, NEW.id, 'INSERT', to_jsonb(NEW));\n    RETURN NEW;\n  END IF;\nEND;\n$$ LANGUAGE plpgsql;\n\n-- Apply to any table\nCREATE TRIGGER audit_users\nAFTER INSERT OR UPDATE OR DELETE ON users\nFOR EACH ROW EXECUTE FUNCTION audit_trigger_function();\n\nSoft Delete Pattern\n-- Query filter view\nCREATE VIEW active_users AS SELECT * FROM users WHERE deleted_at IS NULL;\n\n-- Soft delete function\nCREATE OR REPLACE FUNCTION soft_delete(p_table TEXT, p_id BIGINT)\nRETURNS VOID AS $$\nBEGIN\n  EXECUTE format('UPDATE %I SET deleted_at = CURRENT_TIMESTAMP WHERE id = $1 AND deleted_at IS NULL', p_table)\n  USING p_id;\nEND;\n$$ LANGUAGE plpgsql;\n\nFull-Text Search\nALTER TABLE products ADD COLUMN search_vector tsvector\n  GENERATED ALWAYS AS (\n    to_tsvector('english', COALESCE(name, '') || ' ' || COALESCE(description, '') || ' ' || COALESCE(sku, ''))\n  ) STORED;\n\nCREATE INDEX idx_products_search ON products USING 
gin(search_vector);\n\n-- Query\nSELECT * FROM products\nWHERE search_vector @@ to_tsquery('english', 'laptop & gaming');\n\nQuery Optimization\nAnalyze Before Optimizing\n-- Always start here\nEXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)\nSELECT u.id, u.name, COUNT(o.id) as order_count\nFROM users u\nLEFT JOIN orders o ON u.id = o.user_id\nWHERE u.created_at > '2024-01-01'\nGROUP BY u.id, u.name\nORDER BY order_count DESC;\n\nIndexing Strategy\n-- Single column for exact lookups\nCREATE INDEX CONCURRENTLY idx_users_email ON users(email);\n\n-- Composite for multi-column queries (order matters!)\nCREATE INDEX CONCURRENTLY idx_orders_user_status ON orders(user_id, status, created_at);\n\n-- Partial index for filtered queries\nCREATE INDEX CONCURRENTLY idx_products_low_stock\nON products(inventory_quantity)\nWHERE inventory_tracking = true AND inventory_quantity <= 5;\n\n-- Covering index (includes extra columns to avoid table lookup)\nCREATE INDEX CONCURRENTLY idx_orders_covering\nON orders(user_id, status) INCLUDE (total, created_at);\n\n-- GIN index for JSONB\nCREATE INDEX CONCURRENTLY idx_products_attrs ON products USING gin(attributes);\n\n-- Expression index\nCREATE INDEX CONCURRENTLY idx_users_email_lower ON users(lower(email));\n\nFind Unused Indexes\nSELECT\n  schemaname, relname, indexrelname,\n  idx_scan as scans,\n  pg_size_pretty(pg_relation_size(indexrelid)) as size\nFROM pg_stat_user_indexes\nWHERE idx_scan = 0\nORDER BY pg_relation_size(indexrelid) DESC;\n\nFind Missing Indexes (Slow Queries)\n-- Enable pg_stat_statements first\nSELECT query, calls, total_exec_time, mean_exec_time, rows\nFROM pg_stat_statements\nWHERE mean_exec_time > 100  -- ms\nORDER BY total_exec_time DESC\nLIMIT 20;\n\nN+1 Query Detection\n-- Look for repeated similar queries in pg_stat_statements\nSELECT query, calls, mean_exec_time\nFROM pg_stat_statements\nWHERE calls > 100 AND query LIKE '%WHERE%id = $1%'\nORDER BY calls DESC;\n\nMigration Patterns\nSafe Column Addition\n-- +migrate 
Up\n-- Always use CONCURRENTLY for indexes in production\nALTER TABLE users ADD COLUMN phone VARCHAR(20);\nCREATE INDEX CONCURRENTLY idx_users_phone ON users(phone) WHERE phone IS NOT NULL;\n\n-- +migrate Down\nDROP INDEX IF EXISTS idx_users_phone;\nALTER TABLE users DROP COLUMN IF EXISTS phone;\n\nSafe Column Rename (Zero-Downtime)\n-- Step 1: Add new column\nALTER TABLE users ADD COLUMN display_name VARCHAR(100);\nUPDATE users SET display_name = name;\nALTER TABLE users ALTER COLUMN display_name SET NOT NULL;\n\n-- Step 2: Deploy code that writes to both columns\n-- Step 3: Deploy code that reads from new column\n-- Step 4: Drop old column\nALTER TABLE users DROP COLUMN name;\n\nTable Partitioning\n-- Create partitioned table\nCREATE TABLE orders (\n  id BIGSERIAL,\n  user_id BIGINT NOT NULL,\n  total DECIMAL(10,2),\n  created_at TIMESTAMPTZ NOT NULL,\n  PRIMARY KEY (id, created_at)\n) PARTITION BY RANGE (created_at);\n\n-- Monthly partitions\nCREATE TABLE orders_2024_01 PARTITION OF orders\n  FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');\nCREATE TABLE orders_2024_02 PARTITION OF orders\n  FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');\n\n-- Auto-create partitions\nCREATE OR REPLACE FUNCTION create_monthly_partition(p_table TEXT, p_date DATE)\nRETURNS VOID AS $$\nDECLARE\n  partition_name TEXT := p_table || '_' || to_char(p_date, 'YYYY_MM');\n  next_date DATE := p_date + INTERVAL '1 month';\nBEGIN\n  EXECUTE format(\n    'CREATE TABLE IF NOT EXISTS %I PARTITION OF %I FOR VALUES FROM (%L) TO (%L)',\n    partition_name, p_table, p_date, next_date\n  );\nEND;\n$$ LANGUAGE plpgsql;\n\nEF Core Migrations (.NET)\nCreate and Apply\n# Add migration\ndotnet ef migrations add AddPhoneToUsers -p src/Infrastructure -s src/Api\n\n# Apply\ndotnet ef database update -p src/Infrastructure -s src/Api\n\n# Generate idempotent SQL script for production\ndotnet ef migrations script -p src/Infrastructure -s src/Api -o migration.sql --idempotent\n\n# Rollback\ndotnet ef 
database update PreviousMigrationName -p src/Infrastructure -s src/Api\n\nEF Core Configuration Best Practices\n// Use AsNoTracking for read queries\nvar users = await _db.Users\n    .AsNoTracking()\n    .Where(u => u.Status == UserStatus.Active)\n    .Select(u => new UserDto { Id = u.Id, Name = u.Name })\n    .ToListAsync(ct);\n\n// Avoid N+1 with Include\nvar orders = await _db.Orders\n    .Include(o => o.Items)\n    .ThenInclude(i => i.Product)\n    .Where(o => o.UserId == userId)\n    .ToListAsync(ct);\n\n// Better: Projection\nvar orders = await _db.Orders\n    .Where(o => o.UserId == userId)\n    .Select(o => new OrderDto\n    {\n        Id = o.Id,\n        Total = o.Total,\n        Items = o.Items.Select(i => new OrderItemDto\n        {\n            ProductName = i.Product.Name,\n            Quantity = i.Quantity,\n        }).ToList(),\n    })\n    .ToListAsync(ct);\n\nCaching Strategy\nRedis Query Cache\nimport Redis from 'ioredis'\n\nconst redis = new Redis(process.env.REDIS_URL)\n\nasync function cachedQuery<T>(\n  key: string,\n  queryFn: () => Promise<T>,\n  ttlSeconds: number = 300\n): Promise<T> {\n  const cached = await redis.get(key)\n  if (cached) return JSON.parse(cached)\n\n  const result = await queryFn()\n  await redis.setex(key, ttlSeconds, JSON.stringify(result))\n  return result\n}\n\n// Usage\nconst products = await cachedQuery(\n  `products:category:${categoryId}:page:${page}`,\n  () => db.product.findMany({ where: { categoryId }, skip, take }),\n  300 // 5 minutes\n)\n\n// Invalidation\nasync function invalidateProductCache(categoryId: string) {\n  const keys = await redis.keys(`products:category:${categoryId}:*`)\n  if (keys.length) await redis.del(...keys)\n}\n\nMaterialized Views\nCREATE MATERIALIZED VIEW monthly_sales AS\nSELECT\n  DATE_TRUNC('month', created_at) as month,\n  category_id,\n  COUNT(*) as order_count,\n  SUM(total) as revenue,\n  AVG(total) as avg_order_value\nFROM orders\nWHERE created_at >= DATE_TRUNC('year', 
CURRENT_DATE)\nGROUP BY 1, 2;\n\nCREATE UNIQUE INDEX idx_monthly_sales ON monthly_sales(month, category_id);\n\n-- Refresh (can be scheduled via pg_cron)\nREFRESH MATERIALIZED VIEW CONCURRENTLY monthly_sales;\n\nConnection Pool Configuration\nNode.js (pg)\nimport { Pool } from 'pg'\n\nconst pool = new Pool({\n  max: 20,                      // Max connections\n  idleTimeoutMillis: 30000,     // Close idle connections after 30s\n  connectionTimeoutMillis: 2000, // Fail fast if can't connect in 2s\n  maxUses: 7500,                // Refresh connection after N uses\n})\n\n// Monitor pool health\nsetInterval(() => {\n  console.log({\n    total: pool.totalCount,\n    idle: pool.idleCount,\n    waiting: pool.waitingCount,\n  })\n}, 60000)\n\nMonitoring Queries\nActive Connections\nSELECT count(*), state\nFROM pg_stat_activity\nWHERE datname = current_database()\nGROUP BY state;\n\nLong-Running Queries\nSELECT pid, now() - query_start AS duration, query, state\nFROM pg_stat_activity\nWHERE (now() - query_start) > interval '5 minutes'\nAND state = 'active';\n\nTable Sizes\nSELECT\n  relname AS table_name,\n  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,\n  pg_size_pretty(pg_relation_size(relid)) AS data_size,\n  pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) AS index_size\nFROM pg_catalog.pg_statio_user_tables\nORDER BY pg_total_relation_size(relid) DESC\nLIMIT 20;\n\nTable Bloat\nSELECT\n  relname,\n  pg_size_pretty(pg_total_relation_size(relid)) as size,\n  n_dead_tup,\n  n_live_tup,\n  CASE WHEN n_live_tup > 0\n    THEN round(n_dead_tup::numeric / n_live_tup, 2)\n    ELSE 0\n  END as dead_ratio\nFROM pg_stat_user_tables\nWHERE n_dead_tup > 1000\nORDER BY dead_ratio DESC;\n\nAnti-Patterns\n❌ SELECT * — always specify needed columns\n❌ Missing indexes on foreign keys — always index FK columns\n❌ LIKE '%search%' — use full-text search or trigram indexes instead\n❌ Large IN clauses — use ANY(ARRAY[...]) or join a values 
list\n❌ No LIMIT on unbounded queries — always paginate\n❌ Creating indexes without CONCURRENTLY in production\n❌ Running migrations without testing rollback\n❌ Ignoring EXPLAIN ANALYZE output — always verify execution plans\n❌ Storing money as FLOAT — use DECIMAL(10,2) or integer cents\n❌ Missing NOT NULL constraints — be explicit about nullability"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/jgarrison929/database-operations",
    "publisherUrl": "https://clawhub.ai/jgarrison929/database-operations",
    "owner": "jgarrison929",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/database-operations",
    "downloadUrl": "https://openagent3.xyz/downloads/database-operations",
    "agentUrl": "https://openagent3.xyz/skills/database-operations/agent",
    "manifestUrl": "https://openagent3.xyz/skills/database-operations/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/database-operations/agent.md"
  }
}