{
  "schemaVersion": "1.0",
  "item": {
    "slug": "pls-audit-website",
    "name": "PLS Website Audit",
    "source": "tencent",
    "type": "skill",
    "category": "数据分析",
    "sourceUrl": "https://clawhub.ai/mattvalenta/pls-audit-website",
    "canonicalUrl": "https://clawhub.ai/mattvalenta/pls-audit-website",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/pls-audit-website",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=pls-audit-website",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
        "contentDisposition": "attachment; filename=\"4claw-imageboard-1.0.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/pls-audit-website"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/pls-audit-website",
    "agentPageUrl": "https://openagent3.xyz/skills/pls-audit-website/agent",
    "manifestUrl": "https://openagent3.xyz/skills/pls-audit-website/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/pls-audit-website/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Website Audit",
        "body": "Comprehensive website health check for performance, accessibility, security, and user experience."
      },
      {
        "title": "Quick Health Check",
        "body": "# One-command overview\ncurl -I https://example.com && \\\ncurl -w \"DNS: %{time_namelookup}s\\nConnect: %{time_connect}s\\nTTFB: %{time_starttransfer}s\\nTotal: %{time_total}s\\n\" -o /dev/null -s https://example.com"
      },
      {
        "title": "Page Load Time",
        "body": "# Using curl for timing\ncurl -w \"DNS: %{time_namelookup}s\\nConnect: %{time_connect}s\\nSSL: %{time_appconnect}s\\nTTFB: %{time_starttransfer}s\\nTotal: %{time_total}s\\nSize: %{size_download} bytes\\n\" -o /dev/null -s https://example.com\n\n# Using lighthouse\nnpx lighthouse https://example.com --only-categories=performance --output=json"
      },
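      {
        "title": "Page Load Time in Python",
        "body": "A minimal Python alternative to the curl timing command above, offered as a hedged sketch: requests only exposes response.elapsed, which covers the time from sending the request to parsing the response headers, so treat it as a rough TTFB proxy rather than a full page-load measurement.\n\nimport requests\n\ndef time_request(url):\n    # elapsed measures send-to-headers time, not full page render\n    response = requests.get(url, timeout=10)\n    return {\n        'status': response.status_code,\n        'ttfb_seconds': response.elapsed.total_seconds(),\n        'size_bytes': len(response.content)\n    }\n\nprint(time_request('https://example.com'))"
      },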
      {
        "title": "Resource Analysis",
        "body": "import requests\nfrom urllib.parse import urlparse\n\ndef analyze_resources(url):\n    response = requests.get(url)\n    resources = []\n    \n    # Parse HTML for resources\n    from bs4 import BeautifulSoup\n    soup = BeautifulSoup(response.text, 'html.parser')\n    \n    # Images\n    for img in soup.find_all('img'):\n        resources.append({\n            'type': 'image',\n            'url': img.get('src'),\n            'size_estimate': 'unknown'\n        })\n    \n    # Scripts\n    for script in soup.find_all('script', src=True):\n        resources.append({\n            'type': 'script',\n            'url': script.get('src')\n        })\n    \n    # Stylesheets\n    for link in soup.find_all('link', rel='stylesheet'):\n        resources.append({\n            'type': 'stylesheet',\n            'url': link.get('href')\n        })\n    \n    return resources"
      },
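      {
        "title": "Resource Analysis Usage",
        "body": "A short usage sketch for analyze_resources above; the per-type tally is illustrative and assumes the list-of-dicts return shape shown in that function.\n\nfrom collections import Counter\n\nresources = analyze_resources('https://example.com')\n\n# Tally resources by type and print a summary line for each\ncounts = Counter(r['type'] for r in resources)\nfor rtype, count in counts.items():\n    print(f\"{rtype}: {count}\")"
      },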
      {
        "title": "Core Web Vitals",
        "body": "# Using web-vitals CLI\nnpx web-vitals https://example.com\n\n# LCP (Largest Contentful Paint): < 2.5s\n# FID (First Input Delay): < 100ms\n# CLS (Cumulative Layout Shift): < 0.1"
      },
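      {
        "title": "Core Web Vitals via PageSpeed Insights",
        "body": "A hedged sketch that pulls lab metrics from Google's public PageSpeed Insights v5 API instead of a local CLI. The audit IDs below follow current Lighthouse naming but are worth verifying against the actual response; INP and FID are field-only metrics, so Total Blocking Time stands in as the lab proxy for interactivity here.\n\nimport requests\n\ndef core_web_vitals(url):\n    api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed'\n    data = requests.get(api, params={'url': url, 'category': 'performance'}, timeout=60).json()\n    audits = data.get('lighthouseResult', {}).get('audits', {})\n    # Audit IDs assumed from current Lighthouse naming; inspect the response to confirm\n    for key in ('largest-contentful-paint', 'cumulative-layout-shift', 'total-blocking-time'):\n        audit = audits.get(key, {})\n        print(f\"{key}: {audit.get('displayValue', 'n/a')}\")\n\ncore_web_vitals('https://example.com')"
      },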
      {
        "title": "Find Broken Links",
        "body": "import requests\nfrom bs4 import BeautifulSoup\nfrom urllib.parse import urljoin, urlparse\n\ndef find_broken_links(base_url, max_depth=2):\n    visited = set()\n    broken = []\n    \n    def check_page(url, depth):\n        if depth > max_depth or url in visited:\n            return\n        visited.add(url)\n        \n        try:\n            response = requests.get(url, timeout=10)\n            if response.status_code >= 400:\n                broken.append({'url': url, 'status': response.status_code})\n                return\n            \n            soup = BeautifulSoup(response.text, 'html.parser')\n            for link in soup.find_all('a', href=True):\n                href = urljoin(url, link['href'])\n                if urlparse(href).netloc == urlparse(base_url).netloc:\n                    check_page(href, depth + 1)\n        except Exception as e:\n            broken.append({'url': url, 'error': str(e)})\n    \n    check_page(base_url, 0)\n    return broken"
      },
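      {
        "title": "Broken Link Check Usage",
        "body": "A usage sketch for find_broken_links above; the output formatting is illustrative.\n\nbroken = find_broken_links('https://example.com', max_depth=2)\n\n# Each entry carries either an HTTP status or the exception text\nfor item in broken:\n    print(f\"{item['url']}: {item.get('status') or item.get('error')}\")\nprint(f\"Total broken: {len(broken)}\")"
      },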
      {
        "title": "Quick Link Check",
        "body": "# Using wget\nwget --spider -r -l 2 https://example.com 2>&1 | grep -E \"(broken|failed|error)\"\n\n# Using linkchecker\npip install LinkChecker\nlinkchecker https://example.com"
      },
      {
        "title": "Check Security Headers",
        "body": "# Fetch and analyze headers\ncurl -I https://example.com\n\n# Expected headers:\n# - Strict-Transport-Security (HSTS)\n# - X-Content-Type-Options: nosniff\n# - X-Frame-Options: DENY or SAMEORIGIN\n# - Content-Security-Policy\n# - X-XSS-Protection"
      },
      {
        "title": "Security Header Analysis",
        "body": "import requests\n\ndef audit_security_headers(url):\n    response = requests.head(url)\n    headers = response.headers\n    \n    recommended = {\n        'Strict-Transport-Security': 'Enable HSTS',\n        'X-Content-Type-Options': 'Set to nosniff',\n        'X-Frame-Options': 'Set to DENY or SAMEORIGIN',\n        'Content-Security-Policy': 'Define CSP',\n        'X-XSS-Protection': 'Enable XSS filter',\n        'Referrer-Policy': 'Set referrer policy',\n        'Permissions-Policy': 'Define permissions'\n    }\n    \n    issues = []\n    for header, recommendation in recommended.items():\n        if header not in headers:\n            issues.append(f\"Missing {header}: {recommendation}\")\n    \n    return {\n        \"present\": {h: headers.get(h) for h in recommended if h in headers},\n        \"missing\": issues\n    }"
      },
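      {
        "title": "Security Header Audit Usage",
        "body": "A usage sketch for audit_security_headers above, printing present headers with their values and then the missing ones.\n\nreport = audit_security_headers('https://example.com')\n\nfor header, value in report['present'].items():\n    print(f\"OK: {header}: {value}\")\nfor issue in report['missing']:\n    print(issue)"
      },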
      {
        "title": "SSL Certificate Check",
        "body": "# Check SSL details\nopenssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -text | grep -E \"(Issuer|Not After|Subject)\"\n\n# Quick expiry check\nopenssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -dates"
      },
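      {
        "title": "SSL Expiry Check in Python",
        "body": "A stdlib-only alternative to the openssl commands above, offered as a minimal sketch: it opens a TLS connection with ssl and socket, reads the peer certificate, and computes days until expiry via ssl.cert_time_to_seconds.\n\nimport socket\nimport ssl\nfrom datetime import datetime, timezone\n\ndef days_until_expiry(host, port=443):\n    context = ssl.create_default_context()\n    with socket.create_connection((host, port), timeout=10) as sock:\n        with context.wrap_socket(sock, server_hostname=host) as tls:\n            # getpeercert() returns the validated leaf certificate as a dict\n            cert = tls.getpeercert()\n    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert['notAfter']), tz=timezone.utc)\n    return (expires - datetime.now(timezone.utc)).days\n\nprint(days_until_expiry('example.com'))"
      },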
      {
        "title": "Basic Accessibility Check",
        "body": "from bs4 import BeautifulSoup\n\ndef accessibility_audit(html):\n    soup = BeautifulSoup(html, 'html.parser')\n    issues = []\n    \n    # Check images for alt text\n    for img in soup.find_all('img'):\n        if not img.get('alt'):\n            issues.append(f\"Image missing alt: {img.get('src', 'unknown')}\")\n    \n    # Check for lang attribute\n    if not soup.find('html', lang=True):\n        issues.append(\"Missing lang attribute on <html>\")\n    \n    # Check headings hierarchy\n    headings = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6']\n    prev_level = 0\n    for h in soup.find_all(headings):\n        level = int(h.name[1])\n        if level > prev_level + 1:\n            issues.append(f\"Skipped heading level: h{prev_level} to h{level}\")\n        prev_level = level\n    \n    # Check for form labels\n    for input_tag in soup.find_all('input'):\n        if not input_tag.get('id') or not soup.find('label', attrs={'for': input_tag.get('id')}):\n            if not input_tag.get('aria-label'):\n                issues.append(f\"Input missing label: {input_tag.get('name', 'unknown')}\")\n    \n    return issues"
      },
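      {
        "title": "Accessibility Audit Usage",
        "body": "A usage sketch that feeds a fetched page into accessibility_audit above; parsing static HTML misses issues that only appear after JavaScript runs, which is where the axe-core and pa11y tools below come in.\n\nimport requests\n\nhtml = requests.get('https://example.com', timeout=10).text\nfor issue in accessibility_audit(html):\n    print(f\"- {issue}\")"
      },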
      {
        "title": "Using axe-core",
        "body": "# Using @axe-core/cli\nnpx axe-cli https://example.com\n\n# Using pa11y\nnpx pa11y https://example.com"
      },
      {
        "title": "SEO Quick Check",
        "body": "def seo_quick_check(html, url):\n    from bs4 import BeautifulSoup\n    soup = BeautifulSoup(html, 'html.parser')\n    \n    issues = []\n    \n    # Title\n    title = soup.find('title')\n    if not title:\n        issues.append(\"Missing <title> tag\")\n    elif len(title.text) < 30 or len(title.text) > 60:\n        issues.append(f\"Title length suboptimal: {len(title.text)} chars (30-60 ideal)\")\n    \n    # Meta description\n    desc = soup.find('meta', attrs={'name': 'description'})\n    if not desc:\n        issues.append(\"Missing meta description\")\n    \n    # H1\n    h1_tags = soup.find_all('h1')\n    if len(h1_tags) == 0:\n        issues.append(\"Missing H1 tag\")\n    elif len(h1_tags) > 1:\n        issues.append(\"Multiple H1 tags found\")\n    \n    # Canonical\n    if not soup.find('link', rel='canonical'):\n        issues.append(\"Missing canonical tag\")\n    \n    # Robots meta\n    robots = soup.find('meta', attrs={'name': 'robots'})\n    if robots and 'noindex' in robots.get('content', ''):\n        issues.append(\"Page is set to noindex\")\n    \n    return issues"
      },
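      {
        "title": "SEO Check Usage",
        "body": "A usage sketch for seo_quick_check above; an empty result means none of the basic checks fired, not that the page is fully optimized.\n\nimport requests\n\nurl = 'https://example.com'\nhtml = requests.get(url, timeout=10).text\n\nissues = seo_quick_check(html, url)\nfor issue in issues:\n    print(f\"- {issue}\")\nif not issues:\n    print('No basic SEO issues found')"
      },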
      {
        "title": "Website Audit Report Template",
        "body": "# Website Audit Report\n\n**URL:** https://example.com  \n**Date:** YYYY-MM-DD  \n**Overall Score:** X/100\n\n---\n\n## Performance\n| Metric | Value | Target | Status |\n|--------|-------|--------|--------|\n| Load Time | 3.2s | <3s | ⚠️ |\n| TTFB | 0.8s | <0.5s | ⚠️ |\n| Page Size | 1.2MB | <1MB | ⚠️ |\n| Requests | 45 | <30 | ⚠️ |\n\n## Security\n| Header | Status |\n|--------|--------|\n| HSTS | ✅ Present |\n| X-Frame-Options | ❌ Missing |\n| CSP | ❌ Missing |\n| X-Content-Type-Options | ✅ Present |\n\n## Accessibility\n- Images missing alt: 3\n- Form inputs missing labels: 2\n- Heading hierarchy issues: 1\n\n## SEO\n- Title: ✅ 52 chars\n- Meta description: ❌ Missing\n- H1: ✅ Single tag\n- Canonical: ✅ Present\n\n## Broken Links\n- /old-page (404)\n- /missing-resource (404)\n\n## Recommendations\n1. Add missing security headers (CSP, X-Frame-Options)\n2. Optimize images to reduce page size\n3. Add meta descriptions to all pages\n4. Fix broken links\n5. Add alt text to images\n\n## Priority Actions\n1. **Critical:** Add CSP header\n2. **High:** Fix broken links\n3. **Medium:** Optimize images\n4. **Low:** Add meta descriptions"
      },
      {
        "title": "Quick Commands Reference",
        "body": "CheckCommandResponse headerscurl -I URLLoad timingcurl -w \"%{time_total}s\" -o /dev/null -s URLSSL checkopenssl s_client -connect HOST:443Broken linkslinkchecker URLAccessibilitynpx pa11y URLPerformancenpx lighthouse URLSecurity headerscurl -I URL | grep -i \"x-|strict\""
      }
    ],
    "body": "Website Audit\n\nComprehensive website health check for performance, accessibility, security, and user experience.\n\nQuick Health Check\n# One-command overview\ncurl -I https://example.com && \\\ncurl -w \"DNS: %{time_namelookup}s\\nConnect: %{time_connect}s\\nTTFB: %{time_starttransfer}s\\nTotal: %{time_total}s\\n\" -o /dev/null -s https://example.com\n\nPerformance Audit\nPage Load Time\n# Using curl for timing\ncurl -w \"DNS: %{time_namelookup}s\\nConnect: %{time_connect}s\\nSSL: %{time_appconnect}s\\nTTFB: %{time_starttransfer}s\\nTotal: %{time_total}s\\nSize: %{size_download} bytes\\n\" -o /dev/null -s https://example.com\n\n# Using lighthouse\nnpx lighthouse https://example.com --only-categories=performance --output=json\n\nResource Analysis\nimport requests\nfrom urllib.parse import urlparse\n\ndef analyze_resources(url):\n    response = requests.get(url)\n    resources = []\n    \n    # Parse HTML for resources\n    from bs4 import BeautifulSoup\n    soup = BeautifulSoup(response.text, 'html.parser')\n    \n    # Images\n    for img in soup.find_all('img'):\n        resources.append({\n            'type': 'image',\n            'url': img.get('src'),\n            'size_estimate': 'unknown'\n        })\n    \n    # Scripts\n    for script in soup.find_all('script', src=True):\n        resources.append({\n            'type': 'script',\n            'url': script.get('src')\n        })\n    \n    # Stylesheets\n    for link in soup.find_all('link', rel='stylesheet'):\n        resources.append({\n            'type': 'stylesheet',\n            'url': link.get('href')\n        })\n    \n    return resources\n\nCore Web Vitals\n# Using web-vitals CLI\nnpx web-vitals https://example.com\n\n# LCP (Largest Contentful Paint): < 2.5s\n# FID (First Input Delay): < 100ms\n# CLS (Cumulative Layout Shift): < 0.1\n\nBroken Link Checker\nFind Broken Links\nimport requests\nfrom bs4 import BeautifulSoup\nfrom urllib.parse import urljoin, urlparse\n\ndef find_broken_links(base_url, max_depth=2):\n    visited = set()\n    broken = []\n    \n    def check_page(url, depth):\n        if depth > max_depth or url in visited:\n            return\n        visited.add(url)\n        \n        try:\n            response = requests.get(url, timeout=10)\n            if response.status_code >= 400:\n                broken.append({'url': url, 'status': response.status_code})\n                return\n            \n            soup = BeautifulSoup(response.text, 'html.parser')\n            for link in soup.find_all('a', href=True):\n                href = urljoin(url, link['href'])\n                if urlparse(href).netloc == urlparse(base_url).netloc:\n                    check_page(href, depth + 1)\n        except Exception as e:\n            broken.append({'url': url, 'error': str(e)})\n    \n    check_page(base_url, 0)\n    return broken\n\nQuick Link Check\n# Using wget\nwget --spider -r -l 2 https://example.com 2>&1 | grep -E \"(broken|failed|error)\"\n\n# Using linkchecker\npip install LinkChecker\nlinkchecker https://example.com\n\nSecurity Audit\nCheck Security Headers\n# Fetch and analyze headers\ncurl -I https://example.com\n\n# Expected headers:\n# - Strict-Transport-Security (HSTS)\n# - X-Content-Type-Options: nosniff\n# - X-Frame-Options: DENY or SAMEORIGIN\n# - Content-Security-Policy\n# - X-XSS-Protection\n\nSecurity Header Analysis\nimport requests\n\ndef audit_security_headers(url):\n    response = requests.head(url)\n    headers = response.headers\n    \n    recommended = {\n        
'Strict-Transport-Security': 'Enable HSTS',\n        'X-Content-Type-Options': 'Set to nosniff',\n        'X-Frame-Options': 'Set to DENY or SAMEORIGIN',\n        'Content-Security-Policy': 'Define CSP',\n        'X-XSS-Protection': 'Enable XSS filter',\n        'Referrer-Policy': 'Set referrer policy',\n        'Permissions-Policy': 'Define permissions'\n    }\n    \n    issues = []\n    for header, recommendation in recommended.items():\n        if header not in headers:\n            issues.append(f\"Missing {header}: {recommendation}\")\n    \n    return {\n        \"present\": {h: headers.get(h) for h in recommended if h in headers},\n        \"missing\": issues\n    }\n\nSSL Certificate Check\n# Check SSL details\nopenssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -text | grep -E \"(Issuer|Not After|Subject)\"\n\n# Quick expiry check\nopenssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -dates\n\nAccessibility Audit\nBasic Accessibility Check\nfrom bs4 import BeautifulSoup\n\ndef accessibility_audit(html):\n    soup = BeautifulSoup(html, 'html.parser')\n    issues = []\n    \n    # Check images for alt text\n    for img in soup.find_all('img'):\n        if not img.get('alt'):\n            issues.append(f\"Image missing alt: {img.get('src', 'unknown')}\")\n    \n    # Check for lang attribute\n    if not soup.find('html', lang=True):\n        issues.append(\"Missing lang attribute on <html>\")\n    \n    # Check headings hierarchy\n    headings = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6']\n    prev_level = 0\n    for h in soup.find_all(headings):\n        level = int(h.name[1])\n        if level > prev_level + 1:\n            issues.append(f\"Skipped heading level: h{prev_level} to h{level}\")\n        prev_level = level\n    \n    # Check for form labels\n    for input_tag in soup.find_all('input'):\n        if not input_tag.get('id') or not soup.find('label', attrs={'for': input_tag.get('id')}):\n            if not input_tag.get('aria-label'):\n                issues.append(f\"Input missing label: {input_tag.get('name', 'unknown')}\")\n    \n    return issues\n\nUsing axe-core\n# Using @axe-core/cli\nnpx axe-cli https://example.com\n\n# Using pa11y\nnpx pa11y https://example.com\n\nSEO Quick Check\ndef seo_quick_check(html, url):\n    from bs4 import BeautifulSoup\n    soup = BeautifulSoup(html, 'html.parser')\n    \n    issues = []\n    \n    # Title\n    title = soup.find('title')\n    if not title:\n        issues.append(\"Missing <title> tag\")\n    elif len(title.text) < 30 or len(title.text) > 60:\n        issues.append(f\"Title length suboptimal: {len(title.text)} chars (30-60 ideal)\")\n    \n    # Meta description\n    desc = soup.find('meta', attrs={'name': 'description'})\n    if not desc:\n        issues.append(\"Missing meta description\")\n    \n    # H1\n    h1_tags = soup.find_all('h1')\n    if len(h1_tags) == 0:\n        issues.append(\"Missing H1 tag\")\n    elif len(h1_tags) > 1:\n        issues.append(\"Multiple H1 tags found\")\n    \n    # Canonical\n    if not soup.find('link', rel='canonical'):\n        issues.append(\"Missing canonical tag\")\n    \n    # Robots meta\n    robots = soup.find('meta', attrs={'name': 'robots'})\n    if robots and 'noindex' in robots.get('content', ''):\n        issues.append(\"Page is set to noindex\")\n    \n    return issues\n\nWebsite Audit Report Template\n# Website Audit Report\n\n**URL:** https://example.com  \n**Date:** YYYY-MM-DD  
\n**Overall Score:** X/100\n\n---\n\n## Performance\n| Metric | Value | Target | Status |\n|--------|-------|--------|--------|\n| Load Time | 3.2s | <3s | ⚠️ |\n| TTFB | 0.8s | <0.5s | ⚠️ |\n| Page Size | 1.2MB | <1MB | ⚠️ |\n| Requests | 45 | <30 | ⚠️ |\n\n## Security\n| Header | Status |\n|--------|--------|\n| HSTS | ✅ Present |\n| X-Frame-Options | ❌ Missing |\n| CSP | ❌ Missing |\n| X-Content-Type-Options | ✅ Present |\n\n## Accessibility\n- Images missing alt: 3\n- Form inputs missing labels: 2\n- Heading hierarchy issues: 1\n\n## SEO\n- Title: ✅ 52 chars\n- Meta description: ❌ Missing\n- H1: ✅ Single tag\n- Canonical: ✅ Present\n\n## Broken Links\n- /old-page (404)\n- /missing-resource (404)\n\n## Recommendations\n1. Add missing security headers (CSP, X-Frame-Options)\n2. Optimize images to reduce page size\n3. Add meta descriptions to all pages\n4. Fix broken links\n5. Add alt text to images\n\n## Priority Actions\n1. **Critical:** Add CSP header\n2. **High:** Fix broken links\n3. **Medium:** Optimize images\n4. **Low:** Add meta descriptions\n\nQuick Commands Reference\nCheck\tCommand\nResponse headers\tcurl -I URL\nLoad timing\tcurl -w \"%{time_total}s\" -o /dev/null -s URL\nSSL check\topenssl s_client -connect HOST:443\nBroken links\tlinkchecker URL\nAccessibility\tnpx pa11y URL\nPerformance\tnpx lighthouse URL\nSecurity headers\tcurl -I URL | grep -i \"x-|strict\""
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/mattvalenta/pls-audit-website",
    "publisherUrl": "https://clawhub.ai/mattvalenta/pls-audit-website",
    "owner": "mattvalenta",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/pls-audit-website",
    "downloadUrl": "https://openagent3.xyz/downloads/pls-audit-website",
    "agentUrl": "https://openagent3.xyz/skills/pls-audit-website/agent",
    "manifestUrl": "https://openagent3.xyz/skills/pls-audit-website/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/pls-audit-website/agent.md"
  }
}