{
  "schemaVersion": "1.0",
  "item": {
    "slug": "traktor",
    "name": "Traktor Web Scraper",
    "source": "tencent",
    "type": "skill",
    "category": "Development tools",
    "sourceUrl": "https://clawhub.ai/dorukardahan/traktor",
    "canonicalUrl": "https://clawhub.ai/dorukardahan/traktor",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/traktor",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=traktor",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=traktor",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=traktor",
        "contentDisposition": "attachment; filename=\"traktor-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/traktor"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/traktor",
    "agentPageUrl": "https://openagent3.xyz/skills/traktor/agent",
    "manifestUrl": "https://openagent3.xyz/skills/traktor/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/traktor/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Traktor - web asset extraction skill v2.0",
        "body": "Follow these steps exactly when extracting website assets."
      },
      {
        "title": "Trigger",
        "body": "/traktor <url1> [url2] [url3]..."
      },
      {
        "title": "Step 1: Parse URLs from arguments",
        "body": "Extract all URLs from the user's command. Store them in a list.\n\nURLS = [all URLs provided after /traktor]\nSITE_COUNT = number of URLs\nPROJECT_DIR = current working directory"
      },
      {
        "title": "Step 2: Create folder structure (if SITE_COUNT == 1)",
        "body": "RUN THIS BASH COMMAND:\n\nmkdir -p assets/{logos,icons,images,videos,fonts,backgrounds} full-site/{pages,assets,styles}"
      },
      {
        "title": "Step 2: Create folder structure (if SITE_COUNT > 1)",
        "body": "For each URL, extract site name (e.g., \"stripe.com\" -> \"stripe-com\") and RUN:\n\n# Replace {site-name} with actual site name for each URL\nmkdir -p assets/{site-name}/{logos,icons,images,videos,fonts,backgrounds}\nmkdir -p full-sites/{site-name}/{pages,assets,styles}\n\nExample for 3 sites:\n\nmkdir -p assets/{stripe-com,vercel-com,linear-app}/{logos,icons,images,videos,fonts,backgrounds}\nmkdir -p full-sites/{stripe-com,vercel-com,linear-app}/{pages,assets,styles}"
      },
      {
        "title": "Step 3: Spawn extraction agents",
        "body": "For EACH URL, call the Task tool with these EXACT parameters:\n\nTool: Task\nParameters:\n  - subagent_type: \"general-purpose\"\n  - run_in_background: true\n  - description: \"Extract {site-name} assets\"\n  - prompt: [USE THE AGENT PROMPT BELOW - replace variables]"
      },
      {
        "title": "Agent prompt (copy this exactly, replace {variables})",
        "body": "NOTE: This agent prompt uses mcp__claude-in-chrome__* tools which require the\nclaude-in-chrome browser extension MCP server. These tools will not be available\nin environments without the browser extension installed.\n\nExtract ALL assets from {URL} with paranoid-level thoroughness. Miss nothing.\n\nOUTPUT DIRECTORIES\n- Assets: {PROJECT_DIR}/assets/{site-name}/ (or {PROJECT_DIR}/assets/ if single site)\n- Full site: {PROJECT_DIR}/full-sites/{site-name}/ (or {PROJECT_DIR}/full-site/ if single site)\n\nPHASE 1: Browser setup and navigation\n\n1. Call mcp__claude-in-chrome__tabs_context_mcp to get browser context\n2. Call mcp__claude-in-chrome__tabs_create_mcp to create new tab\n3. Call mcp__claude-in-chrome__navigate with url=\"{URL}\" and the new tabId\n4. Call mcp__claude-in-chrome__computer with action=\"screenshot\" to verify page loaded\n5. Scroll full page to trigger lazy loading:\n   - mcp__claude-in-chrome__computer action=\"scroll\" scroll_direction=\"down\" scroll_amount=10\n   - Repeat 3-5 times with 1 second waits\n\nPHASE 2: Asset discovery (JavaScript extraction)\n\nCall mcp__claude-in-chrome__javascript_tool with this code:\n\n(() => {\n  const assets = {\n    images: [...document.querySelectorAll('img')].map(img => ({\n      src: img.src,\n      srcset: img.srcset,\n      dataSrc: img.dataset.src,\n      alt: img.alt\n    })).filter(i => i.src || i.srcset || i.dataSrc),\n\n    videos: [...document.querySelectorAll('video, video source')].map(v => ({\n      src: v.src || v.currentSrc,\n      poster: v.poster,\n      type: v.type\n    })).filter(v => v.src),\n\n    svgsInline: [...document.querySelectorAll('svg')].map((svg, i) => ({\n      id: svg.id || 'svg-' + i,\n      class: svg.className?.baseVal || '',\n      html: svg.outerHTML\n    })),\n\n    backgrounds: [...document.querySelectorAll('*')].map(el => {\n      const bg = getComputedStyle(el).backgroundImage;\n      if (bg && bg !== 'none' && bg.includes('url(')) {\n        
return bg.match(/url\\(['\"]?([^'\"]+)['\"]?\\)/)?.[1];\n      }\n      return null;\n    }).filter(Boolean),\n\n    favicons: [...document.querySelectorAll('link[rel*=\"icon\"]')].map(l => ({\n      href: l.href,\n      rel: l.rel,\n      sizes: l.sizes?.value\n    })),\n\n    ogImages: (() => {\n      const og = document.querySelector('meta[property=\"og:image\"]');\n      const twitter = document.querySelector('meta[name=\"twitter:image\"]');\n      return [og?.content, twitter?.content].filter(Boolean);\n    })(),\n\n    fonts: (() => {\n      const fonts = [];\n      for (const sheet of document.styleSheets) {\n        try {\n          for (const rule of sheet.cssRules) {\n            if (rule.type === 5) { // FONT_FACE_RULE\n              const src = rule.style.getPropertyValue('src');\n              const urls = src.match(/url\\(['\"]?([^'\"]+)['\"]?\\)/g);\n              if (urls) fonts.push(...urls.map(u => u.match(/url\\(['\"]?([^'\"]+)['\"]?\\)/)?.[1]));\n            }\n          }\n        } catch (e) {}\n      }\n      return [...new Set(fonts)];\n    })()\n  };\n\n  return JSON.stringify(assets, null, 2);\n})()\n\nStore the result as DISCOVERED_ASSETS.\n\nPHASE 3: Content extraction\n\nCall mcp__claude-in-chrome__javascript_tool with this code:\n\n(() => {\n  const content = {\n    url: window.location.href,\n    title: document.title,\n    extractedAt: new Date().toISOString().split('T')[0],\n    meta: {\n      description: document.querySelector('meta[name=\"description\"]')?.content,\n      ogTitle: document.querySelector('meta[property=\"og:title\"]')?.content,\n      ogDescription: document.querySelector('meta[property=\"og:description\"]')?.content,\n      ogImage: document.querySelector('meta[property=\"og:image\"]')?.content,\n      favicon: document.querySelector('link[rel*=\"icon\"]')?.href\n    },\n    navigation: {\n      header: [...document.querySelectorAll('header a, nav a')].slice(0, 20).map(a => ({\n        text: 
a.textContent?.trim(),\n        href: a.href\n      })),\n      footer: [...document.querySelectorAll('footer a')].slice(0, 20).map(a => ({\n        text: a.textContent?.trim(),\n        href: a.href\n      }))\n    },\n    headings: [...document.querySelectorAll('h1, h2, h3')].slice(0, 30).map(h => ({\n      level: h.tagName,\n      text: h.textContent?.trim()\n    })),\n    buttons: [...document.querySelectorAll('button, a.btn, [class*=\"button\"]')].slice(0, 20).map(b => ({\n      text: b.textContent?.trim(),\n      href: b.href || null\n    }))\n  };\n\n  return JSON.stringify(content, null, 2);\n})()\n\nStore the result as PAGE_CONTENT.\n\nPHASE 4: Download assets\n\nFor each asset URL discovered, download using curl with error handling.\nIf curl fails (non-zero exit), log the URL and continue to the next asset.\n\n# Logos (favicon, og:image, header logos)\ncurl -sfLo \"{output_dir}/logos/{site-name}-favicon.ico\" \"{favicon_url}\" || echo \"FAIL: {favicon_url}\"\ncurl -sfLo \"{output_dir}/logos/{site-name}-og-image.png\" \"{og_image_url}\" || echo \"FAIL: {og_image_url}\"\n\n# Images\ncurl -sfLo \"{output_dir}/images/{site-name}-{descriptive-name}.{ext}\" \"{image_url}\" || echo \"FAIL: {image_url}\"\n\n# SVGs - Write inline SVGs to files using Write tool\n\n# Videos\ncurl -sfLo \"{output_dir}/videos/{site-name}-{name}.mp4\" \"{video_url}\" || echo \"FAIL: {video_url}\"\n\n# Fonts\ncurl -sfLo \"{output_dir}/fonts/{font-name}.woff2\" \"{font_url}\" || echo \"FAIL: {font_url}\"\n\nNAMING CONVENTION: {site-prefix}-{descriptive-name}.{ext}\n- Use alt text or context for descriptive names\n- Example: stripe-hero-illustration.svg, vercel-logo-white.svg\n\nPHASE 5: Save JSON files\n\n1. Save page content:\n   Use Write tool to save PAGE_CONTENT to:\n   {full-site-dir}/pages/homepage.json\n\n2. 
Save asset URLs catalog:\n   Use Write tool to save DISCOVERED_ASSETS to:\n   {full-site-dir}/asset-urls.json\n\nPHASE 6: Report results\n\nWhen complete, report:\n- Total assets downloaded (count by type)\n- Total size (estimate from curl outputs)\n- Any failed downloads (list URLs)\n- Output paths\n\nERROR HANDLING\n- If site unreachable: Report error, skip\n- If asset download fails: Retry once with 2s delay, then log URL and continue\n- If browser tab crashes: Call tabs_create_mcp again, continue\n- If rate limited: Add 2 second delays between requests\n\nCONSTRAINTS\n- Skip files > 50MB (log URL for manual download)\n- Skip duplicate URLs\n- Maximum 100 assets per category\n- 5 minute timeout for entire extraction"
      },
      {
        "title": "Step 4: Monitor agents",
        "body": "After spawning all agents:\n\nReport to user:\n\nTraktor v2.0 - Extraction started\n\nFolder structure created\nSpawned {N} extraction agents:\n   - Agent 1: {site1} [running in background]\n   - Agent 2: {site2} [running in background]\n   ...\n\nAgents working in background. You'll be notified as they complete.\n\nAs agents complete, collect their results."
      },
      {
        "title": "Step 5: Generate final manifest",
        "body": "After ALL agents complete, create manifest file:\n\nUse Write tool to create asset-manifest.json:\n\n{\n  \"generated_at\": \"{current_datetime}\",\n  \"tool\": \"traktor v2.0\",\n  \"project_dir\": \"{PROJECT_DIR}\",\n  \"sites_extracted\": [\"{site1}\", \"{site2}\"],\n  \"total_assets\": \"{total_count}\",\n  \"by_type\": {\n    \"logos\": \"{count}\",\n    \"icons\": \"{count}\",\n    \"images\": \"{count}\",\n    \"videos\": \"{count}\",\n    \"fonts\": \"{count}\",\n    \"svgs\": \"{count}\"\n  },\n  \"naming_convention\": \"{site-prefix}-{descriptive-name}.{ext}\",\n  \"output_structure\": {\n    \"assets\": \"./assets/\",\n    \"full_sites\": \"./full-sites/\"\n  }\n}"
      },
      {
        "title": "Step 6: Final report",
        "body": "Display to user:\n\nTraktor extraction complete!\n\nSummary:\n   Sites: {N}\n   Total assets: {count} ({size_mb} MB)\n\n   By type:\n   - Logos: {n}\n   - Images: {n}\n   - Videos: {n}\n   - SVGs: {n}\n   - Fonts: {n}\n\nOutput:\n   - Assets: ./assets/\n   - Full sites: ./full-sites/\n   - Manifest: ./asset-manifest.json\n\nFailed downloads: {list or \"None\"}"
      },
      {
        "title": "Quick reference",
        "body": "Step\tAction\tTool\n1\tParse URLs\tInternal\n2\tCreate folders\tBash\n3\tSpawn agents\tTask (background)\n4\tMonitor\tWait for notifications\n5\tCreate manifest\tWrite\n6\tReport\tOutput to user"
      },
      {
        "title": "Example execution",
        "body": "User: /traktor https://0g.ai\n\nClaude executes:\n\nParse: URLS = [\"https://0g.ai\"], SITE_COUNT = 1\nBash: mkdir -p assets/{logos,icons,images,videos,fonts,backgrounds} full-site/{pages,assets,styles}\nTask: Spawn 1 agent with exact prompt above\nReport: \"Traktor started, 1 agent running...\"\nWait for agent completion\nWrite: asset-manifest.json\nReport: Final summary\n\nUser: /traktor https://stripe.com https://vercel.com https://linear.app\n\nClaude executes:\n\nParse: 3 URLs\nBash: Create 3 site folders in assets/ and full-sites/\nTask: Spawn 3 parallel background agents\nReport: \"Traktor started, 3 agents running...\"\nCollect results as agents finish\nWrite: Combined manifest\nReport: Combined summary"
      }
    ],
    "body": "Traktor - web asset extraction skill v2.0\n\nFollow these steps exactly when extracting website assets.\n\nTrigger\n\n/traktor <url1> [url2] [url3]...\n\nStep 1: Parse URLs from arguments\n\nExtract all URLs from the user's command. Store them in a list.\n\nURLS = [all URLs provided after /traktor]\nSITE_COUNT = number of URLs\nPROJECT_DIR = current working directory\n\nStep 2: Create folder structure\nIf SITE_COUNT == 1\n\nRUN THIS BASH COMMAND:\n\nmkdir -p assets/{logos,icons,images,videos,fonts,backgrounds} full-site/{pages,assets,styles}\n\nIf SITE_COUNT > 1\n\nFor each URL, extract site name (e.g., \"stripe.com\" -> \"stripe-com\") and RUN:\n\n# Replace {site-name} with actual site name for each URL\nmkdir -p assets/{site-name}/{logos,icons,images,videos,fonts,backgrounds}\nmkdir -p full-sites/{site-name}/{pages,assets,styles}\n\n\nExample for 3 sites:\n\nmkdir -p assets/{stripe-com,vercel-com,linear-app}/{logos,icons,images,videos,fonts,backgrounds}\nmkdir -p full-sites/{stripe-com,vercel-com,linear-app}/{pages,assets,styles}\n\nStep 3: Spawn extraction agents\n\nFor EACH URL, call the Task tool with these EXACT parameters:\n\nTool: Task\nParameters:\n  - subagent_type: \"general-purpose\"\n  - run_in_background: true\n  - description: \"Extract {site-name} assets\"\n  - prompt: [USE THE AGENT PROMPT BELOW - replace variables]\n\nAgent prompt (copy this exactly, replace {variables})\n\nNOTE: This agent prompt uses mcp__claude-in-chrome__* tools which require the claude-in-chrome browser extension MCP server. These tools will not be available in environments without the browser extension installed.\n\nExtract ALL assets from {URL} with paranoid-level thoroughness. Miss nothing.\n\nOUTPUT DIRECTORIES\n- Assets: {PROJECT_DIR}/assets/{site-name}/ (or {PROJECT_DIR}/assets/ if single site)\n- Full site: {PROJECT_DIR}/full-sites/{site-name}/ (or {PROJECT_DIR}/full-site/ if single site)\n\nPHASE 1: Browser setup and navigation\n\n1. 
Call mcp__claude-in-chrome__tabs_context_mcp to get browser context\n2. Call mcp__claude-in-chrome__tabs_create_mcp to create new tab\n3. Call mcp__claude-in-chrome__navigate with url=\"{URL}\" and the new tabId\n4. Call mcp__claude-in-chrome__computer with action=\"screenshot\" to verify page loaded\n5. Scroll full page to trigger lazy loading:\n   - mcp__claude-in-chrome__computer action=\"scroll\" scroll_direction=\"down\" scroll_amount=10\n   - Repeat 3-5 times with 1 second waits\n\nPHASE 2: Asset discovery (JavaScript extraction)\n\nCall mcp__claude-in-chrome__javascript_tool with this code:\n\n(() => {\n  const assets = {\n    images: [...document.querySelectorAll('img')].map(img => ({\n      src: img.src,\n      srcset: img.srcset,\n      dataSrc: img.dataset.src,\n      alt: img.alt\n    })).filter(i => i.src || i.srcset || i.dataSrc),\n\n    videos: [...document.querySelectorAll('video, video source')].map(v => ({\n      src: v.src || v.currentSrc,\n      poster: v.poster,\n      type: v.type\n    })).filter(v => v.src),\n\n    svgsInline: [...document.querySelectorAll('svg')].map((svg, i) => ({\n      id: svg.id || 'svg-' + i,\n      class: svg.className?.baseVal || '',\n      html: svg.outerHTML\n    })),\n\n    backgrounds: [...document.querySelectorAll('*')].map(el => {\n      const bg = getComputedStyle(el).backgroundImage;\n      if (bg && bg !== 'none' && bg.includes('url(')) {\n        return bg.match(/url\\(['\"]?([^'\"]+)['\"]?\\)/)?.[1];\n      }\n      return null;\n    }).filter(Boolean),\n\n    favicons: [...document.querySelectorAll('link[rel*=\"icon\"]')].map(l => ({\n      href: l.href,\n      rel: l.rel,\n      sizes: l.sizes?.value\n    })),\n\n    ogImages: (() => {\n      const og = document.querySelector('meta[property=\"og:image\"]');\n      const twitter = document.querySelector('meta[name=\"twitter:image\"]');\n      return [og?.content, twitter?.content].filter(Boolean);\n    })(),\n\n    fonts: (() => {\n      const fonts = 
[];\n      for (const sheet of document.styleSheets) {\n        try {\n          for (const rule of sheet.cssRules) {\n            if (rule.type === 5) { // FONT_FACE_RULE\n              const src = rule.style.getPropertyValue('src');\n              const urls = src.match(/url\\(['\"]?([^'\"]+)['\"]?\\)/g);\n              if (urls) fonts.push(...urls.map(u => u.match(/url\\(['\"]?([^'\"]+)['\"]?\\)/)?.[1]));\n            }\n          }\n        } catch (e) {}\n      }\n      return [...new Set(fonts)];\n    })()\n  };\n\n  return JSON.stringify(assets, null, 2);\n})()\n\nStore the result as DISCOVERED_ASSETS.\n\nPHASE 3: Content extraction\n\nCall mcp__claude-in-chrome__javascript_tool with this code:\n\n(() => {\n  const content = {\n    url: window.location.href,\n    title: document.title,\n    extractedAt: new Date().toISOString().split('T')[0],\n    meta: {\n      description: document.querySelector('meta[name=\"description\"]')?.content,\n      ogTitle: document.querySelector('meta[property=\"og:title\"]')?.content,\n      ogDescription: document.querySelector('meta[property=\"og:description\"]')?.content,\n      ogImage: document.querySelector('meta[property=\"og:image\"]')?.content,\n      favicon: document.querySelector('link[rel*=\"icon\"]')?.href\n    },\n    navigation: {\n      header: [...document.querySelectorAll('header a, nav a')].slice(0, 20).map(a => ({\n        text: a.textContent?.trim(),\n        href: a.href\n      })),\n      footer: [...document.querySelectorAll('footer a')].slice(0, 20).map(a => ({\n        text: a.textContent?.trim(),\n        href: a.href\n      }))\n    },\n    headings: [...document.querySelectorAll('h1, h2, h3')].slice(0, 30).map(h => ({\n      level: h.tagName,\n      text: h.textContent?.trim()\n    })),\n    buttons: [...document.querySelectorAll('button, a.btn, [class*=\"button\"]')].slice(0, 20).map(b => ({\n      text: b.textContent?.trim(),\n      href: b.href || null\n    }))\n  };\n\n  return 
JSON.stringify(content, null, 2);\n})()\n\nStore the result as PAGE_CONTENT.\n\nPHASE 4: Download assets\n\nFor each asset URL discovered, download using curl with error handling.\nIf curl fails (non-zero exit), log the URL and continue to the next asset.\n\n# Logos (favicon, og:image, header logos)\ncurl -sfLo \"{output_dir}/logos/{site-name}-favicon.ico\" \"{favicon_url}\" || echo \"FAIL: {favicon_url}\"\ncurl -sfLo \"{output_dir}/logos/{site-name}-og-image.png\" \"{og_image_url}\" || echo \"FAIL: {og_image_url}\"\n\n# Images\ncurl -sfLo \"{output_dir}/images/{site-name}-{descriptive-name}.{ext}\" \"{image_url}\" || echo \"FAIL: {image_url}\"\n\n# SVGs - Write inline SVGs to files using Write tool\n\n# Videos\ncurl -sfLo \"{output_dir}/videos/{site-name}-{name}.mp4\" \"{video_url}\" || echo \"FAIL: {video_url}\"\n\n# Fonts\ncurl -sfLo \"{output_dir}/fonts/{font-name}.woff2\" \"{font_url}\" || echo \"FAIL: {font_url}\"\n\nNAMING CONVENTION: {site-prefix}-{descriptive-name}.{ext}\n- Use alt text or context for descriptive names\n- Example: stripe-hero-illustration.svg, vercel-logo-white.svg\n\nPHASE 5: Save JSON files\n\n1. Save page content:\n   Use Write tool to save PAGE_CONTENT to:\n   {full-site-dir}/pages/homepage.json\n\n2. 
Save asset URLs catalog:\n   Use Write tool to save DISCOVERED_ASSETS to:\n   {full-site-dir}/asset-urls.json\n\nPHASE 6: Report results\n\nWhen complete, report:\n- Total assets downloaded (count by type)\n- Total size (estimate from curl outputs)\n- Any failed downloads (list URLs)\n- Output paths\n\nERROR HANDLING\n- If site unreachable: Report error, skip\n- If asset download fails: Retry once with 2s delay, then log URL and continue\n- If browser tab crashes: Call tabs_create_mcp again, continue\n- If rate limited: Add 2 second delays between requests\n\nCONSTRAINTS\n- Skip files > 50MB (log URL for manual download)\n- Skip duplicate URLs\n- Maximum 100 assets per category\n- 5 minute timeout for entire extraction\n\nStep 4: Monitor agents\n\nAfter spawning all agents:\n\nReport to user:\nTraktor v2.0 - Extraction started\n\nFolder structure created\nSpawned {N} extraction agents:\n   - Agent 1: {site1} [running in background]\n   - Agent 2: {site2} [running in background]\n   ...\n\nAgents working in background. 
You'll be notified as they complete.\n\nAs agents complete, collect their results.\nStep 5: Generate final manifest\n\nAfter ALL agents complete, create manifest file:\n\nUse Write tool to create asset-manifest.json:\n\n{\n  \"generated_at\": \"{current_datetime}\",\n  \"tool\": \"traktor v2.0\",\n  \"project_dir\": \"{PROJECT_DIR}\",\n  \"sites_extracted\": [\"{site1}\", \"{site2}\"],\n  \"total_assets\": \"{total_count}\",\n  \"by_type\": {\n    \"logos\": \"{count}\",\n    \"icons\": \"{count}\",\n    \"images\": \"{count}\",\n    \"videos\": \"{count}\",\n    \"fonts\": \"{count}\",\n    \"svgs\": \"{count}\"\n  },\n  \"naming_convention\": \"{site-prefix}-{descriptive-name}.{ext}\",\n  \"output_structure\": {\n    \"assets\": \"./assets/\",\n    \"full_sites\": \"./full-sites/\"\n  }\n}\n\nStep 6: Final report\n\nDisplay to user:\n\nTraktor extraction complete!\n\nSummary:\n   Sites: {N}\n   Total assets: {count} ({size_mb} MB)\n\n   By type:\n   - Logos: {n}\n   - Images: {n}\n   - Videos: {n}\n   - SVGs: {n}\n   - Fonts: {n}\n\nOutput:\n   - Assets: ./assets/\n   - Full sites: ./full-sites/\n   - Manifest: ./asset-manifest.json\n\nFailed downloads: {list or \"None\"}\n\nQuick reference\nStep\tAction\tTool\n1\tParse URLs\tInternal\n2\tCreate folders\tBash\n3\tSpawn agents\tTask (background)\n4\tMonitor\tWait for notifications\n5\tCreate manifest\tWrite\n6\tReport\tOutput to user\nExample execution\n\nUser: /traktor https://0g.ai\n\nClaude executes:\n\nParse: URLS = [\"https://0g.ai\"], SITE_COUNT = 1\nBash: mkdir -p assets/{logos,icons,images,videos,fonts,backgrounds} full-site/{pages,assets,styles}\nTask: Spawn 1 agent with exact prompt above\nReport: \"Traktor started, 1 agent running...\"\nWait for agent completion\nWrite: asset-manifest.json\nReport: Final summary\n\nUser: /traktor https://stripe.com https://vercel.com https://linear.app\n\nClaude executes:\n\nParse: 3 URLs\nBash: Create 3 site folders in assets/ and full-sites/\nTask: Spawn 3 parallel 
background agents\nReport: \"Traktor started, 3 agents running...\"\nCollect results as agents finish\nWrite: Combined manifest\nReport: Combined summary"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/dorukardahan/traktor",
    "publisherUrl": "https://clawhub.ai/dorukardahan/traktor",
    "owner": "dorukardahan",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/traktor",
    "downloadUrl": "https://openagent3.xyz/downloads/traktor",
    "agentUrl": "https://openagent3.xyz/skills/traktor/agent",
    "manifestUrl": "https://openagent3.xyz/skills/traktor/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/traktor/agent.md"
  }
}