{
  "schemaVersion": "1.0",
  "item": {
    "slug": "tandemn-tuna",
    "name": "Tandemn Tuna Skill",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/choprahetarth/tandemn-tuna",
    "canonicalUrl": "https://clawhub.ai/choprahetarth/tandemn-tuna",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/tandemn-tuna",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=tandemn-tuna",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "clawhub.json"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/tandemn-tuna"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/tandemn-tuna",
    "agentPageUrl": "https://openagent3.xyz/skills/tandemn-tuna/agent",
    "manifestUrl": "https://openagent3.xyz/skills/tandemn-tuna/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/tandemn-tuna/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Tuna — Deploy and Serve LLM Models on GPU Infrastructure",
        "body": "Tuna is a hybrid GPU inference orchestrator. It lets you deploy, serve, and manage LLM models (Llama, Qwen, Mistral, DeepSeek, Gemma, and any HuggingFace model) on serverless GPUs from Modal, RunPod, Cerebrium, Google Cloud Run, Baseten, or Azure Container Apps, with optional spot instance fallback on AWS via SkyPilot. Every deployment gets an OpenAI-compatible /v1/chat/completions endpoint.\n\nThe key idea: serverless GPUs handle requests immediately (fast cold start, pay-per-second) while a cheaper spot GPU boots in the background. Once spot is ready, traffic shifts there. If spot gets preempted, traffic falls back to serverless automatically. This gives you 3–5x cost savings over pure serverless with zero downtime."
      },
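      {
        "title": "Example — Scripting the Endpoint (sketch)",
        "body": "A minimal sketch, not from the upstream docs: it calls a deployment's OpenAI-compatible endpoint from plain shell, assuming the router address from your deploy output and the jq tool (the .choices[0].message.content path is the standard OpenAI chat response shape).\n\n#!/usr/bin/env bash\n# Hypothetical router address; copy it from the deploy output.\nENDPOINT=\"http://<router-ip>:8080/v1/chat/completions\"\n\n# Send one chat request and print only the assistant reply.\ncurl -s \"$ENDPOINT\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\": \"Qwen/Qwen3-0.6B\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}]}' \\\n  | jq -r '.choices[0].message.content'"
      },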
      {
        "title": "Quick Start — Deploy a Model in 3 Commands",
        "body": "# 1. Install tuna\nuv pip install tandemn-tuna\n\n# 2. Deploy a model (auto-picks cheapest serverless provider for the GPU)\ntuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --service-name my-llm\n\n# 3. Query your endpoint (shown in deploy output)\ncurl http://<router-ip>:8080/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\": \"Qwen/Qwen3-0.6B\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}]}'\n\nFor serverless-only (no spot, no AWS needed):\n\ntuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --serverless-only"
      },
      {
        "title": "tuna deploy — Launch a model on GPU",
        "body": "Deploy a model across serverless + spot infrastructure. This is the main command.\n\ntuna deploy --model <HuggingFace-model-ID> --gpu <GPU> [options]\n\nRequired arguments:\n\n--model — HuggingFace model ID (e.g., Qwen/Qwen3-0.6B, meta-llama/Llama-3-70b)\n--gpu — GPU type (e.g., T4, L4, L40S, A100, H100, B200)\n\nCommon options:\n\n--service-name — Name for the deployment (auto-generated if omitted)\n--serverless-provider — Force a specific provider: modal, runpod, cloudrun, baseten, azure, cerebrium (default: cheapest available)\n--serverless-only — Serverless only, no spot backend or router (no AWS needed)\n--gpu-count — Number of GPUs (default: 1)\n--tp-size — Tensor parallel size (default: 1)\n--max-model-len — Max sequence length (default: 4096)\n--spots-cloud — Cloud for spot GPUs: aws or azure (default: aws)\n--region — Cloud region for spot instances\n--concurrency — Override serverless concurrency limit\n--no-scale-to-zero — Keep at least 1 spot replica running\n--public — Make endpoint publicly accessible (no auth)\n--scaling-policy — Path to YAML with scaling parameters\n\nProvider-specific options:\n\n--gcp-project, --gcp-region — For Cloud Run\n--azure-subscription, --azure-resource-group, --azure-region, --azure-environment — For Azure\n\nExamples:\n\n# Deploy Llama 3 on Modal with hybrid spot\ntuna deploy --model meta-llama/Llama-3-8b --gpu A100 --serverless-provider modal\n\n# Deploy on RunPod, serverless-only\ntuna deploy --model mistralai/Mistral-7B-Instruct-v0.3 --gpu L40S --serverless-provider runpod --serverless-only\n\n# Deploy on Azure with an existing environment\ntuna deploy --model Qwen/Qwen3-0.6B --gpu T4 --serverless-provider azure --azure-environment my-env\n\n# Deploy a large model with tensor parallelism\ntuna deploy --model meta-llama/Llama-3-70b --gpu H100 --gpu-count 4 --tp-size 4"
      },
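      {
        "title": "Example — Preflight, Deploy, Verify (sketch)",
        "body": "A minimal wrapper sketch that chains the documented commands in order; the provider, GPU, and service name here are placeholder choices, not requirements.\n\n#!/usr/bin/env bash\nset -euo pipefail\n\nSERVICE=\"my-llm\"  # assumed name; any --service-name works\n\n# 1. Verify credentials and quota before spending anything.\ntuna check --provider modal --gpu L4\n\n# 2. Deploy on the provider that just passed preflight.\ntuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --serverless-provider modal --service-name \"$SERVICE\"\n\n# 3. Confirm the deployment came up.\ntuna status --service-name \"$SERVICE\""
      },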
      {
        "title": "tuna show-gpus — Compare GPU Prices Across Providers",
        "body": "Show GPU pricing from all serverless providers, optionally including spot prices.\n\ntuna show-gpus [--gpu <GPU>] [--provider <provider>] [--spot]\n\nExamples:\n\n# Show all GPU prices across all providers\ntuna show-gpus\n\n# Show H100 pricing specifically\ntuna show-gpus --gpu H100\n\n# Show Modal's prices only\ntuna show-gpus --provider modal\n\n# Include AWS spot prices for comparison\ntuna show-gpus --spot"
      },
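      {
        "title": "Example — Per-Provider Price Loop (sketch)",
        "body": "A small sketch for eyeballing one GPU's price on each provider separately; the provider names come from the deploy options, and combining --gpu with --provider is assumed to work as the synopsis suggests.\n\n# Print each provider's pricing for one GPU, one at a time.\nfor provider in modal runpod cloudrun baseten azure cerebrium; do\n  echo \"== $provider ==\"\n  tuna show-gpus --gpu L4 --provider \"$provider\"\ndone"
      },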
      {
        "title": "tuna check — Validate Provider Setup (Preflight)",
        "body": "Run preflight checks to verify credentials, CLIs, and quotas for a provider before deploying.\n\ntuna check --provider <provider> [--gpu <GPU>]\n\nExamples:\n\n# Check Modal setup\ntuna check --provider modal\n\n# Check Azure with specific GPU\ntuna check --provider azure --gpu T4 --azure-subscription <id> --azure-resource-group <rg>"
      },
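      {
        "title": "Example — Preflight All Providers (sketch)",
        "body": "A sketch that runs preflight against every provider and summarizes the result; it assumes tuna check exits non-zero on failure, which the docs do not state explicitly.\n\n# Report each provider as ready or not configured.\nfor provider in modal runpod cloudrun baseten azure cerebrium; do\n  if tuna check --provider \"$provider\" > /dev/null 2>&1; then\n    echo \"$provider: ready\"\n  else\n    echo \"$provider: not configured\"\n  fi\ndone"
      },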
      {
        "title": "tuna status — Check Deployment Status",
        "body": "tuna status --service-name <name>"
      },
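      {
        "title": "Example — Polling Status (sketch)",
        "body": "A convenience sketch, assuming the standard watch utility and a service named my-llm:\n\n# Re-run status every 30 seconds until interrupted (Ctrl-C).\nwatch -n 30 tuna status --service-name my-llm"
      },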
      {
        "title": "tuna cost — Show Cost Savings Dashboard",
        "body": "tuna cost --service-name <name>"
      },
      {
        "title": "tuna list — List All Deployments",
        "body": "tuna list [--status active|destroyed|failed]"
      },
      {
        "title": "tuna destroy — Tear Down a Deployment",
        "body": "# Destroy a specific deployment\ntuna destroy --service-name <name>\n\n# Destroy all deployments\ntuna destroy --all"
      },
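      {
        "title": "Example — Audited Teardown (sketch)",
        "body": "A cautious teardown sketch: list what is active first, then destroy only after an explicit confirmation. Plain shell around the documented commands.\n\n# Show what is still running, then tear everything down.\ntuna list --status active\nread -r -p \"Destroy all of the above? [y/N] \" answer\n[ \"$answer\" = \"y\" ] && tuna destroy --all"
      },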
      {
        "title": "Provider Setup Guide",
        "body": "Each serverless provider needs its own credentials. Run tuna check --provider <name> to verify setup."
      },
      {
        "title": "Modal",
        "body": "pip install modal  # or: uv pip install tandemn-tuna[modal]\nmodal token new    # opens browser to authenticate\n\nNo environment variables needed — token is stored in Modal's config."
      },
      {
        "title": "RunPod",
        "body": "export RUNPOD_API_KEY=\"your-api-key\"\n\nGet your API key from the RunPod console."
      },
      {
        "title": "Google Cloud Run",
        "body": "pip install google-cloud-run  # or: uv pip install tandemn-tuna[cloudrun]\ngcloud auth login\ngcloud auth application-default login\n\nOptionally set GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION, or pass --gcp-project and --gcp-region."
      },
      {
        "title": "Baseten",
        "body": "pip install truss  # or: uv pip install tandemn-tuna[baseten]\nexport BASETEN_API_KEY=\"your-api-key\"\ntruss login --api-key $BASETEN_API_KEY"
      },
      {
        "title": "Azure Container Apps",
        "body": "pip install azure-mgmt-appcontainers azure-identity  # or: uv pip install tandemn-tuna[azure]\naz login\naz provider register --namespace Microsoft.App\naz provider register --namespace Microsoft.OperationalInsights\n\nPass --azure-subscription, --azure-resource-group, and --azure-region on deploy, or set AZURE_SUBSCRIPTION_ID, AZURE_RESOURCE_GROUP, AZURE_REGION env vars. First deploy creates a GPU environment (~30 min); subsequent deploys reuse it (~2 min). Use --azure-environment to specify an existing environment."
      },
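      {
        "title": "Example — Azure via Environment Variables (sketch)",
        "body": "A sketch of the env-var route described above; the values are placeholders, not real identifiers.\n\n# Placeholders only; substitute your own subscription and group.\nexport AZURE_SUBSCRIPTION_ID=\"00000000-0000-0000-0000-000000000000\"\nexport AZURE_RESOURCE_GROUP=\"my-rg\"\nexport AZURE_REGION=\"eastus\"\n\n# With the env vars set, the per-deploy Azure flags can be omitted.\ntuna check --provider azure --gpu T4\ntuna deploy --model Qwen/Qwen3-0.6B --gpu T4 --serverless-provider azure"
      },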
      {
        "title": "Cerebrium",
        "body": "pip install cerebrium  # or: uv pip install tandemn-tuna[cerebrium]\ncerebrium login\nexport CEREBRIUM_API_KEY=\"your-api-key\"\n\nNote: Hobby plan gives T4, A10, L4, L40S. A100 and H100 require Enterprise."
      },
      {
        "title": "Spot GPUs (AWS via SkyPilot)",
        "body": "Spot is included automatically in hybrid deploys. Just configure AWS:\n\naws configure  # set access key, secret key, region\n\nUse --serverless-only to skip spot if you don't have AWS set up."
      },
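      {
        "title": "Example — Verify AWS Before Hybrid (sketch)",
        "body": "A quick sketch for confirming AWS credentials before relying on spot; aws sts get-caller-identity is a standard AWS CLI call, and the serverless-only fallback is straight from the docs.\n\n# Confirm the AWS CLI can authenticate before a hybrid deploy.\naws sts get-caller-identity\n\n# If that fails, skip spot entirely.\ntuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --serverless-only"
      },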
      {
        "title": "Common Scenarios",
        "body": "When the user wants to deploy a model for quick testing:\nUse --serverless-only to skip spot setup. Pick a small GPU like L4 or T4. Example:\n\ntuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --serverless-only\n\nWhen the user wants the cheapest deployment:\nFirst run tuna show-gpus --spot to compare serverless and spot prices. Then deploy with hybrid mode (the default) to get spot savings. The auto provider selector already picks the cheapest serverless option for the chosen GPU.\n\nWhen the user wants to compare GPU prices:\n\ntuna show-gpus\ntuna show-gpus --gpu A100\ntuna show-gpus --spot  # includes AWS spot prices\n\nWhen the user asks \"which providers support H100?\" or a specific GPU:\n\ntuna show-gpus --gpu H100\n\nWhen the user wants to deploy on a specific provider:\nUse --serverless-provider <name>. Run tuna check --provider <name> first to verify credentials.\n\nWhen the user wants to deploy a large model (70B+):\nUse multiple GPUs with tensor parallelism:\n\ntuna deploy --model meta-llama/Llama-3-70b --gpu H100 --gpu-count 4 --tp-size 4\n\nWhen the user wants to check if their setup is ready:\n\ntuna check --provider modal\ntuna check --provider runpod\n\nWhen the user wants to see what's currently deployed:\n\ntuna list\ntuna list --status active\n\nWhen the user wants to tear down everything:\n\ntuna destroy --all"
      },
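      {
        "title": "Example — Cheapest-Deploy Walkthrough (sketch)",
        "body": "A sketch stringing the cheapest-deployment scenario together end to end; the GPU, model, and service name are placeholder choices.\n\n# 1. Compare serverless and spot prices for the GPU you need.\ntuna show-gpus --gpu L4 --spot\n\n# 2. Deploy in hybrid mode (the default); the auto selector picks the\n#    cheapest serverless provider, and spot takes over once ready.\ntuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --service-name cheap-llm\n\n# 3. Review accumulated savings.\ntuna cost --service-name cheap-llm"
      },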
      {
        "title": "Supported GPUs",
        "body": "All GPU types that tuna supports across its providers:\n\nGPUVRAMArchitectureAvailable OnT416 GBTuringModal, RunPod, Baseten, Azure, Cerebrium, SpotA1024 GBAmpereCerebriumA10G24 GBAmpereModal, Baseten, SpotA400016 GBAmpereRunPodA500024 GBAmpereRunPodRTX 409024 GBAdaRunPodL424 GBAdaModal, RunPod, Cloud Run, Baseten, Cerebrium, SpotA4048 GBAmpereRunPodA600048 GBAmpereRunPodL4048 GBAdaRunPodL40S48 GBAdaModal, RunPod, Cerebrium, SpotA100 (40 GB)40 GBAmpereModal, Cerebrium, SpotA100 (80 GB)80 GBAmpereModal, RunPod, Azure, Baseten, Cerebrium, SpotH10080 GBHopperModal, RunPod, Baseten, Cerebrium, SpotH200141 GBHopperSpotB200192 GBBlackwellModal, BasetenRTX PRO 600032 GBBlackwellCloud Run\n\nUse tuna show-gpus for current pricing across all providers."
      },
      {
        "title": "Error Handling",
        "body": "Preflight check fails (tuna check):\nThe output tells you exactly what's wrong — missing CLI tool, expired credentials, unregistered provider, insufficient quota. Fix the reported issue and re-run tuna check.\n\nDeploy fails:\n\nRun tuna check --provider <provider> --gpu <gpu> to validate the environment\nAdd -v for verbose logs: tuna deploy -v ...\nCheck tuna status --service-name <name> for deployment state\n\nSpot instance not available:\nSpot GPUs depend on cloud availability. If spot fails to launch, the serverless backend keeps serving — no downtime. Try a different region with --region, or use --serverless-only.\n\n\"No provider supports GPU X\":\nRun tuna show-gpus --gpu <GPU> to see which providers offer that GPU. Not all GPUs are available on all providers.\n\nAzure environment takes too long:\nFirst Azure deploy creates a GPU environment (~30 min). Subsequent deploys reuse it (~2 min). Use --azure-environment to specify an existing one."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/choprahetarth/tandemn-tuna",
    "publisherUrl": "https://clawhub.ai/choprahetarth/tandemn-tuna",
    "owner": "choprahetarth",
    "version": "0.0.1",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/tandemn-tuna",
    "downloadUrl": "https://openagent3.xyz/downloads/tandemn-tuna",
    "agentUrl": "https://openagent3.xyz/skills/tandemn-tuna/agent",
    "manifestUrl": "https://openagent3.xyz/skills/tandemn-tuna/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/tandemn-tuna/agent.md"
  }
}