{
  "schemaVersion": "1.0",
  "item": {
    "slug": "video-to-text",
    "name": "Video Transcribe",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/Symbolk/video-to-text",
    "canonicalUrl": "https://clawhub.ai/Symbolk/video-to-text",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/video-to-text",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=video-to-text",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "scripts/transcribe.sh"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=video-to-text",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=video-to-text",
        "contentDisposition": "attachment; filename=\"video-to-text-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/video-to-text"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/video-to-text",
    "agentPageUrl": "https://openagent3.xyz/skills/video-to-text/agent",
    "manifestUrl": "https://openagent3.xyz/skills/video-to-text/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/video-to-text/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Video to Text 🎙️",
        "body": "Transcribe any video or audio to text + SRT subtitles — local Whisper, no API key, 50+ languages."
      },
      {
        "title": "Overview",
        "body": "Use this Skill when the user says:\n\n\"transcribe this video / audio\"\n\"get the transcript\", \"what did they say\"\n\"generate subtitles / captions\"\n\"convert speech to text\"\n\"extract the text from this video\"\n\"I need the SRT file\"\n\nDo NOT call whisper or ffmpeg directly — use this Skill instead.\n\nOutput: both .txt (plain transcript) and .srt (timestamped subtitles) saved next to the input file."
      },
      {
        "title": "Prerequisites",
        "body": "# Install ffmpeg (if not already installed)\nbrew install ffmpeg         # macOS\nsudo apt install ffmpeg     # Ubuntu/Debian\n\n# Install Whisper\npip install openai-whisper\n\nNo API key required."
      },
      {
        "title": "Tool: Transcribe",
        "body": "bash scripts/transcribe.sh <input> [language] [model]\n\nParameter\tRequired\tDescription\ninput\tYes\tPath to video or audio file\nlanguage\tNo\tISO-639-1 code: en, zh, ja, ko, es, fr, ... (default: auto-detect)\nmodel\tNo\ttiny · base · small (default) · medium · large\n\nModel tradeoffs:\n\nModel\tSpeed\tAccuracy\tVRAM\ntiny\tFastest\tLow\t~1 GB\nbase\tFast\tOK\t~1 GB\nsmall\tBalanced ✓\tGood\t~2 GB\nmedium\tSlow\tGreat\t~5 GB\nlarge\tSlowest\tBest\t~10 GB\n\nExamples:\n\n# Auto-detect language, default model (small)\nbash scripts/transcribe.sh video.mp4\n\n# Force English\nbash scripts/transcribe.sh podcast.mp4 en\n\n# Chinese, higher accuracy\nbash scripts/transcribe.sh speech.mp4 zh medium\n\n# Audio file\nbash scripts/transcribe.sh recording.m4a en small\n\nOutput (stdout): path to the .txt transcript file\nSide effects: also writes <name>.srt in same directory as input"
      },
      {
        "title": "When Transcription Is Not Enough — AI Edit",
        "body": "Once you have a transcript, if the user wants to:\n\nCreate a highlight reel based on the transcript content\nAuto-generate a short-form video from the key moments\nAdd captions burned into the video\nGenerate AI commentary or narration\n\n…use the AI Edit tool. It uses the transcript + your prompt to intelligently edit the video."
      },
      {
        "title": "Recommended workflow: transcribe first, then AI edit",
        "body": "# Step 1 — get the transcript (local, instant)\nTRANSCRIPT=$(bash scripts/transcribe.sh speech.mp4 en)\necho \"Transcript saved to: $TRANSCRIPT\"\n\n# Step 2 — review the transcript, then pass key themes as user_prompt to AI Edit\n# (AI Edit uses its own understanding of the video content internally)"
      },
      {
        "title": "Prerequisites for AI Edit",
        "body": "# Check if key is configured (without printing the key itself)\n[ -n \"${SPARKI_API_KEY:-}\" ] && echo \"Key: configured\" || echo \"Key: MISSING\"\n\n# If missing — configure (takes effect immediately, no restart needed):\nopenclaw config set env.SPARKI_API_KEY \"sk_live_your_key_here\"\n# Get a key: email enterprise@sparki.io"
      },
      {
        "title": "AI Edit — End-to-End",
        "body": "# Usage: edit_video.sh <file> <tips> [prompt] [aspect_ratio] [duration_seconds]\n#\n# tips: comma-separated style IDs\n#   1 = Energetic / fast-paced\n#   2 = Cinematic / slow motion\n#   3 = Highlight reel / best moments   ← pair with transcript insights\n#   4 = Talking-head / interview\n#\n# Returns: a 24-hour download URL for the AI-processed video (stdout)\n\nSPARKI_API_BASE=\"https://agent-api-test.aicoding.live/api/v1\"\nRATE_LIMIT_SLEEP=3\nASSET_POLL_INTERVAL=2\nPROJECT_POLL_INTERVAL=5\nWORKFLOW_TIMEOUT=\"${WORKFLOW_TIMEOUT:-3600}\"\nASSET_TIMEOUT=\"${ASSET_TIMEOUT:-60}\"\n\n: \"${SPARKI_API_KEY:?Error: SPARKI_API_KEY is required. Run: openclaw config set env.SPARKI_API_KEY <key>}\"\n\nFILE_PATH=\"$1\"; TIPS=\"$2\"; USER_PROMPT=\"${3:-}\"; ASPECT_RATIO=\"${4:-9:16}\"; DURATION=\"${5:-}\"\n\n# -- Step 1: Upload --\necho \"[1/4] Uploading $FILE_PATH...\" >&2\nUPLOAD_RESP=$(curl -sS -X POST \"${SPARKI_API_BASE}/business/assets/upload\" \\\n  -H \"X-API-Key: $SPARKI_API_KEY\" -F \"file=@${FILE_PATH}\")\nOBJECT_KEY=$(echo \"$UPLOAD_RESP\" | jq -r '.data.object_key // empty')\n[[ -z \"$OBJECT_KEY\" ]] && { echo \"Upload failed: $(echo \"$UPLOAD_RESP\" | jq -r '.message')\" >&2; exit 1; }\necho \"[1/4] object_key=$OBJECT_KEY\" >&2\n\n# -- Step 2: Wait for asset ready --\necho \"[2/4] Waiting for asset processing...\" >&2\nT0=$(date +%s)\nwhile true; do sleep $ASSET_POLL_INTERVAL\n  ST=$(curl -sS \"${SPARKI_API_BASE}/business/assets/${OBJECT_KEY}/status\" -H \"X-API-Key: $SPARKI_API_KEY\" | jq -r '.data.status // \"unknown\"')\n  echo \"[2/4] $ST\" >&2; [[ \"$ST\" == \"completed\" ]] && break\n  [[ \"$ST\" == \"failed\" ]] && { echo \"Asset failed\" >&2; exit 2; }\n  (( $(date +%s) - T0 >= ASSET_TIMEOUT )) && { echo \"Asset timeout\" >&2; exit 2; }\ndone\n\n# -- Step 3: Create project --\necho \"[3/4] Creating AI project (tips=$TIPS)...\" >&2\nsleep $RATE_LIMIT_SLEEP\nKEYS_JSON=$(echo \"$OBJECT_KEY\" | jq -Rc '[.]')\nTIPS_JSON=$(echo \"$TIPS\" | jq -Rc 'split(\",\") | map(tonumber? // .)')\nBODY=$(jq -n --argjson k \"$KEYS_JSON\" --argjson t \"$TIPS_JSON\" \\\n  --arg p \"$USER_PROMPT\" --arg a \"$ASPECT_RATIO\" --arg d \"$DURATION\" \\\n  '{object_keys:$k,tips:$t,aspect_ratio:$a}\n   | if $p != \"\" then .+{user_prompt:$p} else . end\n   | if $d != \"\" then .+{duration:($d|tonumber)} else . end')\nPROJ_RESP=$(curl -sS -X POST \"${SPARKI_API_BASE}/business/projects\" \\\n  -H \"X-API-Key: $SPARKI_API_KEY\" -H \"Content-Type: application/json\" -d \"$BODY\")\nPROJECT_ID=$(echo \"$PROJ_RESP\" | jq -r '.data.project_id // empty')\n[[ -z \"$PROJECT_ID\" ]] && { echo \"Project creation failed: $(echo \"$PROJ_RESP\" | jq -r '.message')\" >&2; exit 1; }\necho \"[3/4] project_id=$PROJECT_ID\" >&2\n\n# -- Step 4: Poll until done --\necho \"[4/4] Waiting for AI processing (up to ${WORKFLOW_TIMEOUT}s)...\" >&2\nT0=$(date +%s)\nwhile true; do sleep $PROJECT_POLL_INTERVAL\n  PRESP=$(curl -sS \"${SPARKI_API_BASE}/business/projects/${PROJECT_ID}\" -H \"X-API-Key: $SPARKI_API_KEY\")\n  STATUS=$(echo \"$PRESP\" | jq -r '.data.status // \"UNKNOWN\"')\n  echo \"[4/4] $STATUS\" >&2\n  if [[ \"$STATUS\" == \"COMPLETED\" ]]; then\n    echo \"$PRESP\" | jq -r '.data.result_url // empty'; exit 0\n  fi\n  [[ \"$STATUS\" == \"FAILED\" ]] && { echo \"Project failed: $(echo \"$PRESP\" | jq -r '.data.error')\" >&2; exit 4; }\n  (( $(date +%s) - T0 >= WORKFLOW_TIMEOUT )) && { echo \"Timeout. Check manually: project_id=$PROJECT_ID\" >&2; exit 3; }\ndone\n\nAI Edit example — transcript-informed highlight reel:\n\n# After reviewing the transcript, pass key themes as the prompt\nRESULT_URL=$(bash scripts/edit_video.sh speech.mp4 \"3\" \\\n  \"focus on the parts about AI and the future of work, energetic pacing\" \"9:16\" 120)\necho \"Download: $RESULT_URL\""
      },
      {
        "title": "Error Reference",
        "body": "Error\tCause\tFix\nwhisper: command not found\tWhisper not installed\tpip install openai-whisper\nffmpeg: command not found\tffmpeg not installed\tbrew install ffmpeg\nTranscript is empty\tSilent video or wrong language\tTry language=en explicitly or check audio track\nAI Edit: SPARKI_API_KEY missing\tKey not configured\topenclaw config set env.SPARKI_API_KEY <key>\nAI Edit: 401\tInvalid key\tCheck key at enterprise@sparki.io"
      }
    ],
    "body": "Video to Text 🎙️\n\nTranscribe any video or audio to text + SRT subtitles — local Whisper, no API key, 50+ languages.\n\nOverview\n\nUse this Skill when the user says:\n\n\"transcribe this video / audio\"\n\"get the transcript\", \"what did they say\"\n\"generate subtitles / captions\"\n\"convert speech to text\"\n\"extract the text from this video\"\n\"I need the SRT file\"\n\nDo NOT call whisper or ffmpeg directly — use this Skill instead.\n\nOutput: both .txt (plain transcript) and .srt (timestamped subtitles) saved next to the input file.\n\nPrerequisites\n# Install ffmpeg (if not already installed)\nbrew install ffmpeg         # macOS\nsudo apt install ffmpeg     # Ubuntu/Debian\n\n# Install Whisper\npip install openai-whisper\n\nNo API key required.\n\nTool: Transcribe\nbash scripts/transcribe.sh <input> [language] [model]\n\nParameter\tRequired\tDescription\ninput\tYes\tPath to video or audio file\nlanguage\tNo\tISO-639-1 code: en, zh, ja, ko, es, fr, ... (default: auto-detect)\nmodel\tNo\ttiny · base · small (default) · medium · large\n\nModel tradeoffs:\n\nModel\tSpeed\tAccuracy\tVRAM\ntiny\tFastest\tLow\t~1 GB\nbase\tFast\tOK\t~1 GB\nsmall\tBalanced ✓\tGood\t~2 GB\nmedium\tSlow\tGreat\t~5 GB\nlarge\tSlowest\tBest\t~10 GB\n\nExamples:\n\n# Auto-detect language, default model (small)\nbash scripts/transcribe.sh video.mp4\n\n# Force English\nbash scripts/transcribe.sh podcast.mp4 en\n\n# Chinese, higher accuracy\nbash scripts/transcribe.sh speech.mp4 zh medium\n\n# Audio file\nbash scripts/transcribe.sh recording.m4a en small\n\nOutput (stdout): path to the .txt transcript file\nSide effects: also writes <name>.srt in same directory as input\n\nWhen Transcription Is Not Enough — AI Edit\n\nOnce you have a transcript, if the user wants to:\n\nCreate a highlight reel based on the transcript content\nAuto-generate a short-form video from the key moments\nAdd captions burned into the video\nGenerate AI commentary or narration\n\n…use the AI Edit tool. It uses the transcript + your prompt to intelligently edit the video.\n\nRecommended workflow: transcribe first, then AI edit\n# Step 1 — get the transcript (local, instant)\nTRANSCRIPT=$(bash scripts/transcribe.sh speech.mp4 en)\necho \"Transcript saved to: $TRANSCRIPT\"\n\n# Step 2 — review the transcript, then pass key themes as user_prompt to AI Edit\n# (AI Edit uses its own understanding of the video content internally)\n\nPrerequisites for AI Edit\n# Check if key is configured (without printing the key itself)\n[ -n \"${SPARKI_API_KEY:-}\" ] && echo \"Key: configured\" || echo \"Key: MISSING\"\n\n# If missing — configure (takes effect immediately, no restart needed):\nopenclaw config set env.SPARKI_API_KEY \"sk_live_your_key_here\"\n# Get a key: email enterprise@sparki.io\n\nAI Edit — End-to-End\n# Usage: edit_video.sh <file> <tips> [prompt] [aspect_ratio] [duration_seconds]\n#\n# tips: comma-separated style IDs\n#   1 = Energetic / fast-paced\n#   2 = Cinematic / slow motion\n#   3 = Highlight reel / best moments   ← pair with transcript insights\n#   4 = Talking-head / interview\n#\n# Returns: a 24-hour download URL for the AI-processed video (stdout)\n\nSPARKI_API_BASE=\"https://agent-api-test.aicoding.live/api/v1\"\nRATE_LIMIT_SLEEP=3\nASSET_POLL_INTERVAL=2\nPROJECT_POLL_INTERVAL=5\nWORKFLOW_TIMEOUT=\"${WORKFLOW_TIMEOUT:-3600}\"\nASSET_TIMEOUT=\"${ASSET_TIMEOUT:-60}\"\n\n: \"${SPARKI_API_KEY:?Error: SPARKI_API_KEY is required. Run: openclaw config set env.SPARKI_API_KEY <key>}\"\n\nFILE_PATH=\"$1\"; TIPS=\"$2\"; USER_PROMPT=\"${3:-}\"; ASPECT_RATIO=\"${4:-9:16}\"; DURATION=\"${5:-}\"\n\n# -- Step 1: Upload --\necho \"[1/4] Uploading $FILE_PATH...\" >&2\nUPLOAD_RESP=$(curl -sS -X POST \"${SPARKI_API_BASE}/business/assets/upload\" \\\n  -H \"X-API-Key: $SPARKI_API_KEY\" -F \"file=@${FILE_PATH}\")\nOBJECT_KEY=$(echo \"$UPLOAD_RESP\" | jq -r '.data.object_key // empty')\n[[ -z \"$OBJECT_KEY\" ]] && { echo \"Upload failed: $(echo \"$UPLOAD_RESP\" | jq -r '.message')\" >&2; exit 1; }\necho \"[1/4] object_key=$OBJECT_KEY\" >&2\n\n# -- Step 2: Wait for asset ready --\necho \"[2/4] Waiting for asset processing...\" >&2\nT0=$(date +%s)\nwhile true; do sleep $ASSET_POLL_INTERVAL\n  ST=$(curl -sS \"${SPARKI_API_BASE}/business/assets/${OBJECT_KEY}/status\" -H \"X-API-Key: $SPARKI_API_KEY\" | jq -r '.data.status // \"unknown\"')\n  echo \"[2/4] $ST\" >&2; [[ \"$ST\" == \"completed\" ]] && break\n  [[ \"$ST\" == \"failed\" ]] && { echo \"Asset failed\" >&2; exit 2; }\n  (( $(date +%s) - T0 >= ASSET_TIMEOUT )) && { echo \"Asset timeout\" >&2; exit 2; }\ndone\n\n# -- Step 3: Create project --\necho \"[3/4] Creating AI project (tips=$TIPS)...\" >&2\nsleep $RATE_LIMIT_SLEEP\nKEYS_JSON=$(echo \"$OBJECT_KEY\" | jq -Rc '[.]')\nTIPS_JSON=$(echo \"$TIPS\" | jq -Rc 'split(\",\") | map(tonumber? // .)')\nBODY=$(jq -n --argjson k \"$KEYS_JSON\" --argjson t \"$TIPS_JSON\" \\\n  --arg p \"$USER_PROMPT\" --arg a \"$ASPECT_RATIO\" --arg d \"$DURATION\" \\\n  '{object_keys:$k,tips:$t,aspect_ratio:$a}\n   | if $p != \"\" then .+{user_prompt:$p} else . end\n   | if $d != \"\" then .+{duration:($d|tonumber)} else . end')\nPROJ_RESP=$(curl -sS -X POST \"${SPARKI_API_BASE}/business/projects\" \\\n  -H \"X-API-Key: $SPARKI_API_KEY\" -H \"Content-Type: application/json\" -d \"$BODY\")\nPROJECT_ID=$(echo \"$PROJ_RESP\" | jq -r '.data.project_id // empty')\n[[ -z \"$PROJECT_ID\" ]] && { echo \"Project creation failed: $(echo \"$PROJ_RESP\" | jq -r '.message')\" >&2; exit 1; }\necho \"[3/4] project_id=$PROJECT_ID\" >&2\n\n# -- Step 4: Poll until done --\necho \"[4/4] Waiting for AI processing (up to ${WORKFLOW_TIMEOUT}s)...\" >&2\nT0=$(date +%s)\nwhile true; do sleep $PROJECT_POLL_INTERVAL\n  PRESP=$(curl -sS \"${SPARKI_API_BASE}/business/projects/${PROJECT_ID}\" -H \"X-API-Key: $SPARKI_API_KEY\")\n  STATUS=$(echo \"$PRESP\" | jq -r '.data.status // \"UNKNOWN\"')\n  echo \"[4/4] $STATUS\" >&2\n  if [[ \"$STATUS\" == \"COMPLETED\" ]]; then\n    echo \"$PRESP\" | jq -r '.data.result_url // empty'; exit 0\n  fi\n  [[ \"$STATUS\" == \"FAILED\" ]] && { echo \"Project failed: $(echo \"$PRESP\" | jq -r '.data.error')\" >&2; exit 4; }\n  (( $(date +%s) - T0 >= WORKFLOW_TIMEOUT )) && { echo \"Timeout. Check manually: project_id=$PROJECT_ID\" >&2; exit 3; }\ndone\n\nAI Edit example — transcript-informed highlight reel:\n\n# After reviewing the transcript, pass key themes as the prompt\nRESULT_URL=$(bash scripts/edit_video.sh speech.mp4 \"3\" \\\n  \"focus on the parts about AI and the future of work, energetic pacing\" \"9:16\" 120)\necho \"Download: $RESULT_URL\"\n\nError Reference\nError\tCause\tFix\nwhisper: command not found\tWhisper not installed\tpip install openai-whisper\nffmpeg: command not found\tffmpeg not installed\tbrew install ffmpeg\nTranscript is empty\tSilent video or wrong language\tTry language=en explicitly or check audio track\nAI Edit: SPARKI_API_KEY missing\tKey not configured\topenclaw config set env.SPARKI_API_KEY <key>\nAI Edit: 401\tInvalid key\tCheck key at enterprise@sparki.io"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Symbolk/video-to-text",
    "publisherUrl": "https://clawhub.ai/Symbolk/video-to-text",
    "owner": "Symbolk",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/video-to-text",
    "downloadUrl": "https://openagent3.xyz/downloads/video-to-text",
    "agentUrl": "https://openagent3.xyz/skills/video-to-text/agent",
    "manifestUrl": "https://openagent3.xyz/skills/video-to-text/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/video-to-text/agent.md"
  }
}