{
  "schemaVersion": "1.0",
  "item": {
    "slug": "vea",
    "name": "Video Editing Agent (VEA)",
    "source": "tencent",
    "type": "skill",
    "category": "AI",
    "sourceUrl": "https://clawhub.ai/shawnshenopeninterx/vea",
    "canonicalUrl": "https://clawhub.ai/shawnshenopeninterx/vea",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/vea",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=vea",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "references/api.md",
      "references/config.md",
      "scripts/add_music.sh",
      "scripts/start_server.sh",
      "scripts/vea_helper.sh"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=vea",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=vea",
        "contentDisposition": "attachment; filename=\"vea-1.1.2.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/vea"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/vea",
    "agentPageUrl": "https://openagent3.xyz/skills/vea/agent",
    "manifestUrl": "https://openagent3.xyz/skills/vea/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/vea/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Installation",
        "body": "VEA is open source! Get it from GitHub:\n\n# Clone the repo\ngit clone https://github.com/Memories-ai-labs/vea-open-source.git\ncd vea-open-source\n\n# Install uv package manager\ncurl -LsSf https://astral.sh/uv/install.sh | sh\n\n# Install dependencies\nuv sync\nsource .venv/bin/activate\n\n# Copy config and add your API keys\ncp config.example.json config.json\n\n📄 Paper: https://arxiv.org/abs/2509.16811\n💻 Code: https://github.com/Memories-ai-labs/vea-open-source"
      },
      {
        "title": "Requirements",
        "body": "Python 3.11+\nFFmpeg - Must be installed on system\nuv - Package manager (installed above)\nAPI Keys (in config.json):\n\nMEMORIES_API_KEY (required) - Video indexing & comprehension - Get at https://memories.ai/app/service/key\nGOOGLE_API_KEY (required) - Script generation - Google Cloud Console\nELEVENLABS_API_KEY (required) - TTS narration & subtitles\nSOUNDSTRIPE_KEY (optional) - Background music selection"
      },
      {
        "title": "Install FFmpeg",
        "body": "Ubuntu/Debian: sudo apt install ffmpeg\nmacOS: brew install ffmpeg\nWindows: download from ffmpeg.org"
      },
      {
        "title": "Start Server",
        "body": "gcloud auth application-default login  # Authenticate GCP\nsource .venv/bin/activate\npython -m src.app\n\nServer runs at http://localhost:8000"
      },
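      {
        "title": "Verify the Server Is Up",
        "body": "A quick sanity check (our suggestion, not from the upstream docs): FastAPI apps serve interactive API docs at /docs by default, so once the server starts you can probe that route. If VEA disables the docs route, substitute any documented endpoint instead.\n\n# Expect HTTP 200 if the server is listening\ncurl -s -o /dev/null -w \"%{http_code}\\n\" http://localhost:8000/docs"
      },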
      {
        "title": "Privacy Note",
        "body": "Videos processed locally by VEA server\nVideo frames sent to Memories.ai for AI comprehension\nElevenLabs receives text for TTS narration\nAll intermediate files stored locally in data/outputs/"
      },
      {
        "title": "Video Editing Agent (VEA)",
        "body": "Local video editing service at http://localhost:8000. Runs from ~/vea."
      },
      {
        "title": "⚠️ User Interaction Flow (MUST FOLLOW)",
        "body": "Before processing any video edit request, show config options and wait for confirmation:\n\n📹 VEA Video Edit Configuration\n\n🎬 Source Video: [video path/name]\n📝 Edit Request: [user's prompt]\n\nPlease confirm the following settings:\n┌──────────────────────────┬────────┬──────────────────────────────┐\n│ Setting                  │ Value  │ Description                  │\n├──────────────────────────┼────────┼──────────────────────────────┤\n│ 🔊 Original Audio        │ ❌ OFF │ Keep original video sound    │\n│ 🎤 Narration             │ ✅ ON  │ AI-generated voiceover       │\n│ 🎵 Background Music      │ ✅ ON  │ Auto-select from Soundstripe │\n│ 📝 Subtitles             │ ✅ ON  │ Auto-generate and burn-in    │\n│ 📐 Aspect Ratio          │ 16:9   │ 16:9 / 9:16 vertical / 1:1   │\n│ 🎼 Snap to Beat          │ ❌ OFF │ Sync cuts to music beats     │\n└──────────────────────────┴────────┴──────────────────────────────┘\n\nReply \"confirm\" to start editing, or tell me which settings to adjust.\n\nDefault Settings:\n\noriginal_audio: false (mute original, use narration instead)\nnarration: true (enable AI voiceover)\nmusic: true (enable background music)\nsubtitles: true (enable subtitles)\naspect_ratio: 1.78 (16:9 landscape)\nsnap_to_beat: false (no beat sync)\n\nAspect Ratio Options:\n\n16:9 (1.78) — Landscape, YouTube\n9:16 (0.5625) — Vertical, TikTok/Reels\n1:1 (1.0) — Square, Instagram"
      },
      {
        "title": "Quick Start",
        "body": "# Start VEA server (use tmux for long tasks)\ncd ~/vea && source .venv/bin/activate && python src/app.py"
      },
      {
        "title": "1. Index a Video (Required First Step)",
        "body": "Before any editing, index the video to enable AI comprehension:\n\ncurl -X POST \"http://localhost:8000/video-edit/v1/index\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"blob_path\": \"data/videos/PROJECT_NAME/video.mp4\"}'\n\nCreates ~/vea/data/indexing/PROJECT_NAME/media_indexing.json."
      },
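      {
        "title": "Verify Indexing Output",
        "body": "An optional check (our suggestion, not from the upstream docs): before moving on to editing, confirm the index file was written and parses as valid JSON.\n\ntest -f ~/vea/data/indexing/PROJECT_NAME/media_indexing.json \\\n  && jq empty ~/vea/data/indexing/PROJECT_NAME/media_indexing.json \\\n  && echo \"index OK\""
      },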
      {
        "title": "2. Generate Highlight Reel",
        "body": "curl -X POST \"http://localhost:8000/video-edit/v1/flexible_respond\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"blob_path\": \"data/videos/PROJECT_NAME/video.mp4\",\n    \"prompt\": \"Create a 1-minute highlight reel of the best moments\",\n    \"video_response\": true,\n    \"original_audio\": false,\n    \"music\": true,\n    \"narration\": true,\n    \"aspect_ratio\": 1.78,\n    \"subtitles\": true\n  }'\n\nParameters:\n\nvideo_response: true — Generate video output (vs text-only)\noriginal_audio: false — Mute original audio, use narration\nmusic: true — Add background music (requires Soundstripe API)\nnarration: true — Generate AI voiceover (ElevenLabs)\nsubtitles: true — Burn subtitles into video\naspect_ratio — 1.78 (16:9), 1.0 (square), 0.5625 (9:16 vertical)"
      },
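      {
        "title": "Variant: Vertical (9:16) Reel",
        "body": "A sketch of the same flexible_respond call adapted for vertical platforms: only the documented aspect_ratio value changes, plus snap_to_beat, which appears in the default settings list and which we assume this endpoint also accepts.\n\ncurl -X POST \"http://localhost:8000/video-edit/v1/flexible_respond\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"blob_path\": \"data/videos/PROJECT_NAME/video.mp4\",\n    \"prompt\": \"Create a 30-second vertical cut for social media\",\n    \"video_response\": true,\n    \"original_audio\": false,\n    \"music\": true,\n    \"narration\": true,\n    \"aspect_ratio\": 0.5625,\n    \"subtitles\": true,\n    \"snap_to_beat\": true\n  }'"
      },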
      {
        "title": "3. Manual Video Assembly",
        "body": "For more control, use the helper scripts:\n\n# Add background music to existing video\npython ~/vea/scripts/add_soundstripe_music.py\n\n# Generate video with subtitles\npython ~/vea/scripts/add_music_subtitles.py"
      },
      {
        "title": "Directory Structure",
        "body": "~/vea/\n├── data/\n│   ├── videos/PROJECT_NAME/      # Source videos\n│   ├── indexing/PROJECT_NAME/    # media_indexing.json\n│   └── outputs/PROJECT_NAME/     # Final outputs\n│       ├── PROJECT_NAME.mp4      # Final video\n│       ├── clip_plan.json        # Clip timestamps + narration\n│       ├── narrations/           # TTS audio files\n│       ├── subtitles/            # SRT files\n│       └── music/                # Background music\n├── config.json                   # API keys configuration\n└── src/app.py                    # FastAPI server"
      },
      {
        "title": "API Keys (in config.json)",
        "body": "Key | Service | Purpose | Required\nMEMORIES_API_KEY | Memories.ai | Video indexing & comprehension | ✅ Yes\nGOOGLE_API_KEY | Gemini | Script generation | ✅ Yes\nELEVENLABS_API_KEY | ElevenLabs | TTS narration, STT subtitles | ✅ Yes\nSOUNDSTRIPE_KEY | Soundstripe | Background music selection | Optional"
      },
      {
        "title": "Common Issues",
        "body": "\"ViNet assets not found\" — the ViNet model assets are missing, so dynamic cropping cannot run. Set enable_dynamic_cropping: false in config.json to silence the warning.\n\nSubprocess fails from the API but works manually — run the server inside tmux so it keeps the interactive shell environment.\n\nMusic download returns 401/403 — check that the Soundstripe API key is valid.\n\nClip timestamps wrong — set original_audio: true to enable timestamp refinement via transcription."
      },
      {
        "title": "Manual Music Addition",
        "body": "When Soundstripe fails, manually download and mix:\n\n# Download from Soundstripe API\nSOUNDSTRIPE_KEY=$(jq -r '.api_keys.SOUNDSTRIPE_KEY' ~/vea/config.json)\ncurl -s \"https://api.soundstripe.com/v1/songs/TRACK_ID\" \\\n  -H \"Authorization: Token $SOUNDSTRIPE_KEY\" | jq '.included[0].attributes.versions.mp3'\n\n# Mix with ffmpeg (15-20% music volume)\nffmpeg -y -i video.mp4 -i music.mp3 \\\n  -filter_complex \"[1:a]volume=0.18,afade=t=out:st=70:d=4[m];[0:a][m]amix=inputs=2:duration=first[a]\" \\\n  -map 0:v -map \"[a]\" -c:v copy -c:a aac output.mp4"
      },
      {
        "title": "References",
        "body": "API Documentation (references/api.md) — full endpoint specs\nConfig Schema (references/config.md) — configuration options"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/shawnshenopeninterx/vea",
    "publisherUrl": "https://clawhub.ai/shawnshenopeninterx/vea",
    "owner": "shawnshenopeninterx",
    "version": "1.1.2",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/vea",
    "downloadUrl": "https://openagent3.xyz/downloads/vea",
    "agentUrl": "https://openagent3.xyz/skills/vea/agent",
    "manifestUrl": "https://openagent3.xyz/skills/vea/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/vea/agent.md"
  }
}