Tencent SkillHub · AI

Video Editing Agent (VEA)

Video Editing Agent (VEA) for automated video processing, highlight generation, and editing. Use when asked to index videos, create highlight reels, generate...



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: SKILL.md, references/api.md, references/config.md, scripts/add_music.sh, scripts/start_server.sh, scripts/vea_helper.sh

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.1.2

Documentation

Primary doc: SKILL.md (16 sections)

Installation

VEA is open source! Get it from GitHub:

```shell
# Clone the repo
git clone https://github.com/Memories-ai-labs/vea-open-source.git
cd vea-open-source

# Install the uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync
source .venv/bin/activate

# Copy the config template and add your API keys
cp config.example.json config.json
```

📄 Paper: https://arxiv.org/abs/2509.16811
💻 Code: https://github.com/Memories-ai-labs/vea-open-source

Requirements

  • Python 3.11+
  • FFmpeg: must be installed on the system
  • uv: package manager (installed above)
  • API keys (in config.json):
      • MEMORIES_API_KEY (required): video indexing & comprehension; get at https://memories.ai/app/service/key
      • GOOGLE_API_KEY (required): script generation; get from the Google Cloud Console
      • ELEVENLABS_API_KEY (required): TTS narration & subtitles
      • SOUNDSTRIPE_KEY (optional): background music selection
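For reference, a config.json skeleton consistent with the key names above might look like the following. This is a sketch: the `api_keys` nesting is inferred from the jq query shown later on this page, and the authoritative schema lives in references/config.md.

```json
{
  "api_keys": {
    "MEMORIES_API_KEY": "your-memories-key",
    "GOOGLE_API_KEY": "your-google-key",
    "ELEVENLABS_API_KEY": "your-elevenlabs-key",
    "SOUNDSTRIPE_KEY": "your-soundstripe-key"
  }
}
```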

Install FFmpeg

| OS | Command |
| --- | --- |
| Ubuntu/Debian | `sudo apt install ffmpeg` |
| macOS | `brew install ffmpeg` |
| Windows | Download from ffmpeg.org |
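Before starting the server, it can help to confirm the prerequisites above are actually available; a minimal preflight sketch (`check_prereqs` is a hypothetical helper, not part of VEA):

```python
import shutil
import sys

def check_prereqs() -> list[str]:
    """Return a list of missing prerequisites (empty if everything is present)."""
    missing = []
    # VEA requires Python 3.11 or newer.
    if sys.version_info < (3, 11):
        missing.append("Python 3.11+")
    # FFmpeg and uv must both be on PATH.
    for tool in ("ffmpeg", "uv"):
        if shutil.which(tool) is None:
            missing.append(tool)
    return missing

if __name__ == "__main__":
    problems = check_prereqs()
    print("Missing: " + ", ".join(problems) if problems else "All prerequisites found")
```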

Start Server

```shell
gcloud auth application-default login   # Authenticate with GCP
source .venv/bin/activate
python -m src.app
```

Server runs at http://localhost:8000.

Privacy Note

  • Videos are processed locally by the VEA server.
  • Video frames are sent to Memories.ai for AI comprehension.
  • ElevenLabs receives text for TTS narration.
  • All intermediate files are stored locally in data/outputs/.

Video Editing Agent (VEA)

Local video editing service at http://localhost:8000. Runs from ~/vea.

⚠️ User Interaction Flow (MUST FOLLOW)

Before processing any video edit request, show the config options and wait for confirmation:

```
📹 VEA Video Edit Configuration

🎬 Source Video: [video path/name]
📝 Edit Request: [user's prompt]

Please confirm the following settings:

| Setting             | Value  | Description                  |
|---------------------|--------|------------------------------|
| 🔊 Original Audio   | ❌ OFF | Keep original video sound    |
| 🎤 Narration        | ✅ ON  | AI-generated voiceover       |
| 🎵 Background Music | ✅ ON  | Auto-select from Soundstripe |
| 📝 Subtitles        | ✅ ON  | Auto-generate and burn-in    |
| 📐 Aspect Ratio     | 16:9   | 16:9 / 9:16 vertical / 1:1   |
| 🎼 Snap to Beat     | ❌ OFF | Sync cuts to music beats     |

Reply "confirm" to start editing, or tell me which settings to adjust.
```

Default settings:
  • original_audio: false (mute original, use narration instead)
  • narration: true (enable AI voiceover)
  • music: true (enable background music)
  • subtitles: true (enable subtitles)
  • aspect_ratio: 1.78 (16:9 landscape)
  • snap_to_beat: false (no beat sync)

Aspect ratio options:
  • 16:9 (1.78): landscape, YouTube
  • 9:16 (0.5625): vertical, TikTok/Reels
  • 1:1 (1.0): square, Instagram
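The defaults and aspect-ratio labels above can be captured in a small lookup; a sketch (`DEFAULTS`, `ASPECT_RATIOS`, and `resolve_settings` are illustrative names, not part of the VEA API):

```python
# Default edit settings, mirroring the table above.
DEFAULTS = {
    "original_audio": False,
    "narration": True,
    "music": True,
    "subtitles": True,
    "aspect_ratio": 1.78,
    "snap_to_beat": False,
}

# Human-readable label -> numeric ratio the API expects.
ASPECT_RATIOS = {"16:9": 1.78, "9:16": 0.5625, "1:1": 1.0}

def resolve_settings(aspect_label: str = "16:9", **overrides) -> dict:
    """Merge user overrides into the defaults, translating the aspect label."""
    if aspect_label not in ASPECT_RATIOS:
        raise ValueError(f"unknown aspect ratio: {aspect_label}")
    settings = {**DEFAULTS, "aspect_ratio": ASPECT_RATIOS[aspect_label]}
    settings.update(overrides)
    return settings
```

For example, `resolve_settings("9:16", original_audio=True)` keeps the other defaults while switching to vertical output with the original soundtrack.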

Quick Start

```shell
# Start VEA server (use tmux for long tasks)
cd ~/vea && source .venv/bin/activate && python src/app.py
```

1. Index a Video (Required First Step)

Before any editing, index the video to enable AI comprehension:

```shell
curl -X POST "http://localhost:8000/video-edit/v1/index" \
  -H "Content-Type: application/json" \
  -d '{"blob_path": "data/videos/PROJECT_NAME/video.mp4"}'
```

Creates ~/vea/data/indexing/PROJECT_NAME/media_indexing.json.
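The indexing call can be wrapped in a couple of small helpers, e.g. to predict where the index file will land for a given source video; a sketch (both function names are hypothetical, and the path layout follows the convention documented above):

```python
from pathlib import PurePosixPath

def build_index_request(blob_path: str) -> dict:
    """JSON body for POST /video-edit/v1/index, as in the curl example above."""
    return {"blob_path": blob_path}

def indexing_output(blob_path: str, vea_root: str = "~/vea") -> str:
    """Predict where media_indexing.json lands for a blob path shaped like
    data/videos/PROJECT_NAME/video.mp4."""
    project = PurePosixPath(blob_path).parts[2]  # data/videos/<PROJECT>/...
    return f"{vea_root}/data/indexing/{project}/media_indexing.json"
```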

2. Generate Highlight Reel

```shell
curl -X POST "http://localhost:8000/video-edit/v1/flexible_respond" \
  -H "Content-Type: application/json" \
  -d '{
    "blob_path": "data/videos/PROJECT_NAME/video.mp4",
    "prompt": "Create a 1-minute highlight reel of the best moments",
    "video_response": true,
    "original_audio": false,
    "music": true,
    "narration": true,
    "aspect_ratio": 1.78,
    "subtitles": true
  }'
```

Parameters:
  • video_response: true (generate video output rather than a text-only answer)
  • original_audio: false (mute the original audio; use narration)
  • music: true (add background music; requires the Soundstripe API)
  • narration: true (generate AI voiceover via ElevenLabs)
  • subtitles: true (burn subtitles into the video)
  • aspect_ratio: 1.78 (16:9), 1.0 (square), or 0.5625 (9:16 vertical)
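The same request can be issued from Python with only the standard library; a sketch (`build_payload` and `submit` are illustrative helpers, and the local server must be running for `submit` to succeed):

```python
import json
import urllib.request

VEA_URL = "http://localhost:8000/video-edit/v1/flexible_respond"  # local server

def build_payload(blob_path: str, prompt: str, **options) -> bytes:
    """Serialize the request body shown in the curl example to JSON bytes."""
    body = {"blob_path": blob_path, "prompt": prompt, "video_response": True}
    body.update(options)  # e.g. music=False, aspect_ratio=0.5625
    return json.dumps(body).encode()

def submit(payload: bytes) -> dict:
    """POST the edit request to the local VEA server and return its JSON reply."""
    req = urllib.request.Request(
        VEA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```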

3. Manual Video Assembly

For more control, use the helper scripts:

```shell
# Add background music to an existing video
python ~/vea/scripts/add_soundstripe_music.py

# Generate video with subtitles
python ~/vea/scripts/add_music_subtitles.py
```

Directory Structure

```
~/vea/
├── data/
│   ├── videos/PROJECT_NAME/        # Source videos
│   ├── indexing/PROJECT_NAME/      # media_indexing.json
│   └── outputs/PROJECT_NAME/       # Final outputs
│       ├── PROJECT_NAME.mp4        # Final video
│       ├── clip_plan.json          # Clip timestamps + narration
│       ├── narrations/             # TTS audio files
│       ├── subtitles/              # SRT files
│       └── music/                  # Background music
├── config.json                     # API keys configuration
└── src/app.py                      # FastAPI server
```

API Keys (in config.json)

| Key | Service | Purpose | Required |
| --- | --- | --- | --- |
| MEMORIES_API_KEY | Memories.ai | Video indexing & comprehension | ✅ Yes |
| GOOGLE_API_KEY | Gemini | Script generation | ✅ Yes |
| ELEVENLABS_API_KEY | ElevenLabs | TTS narration, STT subtitles | ✅ Yes |
| SOUNDSTRIPE_KEY | Soundstripe | Background music selection | Optional |

Common Issues

  • "ViNet assets not found": dynamic cropping is disabled; set enable_dynamic_cropping: false in config.json.
  • Subprocess fails from the API but works manually: run the server in tmux to preserve the environment.
  • Music download returns 401/403: check that the Soundstripe API key is valid.
  • Clip timestamps are wrong: ensure original_audio: true to enable timestamp refinement via transcription.

Manual Music Addition

When Soundstripe fails, manually download and mix:

```shell
# Download from the Soundstripe API
SOUNDSTRIPE_KEY=$(jq -r '.api_keys.SOUNDSTRIPE_KEY' ~/vea/config.json)
curl -s "https://api.soundstripe.com/v1/songs/TRACK_ID" \
  -H "Authorization: Token $SOUNDSTRIPE_KEY" | jq '.included[0].attributes.versions.mp3'

# Mix with ffmpeg (15-20% music volume)
ffmpeg -y -i video.mp4 -i music.mp3 \
  -filter_complex "[1:a]volume=0.18,afade=t=out:st=70:d=4[m];[0:a][m]amix=inputs=2:duration=first[a]" \
  -map 0:v -map "[a]" -c:v copy -c:a aac output.mp4
```
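The ffmpeg mixing step can be parameterized instead of hard-coding the volume and fade values; a sketch that builds the same argument list (`build_mix_command` is a hypothetical helper):

```python
def build_mix_command(video: str, music: str, output: str,
                      music_volume: float = 0.18,
                      fade_start: float = 70.0,
                      fade_len: float = 4.0) -> list[str]:
    """Build an ffmpeg arg list that ducks the music under the original audio
    and fades it out near the end, matching the shell example above."""
    graph = (
        f"[1:a]volume={music_volume},"
        f"afade=t=out:st={fade_start}:d={fade_len}[m];"
        f"[0:a][m]amix=inputs=2:duration=first[a]"
    )
    return [
        "ffmpeg", "-y", "-i", video, "-i", music,
        "-filter_complex", graph,
        "-map", "0:v", "-map", "[a]",
        "-c:v", "copy", "-c:a", "aac",  # copy video; re-encode mixed audio
        output,
    ]
```

The list can be passed directly to `subprocess.run`, which avoids shell-quoting issues with the filter graph.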

References

  • API Documentation (references/api.md): full endpoint specs
  • Config Schema (references/config.md): configuration options

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 Docs · 3 Scripts
  • SKILL.md Primary doc
  • references/api.md Docs
  • references/config.md Docs
  • scripts/add_music.sh Scripts
  • scripts/start_server.sh Scripts
  • scripts/vea_helper.sh Scripts