Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
AI-native workflow analyzer for Loom recordings. Breaks down recorded business processes into structured, automatable workflows.

Use when:
- Analyzing Loom videos to understand workflows
- Extracting steps, tools, and decision points from screen recordings
- Generating Lobster workflow files from video walkthroughs
- Identifying ambiguities and human intervention points in processes
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Transforms Loom recordings into structured, automatable workflows.
```bash
# Full pipeline - download, extract, transcribe, analyze
{baseDir}/scripts/loom-workflow analyze https://loom.com/share/abc123

# Individual steps
{baseDir}/scripts/loom-workflow download https://loom.com/share/abc123
{baseDir}/scripts/loom-workflow extract ./video.mp4
{baseDir}/scripts/loom-workflow generate ./analysis.json
```
1. Download - Fetches the Loom video via yt-dlp
2. Smart Extract - Captures frames at scene changes + transcript timing
3. Transcribe - Whisper transcription with word-level timestamps
4. Analyze - Multimodal AI analysis (requires a vision model)
5. Generate - Creates a Lobster workflow with approval gates
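The wrapper script handles all of these stages, but for orientation the download and transcribe steps correspond to ordinary tool invocations along the following lines. The model choice, flags, and filenames here are illustrative, not taken from the package:

```bash
# Roughly what the download and transcribe stages do under the hood (illustrative flags).
yt-dlp -o video.mp4 "https://loom.com/share/abc123"        # fetch the Loom recording

whisper video.mp4 --model small --word_timestamps True \
  --output_format json --output_dir output/                # transcript with word-level timestamps
```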
Frames are captured when:
- Scene changes - Significant visual change (ffmpeg scene detection)
- Speech starts - New narration segment begins
- Combined - Speech + visual change = high-value moment
- Gap fill - Max 10s without a frame
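The scene-change trigger maps onto ffmpeg's `select` filter with a scene score threshold. A minimal standalone sketch, assuming a 0.3 threshold and these output paths (the speech-start and gap-fill triggers come from transcript timing, which this one-liner does not cover):

```bash
# Emit a frame whenever the scene-change score exceeds 0.3 (illustrative threshold).
mkdir -p output/frames
ffmpeg -i video.mp4 -vf "select='gt(scene,0.3)'" -vsync vfr output/frames/scene-%04d.jpg
```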
The analyzer produces:
- `workflow-analysis.json` - Structured workflow definition
- `workflow-summary.md` - Human-readable summary
- `*.lobster` - Executable Lobster workflow file
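The actual schema ships with the skill; purely as orientation, a structured workflow definition along these lines would carry the steps, tools, decision points, and ambiguities described on this page. Every field name below is hypothetical:

```json
{
  "title": "Invoice approval walkthrough",
  "steps": [
    {
      "timestamp": "00:42",
      "action": "Open the vendor invoice in the billing tool",
      "tool": "Billing dashboard",
      "decision_point": false
    },
    {
      "timestamp": "01:10",
      "action": "Route to manager if the amount exceeds the threshold",
      "tool": "Email",
      "decision_point": true
    }
  ],
  "ambiguities": ["Threshold amount is never stated on screen"]
}
```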
The analyzer flags:
- Unclear mouse movements
- Implicit knowledge ("the usual process")
- Decision points ("depending on...")
- Missing credentials/context
- Tool dependencies
After extraction, use the generated prompt with a vision model:

```bash
# The prompt is at: output/workflow-analysis-prompt.md
# Attach frames from: output/frames/
# Example with Claude:
cat output/workflow-analysis-prompt.md | claude --images output/frames/*.jpg
```

Save the JSON response to workflow-analysis.json, then:

```bash
{baseDir}/scripts/loom-workflow generate ./output/workflow-analysis.json
```
Generated workflows use:
- `approve` gates for destructive/external actions
- `llm-task` for classification/decision steps
- Resume tokens for interrupted workflows
- JSON piping between steps
- yt-dlp - Video download
- ffmpeg - Frame extraction + scene detection
- whisper - Audio transcription
- Vision-capable LLM for the analysis step
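If you need to install the CLI tools yourself, one common route is shown below; it assumes pip plus Homebrew or apt, but any package manager works:

```bash
# Install the CLI dependencies (illustrative; use whatever install method fits your system).
pip install -U yt-dlp openai-whisper   # video download + the `whisper` transcription CLI
brew install ffmpeg                    # macOS; on Debian/Ubuntu: sudo apt install ffmpeg
```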
Works with any language - Whisper auto-detects and transcribes. Analysis should be prompted in the video's language for best results.