
Loom Workflow

AI-native workflow analyzer for Loom recordings. Breaks down recorded business processes into structured, automatable workflows. Use when:
  • Analyzing Loom videos to understand workflows
  • Extracting steps, tools, and decision points from screen recordings
  • Generating Lobster workflow files from video walkthroughs
  • Identifying ambiguities and human intervention points in processes



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
DESIGN.md, SKILL.md, scripts/analyze-workflow.py, scripts/generate-lobster.py, scripts/smart-extract.py, test-output/video.info.json

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief rather than walking through the setup manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.1

Documentation

Primary doc: SKILL.md (10 sections)

Loom Workflow Analyzer

Transforms Loom recordings into structured, automatable workflows.

Quick Start

# Full pipeline - download, extract, transcribe, analyze
{baseDir}/scripts/loom-workflow analyze https://loom.com/share/abc123

# Individual steps
{baseDir}/scripts/loom-workflow download https://loom.com/share/abc123
{baseDir}/scripts/loom-workflow extract ./video.mp4
{baseDir}/scripts/loom-workflow generate ./analysis.json

Pipeline

  • Download - Fetches Loom video via yt-dlp
  • Smart Extract - Captures frames at scene changes + transcript timing
  • Transcribe - Whisper transcription with word-level timestamps
  • Analyze - Multimodal AI analysis (requires vision model)
  • Generate - Creates Lobster workflow with approval gates
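The stages can also be chained manually. A minimal sketch in Python, assuming the subcommand names shown in Quick Start; the `scripts/` path and filenames are illustrative placeholders for the `{baseDir}` paths above:

```python
import subprocess

BASE = "scripts"  # illustrative; the docs use a {baseDir} placeholder


def stage_cmd(stage, target):
    """Build the CLI invocation for one pipeline stage."""
    return [f"{BASE}/loom-workflow", stage, target]


def run_pipeline(share_url):
    """Run the stages in order; each stage's output feeds the next."""
    steps = [
        ("download", share_url),          # yt-dlp fetch
        ("extract", "./video.mp4"),       # scene frames + transcript timing
        ("generate", "./analysis.json"),  # Lobster workflow with gates
    ]
    for stage, target in steps:
        subprocess.run(stage_cmd(stage, target), check=True)
```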

Smart Frame Extraction

Frames are captured when:
  • Scene changes - Significant visual change (ffmpeg scene detection)
  • Speech starts - New narration segment begins
  • Combined - Speech + visual change = high-value moment
  • Gap fill - Max 10s without a frame
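The union-and-gap-fill rules can be sketched as a small timestamp selector. This is a hypothetical helper for illustration, not the logic shipped in scripts/smart-extract.py:

```python
def select_frames(scene_times, speech_times, duration, max_gap=10.0):
    """Pick frame timestamps (seconds) from scene-change and
    speech-start events, then gap-fill so no stretch longer than
    max_gap goes without a captured frame."""
    picks = sorted(set(scene_times) | set(speech_times))
    filled = []
    last = 0.0
    for t in picks + [duration]:
        # Insert synthetic frames to cover long static/silent gaps.
        while t - last > max_gap:
            last += max_gap
            filled.append(last)
        if t < duration:
            filled.append(t)
            last = t
    return filled
```

For example, with scene changes at 2s and 25s, speech starting at 5s, and a 40s video, the selector emits frames at 2, 5, 15, 25, and 35 seconds.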

Analysis Output

The analyzer produces:
  • workflow-analysis.json - Structured workflow definition
  • workflow-summary.md - Human-readable summary
  • *.lobster - Executable Lobster workflow file
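For orientation, a hypothetical workflow-analysis.json could look like the following. The field names here are invented for illustration; the actual schema is defined by scripts/analyze-workflow.py and may differ:

```python
import json

# Hypothetical structure, for illustration only.
analysis = {
    "title": "Invoice approval walkthrough",
    "steps": [
        {
            "index": 1,
            "action": "Open the billing dashboard",
            "tool": "browser",
            "timestamp": 12.4,
            "ambiguities": [],
        },
        {
            "index": 2,
            "action": "Approve invoices over the usual threshold",
            "tool": "browser",
            "timestamp": 41.0,
            "ambiguities": ["implicit knowledge: 'usual threshold'"],
        },
    ],
}
print(json.dumps(analysis, indent=2))
```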

Ambiguity Detection

The analyzer flags:
  • Unclear mouse movements
  • Implicit knowledge ("the usual process")
  • Decision points ("depending on...")
  • Missing credentials/context
  • Tool dependencies
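A minimal sketch of cue-phrase flagging over transcript text, assuming a simple pattern scan; the shipped analyzer may instead rely on the multimodal AI step, and the cue lists below are illustrative:

```python
import re

# Illustrative cue phrases per ambiguity category (assumption, not
# the package's actual detector).
AMBIGUITY_CUES = {
    "implicit knowledge": r"\b(the usual|as always|like normal)\b",
    "decision point": r"\b(depending on|if it's|unless)\b",
    "missing context": r"\b(my password|the credentials|that file)\b",
}


def flag_ambiguities(transcript):
    """Return (category, matched phrase) pairs found in transcript text."""
    hits = []
    for category, pattern in AMBIGUITY_CUES.items():
        for m in re.finditer(pattern, transcript, re.IGNORECASE):
            hits.append((category, m.group(0)))
    return hits
```

A sentence like "Then, depending on the usual process, open that file" would be flagged in all three categories.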

Vision Analysis Step

After extraction, use the generated prompt with a vision model:

# The prompt is at: output/workflow-analysis-prompt.md
# Attach frames from: output/frames/
# Example with Claude:
cat output/workflow-analysis-prompt.md | claude --images output/frames/*.jpg

Save the JSON response to workflow-analysis.json, then:

{baseDir}/scripts/loom-workflow generate ./output/workflow-analysis.json

Lobster Integration

Generated workflows use:
  • approve gates for destructive/external actions
  • llm-task for classification/decision steps
  • Resume tokens for interrupted workflows
  • JSON piping between steps

Requirements

  • yt-dlp - Video download
  • ffmpeg - Frame extraction + scene detection
  • whisper - Audio transcription
  • Vision-capable LLM for analysis step
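A quick preflight check for the external tools the pipeline shells out to, assuming the usual binary names on PATH:

```python
import shutil

# Common binary names; actual names may differ per install
# (e.g. a pip-installed whisper exposes a `whisper` entry point).
REQUIRED_TOOLS = ["yt-dlp", "ffmpeg", "whisper"]


def missing_tools(tools=REQUIRED_TOOLS):
    """Return the subset of required CLI tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]


if __name__ == "__main__":
    missing = missing_tools()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All pipeline dependencies found.")
```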

Multilingual Support

Works with any language - Whisper auto-detects and transcribes. Analysis should be prompted in the video's language for best results.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 Scripts · 2 Docs · 1 Config
  • SKILL.md Primary doc
  • DESIGN.md Docs
  • scripts/analyze-workflow.py Scripts
  • scripts/generate-lobster.py Scripts
  • scripts/smart-extract.py Scripts
  • test-output/video.info.json Config