Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting.
Hand the extracted package to your coding agent with a concrete install brief instead of working through the steps manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Headless capture service for Zoom meetings using Real-Time Media Streams (RTMS). Receives webhook events, connects to RTMS WebSockets, records all media, and runs AI analysis via OpenClaw.
This skill processes two Zoom webhook events:

- meeting.rtms_started — Zoom sends this when RTMS is activated for a meeting. Contains the server_urls, rtms_stream_id, and meeting_uuid needed to connect to the RTMS WebSocket.
- meeting.rtms_stopped — Zoom sends this when RTMS ends (meeting ended or RTMS disabled). Triggers cleanup: closes WebSocket connections, generates the screenshare PDF, sends a summary notification.
This skill needs a public webhook endpoint to receive these events from Zoom.

- Preferred: use the ngrok-unofficial-webhook-skill (skills/ngrok-unofficial-webhook-skill). It auto-discovers this skill via webhookEvents in skill.json, notifies the user, and offers to route events here.
- Other webhook solutions (e.g. custom servers, cloud functions) also work, but require additional integration to forward payloads to this service.
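The two events above can be routed with a small dispatcher. This is a hedged sketch, not the skill's actual index.js: the handler names (onStarted, onStopped) are hypothetical, while the event names and payload fields (server_urls, rtms_stream_id, meeting_uuid) mirror the Zoom RTMS payloads described above.

```javascript
// Hypothetical dispatcher for the two RTMS webhook events.
// Handler names are illustrative; only the event names and
// payload fields mirror what Zoom sends.
function routeRtmsEvent(body, handlers) {
  const { event, payload } = body || {};
  if (event === 'meeting.rtms_started') {
    // Fields needed to open the RTMS WebSocket connection
    const { server_urls, rtms_stream_id, meeting_uuid } = payload;
    return handlers.onStarted({ server_urls, rtms_stream_id, meeting_uuid });
  }
  if (event === 'meeting.rtms_stopped') {
    // Trigger cleanup: close sockets, build the screenshare PDF, notify
    return handlers.onStopped({ meeting_uuid: payload.meeting_uuid });
  }
  return null; // ignore unrelated events
}
```

Whatever webhook solution you use only has to deliver the raw event body to something shaped like this; the service takes it from there.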
```bash
cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install
```

Requires ffmpeg for post-meeting media conversion.
Set these in the skill's .env file:

Required:
- ZOOM_SECRET_TOKEN — Zoom webhook secret token
- ZOOM_CLIENT_ID — Zoom app Client ID
- ZOOM_CLIENT_SECRET — Zoom app Client Secret

Optional:
- PORT — Server port (default: 3000)
- AI_PROCESSING_INTERVAL_MS — AI analysis frequency in ms (default: 30000)
- AI_FUNCTION_STAGGER_MS — Delay between AI calls in ms (default: 5000)
- AUDIO_DATA_OPT — 1 = mixed stream, 2 = multi-stream (default: 2)
- OPENCLAW_NOTIFY_CHANNEL — Notification channel (default: whatsapp)
- OPENCLAW_NOTIFY_TARGET — Phone number / target for notifications
```bash
cd skills/zoom-meeting-assistance-rtms-unofficial-community
node index.js
```

This starts an Express server listening for Zoom webhook events on PORT.

⚠️ Important: Before forwarding webhooks to this service, always check that it is running:

```bash
# Check if the service is listening on port 3000
lsof -i :3000
```

If nothing is returned, start the service before forwarding any webhook events.

Typical flow:
1. Start the server as a background process
2. Zoom sends the meeting.rtms_started webhook → the service connects to the RTMS WebSocket
3. Media streams in real time: audio, video, transcript, screenshare, chat
4. AI processing runs periodically (dialog suggestions, sentiment, summary)
5. meeting.rtms_stopped → the service closes connections and generates the screenshare PDF
All recordings are stored organized by date:

```
skills/zoom-meeting-assistance-rtms-unofficial-community/recordings/YYYY/MM/DD/{streamId}/
```

Each stream folder contains:

| File | Content | Searchable |
| --- | --- | --- |
| metadata.json | Meeting metadata (UUID, stream ID, operator, start time) | ✅ |
| transcript.txt | Plain-text transcript with timestamps and speaker names | ✅ Best for searching — grep-friendly, one line per utterance |
| transcript.vtt | VTT format transcript with timing cues | ✅ |
| transcript.srt | SRT format transcript | ✅ |
| events.log | Participant join/leave, active speaker changes (JSON lines) | ✅ |
| chat.txt | Chat messages with timestamps | ✅ |
| ai_summary.md | AI-generated meeting summary (markdown) | ✅ Key document — read this first for a meeting overview |
| ai_dialog.json | AI dialog suggestions | ✅ |
| ai_sentiment.json | Sentiment analysis per participant | ✅ |
| mixedaudio.raw | Mixed audio stream (raw PCM) | ❌ Binary |
| activespeakervideo.h264 | Active speaker video (raw H.264) | ❌ Binary |
| processed/screenshare.pdf | Deduplicated screenshare frames as PDF | ❌ Binary |

All summaries are also copied to a central folder for easy access:

```
skills/zoom-meeting-assistance-rtms-unofficial-community/summaries/summary_YYYY-MM-DDTHH-MM-SS_{streamId}.md
```
To find and review past meeting data:

```bash
# List all recorded meetings by date
ls -R recordings/

# List meetings for a specific date
ls recordings/2026/01/28/

# Search across all transcripts for a keyword
grep -rl "keyword" recordings/*/*/*/*/transcript.txt

# Search for what a specific person said
grep "Chun Siong Tan" recordings/*/*/*/*/transcript.txt

# Read a meeting summary
cat recordings/YYYY/MM/DD/<streamId>/ai_summary.md

# Search summaries for a topic
grep -rl "topic" recordings/*/*/*/*/ai_summary.md

# Check who attended a meeting
cat recordings/YYYY/MM/DD/<streamId>/events.log

# Get sentiment for a meeting
cat recordings/YYYY/MM/DD/<streamId>/ai_sentiment.json
```

The .txt, .md, .json, and .log files are all text-based and searchable. Start with ai_summary.md for a quick overview, then drill into transcript.txt for specific quotes or details.
```bash
# Toggle WhatsApp notifications on/off
curl -X POST http://localhost:3000/api/notify-toggle \
  -H "Content-Type: application/json" \
  -d '{"enabled": false}'

# Check notification status
curl http://localhost:3000/api/notify-toggle
```
When meeting.rtms_stopped fires, the service automatically:
1. Generates a PDF from screenshare images
2. Converts mixedaudio.raw → mixedaudio.wav
3. Converts activespeakervideo.h264 → activespeakervideo.mp4
4. Muxes the mixed audio and active speaker video into final_output.mp4

Manual conversion scripts are available, but auto-conversion runs on meeting end, so manual re-runs are rarely needed.
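The conversion steps above correspond to three ffmpeg invocations. This is a hedged sketch rather than the skill's actual code: the raw-PCM parameters (s16le, 16 kHz, mono) are assumptions about the RTMS mixed-audio stream, and the builder function is illustrative.

```javascript
// Illustrative builder for the ffmpeg commands the service is assumed
// to run on meeting end. The PCM parameters (s16le, 16 kHz, mono) are
// assumptions; adjust them to match your actual stream settings.
function conversionCommands(dir) {
  return [
    // 1. Raw PCM -> WAV (raw audio needs its format declared)
    ['ffmpeg', '-f', 's16le', '-ar', '16000', '-ac', '1',
     '-i', `${dir}/mixedaudio.raw`, `${dir}/mixedaudio.wav`],
    // 2. Raw H.264 elementary stream -> MP4 container, no re-encode
    ['ffmpeg', '-i', `${dir}/activespeakervideo.h264`,
     '-c', 'copy', `${dir}/activespeakervideo.mp4`],
    // 3. Mux audio + video into the final output
    ['ffmpeg', '-i', `${dir}/activespeakervideo.mp4`,
     '-i', `${dir}/mixedaudio.wav`,
     '-c:v', 'copy', '-c:a', 'aac', `${dir}/final_output.mp4`],
  ];
}
```

If a manual re-run ever is needed, running these three commands in order against a stream folder should reproduce what the automatic pipeline generates.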
After or during a meeting, read files from recordings/YYYY/MM/DD/{streamId}/:

```bash
# List recorded meetings by date
ls -R recordings/

# Read the transcript
cat recordings/YYYY/MM/DD/<streamId>/transcript.txt

# Read the AI summary
cat recordings/YYYY/MM/DD/<streamId>/ai_summary.md

# Read the sentiment analysis
cat recordings/YYYY/MM/DD/<streamId>/ai_sentiment.json
```
Want different summary styles or analysis? Customize the AI prompts to fit your needs. Edit these files to change AI behavior:

| File | Purpose | Example Customizations |
| --- | --- | --- |
| summary_prompt.md | Meeting summary generation | Bullet points vs prose, focus areas, length |
| query_prompt.md | Query response formatting | Response style, detail level |
| query_prompt_current_meeting.md | Real-time meeting analysis | What to highlight during meetings |
| query_prompt_dialog_suggestions.md | Dialog suggestion style | Formal vs casual, suggestion count |
| query_prompt_sentiment_analysis.md | Sentiment scoring logic | Custom sentiment categories, thresholds |

Tip: Back up the originals before editing, so you can revert if needed.