Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Capture audio from any browser tab — meetings, YouTube, podcasts, courses, webinars — and stream to any AI agent. Zero API keys, works with any framework.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Give any AI agent ears for the browser. One Chrome extension captures audio from any tab — meetings, YouTube, podcasts, webinars, courses, earnings calls — and streams it to your AI pipeline.
Your AI agent can't hear anything happening in your browser. This skill fixes that. Capture audio from any Chrome tab and stream it to your agent — no API keys, no OAuth, no per-platform integrations. Use cases: meeting summaries, YouTube/podcast notes, competitive intel from earnings calls, auto-notes from online courses, customer call analysis — anything that plays audio in a browser tab. Works with any AI agent — Claude, ChatGPT, OpenClaw, LangChain, CrewAI, or your own. If your agent can run shell commands or receive HTTP, it gets browser audio.
Chrome with remote debugging:

```shell
# macOS
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --remote-debugging-port=9222 --user-data-dir=$HOME/.chrome-debug-profile &
```

Python 3.9+ with aiohttp:

```shell
pip install aiohttp
```
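Once Chrome is running with that flag, you can confirm the debugging endpoint is reachable before blaming the extension. This is a minimal sketch using only the standard library and the Chrome DevTools Protocol's `/json` tab listing; the helper name `list_debug_tabs` is ours, not part of the skill:

```python
# Sanity-check Chrome's remote-debugging endpoint (DevTools Protocol).
# Port 9222 matches the --remote-debugging-port flag shown above.
import json
from urllib.request import urlopen


def list_debug_tabs(port: int = 9222) -> list[dict]:
    """Return the tab list Chrome exposes at /json, or raise if unreachable."""
    with urlopen(f"http://127.0.0.1:{port}/json", timeout=3) as resp:
        return json.loads(resp.read())


# Usage: for tab in list_debug_tabs(): print(tab.get("title"), tab.get("url"))
```

If this raises a connection error, Chrome was not started with the debugging flag (or another profile is already bound to the port).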
```shell
# List tabs — meetings flagged with 🎙️
python3 -m browser_capture.cli tabs

# Auto-detect and capture meeting tab
python3 -m browser_capture.cli capture

# Continuous watch mode
python3 -m browser_capture.cli watch --interval 15

# Stop
python3 -m browser_capture.cli stop
```
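An agent that drives tools by shelling out can wrap these same subcommands in a few lines. `run_capture_cmd` below is a hypothetical helper for illustration, not part of the package; it assumes `browser_capture.cli` is importable by the interpreter running the agent:

```python
# Hypothetical wrapper: invoke browser_capture.cli subcommands from an agent.
import subprocess
import sys


def run_capture_cmd(*args: str) -> str:
    """Run a browser_capture.cli subcommand and return its stdout."""
    result = subprocess.run(
        [sys.executable, "-m", "browser_capture.cli", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


# Usage: run_capture_cmd("tabs") lists tabs; run_capture_cmd("stop") stops capture.
```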
1. chrome://extensions/ → Developer mode → Load unpacked → scripts/extension/
2. Join a meeting → click the Percept icon → Start Capturing
3. Close the popup — capture continues in the background
Google Meet • Zoom (web) • Microsoft Teams • Webex • Whereby • Around • Cal.com • Riverside • StreamYard • Ping • Daily.co • Jitsi • Discord — plus any future platform that runs in a browser.
Streams to `http://127.0.0.1:8900/audio/browser` as JSON:

```json
{
  "sessionId": "browser_1709234567890",
  "audio": "<base64 PCM16>",
  "sampleRate": 16000,
  "format": "pcm16",
  "tabUrl": "https://meet.google.com/...",
  "tabTitle": "Weekly Standup"
}
```

Configure the endpoint in scripts/extension/offscreen.js (PERCEPT_URL). Point it at Whisper, Deepgram, NVIDIA Riva, or any transcription service.
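A receiver for these chunks only needs to accept JSON POSTs on that path and decode the base64 audio field. This is a stdlib-only sketch assuming the payload shape above (the skill's own tooling uses aiohttp; `decode_chunk` and `AudioHandler` are illustrative names, not part of the package):

```python
# Minimal local receiver sketch for the browser audio chunks described above.
# Path /audio/browser and port 8900 match the documented defaults.
import base64
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def decode_chunk(payload: dict) -> bytes:
    """Decode one chunk's base64 `audio` field into raw PCM16 bytes."""
    return base64.b64decode(payload["audio"])


class AudioHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/audio/browser":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        pcm = decode_chunk(payload)  # 16 kHz mono PCM16 per the payload spec
        # Hand `pcm` to Whisper, Deepgram, or your own pipeline here.
        print(f"{payload.get('sessionId')}: {len(pcm)} bytes "
              f"from {payload.get('tabTitle')!r}")
        self.send_response(200)
        self.end_headers()


# To run: HTTPServer(("127.0.0.1", 8900), AudioHandler).serve_forever()
```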
- No tabs listed: Chrome must be started with --remote-debugging-port=9222
- Button won't click: remove and re-add the extension (MV3 caches aggressively)
- Audio not arriving: check that a receiver is listening on port 8900; the extension posts to /audio/browser
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.