Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Transcribe audio files using Sber Salute Speech async API. Russian-first STT with support for ru-RU, en-US, kk-KZ, ky-KG, uz-UZ.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Transcribe audio/video files to text with timestamps via Salute Speech async REST API.
- API key: the environment variable `SALUTE_AUTH_DATA` must be set (Base64-encoded `client_id:client_secret`, or the raw authorization key from https://developers.sber.ru/studio/).
- SSL note: the script disables SSL verification by default (`verify_ssl=False`) because Sber's certificate chain is non-standard. This is expected.
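If you only have the separate `client_id` and `client_secret`, the `SALUTE_AUTH_DATA` value can be produced with a one-liner; this is a minimal sketch (the credential values shown are placeholders, not real keys):

```python
import base64

def make_auth_data(client_id: str, client_secret: str) -> str:
    """Base64-encode 'client_id:client_secret' for SALUTE_AUTH_DATA."""
    raw = f"{client_id}:{client_secret}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# Placeholder credentials; substitute your own from the Sber developer console.
print(make_auth_data("my-client-id", "my-client-secret"))
```

Export the result as `SALUTE_AUTH_DATA` in the environment the script runs in.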
| Audio encoding | Content-Type | Typical extensions |
|---|---|---|
| MP3 | audio/mpeg | .mp3 |
| PCM_S16LE | audio/wav | .wav |
| OPUS | audio/ogg | .ogg, .opus |
| FLAC | audio/flac | .flac |
| ALAW | audio/alaw | .alaw |
| MULAW | audio/mulaw | .mulaw |
ru-RU, en-US, kk-KZ (Kazakh), ky-KG (Kyrgyz), uz-UZ (Uzbek).
1. Identify input files from the user request.
2. Read the API key from the host environment.
3. Run transcription: execute `salute_transcribe.py` with `uv` and the appropriate arguments.
4. Deliver results: present the human-readable transcript with timestamps and give the user direct links to the output files.
```
uv run --with requests {baseDir}/salute_transcribe.py \
  --file /path/to/audio.mp3 \
  --output_dir ~/.openclaw/workspace/transcriptions \
  --lang ru-RU
```
| Argument | Required | Default | Description |
|---|---|---|---|
| `--file` | Yes | — | Path to audio/video file |
| `--output_dir` | No | `~/.openclaw/workspace/transcribations` | Output directory for results |
| `--lang` | No | `ru-RU` | Language code: ru-RU, en-US, kk-KZ, ky-KG, uz-UZ |
| `--audio-encoding` | No | `MP3` | Codec: MP3, PCM_S16LE, OPUS, FLAC, ALAW, MULAW |
| `--model` | No | `general` | Recognition model: `general` or `callcenter` |
| `--hyp-count` | No | `1` | Number of alternative hypotheses: 1 or 2 |
| `--max-wait-time` | No | `300` | Max seconds to wait for the async result |
| `--print` | No | off | Also print the transcription to stdout |
When the file extension doesn't match `audio/mpeg`, adjust `content_type` in the script or add mapping logic. The current default is `audio/mpeg` (MP3); for `.wav` files use `audio/wav`, and so on.
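One way to add that logic is a lookup keyed on the file extension, mirroring the encoding table above. This is an illustrative helper, not part of the shipped script:

```python
from pathlib import Path

# Extension -> (audio encoding, Content-Type), mirroring the encoding table.
ENCODINGS = {
    ".mp3": ("MP3", "audio/mpeg"),
    ".wav": ("PCM_S16LE", "audio/wav"),
    ".ogg": ("OPUS", "audio/ogg"),
    ".opus": ("OPUS", "audio/ogg"),
    ".flac": ("FLAC", "audio/flac"),
    ".alaw": ("ALAW", "audio/alaw"),
    ".mulaw": ("MULAW", "audio/mulaw"),
}

def guess_encoding(path: str) -> tuple[str, str]:
    """Return (encoding, content_type) for a file, defaulting to MP3."""
    return ENCODINGS.get(Path(path).suffix.lower(), ("MP3", "audio/mpeg"))

print(guess_encoding("/tmp/meeting.wav"))  # ('PCM_S16LE', 'audio/wav')
```

Unknown extensions fall back to the script's existing MP3 default rather than failing.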
For an input file `meetingABC.mp3` the script produces:

| File | Description |
|---|---|
| `meetingABC_recognition_orig.json` | Raw API response (full JSON with all hypotheses, timing, confidence) |
| `meetingABC_pretty.txt` | Formatted human-readable transcript with timestamps |
```
[00:01 - 00:20]: Ну, даже если сосредоточиться на идее узкой щели.
[00:20 - 00:45]: Следующий фрагмент текста здесь.
```
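The bracketed ranges are segment boundaries in seconds rendered as MM:SS. A sketch of such a formatter (the function names and the flat start/end/text inputs are assumptions for illustration, not the script's actual JSON schema):

```python
def fmt(seconds: float) -> str:
    """Render a second count as zero-padded MM:SS."""
    m, s = divmod(int(seconds), 60)
    return f"{m:02d}:{s:02d}"

def pretty_line(start: float, end: float, text: str) -> str:
    """Render one transcript segment as '[MM:SS - MM:SS]: text'."""
    return f"[{fmt(start)} - {fmt(end)}]: {text}"

print(pretty_line(1, 20, "Example segment text."))
# [00:01 - 00:20]: Example segment text.
```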
- The access token is valid for ~30 minutes; the script fetches a new one each run.
- Large files (>1 hour) may need `--max-wait-time` increased beyond 300 s.
- The `callcenter` model is optimized for telephony audio (8 kHz, mono).
- The profanity filter is disabled by default (`enable_profanity_filter=False`).
- The script uses normalized text by default (numbers as digits, abbreviations expanded); raw text is also available in the JSON output.
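Conceptually, `--max-wait-time` bounds a polling loop over the async task status. A generic sketch of that pattern (the `check` callback and poll interval are illustrative stand-ins for the script's task-status request, not its internals):

```python
import time

def wait_for_result(check, max_wait_time: float = 300.0, interval: float = 1.0):
    """Poll check() until it returns a non-None result or the deadline passes.

    check() stands in for the async task-status request; raises TimeoutError
    if no result arrives within max_wait_time seconds.
    """
    deadline = time.monotonic() + max_wait_time
    while time.monotonic() < deadline:
        result = check()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError(f"no result within {max_wait_time}s")

# Example with a fake task that finishes on the third poll:
polls = iter([None, None, {"status": "DONE"}])
print(wait_for_result(lambda: next(polls), max_wait_time=5, interval=0))
# {'status': 'DONE'}
```

For hour-plus recordings, raising `max_wait_time` simply moves the deadline out; the poll interval is unchanged.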