MLX Local Inference Stack

Full local AI inference stack on Apple Silicon Macs via MLX. Includes: LLM chat (Qwen3-14B, Gemma3-12B), speech-to-text ASR (Qwen3-ASR, Whisper), text embeddings (Qwen3-Embedding), OCR (PaddleOCR-VL), TTS (Qwen3-TTS), and an automatic batch transcription daemon.

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: README.md, README_CN.md, SKILL.md, references/asr-qwen3.md, references/asr-whisper.md, references/embedding-qwen3.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.

New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 2.2.0

Documentation

Primary doc: SKILL.md (27 sections)

MLX Local Inference Stack

Full local AI inference on Apple Silicon Macs. All services expose OpenAI-compatible APIs.

Services Overview

  • LLM + Whisper + Embedding: port 8787, LAN access (0.0.0.0); models qwen3-14b, gemma-3-12b, whisper-large-v3-turbo, qwen3-embedding-0.6b/4b
  • ASR (Qwen3-ASR): port 8788, localhost only; model Qwen3-ASR-1.7B-8bit
  • Transcribe Daemon: no port (file-based); uses the ASR and LLM services

LaunchAgents: com.mlx-server (8787), com.mlx-audio-server (8788), com.mlx-transcribe-daemon
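
Since every service speaks an OpenAI-compatible API, a quick liveness check is to list the models each server exposes. A minimal sketch, assuming both servers implement the standard /v1/models route:

  import requests

  for port in (8787, 8788):
      try:
          r = requests.get(f"http://127.0.0.1:{port}/v1/models", timeout=5)
          r.raise_for_status()
          ids = [m["id"] for m in r.json().get("data", [])]
          print(f"port {port}: {ids}")
      except requests.RequestException as exc:
          print(f"port {port}: not reachable ({exc})")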

Models

  • qwen3-14b: 14B, 4-bit. Best for Chinese and deep reasoning (built-in think mode).
  • gemma-3-12b: 12B, 4-bit. Best for English and code generation.

API

  curl -X POST http://localhost:8787/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "qwen3-14b",
      "messages": [{"role": "user", "content": "Hello"}],
      "temperature": 0.7,
      "max_tokens": 2048
    }'

Add "stream": true for streaming.

Python

  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")
  response = client.chat.completions.create(
      model="qwen3-14b",
      messages=[{"role": "user", "content": "Hello"}],
      temperature=0.7,
      max_tokens=2048,
  )
  print(response.choices[0].message.content)
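
The streaming variant mentioned above works from the same client by passing stream=True and reading the incremental deltas. A sketch:

  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")
  stream = client.chat.completions.create(
      model="qwen3-14b",
      messages=[{"role": "user", "content": "Hello"}],
      stream=True,
  )
  for chunk in stream:
      # Each chunk carries an incremental piece of the reply.
      if chunk.choices and chunk.choices[0].delta.content:
          print(chunk.choices[0].delta.content, end="", flush=True)
  print()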

Qwen3 Think Mode

Qwen3 may include <think>...</think> chain-of-thought tags. Strip them:

  import re

  text = re.sub(r'<think>.*?</think>\s*', '', text, flags=re.DOTALL)

Model Selection Guide

  • Chinese text: qwen3-14b
  • Cantonese: qwen3-14b
  • English writing: gemma-3-12b
  • Code generation: either
  • Deep reasoning: qwen3-14b (think mode)
  • Quick Q&A: gemma-3-12b
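
For applications that route requests automatically, the guide reduces to a simple heuristic. A hypothetical helper (the CJK-detection rule is an illustration, not part of the stack):

  def pick_model(prompt: str, deep_reasoning: bool = False) -> str:
      # Route text containing CJK characters to qwen3-14b; deep reasoning
      # also favors qwen3-14b for its built-in think mode.
      if deep_reasoning or any("\u4e00" <= ch <= "\u9fff" for ch in prompt):
          return "qwen3-14b"
      # Default to gemma-3-12b for English writing and quick Q&A.
      return "gemma-3-12b"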

Qwen3-ASR (best for Chinese/Cantonese)

  curl -X POST http://127.0.0.1:8788/v1/audio/transcriptions \
    -F "file=@audio.wav" \
    -F "model=mlx-community/Qwen3-ASR-1.7B-8bit" \
    -F "language=zh"

Whisper (multilingual, 99 languages)

  curl -X POST http://localhost:8787/v1/audio/transcriptions \
    -F "file=@audio.wav" \
    -F "model=whisper-large-v3-turbo"
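
Both transcription endpoints also work from the openai Python client by pointing base_url at the right port. A sketch, assuming the servers accept the standard multipart transcription request:

  from openai import OpenAI

  # Whisper on the main server (port 8787, LAN-accessible)
  whisper = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")
  with open("audio.wav", "rb") as f:
      print(whisper.audio.transcriptions.create(
          model="whisper-large-v3-turbo", file=f).text)

  # Qwen3-ASR on port 8788 (localhost only), best for Chinese/Cantonese
  qwen = OpenAI(base_url="http://127.0.0.1:8788/v1", api_key="unused")
  with open("audio.wav", "rb") as f:
      print(qwen.audio.transcriptions.create(
          model="mlx-community/Qwen3-ASR-1.7B-8bit", file=f,
          language="zh").text)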

ASR Model Comparison

  Feature             Qwen3-ASR (port 8788)   Whisper (port 8787)
  Chinese/Cantonese   Strong                  Average
  Multilingual        No                      Yes (99 langs)
  LAN access          No (localhost only)     Yes
  Loading             On-demand               Always loaded

Supported audio formats

wav, mp3, m4a, flac, ogg, webm

Long audio

Split into 10-minute chunks first:

  ffmpeg -y -ss 0 -t 600 -i long.wav -ar 16000 -ac 1 chunk_000.wav
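
To chunk a whole file automatically, that command can be looped over offsets until the end of the audio. A minimal Python sketch using ffprobe for the duration (both ffmpeg and ffprobe must be on PATH):

  import math
  import subprocess

  CHUNK_SECS = 600  # 10-minute chunks

  def split_audio(path: str) -> list[str]:
      # Ask ffprobe for the total duration in seconds.
      probe = subprocess.run(
          ["ffprobe", "-v", "error", "-show_entries", "format=duration",
           "-of", "default=noprint_wrappers=1:nokey=1", path],
          capture_output=True, text=True, check=True)
      duration = float(probe.stdout.strip())
      chunks = []
      for i in range(math.ceil(duration / CHUNK_SECS)):
          name = f"chunk_{i:03d}.wav"
          subprocess.run(
              ["ffmpeg", "-y", "-ss", str(i * CHUNK_SECS), "-t", str(CHUNK_SECS),
               "-i", path, "-ar", "16000", "-ac", "1", name],
              capture_output=True, check=True)
          chunks.append(name)
      return chunks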

Embedding Models

  • qwen3-embedding-0.6b: 0.6B, 4-bit. Fast retrieval, low latency.
  • qwen3-embedding-4b: 4B, 4-bit. High-accuracy semantic matching.

API

  curl -X POST http://localhost:8787/v1/embeddings \
    -H "Content-Type: application/json" \
    -d '{"model": "qwen3-embedding-0.6b", "input": "text to embed"}'

Batch

  curl -X POST http://localhost:8787/v1/embeddings \
    -H "Content-Type: application/json" \
    -d '{"model": "qwen3-embedding-4b", "input": ["text 1", "text 2"]}'
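
Retrieval on top of these vectors is plain cosine similarity. A sketch against the embeddings route via the openai client (the example documents are illustrative):

  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")

  def embed(texts: list[str]) -> list[list[float]]:
      resp = client.embeddings.create(model="qwen3-embedding-0.6b", input=texts)
      return [d.embedding for d in resp.data]

  def cosine(a, b):
      dot = sum(x * y for x, y in zip(a, b))
      return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

  docs = ["MLX runs on Apple Silicon", "The weather is sunny today"]
  query_vec, *doc_vecs = embed(["local inference on a Mac"] + docs)
  best = max(range(len(docs)), key=lambda i: cosine(query_vec, doc_vecs[i]))
  print(docs[best])  # expected: the MLX document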

OCR Default Model: PaddleOCR-VL-1.5-6bit

  • Model ID: paddleocr-vl-6bit
  • Speed: ~185 t/s
  • Memory: ~3.3 GB
  • Prompt: OCR:

CLI

  cd ~/.mlx-server/venv
  python -m mlx_vlm.generate \
    --model mlx-community/PaddleOCR-VL-1.5-6bit \
    --image image.jpg \
    --prompt "OCR:" \
    --max-tokens 512 --temp 0.0

Python

  from mlx_vlm import generate, load
  from mlx_vlm.prompt_utils import apply_chat_template
  from mlx_vlm.utils import load_config

  model, processor = load("mlx-community/PaddleOCR-VL-1.5-6bit")
  config = load_config("mlx-community/PaddleOCR-VL-1.5-6bit")
  prompt = apply_chat_template(processor, config, "OCR:", num_images=1)
  out = generate(model, processor, prompt, "image.jpg",
                 max_tokens=512, temperature=0.0, verbose=False)
  print(out.text if hasattr(out, "text") else out)

Notes

  • The prompt must be exactly OCR: for PaddleOCR-VL.
  • Use temperature=0.0 for deterministic output.
  • RGBA images must be converted to RGB first.
  • Venv: ~/.mlx-server/venv
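
For the RGBA caveat, a short Pillow conversion before handing the image to the model (assumes Pillow is available in the venv):

  from PIL import Image

  img = Image.open("image.png")
  if img.mode == "RGBA":
      # Flatten the alpha channel; PaddleOCR-VL expects RGB input.
      img = img.convert("RGB")
  img.save("image_rgb.jpg")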

TTS Model: Qwen3-TTS (cached, not auto-served)

  • Model: Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit
  • Memory: ~2 GB
  • Feature: custom voice cloning

CLI

  ~/.mlx-server/venv/bin/mlx_audio.tts.generate \
    --model mlx-community/Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit \
    --text "你好,这是一段测试语音"

As API (via mlx_audio.server on port 8788)

  curl -X POST http://127.0.0.1:8788/v1/audio/speech \
    -H "Content-Type: application/json" \
    -d '{
      "model": "mlx-community/Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit",
      "input": "你好世界"
    }' \
    --output speech.wav
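
The same request from Python, writing the returned audio bytes to disk. A sketch using requests; the WAV response format is assumed from the curl example:

  import requests

  resp = requests.post(
      "http://127.0.0.1:8788/v1/audio/speech",
      json={
          "model": "mlx-community/Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit",
          "input": "你好世界",
      },
      timeout=120,
  )
  resp.raise_for_status()
  with open("speech.wav", "wb") as f:
      f.write(resp.content)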

Transcribe Daemon: Automatic Batch Transcription

Drop audio files into ~/transcribe/ for automatic processing:

  • The daemon detects new files (polls every 15s).
  • Phase 1: transcribe via Qwen3-ASR → filename_raw.md
  • Phase 2: correct via the Qwen3-14B LLM → filename_corrected.md
  • Results are moved to ~/transcribe/done/.
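
Programmatic use is just a file copy plus a wait for the corrected transcript. A hypothetical polling sketch; the output path follows the _corrected.md naming and done/ location described above:

  import shutil
  import time
  from pathlib import Path

  inbox = Path.home() / "transcribe"
  src = Path("meeting.m4a")
  shutil.copy(src, inbox / src.name)

  corrected = inbox / "done" / f"{src.stem}_corrected.md"
  while not corrected.exists():
      time.sleep(15)  # the daemon itself polls every 15s
  print(corrected.read_text())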

LLM Correction Rules

  • Fix homophone errors (的/得/地, 在/再)
  • Preserve Cantonese characters (嘅、唔、咁、喺、冇、佢)
  • Add punctuation and paragraph breaks
  • Remove filler words

Supported formats

wav, mp3, m4a, flac, ogg, webm

Service Management

  # LLM + Whisper + Embedding server (port 8787)
  launchctl kickstart -k gui/$(id -u)/com.mlx-server

  # ASR server (port 8788)
  launchctl kickstart -k gui/$(id -u)/com.mlx-audio-server

  # Transcribe daemon
  launchctl kickstart gui/$(id -u)/com.mlx-transcribe-daemon

  # Logs
  tail -f ~/.mlx-server/logs/server.log
  tail -f ~/.mlx-server/logs/mlx-audio-server.err.log
  tail -f ~/.mlx-server/logs/transcribe-daemon.err.log

Requirements

  • Apple Silicon Mac (M1/M2/M3/M4)
  • Python 3.10+ with mlx, mlx-lm, mlx-audio, mlx-vlm
  • Recommended: 32GB+ RAM for running multiple models

Category context

Messaging, meetings, inboxes, CRM, and teammate communication surfaces.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
6 Docs
  • SKILL.md Primary doc
  • README_CN.md Docs
  • README.md Docs
  • references/asr-qwen3.md Docs
  • references/asr-whisper.md Docs
  • references/embedding-qwen3.md Docs