Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Full local AI inference stack on Apple Silicon Macs via MLX. Includes: LLM chat (Qwen3-14B, Gemma3-12B), speech-to-text ASR (Qwen3-ASR, Whisper), text embeddings (Qwen3-Embedding), OCR (PaddleOCR-VL), TTS (Qwen3-TTS), and a file-based transcription daemon.
Hand the extracted package to your coding agent with a concrete install brief instead of working through the steps yourself.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Full local AI inference on Apple Silicon Macs. All services expose OpenAI-compatible APIs.
| Service | Port | Access | Models |
|---|---|---|---|
| LLM + Whisper + Embedding | 8787 | LAN (0.0.0.0) | qwen3-14b, gemma-3-12b, whisper-large-v3-turbo, qwen3-embedding-0.6b/4b |
| ASR (Qwen3-ASR) | 8788 | localhost only | Qwen3-ASR-1.7B-8bit |
| Transcribe Daemon | — | file-based | Uses ASR + LLM |

LaunchAgents: com.mlx-server (8787), com.mlx-audio-server (8788), com.mlx-transcribe-daemon
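Since everything on port 8787 speaks the OpenAI protocol, a quick way to confirm the server is up is to list its models. A minimal sketch, assuming the server also implements the standard `/v1/models` listing endpoint (this doc only shows chat, transcription, and embedding routes):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")

# Print whatever models the server advertises
for model in client.models.list():
    print(model.id)
```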
| Model ID | Params | Best For |
|---|---|---|
| qwen3-14b | 14B 4bit | Chinese, deep reasoning (built-in think mode) |
| gemma-3-12b | 12B 4bit | English, code generation |
```bash
curl -X POST http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-14b",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "max_tokens": 2048
  }'
```

Add `"stream": true` for streaming.
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")
response = client.chat.completions.create(
    model="qwen3-14b",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.7,
    max_tokens=2048,
)
print(response.choices[0].message.content)
```
Qwen3 may include `<think>...</think>` chain-of-thought tags. Strip them:

```python
import re

text = re.sub(r'<think>.*?</think>\s*', '', text, flags=re.DOTALL)
```
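In practice you will usually want the chat call and the think-tag cleanup in one place. A minimal sketch; the `ask` helper is a name of my own, not part of the installed stack:

```python
import re

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")

def ask(prompt: str, model: str = "qwen3-14b") -> str:
    """One chat turn; returns the reply with any <think> block stripped."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=2048,
    )
    text = response.choices[0].message.content
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

print(ask("Hello"))
```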
| Scenario | Recommended |
|---|---|
| Chinese text | qwen3-14b |
| Cantonese | qwen3-14b |
| English writing | gemma-3-12b |
| Code generation | Either |
| Deep reasoning | qwen3-14b (think mode) |
| Quick Q&A | gemma-3-12b |
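If you route requests programmatically, the table translates directly into a lookup. A hypothetical helper; the scenario keys and the choice of gemma-3-12b for code are mine (the table says either model works there):

```python
# Hypothetical routing table mirroring the guidance above
ROUTING = {
    "chinese": "qwen3-14b",
    "cantonese": "qwen3-14b",
    "english_writing": "gemma-3-12b",
    "code": "gemma-3-12b",          # table says either; one picked here
    "deep_reasoning": "qwen3-14b",  # built-in think mode
    "quick_qa": "gemma-3-12b",
}

def pick_model(scenario: str) -> str:
    """Fall back to qwen3-14b for unknown scenarios."""
    return ROUTING.get(scenario, "qwen3-14b")
```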
```bash
curl -X POST http://127.0.0.1:8788/v1/audio/transcriptions \
  -F "file=@audio.wav" \
  -F "model=mlx-community/Qwen3-ASR-1.7B-8bit" \
  -F "language=zh"
```
```bash
curl -X POST http://localhost:8787/v1/audio/transcriptions \
  -F "file=@audio.wav" \
  -F "model=whisper-large-v3-turbo"
```
| | Qwen3-ASR (port 8788) | Whisper (port 8787) |
|---|---|---|
| Chinese/Cantonese | Strong | Average |
| Multilingual | No | Yes (99 langs) |
| LAN access | No (localhost) | Yes |
| Loading | On-demand | Always loaded |
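The same transcription endpoints are reachable from Python through the OpenAI SDK's audio API. A sketch assuming the servers accept the same multipart fields the curl calls above use:

```python
from openai import OpenAI

# Qwen3-ASR on port 8788 (localhost only)
asr = OpenAI(base_url="http://127.0.0.1:8788/v1", api_key="unused")
with open("audio.wav", "rb") as f:
    result = asr.audio.transcriptions.create(
        model="mlx-community/Qwen3-ASR-1.7B-8bit",
        file=f,
        language="zh",
    )
print(result.text)
```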
Supported formats: wav, mp3, m4a, flac, ogg, webm
Split into 10-min chunks first:

```bash
ffmpeg -y -ss 0 -t 600 -i long.wav -ar 16000 -ac 1 chunk_000.wav
```
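For a whole long file, the split-then-transcribe loop can be scripted. A sketch under the assumptions above (Qwen3-ASR on 8788, 10-min chunks); `long.wav` and the chunk naming are illustrative:

```python
import glob
import subprocess

from openai import OpenAI

CHUNK_SECS = 600
asr = OpenAI(base_url="http://127.0.0.1:8788/v1", api_key="unused")

# Probe total duration, then cut sequential 16 kHz mono chunks with ffmpeg
duration = float(subprocess.check_output(
    ["ffprobe", "-v", "error", "-show_entries", "format=duration",
     "-of", "csv=p=0", "long.wav"]))
for i, start in enumerate(range(0, int(duration), CHUNK_SECS)):
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(CHUNK_SECS),
         "-i", "long.wav", "-ar", "16000", "-ac", "1",
         f"chunk_{i:03d}.wav"],
        check=True)

# Transcribe each chunk in order and join the text
parts = []
for path in sorted(glob.glob("chunk_*.wav")):
    with open(path, "rb") as f:
        parts.append(asr.audio.transcriptions.create(
            model="mlx-community/Qwen3-ASR-1.7B-8bit",
            file=f, language="zh").text)
print("\n".join(parts))
```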
| Model ID | Size | Use Case |
|---|---|---|
| qwen3-embedding-0.6b | 0.6B 4bit | Fast retrieval, low latency |
| qwen3-embedding-4b | 4B 4bit | High-accuracy semantic matching |
```bash
curl -X POST http://localhost:8787/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-embedding-0.6b", "input": "text to embed"}'
```
```bash
curl -X POST http://localhost:8787/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-embedding-4b", "input": ["text 1", "text 2"]}'
```
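A common use of the embedding endpoint is semantic similarity. A minimal sketch with the OpenAI SDK and plain cosine similarity; the two sentences are illustrative:

```python
import math

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8787/v1", api_key="unused")

resp = client.embeddings.create(
    model="qwen3-embedding-0.6b",
    input=["the cat sat on the mat", "a feline rested on the rug"],
)
a, b = (item.embedding for item in resp.data)

# Cosine similarity: dot product over the product of norms
dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
print(f"cosine similarity: {dot / norm:.3f}")
```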
| Item | Value |
|---|---|
| Model ID | paddleocr-vl-6bit |
| Speed | ~185 t/s |
| Memory | ~3.3 GB |
| Prompt | `OCR:` |
```bash
cd ~/.mlx-server/venv
python -m mlx_vlm.generate \
  --model mlx-community/PaddleOCR-VL-1.5-6bit \
  --image image.jpg \
  --prompt "OCR:" \
  --max-tokens 512 --temp 0.0
```
```python
from mlx_vlm import generate, load
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model, processor = load("mlx-community/PaddleOCR-VL-1.5-6bit")
config = load_config("mlx-community/PaddleOCR-VL-1.5-6bit")
prompt = apply_chat_template(processor, config, "OCR:", num_images=1)
out = generate(model, processor, prompt, "image.jpg",
               max_tokens=512, temperature=0.0, verbose=False)
print(out.text if hasattr(out, "text") else out)
```
- Prompt must be exactly `OCR:` for PaddleOCR-VL
- `temperature=0.0` for deterministic output
- RGBA images must be converted to RGB first (see the snippet below)
- Venv: `~/.mlx-server/venv`
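The RGBA-to-RGB note above can be handled with Pillow before invoking the OCR model. A minimal sketch; the filenames are illustrative:

```python
from PIL import Image

img = Image.open("image.png")
# PaddleOCR-VL expects RGB; flatten alpha if present
if img.mode != "RGB":
    img = img.convert("RGB")
img.save("image_rgb.jpg")
```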
| Item | Value |
|---|---|
| Model | Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit |
| Memory | ~2GB |
| Feature | Custom voice cloning |
```bash
# Generate speech from Chinese text ("Hello, this is a test voice clip")
~/.mlx-server/venv/bin/mlx_audio.tts.generate \
  --model mlx-community/Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit \
  --text "你好,这是一段测试语音"
```
```bash
curl -X POST http://127.0.0.1:8788/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit",
    "input": "你好世界"
  }' --output speech.wav
```
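The same speech endpoint works from Python. A sketch using `requests` (assumed installed; it is not listed in the prerequisites), mirroring the JSON body of the curl call above:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:8788/v1/audio/speech",
    json={
        "model": "mlx-community/Qwen3-TTS-12Hz-1.7B-CustomVoice-8bit",
        "input": "你好世界",  # "Hello, world"
    },
    timeout=300,
)
resp.raise_for_status()
with open("speech.wav", "wb") as f:
    f.write(resp.content)
```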
Drop audio files into `~/transcribe/` for automatic processing (a scripted usage sketch follows the format list below):

1. Daemon detects file (polls every 15s)
2. Phase 1: Transcribe via Qwen3-ASR → `filename_raw.md`
3. Phase 2: Correct via Qwen3-14B LLM → `filename_corrected.md`
4. Move results to `~/transcribe/done/`
The Phase 2 LLM correction pass:
- Fixes homophone errors (的/得/地, 在/再)
- Preserves Cantonese characters (嘅、唔、咁、喺、冇、佢)
- Adds punctuation and paragraphs
- Removes filler words
Supported formats: wav, mp3, m4a, flac, ogg, webm
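Scripting against the daemon is just a copy plus a wait. A sketch under the workflow described above; the exact output path in `done/` is an assumption from the naming scheme, and `meeting.m4a` is illustrative:

```python
import shutil
import time
from pathlib import Path

src = Path("meeting.m4a")
inbox = Path.home() / "transcribe"
shutil.copy(src, inbox / src.name)

# Assumed result location: done/<stem>_corrected.md
corrected = inbox / "done" / f"{src.stem}_corrected.md"
while not corrected.exists():
    time.sleep(15)  # daemon polls every 15s
print(corrected.read_text())
```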
```bash
# LLM + Whisper + Embedding server (port 8787)
launchctl kickstart -k gui/$(id -u)/com.mlx-server

# ASR server (port 8788)
launchctl kickstart -k gui/$(id -u)/com.mlx-audio-server

# Transcribe daemon
launchctl kickstart gui/$(id -u)/com.mlx-transcribe-daemon

# Logs
tail -f ~/.mlx-server/logs/server.log
tail -f ~/.mlx-server/logs/mlx-audio-server.err.log
tail -f ~/.mlx-server/logs/transcribe-daemon.err.log
```
- Apple Silicon Mac (M1/M2/M3/M4)
- Python 3.10+ with mlx, mlx-lm, mlx-audio, mlx-vlm
- Recommended: 32GB+ RAM for running multiple models