Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Give your AI agent a 3D VRM avatar body with animations, expressions, voice chat, and lip sync. Use when the user wants a visual avatar, VRM viewer, avatar companion, VTuber-style character, or 3D character they can talk to. Installs a web-based viewer controllable via WebSocket.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install prompt:
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade prompt:
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Give your AI agent a body. Web-based VRM avatar with 162 animations, expressions, TTS lip sync, and AI chat.
```bash
# Clone and install
git clone https://github.com/Dongping-Chen/Clawatar.git ~/.openclaw/workspace/clawatar
cd ~/.openclaw/workspace/clawatar && npm install

# Start (Vite + WebSocket server)
npm run start
```

Opens at http://localhost:3000 with WS control at ws://localhost:8765. Users must provide their own VRM model: drag & drop it onto the page, or set model.url in clawatar.config.json (see the sketch below).
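For the config route, a minimal clawatar.config.json might look like the sketch below. Only the model.url key comes from the note above; the .vrm URL is a placeholder, and any other keys (ports, voice) should be checked against the file shipped in the repo.

```json
{
  "model": {
    "url": "https://example.com/avatars/my-avatar.vrm"
  }
}
```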
Send JSON to ws://localhost:8765:
{"type": "play_action", "action_id": "161_Waving"}
{"type": "set_expression", "name": "happy", "weight": 0.8} Expressions: happy, angry, sad, surprised, relaxed
{"type": "speak", "text": "Hello!", "action_id": "161_Waving", "expression": "happy"}
{"type": "reset"}
| Mood | Action ID |
| --- | --- |
| Greeting | 161_Waving |
| Happy | 116_Happy Hand Gesture |
| Thinking | 88_Thinking |
| Agreeing | 118_Head Nod Yes |
| Disagreeing | 144_Shaking Head No |
| Laughing | 125_Laughing |
| Sad | 142_Sad Idle |
| Dancing | 105_Dancing, 143_Samba Dancing, 164_Ymca Dance |
| Thumbs Up | 153_Standing Thumbs Up |
| Idle | 119_Idle |

Full list: public/animations/catalog.json (162 animations)
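If an agent picks animations by mood, the table can be encoded once as a lookup. A minimal sketch follows; the entries mirror the table above, while the idle fallback and the random choice among the dancing clips are illustrative policies, not Clawatar behavior.

```js
// mood-actions.js: lookup from agent-level moods to animation action_ids.
const MOOD_ACTIONS = {
  greeting: ["161_Waving"],
  happy: ["116_Happy Hand Gesture"],
  thinking: ["88_Thinking"],
  agreeing: ["118_Head Nod Yes"],
  disagreeing: ["144_Shaking Head No"],
  laughing: ["125_Laughing"],
  sad: ["142_Sad Idle"],
  dancing: ["105_Dancing", "143_Samba Dancing", "164_Ymca Dance"],
  thumbsUp: ["153_Standing Thumbs Up"],
  idle: ["119_Idle"],
};

// Pick an action_id for a mood, falling back to idle for unknown moods.
function actionForMood(mood) {
  const options = MOOD_ACTIONS[mood] || MOOD_ACTIONS.idle;
  return options[Math.floor(Math.random() * options.length)];
}

module.exports = { MOOD_ACTIONS, actionForMood };
```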
To smoke-test the WebSocket connection from the shell:

```bash
cd ~/.openclaw/workspace/clawatar && node -e "
const W=require('ws'),s=new W('ws://localhost:8765');
s.on('open',()=>{s.send(JSON.stringify({type:'speak',text:'Hello!',action_id:'161_Waving',expression:'happy'}));setTimeout(()=>s.close(),1000)})
"
```
- Touch reactions: click the avatar's head or body for reactions
- Emotion bar: quick 😊😢😠😮😌💃 buttons
- Background scenes: Sakura Garden, Night Sky, Café, Sunset
- Camera presets: Face, Portrait, Full Body, Cinematic
- Voice chat: mic input → AI response → TTS lip sync
Edit clawatar.config.json to change ports, voice settings, and the model URL. TTS requires an ElevenLabs API key, supplied either via the ELEVENLABS_API_KEY environment variable or in ~/.openclaw/openclaw.json under skills.entries.sag.apiKey (see the sketch below).
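For the openclaw.json route, the key path quoted above suggests a layout like the following sketch. Only the skills.entries.sag.apiKey path comes from the docs; the surrounding structure and the placeholder key value are assumptions.

```json
{
  "skills": {
    "entries": {
      "sag": {
        "apiKey": "your-elevenlabs-api-key"
      }
    }
  }
}
```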
- Animations are from Mixamo (credit required, non-commercial use)
- VRM model not included (BYOM: Bring Your Own Model)
- Works standalone without OpenClaw; AI chat is optional