Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Routes tasks between GLM-4.7-FlashX for simple queries and GLM-5 for coding, analysis, reasoning, and complex tasks, switching automatically as needed.
Hand the extracted package to your coding agent with a concrete install brief instead of working through the installation manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Binary model routing for ZAI GLM models - lightweight vs heavyweight tasks.
GLM-4.7 is the default model. Only spawn GLM-5 when the task actually needs it. Use sessions_spawn to run a task with GLM-5:

```js
sessions_spawn({
  task: "<the full task description>",
  model: "zai/glm-5",
  label: "<short task label>"
})
```

After the GLM-5 task finishes, the main session continues with GLM-4.7 as the default.
Use GLM-4.7 for lightweight tasks:

1. Simple Q&A (what, when, who, where)
2. Casual chat, no reasoning needed
3. Quick lookups
4. File lookups
5. Simple, repetitive tasks (e.g. formatting)
6. Cron jobs (if one needs reasoning, escalate to GLM-5)
7. Status checks
8. Basic confirmations

Provide concise output: just the plain answer, no explaining.

DO NOT with GLM-4.7:
- Do NOT code
- Do NOT analyze
- Do NOT attempt any reasoning
- Do NOT research

If the request does not fall into points 1-8, escalate to GLM-5. If you would violate the DO NOT list, escalate to GLM-5.
Use GLM-5 for heavyweight tasks:

- Coding (any complexity)
- Analysis & debugging
- Multi-step reasoning
- Research & investigation
- Critical planning
- Architecture decisions
- Complex problem solving
- Deep research
- Critical decisions
- Detailed explanations
| Task | Model | Why |
| --- | --- | --- |
| "Check calendar" | GLM-4.7 | Simple lookup |
| "What time is it?" | GLM-4.7 | Simple Q&A |
| "Heartbeat check" | GLM-4.7 | Routine |
| "Read this file" | GLM-4.7 | Simple lookup |
| "Summarize this" | GLM-4.7 | Basic task |
| "Write Python script" | GLM-5 | Coding |
| "Debug this error" | GLM-5 | Analysis |
| "Research market trends" | GLM-5 | Deep research |
| "Plan migration" | GLM-5 | Complex planning |
| "Analyze this issue" | GLM-5 | Analysis |
- When the user asks for a specific model, use it.
- Always mention which model was used, e.g. "(GLM-5)" or "(GLM-4.7)" at the end of responses.
- After finishing with GLM-5 (via sessions_spawn), continue with GLM-4.7 as the default.
- If the request does not fall into the GLM-4.7 use cases, escalate to GLM-5.
- If you would violate the DO NOT list, escalate to GLM-5.
- Coding is always GLM-5.
- When in doubt, use GLM-5 (better safe than sorry).
- Heartbeat checks are always GLM-4.7 unless complex analysis is needed.
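The routing rules above can be sketched as a small keyword classifier. This is an illustrative sketch only, not part of the skill package: `pickModel` and the two keyword lists are hypothetical, and a real agent would apply the full rule set rather than substring matching. The one property it does preserve is the default: anything unrecognized escalates to GLM-5.

```javascript
// Hypothetical sketch of the binary routing policy (not the skill's actual code).
const HEAVY = "zai/glm-5";
const LIGHT = "zai/glm-4.7";

// Hints that force escalation to GLM-5 (coding, analysis, research, planning).
const HEAVY_HINTS = ["code", "script", "debug", "analyze", "research", "plan", "architecture"];
// Hints that mark clearly lightweight tasks (lookups, status, confirmations).
const LIGHT_HINTS = ["what time", "check calendar", "heartbeat", "status", "read this file", "summarize"];

function pickModel(task) {
  const t = task.toLowerCase();
  if (HEAVY_HINTS.some((k) => t.includes(k))) return HEAVY; // DO NOT list wins first
  if (LIGHT_HINTS.some((k) => t.includes(k))) return LIGHT; // lightweight points 1-8
  return HEAVY; // unknown task: when in doubt, GLM-5
}
```

Checking heavy hints before light ones mirrors the rule that the DO NOT list overrides the lightweight categories, and the fall-through return encodes "better safe than sorry".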
When spawning GLM-5 sub-agent sessions for ANY task (coding, research, analysis, planning, etc.), follow this pattern:
1. Code output (important)
   - Full code goes ONLY in files; do NOT include it in the announce unless explicitly requested.
   - Provide a summary: what was created, file path, status, dependencies.
   - Disclose full code ONLY when the user explicitly requests it ("Show me the code"), debugging needs code review, or the user wants to improve or modify it.
2. Full announce for other results
   - Research findings, analysis results, and solutions are announced FULLY to the user.
   - Do NOT shorten, summarize, or condense non-code output; the user gets complete findings, not a brief summary.
3. Two-layer memory strategy
   - MEMORY.md (curated long-term): ONLY key insights, decisions, lessons, significant findings, and preferences. Keep it clean, concise, and actionable; skip routine data, step-by-step reasoning, and temporary thoughts.
   - Detailed reports (task-specific files): for research, research/YYYY-MM-DD-topic.md (full findings, data, analysis); for coding, inline docs or a README in the code folder if needed; for analysis, output files in the relevant project directories.
Research task:

```js
sessions_spawn({
  task: "Research X. Announce full findings to user. Write full report to research/YYYY-MM-DD-X.md, then write ONLY key insights to MEMORY.md (clean, concise).",
  model: "zai/glm-5",
  label: "Research X"
})
```

Coding task:

```js
sessions_spawn({
  task: "Write Python script for X. Save full code to file. Provide summary (what created, path, status, dependencies) in announce. Write key implementation decisions to MEMORY.md (important only).",
  model: "zai/glm-5",
  label: "Python script X"
})
```

Apply this pattern to ALL GLM-5 spawns: code in files only, summary in the announce, full disclosure on request.
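The research/YYYY-MM-DD-topic.md convention above can be generated rather than typed by hand. The helper below is a hypothetical sketch (`reportPath` is not part of the skill package); it dates the file from the current day and slugifies the topic so paths stay filesystem-safe.

```javascript
// Hypothetical helper for the two-layer memory strategy: build the dated
// report path for a research task. Illustrative only, not the skill's code.
function reportPath(topic, date = new Date()) {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs to "-"
    .replace(/(^-|-$)/g, "");    // trim leading/trailing dashes
  return `research/${day}-${slug}.md`;
}
```

A spawn brief could then embed `reportPath("Market Trends 2025")` instead of a hand-written filename, keeping the naming convention consistent across tasks.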
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.