{
  "schemaVersion": "1.0",
  "item": {
    "slug": "lm-studio-subagents",
    "name": "Offload Tasks to LM Studio Models",
    "source": "tencent",
    "type": "skill",
    "category": "Developer Tools",
    "sourceUrl": "https://clawhub.ai/t-sinclair2500/lm-studio-subagents",
    "canonicalUrl": "https://clawhub.ai/t-sinclair2500/lm-studio-subagents",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/lm-studio-subagents",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=lm-studio-subagents",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "scripts/test.mjs",
      "scripts/unload.mjs",
      "scripts/load.mjs",
      "scripts/lmstudio-api.mjs",
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=lm-studio-subagents",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=lm-studio-subagents",
        "contentDisposition": "attachment; filename=\"lm-studio-subagents-1.0.3.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/lm-studio-subagents"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/lm-studio-subagents",
    "agentPageUrl": "https://openagent3.xyz/skills/lm-studio-subagents/agent",
    "manifestUrl": "https://openagent3.xyz/skills/lm-studio-subagents/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/lm-studio-subagents/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "LM Studio Models",
        "body": "Offload tasks to local models when quality suffices. Base URL: http://127.0.0.1:1234. Auth: Authorization: Bearer lmstudio. instance_id = loaded_instances[].id (same model can have multiple, e.g. key and key:2)."
      },
      {
        "title": "Key Terms",
        "body": "model: the key from GET /api/v1/models; use in chat and optional load.\nlm_studio_api_url: Default http://127.0.0.1:1234 (paths /api/v1/...).\nresponse_id / previous_response_id: Chat returns response_id; pass as previous_response_id for stateful.\ninstance_id: For unload, use only the value from GET /api/v1/models for that model: each loaded_instances[].id. Do not assume it equals the model key; with multiple instances ids can be like key:2. LM Studio docs: List (loaded_instances[].id), Unload (instance_id).\n\nTrigger in frontmatter; below = implementation."
      },
      {
        "title": "Prerequisites",
        "body": "LM Studio 0.4+, server :1234, models on disk; load/unload via API (JIT optional); Node for script (curl ok)."
      },
      {
        "title": "Quick start",
        "body": "Minimal path: list models, then one chat. Replace <model> with a key from GET /api/v1/models and <task> with the task text.\n\ncurl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\nnode scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.5 --max-output-tokens=200\n\nStateful multi-turn: pass --previous-response-id=<id> from the prior script output. Or use --stateful to persist response_id automatically. Optional --log <path> for request/response.\n\nnode scripts/lmstudio-api.mjs <model> 'First turn...' --previous-response-id=$ID1\nnode scripts/lmstudio-api.mjs <model> 'Second turn...' --previous-response-id=$ID2"
      },
      {
        "title": "Step 0: Preflight",
        "body": "GET <base>/api/v1/models; non-200 or connection error = server not ready.\n\nexec command:\"curl -s -o /dev/null -w '%{http_code}' -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\""
      },
      {
        "title": "Step 1: List Models and Select",
        "body": "GET /api/v1/models to list models. Parse each entry: key, type, loaded_instances, max_context_length, capabilities, params_string. If a model already has loaded_instances.length > 0 and fits the task, skip to Step 5; otherwise pick a key for chat (and optional load in Step 3). Choose by task: vision -> capabilities.vision; embedding -> type=embedding; context -> max_context_length. Prefer already-loaded; prefer smaller for speed, larger for reasoning. Note loaded_instances[].id for optional unload later.\n\nExample: list models.\n\nexec command:\"curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\""
      },
      {
        "title": "Step 2: Model Selection",
        "body": "Pick key from GET response; use as model in chat (optional load). Constraints: vision -> capabilities.vision; embedding -> type=embedding; context -> max_context_length. Prefer loaded (loaded_instances non-empty), smaller for speed/larger for reasoning; fallback primary. If unsure, use the first loaded instance for that key or the smallest loaded model that fits the task. Optional POST load; else JIT on first chat."
      },
      {
        "title": "Step 3: Load Model (optional)",
        "body": "Optional: POST /api/v1/models/load { model, context_length?, ... }. Or run scripts/load.mjs <model>. JIT: first chat loads; explicit load only for specific options."
      },
      {
        "title": "Step 4: Verify Loaded (optional)",
        "body": "If explicit load: GET models, confirm loaded_instances. If JIT: no verify; first chat returns model_instance_id, stats.model_load_time_seconds."
      },
      {
        "title": "Step 5: Call API",
        "body": "From the skill folder: node scripts/lmstudio-api.mjs <model> '<task>' [options].\n\nexec command:\"node scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.7 --max-output-tokens=2000\"\n\nStateful: add --previous-response-id=<response_id>. Curl: POST <base>/api/v1/chat, body model, input, store, temperature, max_output_tokens; optional previous_response_id. Parse: output (type message) -> content; response_id, model_instance_id, stats. Script outputs content, model_instance_id, response_id, usage."
      },
      {
        "title": "Step 6: Unload (optional)",
        "body": "For the model key you used: GET /api/v1/models, then for each loaded_instances[].id for that model, POST /api/v1/models/unload with body {\"instance_id\": \"<that id>\"}. Use the id from the response only (do not send the model key unless it exactly equals that id). Or run scripts/unload.mjs <model_key> (script does GET then unloads each instance id). Optional --unload-after (default off); use --keep to leave loaded. Unload only that model's instances. JIT+TTL auto-unload; explicit when needed.\n\n# One unload per instance_id; repeat for each id in that model's loaded_instances\nexec command:\"curl -s -X POST http://127.0.0.1:1234/api/v1/models/unload -H 'Content-Type: application/json' -H 'Authorization: Bearer lmstudio' -d '{\\\"instance_id\\\": \\\"<instance_id>\\\"}'\""
      },
      {
        "title": "Step 7: Verify unload (optional)",
        "body": "After unloading, confirm no instances remain for that model key. Run the jq check below; result must be 0. If non-zero, unload the remaining instance_id(s) from that model and re-run the check. Do not infer from \"model object exists\"; the object still exists with an empty loaded_instances array.\n\nexec command:\"curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models | jq '.models[]|select(.key==\\\"<model_key>\\\")|.loaded_instances|length'\"\n\nExpect output 0. If not, unload remaining instance_ids and re-run."
      },
      {
        "title": "Error Handling",
        "body": "Script retries on transient failure (2-3 attempts with backoff).\nModel not found -> pick another model from GET response.\nAPI/server errors -> GET models, check URL.\nInvalid output -> retry.\nMemory -> unload or smaller model.\nUnload fails -> instance_id must be exactly from GET /api/v1/models for that model's loaded_instances[].id (not the model key unless it matches)."
      },
      {
        "title": "Copy-paste",
        "body": "Replace <model> with a key from GET /api/v1/models and <task> with the task text. Optional unload per Step 6 (instance_id from GET models for that key).\n\nexec command:\"curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\"\nexec command:\"node scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.7 --max-output-tokens=2000\""
      },
      {
        "title": "LM Studio API Details",
        "body": "Helper/API: see Step 5. Output: content, model_instance_id, response_id, usage. Auth: Bearer lmstudio. List GET /api/v1/models. Load POST /api/v1/models/load (optional). Unload POST /api/v1/models/unload { instance_id }."
      },
      {
        "title": "Scripts",
        "body": "lmstudio-api.mjs: chat; options --stateful, --unload-after, --keep, --log <path>, --previous-response-id, --temperature, --max-output-tokens. load.mjs: load model by key. unload.mjs: unload by model key (all instances). test.mjs: smoke test (load, chat, unload one model)."
      },
      {
        "title": "Notes",
        "body": "LM Studio 0.4+.\nJIT (first chat loads; model_load_time_seconds in stats); stateful (response_id / previous_response_id)."
      }
    ],
    "body": "LM Studio Models\n\nOffload tasks to local models when quality suffices. Base URL: http://127.0.0.1:1234. Auth: Authorization: Bearer lmstudio. instance_id = loaded_instances[].id (same model can have multiple, e.g. key and key:2).\n\nKey Terms\nmodel: the key from GET /api/v1/models; use in chat and optional load.\nlm_studio_api_url: Default http://127.0.0.1:1234 (paths /api/v1/...).\nresponse_id / previous_response_id: Chat returns response_id; pass as previous_response_id for stateful.\ninstance_id: For unload, use only the value from GET /api/v1/models for that model: each loaded_instances[].id. Do not assume it equals the model key; with multiple instances ids can be like key:2. LM Studio docs: List (loaded_instances[].id), Unload (instance_id).\n\nTrigger in frontmatter; below = implementation.\n\nPrerequisites\n\nLM Studio 0.4+, server :1234, models on disk; load/unload via API (JIT optional); Node for script (curl ok).\n\nQuick start\n\nMinimal path: list models, then one chat. Replace <model> with a key from GET /api/v1/models and <task> with the task text.\n\ncurl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\nnode scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.5 --max-output-tokens=200\n\nStateful multi-turn: pass --previous-response-id=<id> from the prior script output. Or use --stateful to persist response_id automatically. Optional --log <path> for request/response.\n\nnode scripts/lmstudio-api.mjs <model> 'First turn...' --previous-response-id=$ID1\nnode scripts/lmstudio-api.mjs <model> 'Second turn...' --previous-response-id=$ID2\n\nComplete Workflow\nStep 0: Preflight\n\nGET <base>/api/v1/models; non-200 or connection error = server not ready.\n\nexec command:\"curl -s -o /dev/null -w '%{http_code}' -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\"\n\nStep 1: List Models and Select\n\nGET /api/v1/models to list models. Parse each entry: key, type, loaded_instances, max_context_length, capabilities, params_string. If a model already has loaded_instances.length > 0 and fits the task, skip to Step 5; otherwise pick a key for chat (and optional load in Step 3). Choose by task: vision -> capabilities.vision; embedding -> type=embedding; context -> max_context_length. Prefer already-loaded; prefer smaller for speed, larger for reasoning. Note loaded_instances[].id for optional unload later.\n\nExample: list models.\n\nexec command:\"curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\"\n\nStep 2: Model Selection\n\nPick key from GET response; use as model in chat (optional load). Constraints: vision -> capabilities.vision; embedding -> type=embedding; context -> max_context_length. Prefer loaded (loaded_instances non-empty), smaller for speed/larger for reasoning; fallback primary. If unsure, use the first loaded instance for that key or the smallest loaded model that fits the task. Optional POST load; else JIT on first chat.\n\nStep 3: Load Model (optional)\n\nOptional: POST /api/v1/models/load { model, context_length?, ... }. Or run scripts/load.mjs <model>. JIT: first chat loads; explicit load only for specific options.\n\nStep 4: Verify Loaded (optional)\n\nIf explicit load: GET models, confirm loaded_instances. If JIT: no verify; first chat returns model_instance_id, stats.model_load_time_seconds.\n\nStep 5: Call API\n\nFrom the skill folder: node scripts/lmstudio-api.mjs <model> '<task>' [options].\n\nexec command:\"node scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.7 --max-output-tokens=2000\"\n\nStateful: add --previous-response-id=<response_id>. Curl: POST <base>/api/v1/chat, body model, input, store, temperature, max_output_tokens; optional previous_response_id. Parse: output (type message) -> content; response_id, model_instance_id, stats. Script outputs content, model_instance_id, response_id, usage.\n\nStep 6: Unload (optional)\n\nFor the model key you used: GET /api/v1/models, then for each loaded_instances[].id for that model, POST /api/v1/models/unload with body {\"instance_id\": \"<that id>\"}. Use the id from the response only (do not send the model key unless it exactly equals that id). Or run scripts/unload.mjs <model_key> (script does GET then unloads each instance id). Optional --unload-after (default off); use --keep to leave loaded. Unload only that model's instances. JIT+TTL auto-unload; explicit when needed.\n\n# One unload per instance_id; repeat for each id in that model's loaded_instances\nexec command:\"curl -s -X POST http://127.0.0.1:1234/api/v1/models/unload -H 'Content-Type: application/json' -H 'Authorization: Bearer lmstudio' -d '{\\\"instance_id\\\": \\\"<instance_id>\\\"}'\"\n\nStep 7: Verify unload (optional)\n\nAfter unloading, confirm no instances remain for that model key. Run the jq check below; result must be 0. If non-zero, unload the remaining instance_id(s) from that model and re-run the check. Do not infer from \"model object exists\"; the object still exists with an empty loaded_instances array.\n\nexec command:\"curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models | jq '.models[]|select(.key==\\\"<model_key>\\\")|.loaded_instances|length'\"\n\nExpect output 0. If not, unload remaining instance_ids and re-run.\n\nError Handling\nScript retries on transient failure (2-3 attempts with backoff).\nModel not found -> pick another model from GET response.\nAPI/server errors -> GET models, check URL.\nInvalid output -> retry.\nMemory -> unload or smaller model.\nUnload fails -> instance_id must be exactly from GET /api/v1/models for that model's loaded_instances[].id (not the model key unless it matches).\nCopy-paste\n\nReplace <model> with a key from GET /api/v1/models and <task> with the task text. Optional unload per Step 6 (instance_id from GET models for that key).\n\nexec command:\"curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models\"\nexec command:\"node scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.7 --max-output-tokens=2000\"\n\nLM Studio API Details\n\nHelper/API: see Step 5. Output: content, model_instance_id, response_id, usage. Auth: Bearer lmstudio. List GET /api/v1/models. Load POST /api/v1/models/load (optional). Unload POST /api/v1/models/unload { instance_id }.\n\nScripts\n\nlmstudio-api.mjs: chat; options --stateful, --unload-after, --keep, --log <path>, --previous-response-id, --temperature, --max-output-tokens. load.mjs: load model by key. unload.mjs: unload by model key (all instances). test.mjs: smoke test (load, chat, unload one model).\n\nNotes\nLM Studio 0.4+.\nJIT (first chat loads; model_load_time_seconds in stats); stateful (response_id / previous_response_id)."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/t-sinclair2500/lm-studio-subagents",
    "publisherUrl": "https://clawhub.ai/t-sinclair2500/lm-studio-subagents",
    "owner": "t-sinclair2500",
    "version": "1.0.3",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/lm-studio-subagents",
    "downloadUrl": "https://openagent3.xyz/downloads/lm-studio-subagents",
    "agentUrl": "https://openagent3.xyz/skills/lm-studio-subagents/agent",
    "manifestUrl": "https://openagent3.xyz/skills/lm-studio-subagents/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/lm-studio-subagents/agent.md"
  }
}