{
  "schemaVersion": "1.0",
  "item": {
    "slug": "midscene-ios-automation",
    "name": "Midscene Automations Skills for iOS",
    "source": "tencent",
    "type": "skill",
    "category": "效率提升",
    "sourceUrl": "https://clawhub.ai/quanru/midscene-ios-automation",
    "canonicalUrl": "https://clawhub.ai/quanru/midscene-ios-automation",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/midscene-ios-automation",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=midscene-ios-automation",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
        "contentDisposition": "attachment; filename=\"4claw-imageboard-1.0.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/midscene-ios-automation"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/midscene-ios-automation",
    "agentPageUrl": "https://openagent3.xyz/skills/midscene-ios-automation/agent",
    "manifestUrl": "https://openagent3.xyz/skills/midscene-ios-automation/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/midscene-ios-automation/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "iOS Device Automation",
        "body": "CRITICAL RULES — VIOLATIONS WILL BREAK THE WORKFLOW:\n\nNever run midscene commands in the background. Each command must run synchronously so you can read its output (especially screenshots) before deciding the next action. Background execution breaks the screenshot-analyze-act loop.\nRun only one midscene command at a time. Wait for the previous command to finish, read the screenshot, then decide the next action. Never chain multiple commands together.\nAllow enough time for each command to complete. Midscene commands involve AI inference and screen interaction, which can take longer than typical shell commands. A typical command needs about 1 minute; complex act commands may need even longer.\nAlways report task results before finishing. After completing the automation task, you MUST proactively summarize the results to the user — including key data found, actions completed, screenshots taken, and any relevant findings. Never silently end after the last automation step; the user expects a complete response in a single interaction.\n\nAutomate iOS devices using npx @midscene/ios@1. Each CLI command maps directly to an MCP tool — you (the AI agent) act as the brain, deciding which actions to take based on screenshots."
      },
      {
        "title": "Prerequisites",
        "body": "Midscene requires models with strong visual grounding capabilities. The following environment variables must be configured — either as system environment variables or in a .env file in the current working directory (Midscene loads .env automatically):\n\nMIDSCENE_MODEL_API_KEY=\"your-api-key\"\nMIDSCENE_MODEL_NAME=\"model-name\"\nMIDSCENE_MODEL_BASE_URL=\"https://...\"\nMIDSCENE_MODEL_FAMILY=\"family-identifier\"\n\n⚠️ Security: Add .env to your .gitignore to prevent API keys from being accidentally committed to version control.\nOnly use official, trusted provider URLs for MIDSCENE_MODEL_BASE_URL.\n\nExample: Gemini (Gemini-3-Flash)\n\nMIDSCENE_MODEL_API_KEY=\"your-google-api-key\"\nMIDSCENE_MODEL_NAME=\"gemini-3-flash\"\nMIDSCENE_MODEL_BASE_URL=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\nMIDSCENE_MODEL_FAMILY=\"gemini\"\n\nExample: Qwen 3.5\n\nMIDSCENE_MODEL_API_KEY=\"your-aliyun-api-key\"\nMIDSCENE_MODEL_NAME=\"qwen3.5-plus\"\nMIDSCENE_MODEL_BASE_URL=\"https://dashscope.aliyuncs.com/compatible-mode/v1\"\nMIDSCENE_MODEL_FAMILY=\"qwen3.5\"\nMIDSCENE_MODEL_REASONING_ENABLED=\"false\"\n# If using OpenRouter, set:\n# MIDSCENE_MODEL_API_KEY=\"your-openrouter-api-key\"\n# MIDSCENE_MODEL_NAME=\"qwen/qwen3.5-plus\"\n# MIDSCENE_MODEL_BASE_URL=\"https://openrouter.ai/api/v1\"\n\nExample: Doubao Seed 2.0 Lite\n\nMIDSCENE_MODEL_API_KEY=\"your-doubao-api-key\"\nMIDSCENE_MODEL_NAME=\"doubao-seed-2-0-lite\"\nMIDSCENE_MODEL_BASE_URL=\"https://ark.cn-beijing.volces.com/api/v3\"\nMIDSCENE_MODEL_FAMILY=\"doubao-seed\"\n\nCommonly used models: Doubao Seed 2.0 Lite, Qwen 3.5, Zhipu GLM-4.6V, Gemini-3-Pro, Gemini-3-Flash.\n\nIf the model is not configured, ask the user to set it up. See Model Configuration for supported providers."
      },
      {
        "title": "Connect to Device",
        "body": "npx @midscene/ios@1 connect"
      },
      {
        "title": "Take Screenshot",
        "body": "npx @midscene/ios@1 take_screenshot\n\nAfter taking a screenshot, read the saved image file to understand the current screen state before deciding the next action."
      },
      {
        "title": "Perform Action",
        "body": "Use act to interact with the device and get the result. It autonomously handles all UI interactions internally — tapping, typing, scrolling, swiping, waiting, and navigating — so you should give it complex, high-level tasks as a whole rather than breaking them into small steps. Describe what you want to do and the desired effect in natural language:\n\n# specific instructions\nnpx @midscene/ios@1 act --prompt \"type hello world in the search field and press Enter\"\nnpx @midscene/ios@1 act --prompt \"tap Delete, then confirm in the alert dialog\"\n\n# or target-driven instructions\nnpx @midscene/ios@1 act --prompt \"open Settings and navigate to Wi-Fi, tell me the connected network name\""
      },
      {
        "title": "Disconnect",
        "body": "npx @midscene/ios@1 disconnect"
      },
      {
        "title": "Workflow Pattern",
        "body": "Since CLI commands are stateless between invocations, follow this pattern:\n\nConnect to establish a session\nLaunch the target app and take screenshot to see the current state, make sure the app is launched and visible on the screen.\nExecute action using act to perform the desired action or target-driven instructions.\nDisconnect when done\nReport results — summarize what was accomplished, present key findings and data extracted during the task, and list any generated files (screenshots, logs, etc.) with their paths"
      },
      {
        "title": "Best Practices",
        "body": "Be specific about UI elements: Instead of vague descriptions, provide clear, specific details. Say \"the Settings icon in the top-right corner\" instead of \"the icon\".\nDescribe locations when possible: Help target elements by describing their position (e.g., \"the search icon at the top right\", \"the third item in the list\").\nNever run in background: Every midscene command must run synchronously — background execution breaks the screenshot-analyze-act loop.\nBatch related operations into a single act command: When performing consecutive operations within the same app, combine them into one act prompt instead of splitting them into separate commands. For example, \"open Settings, tap Wi-Fi, and check the connected network\" should be a single act call, not three. This reduces round-trips, avoids unnecessary screenshot-analyze cycles, and is significantly faster.\nAlways report results after completion: After finishing the automation task, you MUST proactively present the results to the user without waiting for them to ask. This includes: (1) the answer to the user's original question or the outcome of the requested task, (2) key data extracted or observed during execution, (3) screenshots and other generated files with their paths, (4) a brief summary of steps taken. Do NOT silently finish after the last automation command — the user expects complete results in a single interaction.\n\nExample — Alert dialog interaction:\n\nnpx @midscene/ios@1 act --prompt \"tap the Delete button and confirm in the alert dialog\"\nnpx @midscene/ios@1 take_screenshot\n\nExample — Form interaction:\n\nnpx @midscene/ios@1 act --prompt \"fill in the username field with 'testuser' and the password field with 'pass123', then tap the Login button\"\nnpx @midscene/ios@1 take_screenshot"
      },
      {
        "title": "WebDriverAgent Not Running",
        "body": "Symptom: Connection refused or timeout errors.\nSolution:\n\nEnsure WebDriverAgent is installed and running on the device.\nSee https://midscenejs.com/zh/usage-ios.html for setup instructions."
      },
      {
        "title": "Device Not Found",
        "body": "Symptom: No device detected or connection errors.\nSolution:\n\nEnsure the device is connected via USB and trusted."
      },
      {
        "title": "API Key Issues",
        "body": "Symptom: Authentication or model errors.\nSolution:\n\nCheck .env file contains MIDSCENE_MODEL_API_KEY=<your-key>.\nSee https://midscenejs.com/zh/model-common-config.html for details."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/quanru/midscene-ios-automation",
    "publisherUrl": "https://clawhub.ai/quanru/midscene-ios-automation",
    "owner": "quanru",
    "version": "1.0.4",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/midscene-ios-automation",
    "downloadUrl": "https://openagent3.xyz/downloads/midscene-ios-automation",
    "agentUrl": "https://openagent3.xyz/skills/midscene-ios-automation/agent",
    "manifestUrl": "https://openagent3.xyz/skills/midscene-ios-automation/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/midscene-ios-automation/agent.md"
  }
}