{
  "schemaVersion": "1.0",
  "item": {
    "slug": "airpoint",
    "name": "Airpoint",
    "source": "tencent",
    "type": "skill",
    "category": "Developer Tools",
    "sourceUrl": "https://clawhub.ai/MarioAndF/airpoint",
    "canonicalUrl": "https://clawhub.ai/MarioAndF/airpoint",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/airpoint",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=airpoint",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=airpoint",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=airpoint",
        "contentDisposition": "attachment; filename=\"airpoint-1.3.16.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/airpoint"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/airpoint",
    "agentPageUrl": "https://openagent3.xyz/skills/airpoint/agent",
    "manifestUrl": "https://openagent3.xyz/skills/airpoint/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/airpoint/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Airpoint — AI Computer Use for macOS",
        "body": "Airpoint gives you an AI agent that can see and control a Mac — open apps,\nclick UI elements, read on-screen text, type, scroll, drag, and manage windows.\nYou give it a natural-language instruction and it carries out the task\nautonomously by perceiving the screen (accessibility tree + screenshots + visual\nlocator), planning actions, executing them, and verifying the result.\n\nEverything runs through the airpoint CLI."
      },
      {
        "title": "Requirements",
        "body": "macOS (Apple Silicon or Intel)\nAirpoint app — must be running. Download from airpoint.app.\nAirpoint CLI — the airpoint command must be on PATH. Install it from the Airpoint app: Settings → Plugins → Install CLI."
      },
      {
        "title": "Setup",
        "body": "Before using Airpoint's AI agent, the user must configure it in the Airpoint\napp (Settings → Assistant):\n\nAI model API key (required). Set an API key for the chosen provider:\n\nOpenAI (recommended): model gpt-5.1 with reasoning effort low gives\nthe best balance of cost, speed, and quality.\nAnthropic and Google Gemini are also supported.\n\n\nGemini API key (recommended). Even when using OpenAI or Anthropic as the\nprimary model, a Google Gemini API key enables the visual locator — a\nsecondary model (gemini-3-flash-preview) that finds UI targets on screen\nby analyzing screenshots. Without it, the agent relies on the accessibility\ntree only.\nmacOS permissions. The app prompts on first launch, but verify these are\ngranted in System Settings → Privacy & Security:\n\nAccessibility — required for mouse/keyboard control.\nScreen Recording — required for screenshots and screen perception.\nCamera is only needed for hand tracking (not for the AI agent).\n\n\nCustom instructions (optional). In Settings → Assistant, add custom\ninstructions to tailor the agent's behavior (e.g., preferred language,\napps to avoid, workflows to follow).\n\nIf the user reports that airpoint ask fails or the agent can't see the\nscreen, ask them to verify steps 1–3 above."
      },
      {
        "title": "How to use",
        "body": "Run airpoint ask \"<your instruction>\" to send a task to the on-device agent.\nThe command blocks until the agent finishes (up to 5 minutes) and returns:\n\nA text summary of what the agent did and the result.\nOne or more screenshot file paths showing the screen state after the task.\n\n\nRead the text output to confirm whether the task succeeded.\nIf screenshots were returned, show the last screenshot to the user as\nvisual confirmation of the result.\nIf something went wrong or the task is stuck, run airpoint stop to cancel.\n\nExample flow:\n\n> airpoint ask \"open Safari and search for 'OpenClaw'\"\nOpened Safari, typed 'OpenClaw' into the address bar, and pressed Enter.\nThe search results page is now displayed.\n\n1 screenshot(s) saved to session abc123\n  └ screenshots/step_3.png (/Users/you/Library/Application Support/com.medhuelabs.airpoint/sessions/abc123/screenshots/step_3.png)\n\nAfter receiving this, show the screenshot to the user so they can see what happened."
      },
      {
        "title": "Ask the AI agent to do something (primary command)",
        "body": "This is the most important command. It sends a natural-language task to\nAirpoint's built-in computer-use agent which can see the screen, move the\nmouse, click, type, scroll, open apps via Spotlight, manage windows, and verify\nits own actions.\n\n# Synchronous — waits for the agent to finish (up to 5 min) and returns output\nairpoint ask \"open Safari and go to github.com\"\nairpoint ask \"what's on my screen right now?\"\nairpoint ask \"find the Slack notification and read it\"\nairpoint ask \"open System Settings and enable Dark Mode\"\nairpoint ask \"open Mail, find the latest email from John, and summarize it\"\n\n# Fire-and-forget — returns immediately\nairpoint ask \"open Spotify and play my liked songs\" --no-wait\n\n# Show the assistant panel on screen while running\nairpoint ask \"open System Settings and enable Dark Mode\" --show-panel"
      },
      {
        "title": "Stop a running task",
        "body": "airpoint stop\n\nCancels the currently running assistant task. Use this if a task is stuck or\ntaking too long."
      },
      {
        "title": "Capture a screenshot",
        "body": "airpoint see\n\nReturns a screenshot of the current display. Useful for verifying state before\nor after issuing an ask command."
      },
      {
        "title": "Check status",
        "body": "airpoint status\nairpoint status --json\n\nReturns app version and current state (tracking active, etc.)."
      },
      {
        "title": "Hand tracking (secondary)",
        "body": "Airpoint also supports hands-free cursor control via camera-based hand tracking.\nThese commands start/stop that feature:\n\nairpoint tracking on\nairpoint tracking off\nairpoint tracking        # show current state"
      },
      {
        "title": "Read or change settings",
        "body": "airpoint settings list             # all current settings\nairpoint settings list --json      # machine-readable\nairpoint settings get cursor.sensitivity\nairpoint settings set cursor.sensitivity 1.5\n\nCommon settings: cursor.sensitivity (default 1.0), cursor.acceleration\n(default true), scroll.sensitivity (default 1.0), scroll.inertia\n(default true)."
      },
      {
        "title": "System vitals",
        "body": "airpoint vitals          # CPU, RAM, temperature\nairpoint vitals --json"
      },
      {
        "title": "Launch the app",
        "body": "airpoint open            # opens/focuses the Airpoint macOS app"
      },
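      {
        "title": "Example: a typical verify-then-act flow",
        "body": "A minimal sketch chaining the commands above into one session; this assumes the CLI behaves as documented in the sections above, and the Notes task is purely illustrative:\n\nairpoint status --json   # confirm the app is running and readable before issuing a task\nairpoint see             # capture the current screen state\nairpoint ask \"open Notes and create a new note\"\nairpoint stop            # cancel if the task is stuck or taking too long"
      },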
      {
        "title": "Tips",
        "body": "Use airpoint ask for almost everything. The agent can read the screen,\ninteract with any app, and chain multi-step workflows autonomously.\nAlways use --json when you need to parse output programmatically.\nThe agent can answer questions about what's on screen (\"what app is in the\nforeground?\", \"read the error message in this dialog\").\nAirpoint is a notarized, code-signed macOS app. Download it from\nairpoint.app."
      }
    ],
    "body": "Airpoint — AI Computer Use for macOS\n\nAirpoint gives you an AI agent that can see and control a Mac — open apps, click UI elements, read on-screen text, type, scroll, drag, and manage windows. You give it a natural-language instruction and it carries out the task autonomously by perceiving the screen (accessibility tree + screenshots + visual locator), planning actions, executing them, and verifying the result.\n\nEverything runs through the airpoint CLI.\n\nRequirements\nmacOS (Apple Silicon or Intel)\nAirpoint app — must be running. Download from airpoint.app.\nAirpoint CLI — the airpoint command must be on PATH. Install it from the Airpoint app: Settings → Plugins → Install CLI.\nSetup\n\nBefore using Airpoint's AI agent, the user must configure it in the Airpoint app (Settings → Assistant):\n\nAI model API key (required). Set an API key for the chosen provider:\nOpenAI (recommended): model gpt-5.1 with reasoning effort low gives the best balance of cost, speed, and quality.\nAnthropic and Google Gemini are also supported.\nGemini API key (recommended). Even when using OpenAI or Anthropic as the primary model, a Google Gemini API key enables the visual locator — a secondary model (gemini-3-flash-preview) that finds UI targets on screen by analyzing screenshots. Without it, the agent relies on the accessibility tree only.\nmacOS permissions. The app prompts on first launch, but verify these are granted in System Settings → Privacy & Security:\nAccessibility — required for mouse/keyboard control.\nScreen Recording — required for screenshots and screen perception.\nCamera is only needed for hand tracking (not for the AI agent).\nCustom instructions (optional). In Settings → Assistant, add custom instructions to tailor the agent's behavior (e.g., preferred language, apps to avoid, workflows to follow).\n\nIf the user reports that airpoint ask fails or the agent can't see the screen, ask them to verify steps 1–3 above.\n\nHow to use\nRun airpoint ask \"<your instruction>\" to send a task to the on-device agent.\nThe command blocks until the agent finishes (up to 5 minutes) and returns:\nA text summary of what the agent did and the result.\nOne or more screenshot file paths showing the screen state after the task.\nRead the text output to confirm whether the task succeeded.\nIf screenshots were returned, show the last screenshot to the user as visual confirmation of the result.\nIf something went wrong or the task is stuck, run airpoint stop to cancel.\n\nExample flow:\n\n> airpoint ask \"open Safari and search for 'OpenClaw'\"\nOpened Safari, typed 'OpenClaw' into the address bar, and pressed Enter.\nThe search results page is now displayed.\n\n1 screenshot(s) saved to session abc123\n  └ screenshots/step_3.png (/Users/you/Library/Application Support/com.medhuelabs.airpoint/sessions/abc123/screenshots/step_3.png)\n\n\nAfter receiving this, show the screenshot to the user so they can see what happened.\n\nCommands\nAsk the AI agent to do something (primary command)\n\nThis is the most important command. It sends a natural-language task to Airpoint's built-in computer-use agent which can see the screen, move the mouse, click, type, scroll, open apps via Spotlight, manage windows, and verify its own actions.\n\n# Synchronous — waits for the agent to finish (up to 5 min) and returns output\nairpoint ask \"open Safari and go to github.com\"\nairpoint ask \"what's on my screen right now?\"\nairpoint ask \"find the Slack notification and read it\"\nairpoint ask \"open System Settings and enable Dark Mode\"\nairpoint ask \"open Mail, find the latest email from John, and summarize it\"\n\n# Fire-and-forget — returns immediately\nairpoint ask \"open Spotify and play my liked songs\" --no-wait\n\n# Show the assistant panel on screen while running\nairpoint ask \"open System Settings and enable Dark Mode\" --show-panel\n\nStop a running task\nairpoint stop\n\n\nCancels the currently running assistant task. Use this if a task is stuck or taking too long.\n\nCapture a screenshot\nairpoint see\n\n\nReturns a screenshot of the current display. Useful for verifying state before or after issuing an ask command.\n\nCheck status\nairpoint status\nairpoint status --json\n\n\nReturns app version and current state (tracking active, etc.).\n\nHand tracking (secondary)\n\nAirpoint also supports hands-free cursor control via camera-based hand tracking. These commands start/stop that feature:\n\nairpoint tracking on\nairpoint tracking off\nairpoint tracking        # show current state\n\nRead or change settings\nairpoint settings list             # all current settings\nairpoint settings list --json      # machine-readable\nairpoint settings get cursor.sensitivity\nairpoint settings set cursor.sensitivity 1.5\n\n\nCommon settings: cursor.sensitivity (default 1.0), cursor.acceleration (default true), scroll.sensitivity (default 1.0), scroll.inertia (default true).\n\nSystem vitals\nairpoint vitals          # CPU, RAM, temperature\nairpoint vitals --json\n\nLaunch the app\nairpoint open            # opens/focuses the Airpoint macOS app\n\nTips\nUse airpoint ask for almost everything. The agent can read the screen, interact with any app, and chain multi-step workflows autonomously.\nAlways use --json when you need to parse output programmatically.\nThe agent can answer questions about what's on screen (\"what app is in the foreground?\", \"read the error message in this dialog\").\nAirpoint is a notarized, code-signed macOS app. Download it from airpoint.app."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/MarioAndF/airpoint",
    "publisherUrl": "https://clawhub.ai/MarioAndF/airpoint",
    "owner": "MarioAndF",
    "version": "1.3.16",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/airpoint",
    "downloadUrl": "https://openagent3.xyz/downloads/airpoint",
    "agentUrl": "https://openagent3.xyz/skills/airpoint/agent",
    "manifestUrl": "https://openagent3.xyz/skills/airpoint/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/airpoint/agent.md"
  }
}