Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Control a real Android phone via USB or network using GPT-4o vision to run tasks like opening apps, typing, tapping, and automation scripts.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Plug your old Android phone into your Mac/PC. Now your AI assistant can use it. Got an old Android in a drawer? Connect it to any machine running OpenClaw: your gateway, a Mac Mini, a Raspberry Pi. Your AI can now open apps, tap buttons, type text, and complete tasks on a real phone. Book a cab, order food, check your bank app: anything you'd do with your thumbs.
Your AI agent sees the phone screen (via screenshots), decides what to tap/type/swipe, and executes actions via ADB. Under the hood it uses DroidRun with GPT-4o vision.

```
┌──────────┐  screenshots   ┌──────────┐  ADB commands   ┌─────────┐
│  GPT-4o  │◄───────────────│ DroidRun │────────────────►│ Android │
│  Vision  │───────────────►│  Agent   │◄────────────────│  Phone  │
└──────────┘ tap/type/swipe └──────────┘  screen state   └─────────┘
```
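The see → decide → act loop above boils down to building ADB `input` commands. The sketch below is illustrative, not DroidRun's actual API: `decide_action` is a stub standing in for the GPT-4o vision call, and the helpers only construct the commands the agent would run.

```python
def adb_prefix(serial=None):
    """Base ADB command, optionally targeting a specific device."""
    return ["adb", "-s", serial] if serial else ["adb"]

def tap_cmd(x, y, serial=None):
    return adb_prefix(serial) + ["shell", "input", "tap", str(x), str(y)]

def type_cmd(text, serial=None):
    # 'input text' cannot take literal spaces; ADB expects %s instead
    return adb_prefix(serial) + ["shell", "input", "text",
                                 text.replace(" ", "%s")]

def swipe_cmd(x1, y1, x2, y2, serial=None):
    return adb_prefix(serial) + ["shell", "input", "swipe",
                                 str(x1), str(y1), str(x2), str(y2)]

def decide_action(screenshot_png):
    """Placeholder for the GPT-4o vision call: returns (kind, args)."""
    return ("tap", (540, 960))  # hypothetical model decision

# One iteration of the loop: see, decide, build the action command.
kind, args = decide_action(b"...png bytes...")
if kind == "tap":
    cmd = tap_cmd(*args)
# subprocess.run(cmd) would then execute it on the connected phone
```

In the real system, the command is executed, a fresh screenshot is captured, and the cycle repeats until the model reports the goal is complete or the timeout fires.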
Phone plugged into your OpenClaw gateway machine via USB. Zero networking required.

```
[Gateway Machine] ──USB──► [Android Phone]
```
Phone plugged into a Mac Mini, Raspberry Pi, or any OpenClaw node. The gateway controls it over the network.

```
[Gateway] ──network──► [Mac Mini / Pi node] ──USB──► [Android Phone]
```

For node mode, connect ADB over TCP/WiFi so the node can forward commands.
On your Android phone:
1. Go to Settings → About Phone
2. Tap Build Number 7 times to enable Developer Options
3. Go to Settings → Developer Options
4. Enable USB Debugging
```shell
# Plug phone in via USB, then:
pip install -r requirements.txt
adb devices   # verify phone shows up; authorize on phone if prompted
```
```shell
export OPENAI_API_KEY="sk-..."
python scripts/run-task.py "Open Settings and turn on Dark Mode"
```

That's it. The script handles everything: waking the screen, unlocking, keeping the display on, and running your task.
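The wake and keep-awake preparation can be approximated with standard ADB shell commands. This is a hedged sketch of what such a pre-task step might look like, not the script's actual source:

```python
def prep_cmds(serial=None):
    """ADB commands to wake the screen and keep it on before a task.
    Illustrative sketch; run-task.py's real sequence may differ."""
    base = ["adb", "-s", serial] if serial else ["adb"]
    return [
        base + ["shell", "input", "keyevent", "KEYCODE_WAKEUP"],  # wake screen
        base + ["shell", "svc", "power", "stayon", "true"],       # stay on while plugged in
    ]
```

Each returned command would be passed to `subprocess.run` against the connected device; `svc power stayon true` is the shell equivalent of the Developer Options "Stay Awake" toggle.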
```shell
# Everyday tasks
python scripts/run-task.py "Order an Uber to the airport"
python scripts/run-task.py "Set an alarm for 6 AM tomorrow"
python scripts/run-task.py "Check my bank balance on PhonePe"
python scripts/run-task.py "Open Google Maps and navigate to the nearest coffee shop"

# Messaging
python scripts/run-task.py "Send a WhatsApp message to Mom saying I'll be late"
python scripts/run-task.py "Read my latest SMS messages"
python scripts/run-task.py "Open Telegram and check unread messages"

# Shopping
python scripts/run-task.py "Open Amazon and search for wireless earbuds under 2000 rupees"
python scripts/run-task.py "Add milk and bread to my Instamart cart"

# Productivity
python scripts/run-task.py "Open Google Calendar and check my schedule for tomorrow"
python scripts/run-task.py "Create a new note in Google Keep: Buy groceries"

# Media
python scripts/run-task.py "Play my Discover Weekly playlist on Spotify"
python scripts/run-task.py "Open YouTube and search for lo-fi study music"

# Device settings
python scripts/run-task.py "Turn on Dark Mode"
python scripts/run-task.py "Connect to my home WiFi network"
python scripts/run-task.py "Enable Do Not Disturb mode"
python scripts/run-task.py "Turn off Bluetooth"

# Utilities
python scripts/run-task.py "Take a screenshot"
python scripts/run-task.py "Open the camera and take a photo"
python scripts/run-task.py "Clear all notifications"
```
| Variable | Required | Description |
|---|---|---|
| OPENAI_API_KEY | Yes | API key for GPT-4o vision |
| ANDROID_SERIAL | No | Device serial number. Auto-detected if only one device is connected |
| ANDROID_PIN | No | Phone PIN/password for auto-unlock. If not set, unlock is skipped |
| DROIDRUN_TIMEOUT | No | Task timeout in seconds (default: 120) |
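Reading these variables in a script might look like the following sketch; the function name and dict keys are illustrative, not the skill's actual code:

```python
import os

def load_config(env=None):
    """Collect the skill's environment variables with documented defaults."""
    env = os.environ if env is None else env
    if "OPENAI_API_KEY" not in env:
        raise RuntimeError("OPENAI_API_KEY is required")
    return {
        "api_key": env["OPENAI_API_KEY"],
        "serial": env.get("ANDROID_SERIAL"),    # None -> auto-detect
        "pin": env.get("ANDROID_PIN"),          # None -> skip unlock
        "timeout": int(env.get("DROIDRUN_TIMEOUT", "120")),
    }
```

Note that the PIN stays in memory only; it is read from the environment and never written out, matching the security note below.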
Install ADB:

```shell
# macOS
brew install android-platform-tools
# Ubuntu/Debian
sudo apt install android-tools-adb
# Windows: download from https://developer.android.com/tools/releases/platform-tools
```

Connect phone via USB and verify:

```shell
./scripts/connect.sh usb
```

Install DroidRun Portal APK on the phone:
- Download from DroidRun releases, or sideload: `adb install droidrun-portal.apk`
- Open the Portal app on the phone and grant accessibility permissions

Install Python dependencies:

```shell
pip install -r requirements.txt
```
On the node machine (Mac Mini, Pi, etc.), connect the phone via USB and enable WiFi ADB:

```shell
adb tcpip 5555
adb connect <phone-ip>:5555
```

From your gateway, connect to the node's ADB:

```shell
# If using SSH tunnel:
ssh -L 15555:<phone-ip>:5555 user@node-ip
export ANDROID_SERIAL="127.0.0.1:15555"

# Or direct WiFi (same network):
./scripts/connect.sh wifi <phone-ip>
```

Run tasks as normal: the script uses whatever ANDROID_SERIAL points to.
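Picking the right serial (explicit ANDROID_SERIAL vs. auto-detect) comes down to parsing `adb devices` output. A sketch assuming the standard output format, with hypothetical helper names:

```python
def parse_adb_devices(output):
    """Return serials of devices in the 'device' (authorized) state."""
    serials = []
    for line in output.splitlines()[1:]:   # skip 'List of devices attached'
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials

def resolve_serial(env_serial, adb_output):
    """Prefer an explicit serial; otherwise require exactly one device."""
    if env_serial:
        return env_serial
    devices = parse_adb_devices(adb_output)
    if len(devices) != 1:
        raise RuntimeError(f"expected exactly one device, found {len(devices)}")
    return devices[0]
```

This also explains why auto-detection fails with two phones attached: the ambiguity must be resolved by setting ANDROID_SERIAL or passing `--serial`.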
The DroidRun Portal APK must be installed and running on the phone. It provides the accessibility service that allows DroidRun to read screen content and interact with UI elements. Install the APK (download from DroidRun GitHub releases) Open the Portal app Grant Accessibility Service permission when prompted Keep it running in the background
```shell
# Basic usage
python scripts/run-task.py "Your task description here"

# With options
python scripts/run-task.py --timeout 180 "Install Spotify from Play Store"
python scripts/run-task.py --model gpt-4o "Open Chrome and search for weather"
python scripts/run-task.py --no-unlock "Take a screenshot"
python scripts/run-task.py --serial 127.0.0.1:15555 "Check notifications"
python scripts/run-task.py --verbose "Open Settings"
```

Options:

| Flag | Description |
|---|---|
| `goal` | Task description (positional, required) |
| `--timeout` | Timeout in seconds (default: 120, or DROIDRUN_TIMEOUT env) |
| `--model` | LLM model to use (default: gpt-4o) |
| `--no-unlock` | Skip the auto-unlock step |
| `--serial` | Device serial (default: ANDROID_SERIAL env or auto-detect) |
| `--verbose` | Show detailed debug output |
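An interface with these flags can be declared with `argparse`. This is a hedged reconstruction of the CLI surface from the table above, not the script's actual source:

```python
import argparse
import os

def build_parser():
    """Declare the run-task CLI as documented (illustrative sketch)."""
    p = argparse.ArgumentParser(description="Run a phone task via DroidRun")
    p.add_argument("goal", help="Task description")
    p.add_argument("--timeout", type=int,
                   default=int(os.environ.get("DROIDRUN_TIMEOUT", "120")),
                   help="Timeout in seconds")
    p.add_argument("--model", default="gpt-4o", help="LLM model to use")
    p.add_argument("--no-unlock", action="store_true",
                   help="Skip the auto-unlock step")
    p.add_argument("--serial", default=os.environ.get("ANDROID_SERIAL"),
                   help="Device serial (default: env or auto-detect)")
    p.add_argument("--verbose", action="store_true")
    return p
```

Flag defaults falling back to environment variables is what lets the same script work in both USB and node modes without extra arguments.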
```shell
./scripts/connect.sh                      # Auto-detect USB device
./scripts/connect.sh usb                  # USB mode (explicit)
./scripts/connect.sh wifi 192.168.1.100   # WiFi/TCP mode
```
DroidRun's internal screenshot sometimes fails on certain devices. Use this to bypass DroidRun and capture a PNG directly via ADB.

```shell
# Save to /tmp/android-screenshot.png
./scripts/screenshot.sh

# Explicit serial + output path
./scripts/screenshot.sh 127.0.0.1:15555 /tmp/a03.png
```

You can also do it from Python:

```shell
python scripts/run-task.py --screenshot --serial 127.0.0.1:15555 --screenshot-path /tmp/a03.png
```
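A direct ADB capture is typically built on `adb exec-out screencap -p`, which streams the PNG over stdout without shell newline mangling. A minimal sketch (the function names are illustrative; `save_screenshot` requires a connected device to actually run):

```python
import subprocess

def screencap_cmd(serial=None):
    """ADB command that writes a PNG of the current screen to stdout."""
    base = ["adb", "-s", serial] if serial else ["adb"]
    return base + ["exec-out", "screencap", "-p"]

def save_screenshot(path, serial=None):
    """Capture the screen and write the PNG bytes to disk."""
    png = subprocess.run(screencap_cmd(serial), check=True,
                         capture_output=True).stdout
    with open(path, "wb") as f:
        f.write(png)
```

`exec-out` is preferred over `shell screencap` plus a pull because it avoids the CR/LF corruption some devices introduce on binary shell output.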
```shell
./scripts/status.sh
# Output:
# Device: Samsung Galaxy A03 (SM-A035F)
# Android: 11 (API 30)
# Battery: 87%
# Screen: ON (unlocked)
# Connection: USB
# DroidRun Portal: installed (v0.5.5)
```
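A readout like this is assembled from standard ADB queries (`getprop` for model and Android version, `dumpsys battery` for charge, and so on). As one illustrative piece, here is a parser for the battery level, assuming the stock `dumpsys battery` output format:

```python
def parse_battery_level(dumpsys_output):
    """Extract the 'level' field from `adb shell dumpsys battery` output."""
    for line in dumpsys_output.splitlines():
        line = line.strip()
        if line.startswith("level:"):
            return int(line.split(":", 1)[1])
    return None  # field not found -> battery state unknown
```

The remaining status lines follow the same pattern: run one `adb shell` query per field and pull the value out of its text output.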
**Phone not showing up in `adb devices`**
- Check the USB cable (use a data cable, not charge-only)
- Authorize the computer on your phone's USB debugging prompt
- Try `adb kill-server && adb start-server`
**Device shows as `unauthorized`**
- Disconnect and reconnect USB
- Check the phone screen for an authorization dialog
- If no dialog appears, revoke USB debugging authorizations in Developer Options and reconnect
**Screen keeps turning off**
- The script sets keep-awake mode automatically, but some phones override this
- Manually: Settings → Developer Options → Stay Awake (while charging)
**Popups interrupting tasks**
- The script tries to dismiss common dialogs automatically
- For persistent popups, dismiss them manually first, then retry
- Use `--verbose` to see what the agent is seeing
**WiFi ADB stops working after a reboot**
- WiFi ADB mode resets on phone reboot; you need to re-enable it via USB
- Run `./scripts/connect.sh usb` first, then `./scripts/connect.sh wifi <ip>`
**Agent fails or gets stuck on tasks**
- Make sure DroidRun Portal is running and the accessibility service is enabled
- Close unnecessary apps to reduce screen complexity
- Try a simpler task first to verify the setup works
**Auto-unlock not working**
- PIN pad button coordinates vary by device and screen resolution
- To find your device's coordinates: `adb shell getevent -l` and tap each digit
- Or use `adb shell input text <PIN>` as a fallback on some devices
- Set the ANDROID_PIN environment variable (never hardcode it)
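The `input text` fallback amounts to a short ADB sequence: wake, swipe up to the PIN pad, type the PIN, confirm. A sketch; the swipe coordinates are device-specific placeholders, and the PIN comes from the environment and is never logged:

```python
import os

def unlock_cmds(pin, serial=None):
    """ADB sequence for the `input text` unlock fallback (illustrative).
    Swipe coordinates are placeholders; adjust for your device."""
    base = ["adb", "-s", serial] if serial else ["adb"]
    return [
        base + ["shell", "input", "keyevent", "KEYCODE_WAKEUP"],
        base + ["shell", "input", "swipe", "540", "1500", "540", "500"],
        base + ["shell", "input", "text", pin],
        base + ["shell", "input", "keyevent", "KEYCODE_ENTER"],
    ]

pin = os.environ.get("ANDROID_PIN")   # None -> skip the unlock step
```

If your lock screen needs per-digit taps instead, replace the `input text` step with one `input tap x y` per digit using the coordinates found via `getevent`.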
- ADB grants full device access: only connect devices you trust and own
- Screenshots are sent to your LLM provider (OpenAI by default): be mindful of sensitive content on screen (banking apps, private messages)
- The PIN is read from an environment variable only: never stored in files or logs
- WiFi ADB is unencrypted: use USB or an SSH tunnel on untrusted networks
- DroidRun Portal requires accessibility permissions: this is powerful access; understand what it enables
- Python 3.10+
- ADB (Android Debug Bridge)
- Android 8.0+ phone with Developer Options and USB Debugging enabled
- DroidRun Portal APK installed on the phone
- OpenAI API key (GPT-4o for vision capabilities)
- USB data cable (not charge-only)
Use a dedicated test device, not your primary phone.

- **Screenshots and screen text go to OpenAI.** Every screenshot the agent takes is sent to GPT-4o for vision processing. Don't run this on devices with sensitive data visible (banking apps, 2FA tokens, private messages, medical info). If it's on screen, it's sent to the cloud.
- **ANDROID_PIN is stored as an environment variable.** While it's never written to files or logs, anyone with access to the host's environment can read it. Use a disposable device PIN you don't use elsewhere, or accept the risk.
- **Only install DroidRun Portal from official sources.** Download the APK exclusively from DroidRun GitHub releases. Never sideload APKs from third-party sites.
- **ADB grants full device access.** Combined with accessibility permissions, this is effectively root-level control. Only connect devices you own and are comfortable exposing.
- **WiFi ADB is unencrypted.** If using TCP/WiFi mode on an untrusted network, wrap it in an SSH tunnel.

Bottom line: treat the connected phone as a "work device for AI." Don't leave personal accounts logged in. Don't store secrets on it. If you wouldn't hand your unlocked phone to a stranger, don't point this skill at it.
MIT (see LICENSE)