Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Manage RunPod GPU cloud instances - create, start, stop, connect to pods via SSH and API. Use when working with RunPod infrastructure, GPU instances, or need SSH access to remote GPU machines. Handles pod lifecycle, SSH proxy connections, filesystem mounting, and API queries. Requires runpodctl (brew install runpod/runpodctl/runpodctl).
Hand the extracted package to your coding agent with a concrete install brief, rather than working through the installation manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Manage RunPod GPU cloud instances, SSH connections, and filesystem access.
```
brew install runpod/runpodctl/runpodctl
runpodctl config --apiKey "your-api-key"
```

SSH key: runpodctl manages SSH keys in ~/.runpod/ssh/:

```
runpodctl ssh add-key
```

View and manage keys at: https://console.runpod.io/user/settings

Mount script configuration: the mount script checks ~/.ssh/runpod_key first, then falls back to runpodctl's default key. Override with:

```
export RUNPOD_SSH_KEY="$HOME/.runpod/ssh/RunPod-Key"
```

Host keys are stored separately in ~/.runpod/ssh/known_hosts (isolated from your main SSH config). Connections use StrictHostKeyChecking=accept-new, which verifies previously seen hosts on reconnect while still accepting keys from new RunPod instances.
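The key-resolution order described above can be sketched as a small shell function. This is a sketch of the documented behavior, not code from the package; resolve_key is a hypothetical name:

```shell
# Hypothetical helper mirroring the documented lookup order:
# 1. explicit RUNPOD_SSH_KEY override
# 2. ~/.ssh/runpod_key if it exists
# 3. runpodctl's default key under ~/.runpod/ssh/
resolve_key() {
  if [ -n "${RUNPOD_SSH_KEY:-}" ]; then
    echo "$RUNPOD_SSH_KEY"
  elif [ -f "$HOME/.ssh/runpod_key" ]; then
    echo "$HOME/.ssh/runpod_key"
  else
    echo "$HOME/.runpod/ssh/RunPod-Key"
  fi
}

resolve_key
```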
```
runpodctl get pod              # List pods
runpodctl get pod <id>         # Get pod details
runpodctl start pod <id>       # Start pod
runpodctl stop pod <id>        # Stop pod
runpodctl ssh connect <id>     # Get SSH command
runpodctl send <file>          # Send file to pod
runpodctl receive <code>       # Receive file from pod
```
```
# Without volume
runpodctl create pod --name "my-pod" \
  --gpuType "NVIDIA GeForce RTX 4090" \
  --imageName "runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404"

# With volume (100GB at /workspace)
runpodctl create pod --name "my-pod" \
  --gpuType "NVIDIA GeForce RTX 4090" \
  --imageName "runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404" \
  --volumeSize 100 --volumePath "/workspace"
```

Important: when using a volume (--volumeSize), always specify --volumePath too. Without it, pod creation fails with:

```
error creating container: ... invalid mount config for type "volume": field Target must not be empty
```
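Because the missing --volumePath error only surfaces at container creation, a wrapper can catch it up front. create_pod below is a hypothetical wrapper, not a runpodctl command, and it echoes the final command instead of executing it:

```shell
# Hypothetical guard: refuse --volumeSize without --volumePath before
# invoking runpodctl. Echoes the command for illustration.
create_pod() {
  local has_size=0 has_path=0 arg
  for arg in "$@"; do
    case "$arg" in
      --volumeSize) has_size=1 ;;
      --volumePath) has_path=1 ;;
    esac
  done
  if [ "$has_size" -eq 1 ] && [ "$has_path" -eq 0 ]; then
    echo "error: --volumeSize requires --volumePath" >&2
    return 1
  fi
  echo runpodctl create pod "$@"
}
```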
```
# Get SSH command
runpodctl ssh connect <pod_id>

# Connect directly (copy command from above)
ssh -p <port> root@<ip> -i ~/.ssh/runpod_key
```
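For scripting, the port and host can be parsed out of the printed SSH command with standard tools. The sample command string below is made up for illustration; the real one comes from `runpodctl ssh connect <pod_id>`:

```shell
# Parse the port and user@host out of an SSH command string.
# The sample values (22033, 203.0.113.7) are placeholders.
ssh_cmd='ssh -p 22033 root@203.0.113.7 -i ~/.ssh/runpod_key'

port=$(printf '%s\n' "$ssh_cmd" | sed -n 's/.*-p \([0-9][0-9]*\).*/\1/p')
host=$(printf '%s\n' "$ssh_cmd" | tr ' ' '\n' | grep '@')

echo "$port $host"
```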
```
./scripts/mount_pod.sh <pod_id> [base_dir]
```

Mounts the pod to ~/pods/<pod_id> by default.

Access files:

```
ls ~/pods/<pod_id>/
cat ~/pods/<pod_id>/workspace/my-project/train.py
```

Unmount:

```
fusermount -u ~/pods/<pod_id>
```
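Under the hood the mount amounts to an SSHFS call. Below is a dry-run sketch of that flow, not the package's script: RUN=echo prints each command instead of executing it, the pod id, port, and IP are placeholders, and a real run would need sshfs and fusermount installed:

```shell
# Dry-run sketch of the SSHFS mount flow (not mount_pod.sh itself).
# RUN=echo makes every step print rather than execute.
RUN=echo
POD_ID="abc123"                 # placeholder pod id
MNT="$HOME/pods/$POD_ID"

$RUN mkdir -p "$MNT"
$RUN sshfs -p 22033 "root@203.0.113.7:/" "$MNT" \
  -o IdentityFile="$HOME/.ssh/runpod_key"
$RUN fusermount -u "$MNT"
```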
| Script | Purpose |
| --- | --- |
| mount_pod.sh | Mount pod filesystem via SSHFS (no runpodctl equivalent) |
Proxy URLs: https://<pod_id>-<port>.proxy.runpod.net

Common ports:
- 8188: ComfyUI
- 7860: Gradio
- 8888: Jupyter
- 8080: Dev tools
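The proxy URL pattern is plain string formatting, so it is easy to build in a script. proxy_url is a hypothetical helper and the pod id below is made up:

```shell
# Hypothetical helper: build a RunPod proxy URL for a pod service.
proxy_url() {
  printf 'https://%s-%s.proxy.runpod.net\n' "$1" "$2"
}

proxy_url abc123xyz 8888   # Jupyter on the (made-up) pod abc123xyz
```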