Tencent SkillHub · AI

GPU Bridge

Offload GPU-intensive ML tasks (BERTScore, embeddings) to one or multiple remote GPU machines

Skill · Free · 0 downloads · 0 stars · 0 installs · 0 score · High Signal


Known item issue.

This item's current download entry is known to bounce back to a listing or homepage instead of returning a package file.

Quick setup
  1. Open the source page and confirm the package flow manually.
  2. Review SKILL.md if you can obtain the files.
  3. Treat this source as manual setup until the download is verified.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Manual review
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: CHANGELOG.md, README.md, SKILL.md, gpu-service/README.md, gpu-service/__init__.py, gpu-service/device.py

Validation

  • Open the source listing and confirm there is a real package or setup artifact available.
  • Review SKILL.md before asking your agent to continue.
  • Treat this source as manual setup until the upstream download flow is fixed.

Install with your agent

Agent handoff

Use the source page and any available docs to guide the install because the item currently does not return a direct package file.

  1. Open the source page via the Open source listing link.
  2. If you can obtain the package, extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the source page and extracted files.
New install

I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required. Then review README.md for any prerequisites, environment setup, or post-install checks.

Upgrade existing

I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need. Then review README.md for any prerequisites, environment setup, or post-install checks.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
0.2.1

Documentation

Primary doc: SKILL.md (15 sections).

@elvatis_com/openclaw-gpu-bridge

OpenClaw plugin to offload ML tasks (BERTScore + embeddings) to one or many remote GPU hosts.

v0.2 Highlights

  • Multi-GPU host pool (hosts[]) with:
      • round-robin or least-busy load balancing
      • automatic failover
      • periodic host health checks
  • Backward compatibility with v0.1 (serviceUrl / url)
  • Flexible model selection per request (model / model_type)
  • GPU service model caching (on-demand loading)
  • Optional transfer visibility via the /status endpoint + batch progress logs

Tools

  • gpu_health
  • gpu_info
  • gpu_status (new in v0.2)
  • gpu_bertscore
  • gpu_embed

v0.2 (recommended)

```json
{
  "plugins": {
    "@elvatis_com/openclaw-gpu-bridge": {
      "hosts": [
        { "name": "rtx-2080ti", "url": "http://your-gpu-host:8765", "apiKey": "gpu-key-1" },
        { "name": "rtx-3090", "url": "http://your-second-gpu-host:8765", "apiKey": "gpu-key-2" }
      ],
      "loadBalancing": "least-busy",
      "healthCheckIntervalSeconds": 30,
      "timeout": 45,
      "models": {
        "embed": "all-MiniLM-L6-v2",
        "bertscore": "microsoft/deberta-xlarge-mnli"
      }
    }
  }
}
```

v0.1 compatibility

```json
{
  "plugins": {
    "@elvatis_com/openclaw-gpu-bridge": {
      "serviceUrl": "http://your-gpu-host:8765",
      "apiKey": "gpu-key",
      "timeout": 45
    }
  }
}
```

Config reference

  • hosts: array of GPU hosts (v0.2)
  • serviceUrl / url: legacy single-host config
  • loadBalancing: round-robin or least-busy
  • healthCheckIntervalSeconds: host health polling interval
  • timeout: request timeout for compute endpoints
  • apiKey: fallback API key for hosts that do not define a per-host key
  • models.embed, models.bertscore: plugin-side default models
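The two load-balancing modes can be sketched as follows. This is an illustrative Python sketch, not the plugin's actual TypeScript implementation; the active_jobs field is an assumed bookkeeping value for the example.

```python
import itertools

# Hypothetical host records mirroring the hosts[] config entries;
# "active_jobs" is an assumed per-host counter, not a documented field.
hosts = [
    {"name": "rtx-2080ti", "url": "http://your-gpu-host:8765", "active_jobs": 3},
    {"name": "rtx-3090", "url": "http://your-second-gpu-host:8765", "active_jobs": 1},
]

# round-robin: walk the pool in a fixed, repeating order
_cycle = itertools.cycle(hosts)

def pick_round_robin():
    return next(_cycle)

# least-busy: pick the host with the fewest active jobs right now
def pick_least_busy():
    return min(hosts, key=lambda h: h["active_jobs"])
```

With the counts above, least-busy would route the next job to rtx-3090, while round-robin alternates between the two hosts regardless of load.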

GPU Service (Python) Setup

```shell
cd gpu-service
pip install -r requirements.txt
uvicorn gpu_service:app --host 0.0.0.0 --port 8765
```

Default models are warmed on startup:
  • Embed: all-MiniLM-L6-v2
  • BERTScore: microsoft/deberta-xlarge-mnli

Additional models are loaded on demand and cached in memory.

Environment variables

  • API_KEY: require X-API-Key for all endpoints except /health
  • GPU_MAX_CONCURRENT: max parallel jobs (default 2)
  • GPU_EMBED_BATCH: embedding chunk size for progress logging (default 32)
  • MODEL_BERTSCORE: default warm model for BERTScore
  • MODEL_EMBED: default warm model for embeddings
  • TORCH_DEVICE: force device (cuda, cpu, cuda:1)
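How the service might read these variables can be sketched as below. The variable names and defaults come from the list above; the parsing code itself is an assumption, not the service's actual source.

```python
import os

# Sketch: read the documented environment variables with their
# stated defaults (names from the docs, code is illustrative).
API_KEY = os.environ.get("API_KEY")            # unset => auth not required
GPU_MAX_CONCURRENT = int(os.environ.get("GPU_MAX_CONCURRENT", "2"))
GPU_EMBED_BATCH = int(os.environ.get("GPU_EMBED_BATCH", "32"))
MODEL_BERTSCORE = os.environ.get("MODEL_BERTSCORE", "microsoft/deberta-xlarge-mnli")
MODEL_EMBED = os.environ.get("MODEL_EMBED", "all-MiniLM-L6-v2")
TORCH_DEVICE = os.environ.get("TORCH_DEVICE")  # e.g. "cuda", "cpu", "cuda:1"
```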

API Endpoints (GPU Service)

  • GET /health
  • GET /info
  • GET /status (queue + active jobs + progress)
  • POST /bertscore
  • POST /embed

Request-level model override

  • /bertscore: { "candidates": ["a"], "references": ["b"], "model_type": "microsoft/deberta-xlarge-mnli" }
  • /embed: { "texts": ["hello world"], "model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2" }
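In Python, the documented request bodies are plain dicts serialized as JSON; the model values here are the examples from the docs and can be swapped per request.

```python
import json

# The documented request bodies for the two compute endpoints,
# built as plain dicts and serialized to JSON for the POST.
bertscore_payload = {
    "candidates": ["a"],
    "references": ["b"],
    "model_type": "microsoft/deberta-xlarge-mnli",
}
embed_payload = {
    "texts": ["hello world"],
    "model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
}

bertscore_body = json.dumps(bertscore_payload)
embed_body = json.dumps(embed_payload)
```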

Exposing to the Internet

If you expose your GPU service outside your LAN, use defense in depth:
  • Pre-shared key auth (required)
      • Set API_KEY on the service
      • Configure the same key in the plugin host config (apiKey)
      • Requests must include X-API-Key
  • TLS/HTTPS (required on the public internet)
      • Recommended: nginx reverse proxy with Let's Encrypt certs
      • Alternative: run uvicorn with an SSL cert/key directly

nginx reverse proxy example

```nginx
server {
    listen 443 ssl http2;
    server_name gpu.example.com;

    ssl_certificate /etc/letsencrypt/live/gpu.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/gpu.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8765;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

uvicorn SSL example

```shell
uvicorn gpu_service:app --host 0.0.0.0 --port 8765 \
  --ssl-keyfile /path/key.pem \
  --ssl-certfile /path/cert.pem
```

Optional: WireGuard VPN instead of public exposure
  • Keep the service private behind the VPN
  • Prefer private WireGuard IPs in plugin hosts[].url

Operational hardening
  • Firewall: allowlist only the OpenClaw server IP
  • Rate limiting at the reverse proxy
  • Monitor logs and rotate keys periodically
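A client call carrying the pre-shared key can be sketched with the Python standard library; the host name and key below are placeholders, and the helper function is hypothetical, not part of the plugin.

```python
import json
import urllib.request

# Hypothetical helper: a JSON POST carrying the pre-shared
# X-API-Key header that the service requires when API_KEY is set.
def gpu_request(url, payload, api_key):
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

req = gpu_request("https://gpu.example.com/embed", {"texts": ["hello world"]}, "gpu-key-1")
# urllib.request.urlopen(req) would perform the call over TLS.
```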

Development

```shell
npm run build
npm test
```

TypeScript runs in strict mode.

License

MIT

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
4 docs · 2 scripts
  • SKILL.md Primary doc
  • CHANGELOG.md Docs
  • gpu-service/README.md Docs
  • README.md Docs
  • gpu-service/__init__.py Scripts
  • gpu-service/device.py Scripts