Computer Vision Expert

SOTA Computer Vision Expert (2026). Specialized in YOLO26, Segment Anything 3 (SAM 3), Vision Language Models, and real-time spatial analysis.

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.

New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.0.0

Documentation

Primary doc: SKILL.md (13 sections)

Computer Vision Expert (SOTA 2026)

Role: Advanced Vision Systems Architect & Spatial Intelligence Expert

Purpose

To provide expert guidance on designing, implementing, and optimizing state-of-the-art computer vision pipelines, from real-time object detection with YOLO26 to foundation-model segmentation with SAM 3 and visual reasoning with VLMs.

When to Use

  • Designing high-performance real-time detection systems (YOLO26).
  • Implementing zero-shot or text-guided segmentation tasks (SAM 3).
  • Building spatial awareness, depth estimation, or 3D reconstruction systems.
  • Optimizing vision models for edge-device deployment (ONNX, TensorRT, NPU).
  • Bridging classical geometry (calibration) with modern deep learning.

Core Expertise

1. Unified Real-Time Detection (YOLO26)

  • NMS-Free Architecture: mastery of end-to-end inference without Non-Maximum Suppression, reducing latency and pipeline complexity.
  • Edge Deployment: optimization for low-power hardware using Distribution Focal Loss (DFL) removal and the MuSGD optimizer.
  • Improved Small-Object Recognition: expertise in using ProgLoss and STAL assignment for high precision in IoT and industrial settings.
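
A minimal inference sketch using the Ultralytics Python API is below; the `yolo26n.pt` checkpoint name is an assumption, so substitute whatever weights you actually have. Because the model is end-to-end, results come back without any NMS post-processing step:

```python
# Minimal YOLO26 inference sketch (Ultralytics API; the "yolo26n.pt"
# checkpoint name is an assumption).
from ultralytics import YOLO

model = YOLO("yolo26n.pt")  # NMS-free, end-to-end detector

# Single-image inference; no NMS post-processing is applied afterwards.
results = model("factory_floor.jpg", conf=0.25)

for r in results:
    for box in r.boxes:
        name = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{name}: ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}) conf={float(box.conf):.2f}")
```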

2. Promptable Segmentation (SAM 3)

  • Text-to-Mask: ability to segment objects using natural-language descriptions (e.g., "the blue container on the right").
  • SAM 3D: reconstructing objects, scenes, and human bodies in 3D from single- or multi-view images.
  • Unified Logic: one model for detection, segmentation, and tracking, with 2x accuracy over SAM 2.
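
The published SAM 3 Python API may differ from what follows; `predictor` and `segment_with_text` below are hypothetical placeholders that only illustrate the text-to-mask flow, with `set_image` borrowed from the SAM/SAM 2 predictor pattern:

```python
# Hypothetical text-to-mask flow. `segment_with_text` is a placeholder,
# NOT a confirmed SAM 3 API; check Meta's SAM 3 release for the real
# entry points before relying on this pattern.
from PIL import Image

def masks_for_phrase(predictor, image_path: str, phrase: str, thresh: float = 0.5):
    """Return every instance mask whose score for `phrase` clears `thresh`."""
    image = Image.open(image_path).convert("RGB")
    predictor.set_image(image)                            # SAM/SAM 2-style setup
    masks, scores = predictor.segment_with_text(phrase)   # hypothetical call
    return [m for m, s in zip(masks, scores) if s > thresh]

# Usage: text grounding returns all matching instances, not just the best one.
# masks = masks_for_phrase(predictor, "dock.jpg", "the blue container on the right")
```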

3. Vision Language Models (VLMs)

  • Visual Grounding: leveraging Florence-2, PaliGemma 2, or Qwen2-VL for semantic scene understanding.
  • Visual Question Answering (VQA): extracting structured data from visual inputs through conversational reasoning.
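
For VQA-style structured extraction, the sketch below follows the published Qwen2-VL usage with Hugging Face `transformers` and the `qwen_vl_utils` helper; the image path and question are illustrative:

```python
# VQA sketch following the published Qwen2-VL example. The image path
# and question are illustrative placeholders.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "shelf.jpg"},
        {"type": "text", "text": "List each visible product and its shelf row as JSON."},
    ],
}]

# Build the chat prompt and pack the image tensors the processor expects.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```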

4. Geometry & Reconstruction

  • Depth Anything V2: state-of-the-art monocular depth estimation for spatial awareness.
  • Sub-pixel Calibration: chessboard/ChArUco pipelines for high-precision stereo and multi-camera rigs.
  • Visual SLAM: real-time localization and mapping for autonomous systems.
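
Of these, the calibration workflow is classical and stable. The sketch below uses OpenCV's standard pipeline, refining chessboard corners to sub-pixel precision before solving for intrinsics; board geometry, square size, and file paths are assumptions:

```python
# Sub-pixel chessboard calibration with OpenCV's standard pipeline.
# Board geometry (9x6 inner corners, 25 mm squares) and the calib/*.png
# glob are assumptions; match them to your actual target and captures.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners per row, column
SQUARE_MM = 25.0      # physical square size

# 3D reference points for one board view: (0,0,0), (25,0,0), ...
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, image_size = [], [], None
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    # Refine the integer corner detections to sub-pixel precision.
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print(f"RMS reprojection error: {rms:.3f} px")  # aim for well under 1 px
```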

Patterns

1. Text-Guided Vision Pipelines

Use SAM 3's text-to-mask capability to isolate specific parts during inspection without training custom detectors for every variation. Combine YOLO26 for fast candidate proposal with SAM 3 for precise mask refinement.
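
A cascade sketch is below. The box-prompt interface shown is the one Ultralytics documents for SAM 2; the `yolo26n.pt` name and the idea of dropping in a SAM 3 checkpoint the same way are assumptions:

```python
# Detector-then-segmenter cascade: YOLO26 proposes boxes, a SAM-family
# model refines them into masks. Checkpoint names are assumptions; the
# SAM box-prompt call mirrors the documented Ultralytics SAM 2 usage.
from ultralytics import YOLO, SAM

detector = YOLO("yolo26n.pt")
segmenter = SAM("sam2.1_b.pt")  # swap in a SAM 3 checkpoint when available

image = "inspection_line.jpg"
det = detector(image, conf=0.4)[0]
boxes = det.boxes.xyxy.tolist()  # fast candidate proposals

if boxes:
    # Prompt the segmenter with the detector's boxes for precise masks.
    seg = segmenter(image, bboxes=boxes)[0]
    print(f"{len(boxes)} candidates -> {len(seg.masks)} refined masks")
```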

2. Deployment-First Design

Leverage YOLO26's simplified ONNX/TensorRT exports (NMS-free). Use MuSGD for significantly faster training convergence on custom datasets.
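
The exports themselves are one-liners in the Ultralytics API; the `yolo26n.pt` checkpoint name is an assumption, and the TensorRT path requires an NVIDIA GPU with TensorRT installed:

```python
# Export sketch via the Ultralytics export API. With an NMS-free model
# the exported graph is a single end-to-end network, no NMS plugin needed.
from ultralytics import YOLO

model = YOLO("yolo26n.pt")  # assumed checkpoint name

# ONNX for portable runtimes (onnxruntime, NPU vendor toolchains).
onnx_path = model.export(format="onnx", imgsz=640)

# TensorRT engine for NVIDIA edge devices such as Jetson.
engine_path = model.export(format="engine", imgsz=640, half=True)
print(onnx_path, engine_path)
```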

3. Progressive 3D Scene Reconstruction

Integrate monocular depth maps with geometric homographies to build accurate 2.5D/3D representations of scenes.
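
As a sketch of the underlying geometry, back-projecting a metric depth map through the pinhole intrinsics yields a 2.5D point cloud in the camera frame; the intrinsic values below are placeholders for a calibrated camera matrix:

```python
# Back-project a metric depth map into a camera-frame point cloud.
# The intrinsics (fx, fy, cx, cy) are placeholder values; use your
# calibrated camera matrix in practice.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """depth: HxW metric depth map -> (H*W, 3) array of XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Usage with placeholder intrinsics for a 640x480 sensor:
depth = np.random.uniform(0.5, 5.0, size=(480, 640)).astype(np.float32)
points = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (307200, 3)
```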

Anti-Patterns

  • Manual NMS Post-processing: stick to NMS-free architectures (YOLO26 / YOLOv10+) for lower overhead.
  • Click-Only Segmentation: forgetting that SAM 3's text grounding eliminates the need for manual point prompts in many scenarios.
  • Legacy DFL Exports: using outdated export pipelines that don't take advantage of YOLO26's simplified module structure.

Sharp Edges (2026)

| Issue | Severity | Solution |
| --- | --- | --- |
| SAM 3 VRAM usage | Medium | Use quantized/distilled versions for local GPU inference. |
| Text ambiguity | Low | Use descriptive prompts ("the 5mm bolt" instead of just "bolt"). |
| Motion blur | Medium | Optimize shutter speed or use SAM 3's temporal tracking consistency. |
| Hardware compatibility | Low | YOLO26's simplified architecture is highly compatible with NPUs/TPUs. |

Related Skills

ai-engineer, robotics-expert, research-engineer, embedded-systems

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 doc
  • SKILL.md (primary doc)