Tencent SkillHub · Security & Compliance

vibe-check

Audit code for "vibe coding sins" — patterns that indicate AI-generated code was accepted without proper review. Produces a scored report card with fix suggestions.




Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
CHANGELOG.md, README.md, SECURITY.md, SKILL.md, TESTING.md, scripts/analyze.sh

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of walking through the setup manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
0.2.1

Documentation

Primary doc: SKILL.md (14 sections)

🎭 Vibe Check

Audit code for "vibe coding" — AI-generated code accepted without proper human review. Get a scored report card with specific findings and fix suggestions.

Trigger

Activate when the user mentions any of: "vibe check", "vibe-check", "audit code", "code quality", "vibe score", "check my code", "review this code for vibe coding", "code review", "vibe audit".

1. Determine the Target

Ask the user what code to analyze. Accepted inputs:

  • Single file: app.py, src/utils.ts
  • Directory: src/, ., my-project/
  • Git diff: last N commits, staged changes, or branch comparison

2. Run the Analysis

```shell
# Single file or directory
bash "$SKILL_DIR/scripts/vibe-check.sh" TARGET

# With fix suggestions
bash "$SKILL_DIR/scripts/vibe-check.sh" --fix TARGET

# Git diff (last 3 commits)
bash "$SKILL_DIR/scripts/vibe-check.sh" --diff HEAD~3

# Staged changes with fixes
bash "$SKILL_DIR/scripts/vibe-check.sh" --staged --fix

# Save to file
bash "$SKILL_DIR/scripts/vibe-check.sh" --fix --output report.md TARGET
```

3. Present the Report

The output is a Markdown report. Present it directly — it's designed to be screenshot-worthy.

Discord v2 Delivery Mode (OpenClaw v2026.2.14+)

When the conversation is happening in a Discord channel:

  • Send a compact summary first (grade, score, file count, top 3 findings), then ask if the user wants the full report.
  • Keep the first message under ~1200 characters and avoid wide Markdown tables in the first response.
  • If Discord components are available, include quick actions: Show Top Findings, Show Fix Suggestions, Run Diff Mode. If components are not available, provide the same follow-ups as a numbered list.
  • Prefer short follow-up chunks (<=15 lines per message) when sending the full report.
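The chunked delivery above can be sketched with a small helper. This is a hypothetical function, not part of the packaged scripts; `chunk_report` and the end-of-chunk marker are names invented here for illustration.

```shell
# Hypothetical helper: split a full Markdown report into chunks of at
# most 15 lines, so each chunk can be sent as one Discord message.
chunk_report() {
  # $1: path to the Markdown report
  local dir
  dir=$(mktemp -d)
  split -l 15 "$1" "$dir/chunk-"
  for f in "$dir"/chunk-*; do
    cat "$f"
    echo '---END OF CHUNK---'   # agent sends everything above as one message
  done
  rm -rf "$dir"
}
```

A 40-line report would yield three chunks (15, 15, and 10 lines). Tables wider than a Discord message still need to be reflowed separately; this only handles length.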

Quick Reference

| Command | Description |
| --- | --- |
| vibe-check FILE | Analyze a single file |
| vibe-check DIR | Scan directory recursively |
| vibe-check --diff | Check last commit's changes |
| vibe-check --diff HEAD~5 | Check last 5 commits |
| vibe-check --staged | Check staged changes |
| vibe-check --fix DIR | Include fix suggestions |
| vibe-check --output report.md DIR | Save report to file |

Sin Categories (what it checks)

| Category | Weight | What It Catches |
| --- | --- | --- |
| Error Handling | 20% | Missing try/catch, bare exceptions, no edge cases |
| Input Validation | 15% | No type checks, no bounds checks, trusting all input |
| Duplication | 15% | Copy-pasted logic, DRY violations |
| Dead Code | 10% | Unused imports, commented-out blocks, unreachable code |
| Magic Values | 10% | Hardcoded strings/numbers/URLs without constants |
| Test Coverage | 10% | No test files, no test patterns, no assertions |
| Naming Quality | 10% | Vague names (data, result, temp, x), misleading names |
| Security | 10% | eval(), exec(), hardcoded secrets, SQL injection |
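The weights above sum to 100%, so the overall score is a weighted average of per-category scores. A minimal sketch of that aggregation, assuming each category is scored 0-100 (the function name and argument order are invented here; the actual math lives in the packaged scripts):

```shell
# Hypothetical sketch of the weighted aggregation. Arguments are
# per-category scores (0-100) in the table's order.
overall_score() {
  # $1 error handling (20%), $2 input validation (15%), $3 duplication (15%),
  # $4 dead code, $5 magic values, $6 test coverage, $7 naming, $8 security (10% each)
  echo $(( ( $1*20 + $2*15 + $3*15 + ($4 + $5 + $6 + $7 + $8) * 10 ) / 100 ))
}
```

For example, a codebase that is perfect everywhere except a 50 in error handling lands at 90, since error handling carries the heaviest weight.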

Scoring

  • A (90-100): Pristine code, minimal issues
  • B (80-89): Clean code with minor issues
  • C (70-79): Decent but lazy patterns crept in
  • D (60-69): Needs human attention
  • F (<60): Heavy vibe coding detected
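The bands above translate directly into a score-to-grade mapping. A minimal sketch (hypothetical helper; the packaged report generator may implement this differently):

```shell
# Hypothetical helper: map a numeric score (0-100) to the letter grades above.
vibe_grade() {
  local score=$1
  if   [ "$score" -ge 90 ]; then echo A
  elif [ "$score" -ge 80 ]; then echo B
  elif [ "$score" -ge 70 ]; then echo C
  elif [ "$score" -ge 60 ]; then echo D
  else echo F
  fi
}
```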

Notes for the Agent

  • The report is the star. Present it in full — it's designed to look great.
  • After presenting, offer to run --fix mode if they didn't already.
  • Suggest the README badge: ![Vibe Score](https://img.shields.io/badge/vibe--score-XX%2F100-COLOR)
  • For large codebases, suggest focusing on specific directories or using --diff mode.
  • If no LLM API key is set, the tool falls back to heuristic analysis (less accurate but still useful).
  • Supported languages (v1): Python, TypeScript, JavaScript only.
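Filling in the badge template can be sketched as below. The score-to-color bands are an assumption made here (the source only gives the XX/COLOR placeholders); the color names are standard shields.io named colors.

```shell
# Hypothetical helper: build the README badge URL from a score.
# Color bands are assumed: >=90 brightgreen, >=70 yellow, else red.
vibe_badge() {
  local score=$1 color
  if   [ "$score" -ge 90 ]; then color=brightgreen
  elif [ "$score" -ge 70 ]; then color=yellow
  else color=red
  fi
  # %2F is the URL-encoded "/" in "XX/100"
  echo "https://img.shields.io/badge/vibe--score-${score}%2F100-${color}"
}
```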

References

  • scripts/vibe-check.sh — Main entry point
  • scripts/analyze.sh — LLM code analysis engine (with heuristic fallback)
  • scripts/git-diff.sh — Git diff file extractor
  • scripts/report.sh — Markdown report generator
  • scripts/common.sh — Shared utilities and constants

Example 1: Audit a Directory

User: "Vibe check my src directory"
Agent runs: bash "$SKILL_DIR/scripts/vibe-check.sh" src/
Output: Full scorecard with per-file breakdown, category scores, and top findings.

Example 2: Check with Fixes

User: "Review this code for vibe coding and suggest fixes"
Agent runs: bash "$SKILL_DIR/scripts/vibe-check.sh" --fix src/
Output: Scorecard + unified diff patches for each finding.

Example 3: Git Diff Mode

User: "Check the code quality of my last 3 commits"
Agent runs: bash "$SKILL_DIR/scripts/vibe-check.sh" --diff HEAD~3
Output: Scorecard focused only on recently changed files.

Category context

Identity, auth, scanning, governance, audit, and operational guardrails.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
5 docs · 1 script
  • SKILL.md Primary doc
  • CHANGELOG.md Docs
  • README.md Docs
  • SECURITY.md Docs
  • TESTING.md Docs
  • scripts/analyze.sh Scripts