Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Audit code for "vibe coding sins": patterns that indicate AI-generated code was accepted without proper review. Produces a scored report card with fix suggestions.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Audit code for "vibe coding" — AI-generated code accepted without proper human review. Get a scored report card with specific findings and fix suggestions.
Activate when the user mentions any of: "vibe check", "vibe-check", "audit code", "code quality", "vibe score", "check my code", "review this code for vibe coding", "code review", "vibe audit".
Ask the user what code to analyze. Accepted inputs:
- Single file: app.py, src/utils.ts
- Directory: src/, ., my-project/
- Git diff: last N commits, staged changes, or branch comparison
```bash
# Single file or directory
bash "$SKILL_DIR/scripts/vibe-check.sh" TARGET

# With fix suggestions
bash "$SKILL_DIR/scripts/vibe-check.sh" --fix TARGET

# Git diff (last 3 commits)
bash "$SKILL_DIR/scripts/vibe-check.sh" --diff HEAD~3

# Staged changes with fixes
bash "$SKILL_DIR/scripts/vibe-check.sh" --staged --fix

# Save to file
bash "$SKILL_DIR/scripts/vibe-check.sh" --fix --output report.md TARGET
```
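A minimal end-to-end sketch, assuming the package was extracted to a folder you control; the path and the exported SKILL_DIR variable are assumptions for illustration, not part of the package:

```bash
# Assumption: the package was extracted to ~/skills/vibe-check (any path works).
export SKILL_DIR="$HOME/skills/vibe-check"

# Scan a project directory, include fix suggestions, and save the report.
bash "$SKILL_DIR/scripts/vibe-check.sh" --fix --output report.md src/
```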
The output is a Markdown report. Present it directly — it's designed to be screenshot-worthy.
When the conversation is happening in a Discord channel:
- Send a compact summary first (grade, score, file count, top 3 findings), then ask if the user wants the full report.
- Keep the first message under ~1200 characters and avoid wide Markdown tables in the first response.
- If Discord components are available, include quick actions: Show Top Findings, Show Fix Suggestions, Run Diff Mode. If components are not available, provide the same follow-ups as a numbered list.
- Prefer short follow-up chunks (<=15 lines per message) when sending the full report.
| Command | Description |
| --- | --- |
| `vibe-check FILE` | Analyze a single file |
| `vibe-check DIR` | Scan directory recursively |
| `vibe-check --diff` | Check last commit's changes |
| `vibe-check --diff HEAD~5` | Check last 5 commits |
| `vibe-check --staged` | Check staged changes |
| `vibe-check --fix DIR` | Include fix suggestions |
| `vibe-check --output report.md DIR` | Save report to file |
| Category | Weight | What It Catches |
| --- | --- | --- |
| Error Handling | 20% | Missing try/catch, bare exceptions, no edge cases |
| Input Validation | 15% | No type checks, no bounds checks, trusting all input |
| Duplication | 15% | Copy-pasted logic, DRY violations |
| Dead Code | 10% | Unused imports, commented-out blocks, unreachable code |
| Magic Values | 10% | Hardcoded strings/numbers/URLs without constants |
| Test Coverage | 10% | No test files, no test patterns, no assertions |
| Naming Quality | 10% | Vague names (data, result, temp, x), misleading names |
| Security | 10% | eval(), exec(), hardcoded secrets, SQL injection |
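To make the weighting concrete, here is a hedged sketch of a weighted-average calculation. The variable names and the sample per-category scores are invented for the example and are not taken from report.sh:

```bash
#!/usr/bin/env bash
# Sketch only (bash 4+ for associative arrays): combine per-category scores
# (0-100) into an overall score using the weights from the table above.
declare -A WEIGHTS=(
  [error_handling]=20 [input_validation]=15 [duplication]=15 [dead_code]=10
  [magic_values]=10 [test_coverage]=10 [naming_quality]=10 [security]=10
)
declare -A SCORES=(   # example values only
  [error_handling]=70 [input_validation]=85 [duplication]=90 [dead_code]=100
  [magic_values]=60 [test_coverage]=40 [naming_quality]=80 [security]=95
)

total=0
for category in "${!WEIGHTS[@]}"; do
  total=$(( total + SCORES[$category] * WEIGHTS[$category] ))
done
echo "Overall score: $(( total / 100 ))"   # weights sum to 100, so divide by 100
```

With the sample scores above this prints an overall score of 77, a C on the scale below.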
- A (90-100): Pristine code, minimal issues
- B (80-89): Clean code with minor issues
- C (70-79): Decent, but lazy patterns crept in
- D (60-69): Needs human attention
- F (<60): Heavy vibe coding detected
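A small sketch of the score-to-grade mapping implied by the scale above; the function name is hypothetical:

```bash
# Maps a 0-100 overall score to the letter grades listed above.
grade_for() {
  local score=$1
  if   (( score >= 90 )); then echo "A"
  elif (( score >= 80 )); then echo "B"
  elif (( score >= 70 )); then echo "C"
  elif (( score >= 60 )); then echo "D"
  else                         echo "F"
  fi
}

grade_for 77   # -> C
```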
- The report is the star. Present it in full; it's designed to look great.
- After presenting, offer to run --fix mode if they didn't already.
- Suggest the README badge.
- For large codebases, suggest focusing on specific directories or using --diff mode.
- If no LLM API key is set, the tool falls back to heuristic analysis (less accurate but still useful); see the sketch below.
- Supported languages (v1): Python, TypeScript, and JavaScript only.
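The docs don't name the API key variable, so the sketch below only illustrates the fallback decision; LLM_API_KEY, the example path, and the grep patterns are assumptions, not the actual analyze.sh logic:

```bash
# Assumption: an LLM_API_KEY-style variable gates the LLM-backed path.
if [[ -n "${LLM_API_KEY:-}" ]]; then
  echo "mode: llm"         # full LLM analysis of each file
else
  echo "mode: heuristic"   # pattern-based fallback, e.g. flag bare excepts or eval()
  grep -rnE 'except:|eval\(' src/ || true
fi
```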
- `scripts/vibe-check.sh` - Main entry point
- `scripts/analyze.sh` - LLM code analysis engine (with heuristic fallback)
- `scripts/git-diff.sh` - Git diff file extractor
- `scripts/report.sh` - Markdown report generator
- `scripts/common.sh` - Shared utilities and constants
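To make the division of labor concrete, here is a hedged, hypothetical skeleton of how the entry point could wire these scripts together; the shipped vibe-check.sh may be structured differently:

```bash
# Hypothetical skeleton only, not the actual implementation.
SKILL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
source "$SKILL_DIR/scripts/common.sh"                   # shared utilities and constants

if [[ "$1" == "--diff" || "$1" == "--staged" ]]; then
  files=$(bash "$SKILL_DIR/scripts/git-diff.sh" "$@")   # changed files only
else
  files=$(find "$1" -type f \( -name '*.py' -o -name '*.ts' -o -name '*.js' \))
fi

bash "$SKILL_DIR/scripts/analyze.sh" $files \
  | bash "$SKILL_DIR/scripts/report.sh"                 # render the Markdown report
```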
User: "Vibe check my src directory" Agent runs: bash "$SKILL_DIR/scripts/vibe-check.sh" src/ Output: Full scorecard with per-file breakdown, category scores, and top findings.
User: "Review this code for vibe coding and suggest fixes" Agent runs: bash "$SKILL_DIR/scripts/vibe-check.sh" --fix src/ Output: Scorecard + unified diff patches for each finding.
User: "Check the code quality of my last 3 commits" Agent runs: bash "$SKILL_DIR/scripts/vibe-check.sh" --diff HEAD~3 Output: Scorecard focused only on recently changed files.