Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
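If you are doing the extraction yourself before handing anything to an agent, a minimal shell sketch might look like this; the archive name is hypothetical, so substitute whatever file Yavira actually served:

```bash
# Hypothetical archive name; substitute the file you downloaded from Yavira
unzip pinchbench-skill.zip -d pinchbench-skill
cd pinchbench-skill
cat SKILL.md   # read the install instructions before running anything
```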
Run PinchBench benchmarks to evaluate OpenClaw agent performance across real-world tasks. Use when testing model capabilities, comparing models, or submitting benchmark results to the leaderboard.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
PinchBench measures how well LLMs perform as the brain of an OpenClaw agent. Results are collected on a public leaderboard at pinchbench.com.
Prerequisites:

- Python 3.10+
- uv package manager
- OpenClaw instance (this agent)
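A quick sanity check of the first two prerequisites before running anything, assuming both tools are already on your PATH:

```bash
python3 --version   # should report 3.10 or newer
uv --version        # confirms the uv package manager is available
```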
```bash
cd <skill_directory>

# Run benchmark with a specific model
uv run benchmark.py --model anthropic/claude-sonnet-4

# Run only automated tasks (faster)
uv run benchmark.py --model anthropic/claude-sonnet-4 --suite automated-only

# Run specific tasks
uv run benchmark.py --model anthropic/claude-sonnet-4 --suite task_01_calendar,task_02_stock

# Skip uploading results
uv run benchmark.py --model anthropic/claude-sonnet-4 --no-upload
```
| Task | Category | Description |
|---|---|---|
| task_00_sanity | Basic | Verify agent works |
| task_01_calendar | Productivity | Calendar event creation |
| task_02_stock | Research | Stock price lookup |
| task_03_blog | Writing | Blog post creation |
| task_04_weather | Coding | Weather script |
| task_05_summary | Analysis | Document summarization |
| task_06_events | Research | Conference research |
| task_07_email | Writing | Email drafting |
| task_08_memory | Memory | Context retrieval |
| task_09_files | Files | File structure creation |
| task_10_workflow | Integration | Multi-step API workflow |
| task_11_clawdhub | Skills | ClawHub interaction |
| task_12_skill_search | Skills | Skill discovery |
| task_13_image_gen | Creative | Image generation |
| task_14_humanizer | Writing | Text humanization |
| task_15_daily_summary | Productivity | Daily digest |
| task_16_email_triage | Email | Inbox triage |
| task_17_email_search | Email | Email search |
| task_18_market_research | Research | Market analysis |
| task_19_spreadsheet_summary | Analysis | Spreadsheet analysis |
| task_20_eli5_pdf_summary | Analysis | PDF simplification |
| task_21_openclaw_comprehension | Knowledge | OpenClaw docs comprehension |
| task_22_second_brain | Memory | Knowledge management |
| Option | Description |
|---|---|
| `--model` | Model identifier (e.g., `anthropic/claude-sonnet-4`) |
| `--suite` | `all`, `automated-only`, or comma-separated task IDs |
| `--output-dir` | Results directory (default: `results/`) |
| `--timeout-multiplier` | Scale task timeouts for slower models |
| `--runs` | Number of runs per task for averaging |
| `--no-upload` | Skip uploading to leaderboard |
| `--register` | Request a new API token for submissions |
| `--upload FILE` | Upload previous results JSON |
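A hedged example combining several of the options above; the run count and multiplier value are arbitrary illustrations, not recommendations from the PinchBench docs:

```bash
# Average 3 runs per task and double task timeouts; keep results local
uv run benchmark.py --model anthropic/claude-sonnet-4 \
  --runs 3 --timeout-multiplier 2 --no-upload
```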
To submit results to the leaderboard:

```bash
# Register for an API token (one-time)
uv run benchmark.py --register

# Run benchmark (auto-uploads with token)
uv run benchmark.py --model anthropic/claude-sonnet-4
```
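If a run finished but was not uploaded (for example, after `--no-upload`), the `--upload FILE` flag from the options table can resubmit a saved results file; the filename below is illustrative:

```bash
# Re-upload a previously saved results file without re-running the benchmark
uv run benchmark.py --upload results/0001_anthropic-claude-sonnet-4.json
```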
Results are saved as JSON in the output directory:

```bash
# View task scores
jq '.tasks[] | {task_id, score: .grading.mean}' results/0001_anthropic-claude-sonnet-4.json

# Show failed tasks
jq '.tasks[] | select(.grading.mean < 0.5)' results/*.json

# Calculate overall score
jq '{average: ([.tasks[].grading.mean] | add / length)}' results/*.json
```
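Building on the JSON shape the commands above assume (a top-level `tasks` array with a `grading.mean` per task), a small loop can compare overall scores across every saved run:

```bash
# Print one overall score per results file (assumes the schema used above)
for f in results/*.json; do
  printf '%s: ' "$f"
  jq '[.tasks[].grading.mean] | add / length' "$f"
done
```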
Create a markdown file in tasks/ following TASK_TEMPLATE.md. Each task needs:

- YAML frontmatter (id, name, category, grading_type, timeout)
- Prompt section
- Expected behavior
- Grading criteria
- Automated checks (Python grading function)
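A minimal sketch of what such a task file might look like, assuming only the frontmatter fields and sections listed above; TASK_TEMPLATE.md defines the real field values and grading-function signature, so treat everything here (including `grade`) as a placeholder:

````markdown
---
id: task_23_example
name: Example task
category: Analysis
grading_type: automated   # placeholder value; check TASK_TEMPLATE.md
timeout: 120              # placeholder value; check TASK_TEMPLATE.md
---

## Prompt
<what the agent is asked to do>

## Expected behavior
<what a correct run looks like>

## Grading criteria
<how credit is assigned>

## Automated checks
```python
def grade(result):
    # Hypothetical signature; TASK_TEMPLATE.md defines the real one.
    ...
```
````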
View results at pinchbench.com. The leaderboard shows:

- Model rankings by overall score
- Per-task breakdowns
- Historical performance trends