Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Use when working with Crunch competitions - setting up workspaces, exploring quickstarters, testing solutions locally, or submitting entries.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Guides users through Crunch competition lifecycle: setup, quickstarter discovery, solution development, local testing, and submission.
- Python 3.9+ with the `venv` module (included in standard Python)
- `pip` for package installation
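The prerequisite check above can be automated. A minimal sketch, assuming nothing beyond the standard library (the function name is ours, not part of any CLI):

```python
import importlib.util
import sys

def check_prerequisites() -> list:
    """Return a list of missing prerequisites; an empty list means ready."""
    problems = []
    if sys.version_info < (3, 9):
        problems.append(f"Python 3.9+ required, found {sys.version.split()[0]}")
    # venv ships with standard Python, but some distros split it into a separate package
    if importlib.util.find_spec("venv") is None:
        problems.append("venv module not available")
    if importlib.util.find_spec("pip") is None:
        problems.append("pip not available")
    return problems
```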
This skill installs Python packages from PyPI into isolated virtual environments:

| Package | Source | Purpose |
| --- | --- | --- |
| `crunch-cli` | PyPI | CrunchDAO competition CLI (setup, test, submit) |
| `jupyter` | PyPI | Notebook support (optional) |
| `ipykernel` | PyPI | Jupyter kernel registration (optional) |
| Competition SDKs (e.g. `crunch-synth`, `birdgame`) | PyPI | Competition-specific libraries (varies) |

Agent rules for package installation:
- Always use a virtual environment; never install into system Python
- Only install known packages listed above or referenced in competition docs (PACKAGES.md)
- Ask the user before installing any package not listed here
- All packages come from PyPI: no custom URLs, no `--index-url` overrides, no `.whl` files from unknown sources
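The installation rules above can be enforced mechanically before any install runs. A sketch of an allowlist gate; the helper name is ours, and the package set mirrors the table (a real agent would extend it from PACKAGES.md after asking the user):

```python
# Known-good packages from the table above; competition SDKs would be added
# from PACKAGES.md only after confirming with the user.
ALLOWED_PACKAGES = {"crunch-cli", "jupyter", "ipykernel", "crunch-synth", "birdgame"}

def pip_install_command(packages):
    """Build a pip command for allowed packages; refuse anything unknown."""
    unknown = sorted(set(packages) - ALLOWED_PACKAGES)
    if unknown:
        raise ValueError(f"ask the user before installing: {', '.join(unknown)}")
    # Default PyPI index only: no --index-url overrides, no local .whl files
    return ["python", "-m", "pip", "install", "--upgrade", *sorted(set(packages))]
```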
- How to get: the user logs into the CrunchDAO Hub, navigates to the competition's submit page (`/competitions/<competition>/submit`), and copies their token
- How it's used: passed once via `--token <TOKEN>` during `crunch setup`
- Persistence: after setup, the CLI stores the token in the project's `.crunch/` config directory. All subsequent commands (`crunch test`, `crunch push`, `crunch download`) authenticate automatically; there is no need to pass the token again
- If the token expires: run `crunch update-token` inside the project directory to refresh it

Agent rules for tokens:
- Always ask the user to provide the token; never assume, guess, or reuse tokens from other projects
- Never write tokens into source files, scripts, notebooks, or any committed file
- Never log or echo tokens in shell output (use the `--token <TOKEN>` placeholder in examples shown to the user)
- Tokens are user-specific and project-scoped; each `crunch setup` call requires the user to supply one
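The token rules above amount to: collect the token interactively, never echo it, never persist it in source. A sketch of that pattern; the `CRUNCH_TOKEN` environment variable and the helper name are our assumptions, not features of the CLI:

```python
import os
from getpass import getpass

def get_crunch_token() -> str:
    """Obtain the token without echoing it or writing it to any file."""
    # CRUNCH_TOKEN is a hypothetical convenience variable, not part of crunch-cli
    token = os.environ.get("CRUNCH_TOKEN", "")
    if not token:
        # getpass hides the input, so the token never appears in shell output
        token = getpass("Paste your competition token (input hidden): ")
    return token.strip()
```

The result would be passed to `crunch setup ... --token`, while any output shown to the user keeps the `--token <TOKEN>` placeholder.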
- Used only for browsing quickstarter listings via `api.github.com` (public repo, no auth needed)
- Rate-limited to 60 requests/hour per IP; sufficient for normal use
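Browsing works against GitHub's public contents API. A sketch of the listing call, assuming nothing beyond the standard library; the owner/repo arguments are placeholders for the competition's actual quickstarter repository:

```python
import json
from urllib.request import Request, urlopen

def contents_url(owner: str, repo: str, path: str = "") -> str:
    """Build the api.github.com contents URL used for browsing (no auth)."""
    base = f"https://api.github.com/repos/{owner}/{repo}/contents"
    return f"{base}/{path}" if path else base

def list_directory(owner: str, repo: str, path: str = ""):
    """List entry names; each call counts against the 60 req/hour IP limit."""
    req = Request(contents_url(owner, repo, path),
                  headers={"Accept": "application/vnd.github+json"})
    with urlopen(req) as resp:
        return [entry["name"] for entry in json.load(resp)]
```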
| Operation | Requires network | Endpoint |
| --- | --- | --- |
| `crunch setup` | Yes | hub.crunchdao.com |
| `crunch push` | Yes | hub.crunchdao.com |
| `crunch download` | Yes | hub.crunchdao.com |
| `crunch test` | No | Local only |
| `crunch list` | Yes | hub.crunchdao.com |
| `pip install` | Yes | pypi.org |
| Quickstarter browsing | Yes | api.github.com |
Each competition needs its own virtual environment (dependencies can conflict).

```shell
mkdir -p ~/.crunch/workspace/competitions/<competition>
cd ~/.crunch/workspace/competitions/<competition>
python -m venv .venv && source .venv/bin/activate
pip install crunch-cli jupyter ipykernel --upgrade --quiet --progress-bar=off
python -m ipykernel install --user --name <competition> --display-name "Crunch - <competition>"

# Get token from: https://hub.crunchdao.com/competitions/<competition>/submit
crunch setup <competition> <project-name> --token <TOKEN>
cd <competition>-<project-name>
```

For competition-specific packages and full examples, see references/competition-setup.md.
```shell
crunch list   # List competitions
```
Read the quickstarter code (`main.py` or the notebook) and the competition's SKILL.md/README.md. Provide a walkthrough covering: goal, interface, data flow, approach, scoring, constraints, limitations, and improvement ideas.
Analyze the current approach, cross-reference competition docs (SKILL.md, LITERATURE.md, PACKAGES.md), and generate concrete code suggestions:
- Model: mixture densities, NGBoost, quantile regression, ensembles
- Features: volatility regimes, cross-asset correlation, seasonality
- Architecture: online learning, Bayesian updating, horizon-specific models
```shell
crunch test   # Test solution locally
```
```shell
crunch test                    # Always test first
crunch push -m "Description"   # Submit
```
| User says | Action |
| --- | --- |
| what competitions are available | `crunch list` |
| show quickstarters for `<name>` | Fetch from GitHub API |
| set up `<competition>` | Full workspace setup |
| download the data | `crunch download` |
| get the `<name>` quickstarter | `crunch quickstarter --name` |
| explain this quickstarter | Structured code walkthrough |
| propose improvements | Analyze and suggest code improvements |
| test my solution | `crunch test` |
| compare with baseline | Run both, side-by-side results |
| submit my solution | `crunch push` |
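The routing table above can be approximated with simple substring matching. A sketch only; the phrase-to-command mapping is a subset of the table, and a real agent would apply its own intent handling rather than exact phrases:

```python
from typing import Optional

# A few representative entries from the routing table above
INTENT_ACTIONS = {
    "what competitions are available": "crunch list",
    "download the data": "crunch download",
    "test my solution": "crunch test",
    "submit my solution": "crunch push",
}

def action_for(utterance: str) -> Optional[str]:
    """Return the mapped action, or None so the agent can ask for clarification."""
    text = utterance.lower()
    for phrase, action in INTENT_ACTIONS.items():
        if phrase in text:
            return action
    return None
```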
- Entrypoint must be `main.py` (the default for `crunch push`/`crunch test`)
- Model files go in the `resources/` directory
- Respect the competition interface and constraints (time limits, output format)
- Ask before installing new packages
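The structural constraints above lend themselves to a quick pre-flight check before running `crunch test` or `crunch push`. A minimal sketch; the function name is ours, and it checks only the file-layout rules, not the competition's runtime interface:

```python
from pathlib import Path

def presubmit_problems(project: Path) -> list:
    """Return layout violations; an empty list means the structure looks right."""
    problems = []
    if not (project / "main.py").is_file():
        problems.append("missing main.py entrypoint")
    if not (project / "resources").is_dir():
        problems.append("missing resources/ directory for model files")
    return problems
```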
- CLI commands: references/cli-reference.md
- Setup examples: references/competition-setup.md