Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Use when generating BAML code for type-safe LLM extraction, classification, RAG, or agent workflows - creates complete .baml files with types, functions, clients, tests, and framework integrations from natural language requirements. Queries official BoundaryML repositories via MCP for real-time patterns. Supports multimodal inputs (images, audio), Python/TypeScript/Ruby/Go, 10+ frameworks, 50-70% token optimization, 95%+ compilation success.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Generate type-safe LLM extraction code. Use when creating structured outputs, classification, RAG, or agent workflows.
- NEVER edit baml_client/ - 100% generated, overwritten on every baml-cli generate; check baml_src/generators.baml for output_type (python, typescript, ruby, go)
- ALWAYS edit baml_src/ - the source of truth for all BAML code
- Run baml-cli generate after changes - regenerates typed client code for the target language
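For reference, the generator block that controls the output language lives in baml_src/generators.baml. A minimal sketch (the generator name, version, and output_dir below are assumptions for illustration):

```baml
// Sketch of baml_src/generators.baml; name, version, and paths are illustrative.
generator target {
  output_type "python/pydantic"  // or "typescript", "ruby/sorbet", "go"
  output_dir "../"               // where baml_client/ is written
  version "0.76.0"               // pin to your installed baml-cli version
}
```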
- Schema Is The Prompt - define data models first; the compiler injects types
- Types Over Strings - use enums/classes/unions, not string parsing
- Fuzzy Parsing Is BAML's Job - BAML extracts valid JSON from messy LLM output
- Transpiler, Not Library - write .baml → generate native code (Python/TypeScript/Ruby/Go), no runtime dependency
- Test-Driven Prompting - use the VS Code playground or baml-cli test to iterate
Analyze → Pattern Match (MCP) → Validate → Generate → Test → Deliver → [IF ERRORS] Error Recovery (MCP) → Retry
| Element | Example |
| --- | --- |
| Class | `class Invoice { total float @description("Amount") @assert(this > 0) @alias("amt") }` |
| Enum | `enum Category { Tech @alias("technology") @description("Tech sector"), Finance, Other }` |
| Function | `function Extract(text: string, img: image?) -> Invoice { client GPT5 prompt #"{{ text }} {{ img }} {{ ctx.output_format }}"# }` |
| Client | `client<llm> GPT5 { provider openai options { model gpt-5 } retry_policy Exponential }` |
| Fallback | `client<llm> Resilient { provider fallback options { strategy [FastModel, SlowModel] } }` |
- Primitives: string, int, float, bool
- Multimodal: image, audio
- Containers: Type[] (array), Type? (optional), map<string, Type> (key-value)
- Composite: Type1 | Type2 (union), nested classes
- Annotations: @description("..."), @assert(condition), @alias("json_name"), @check(name, condition)
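A sketch combining several of these type-system features in one schema (the class and field names below are made up for illustration, and the @assert syntax follows the shorthand used elsewhere in this document):

```baml
// Illustrative schema only; names are not from the skill package.
class LineItem {
  name string @description("Product name as written on the invoice")
  quantity int @assert(this > 0)
  unit_price float? @alias("price")
}

class Invoice {
  items LineItem[]
  metadata map<string, string>
  total float | string @description("Total amount; may come back as text")
}
```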
openai, anthropic, gemini, vertex, bedrock, ollama + any OpenAI-compatible via openai-generic
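For a provider not on the list, a client can be sketched with openai-generic; the base_url and model below assume a local Ollama server and are only illustrative:

```baml
// Sketch: openai-generic pointed at a local OpenAI-compatible endpoint.
// base_url and model are assumptions; adjust for your deployment.
client<llm> LocalLlama {
  provider openai-generic
  options {
    base_url "http://localhost:11434/v1"
    model "llama3.1"
  }
}
```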
| Pattern | Use Case | Model | Framework Markers |
| --- | --- | --- | --- |
| Extraction | Unstructured → structured | GPT-5 | fastapi, next.js |
| Classification | Categorization | GPT-5-mini | any |
| RAG | Answers with citations | GPT-5 | langgraph |
| Agents | Multi-step reasoning | GPT-5 | langgraph |
| Vision | Image/audio data extraction | GPT-5-Vision | multimodal |
- retry_policy: `retry_policy Exp { max_retries 3 strategy { type exponential_backoff } }`
- fallback client: chain models `[FastCheap, SlowReliable]` for a cost/reliability tradeoff
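Put together, a resilient setup might be sketched as follows; the client names and model strings are illustrative, not part of the skill:

```baml
// Illustrative resilience sketch: retries on each underlying client,
// plus a fallback client that tries the cheap model first.
retry_policy Exp {
  max_retries 3
  strategy {
    type exponential_backoff
  }
}

client<llm> FastCheap {
  provider openai
  options { model "gpt-5-mini" }  // assumed model name
  retry_policy Exp
}

client<llm> SlowReliable {
  provider anthropic
  options { model "claude-sonnet-4" }  // assumed model name
  retry_policy Exp
}

client<llm> Chain {
  provider fallback
  options {
    strategy [FastCheap, SlowReliable]
  }
}
```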
- Found patterns from baml-examples
- Validated against BoundaryML/baml
- Fixed errors using docs
- MCP unavailable, using fallback
- BAML Code - complete .baml files (types, functions, clients, retry_policy)
- Tests - pytest/Jest with 100% function coverage
- Integration - framework-specific client code (LangGraph nodes, FastAPI endpoints, Next.js API routes)
- Metadata - pattern used, token count, cost estimate
- providers.md - OpenAI, Anthropic, Google, Ollama, Azure, Bedrock, openai-generic
- types-and-schemas.md - full type system: classes, enums, unions, map, image, audio
- validation.md - @assert, @check, @alias, block-level @@assert
- patterns.md - pattern library with code examples
- philosophy.md - BAML principles, golden rules
- mcp-interface.md - query workflow, caching
- languages-python.md - Python/Pydantic, async
- languages-typescript.md - TypeScript, React/Next.js
- frameworks-langgraph.md - LangGraph integration