Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Build and debug Groq API chat and speech workflows with low-latency routing, structured outputs, and production-safe patterns.
Hand the extracted package to your coding agent with a concrete install brief instead of working out the steps yourself.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
On first use, read setup.md for activation preferences, credential verification, and default workflow setup.
User needs to build, integrate, or troubleshoot Groq API inference for chat, tool calling, or speech transcription. Agent handles request shaping, model routing, failure recovery, and safe production patterns.
Memory lives in ~/groq-api/. See memory-template.md for structure.

    ~/groq-api/
    ├── memory.md      # Status, activation preference, and defaults
    ├── requests/      # Reusable payload snippets
    ├── logs/          # Optional debug snapshots
    └── experiments/   # Prompt/model A-B notes
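The layout above can be initialized with a short one-time sketch. The paths come straight from the tree; seeding memory.md with the content from memory-template.md is left to the setup flow:

```shell
# One-time init for the memory layout above.
BASE="$HOME/groq-api"
mkdir -p "$BASE/requests" "$BASE/logs" "$BASE/experiments"
# Create an empty memory.md if absent; fill it from memory-template.md in practice.
[ -f "$BASE/memory.md" ] || : > "$BASE/memory.md"
```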
Use these files as decision aids, not as static docs: pick the smallest file that resolves the current blocker.

| Topic | File |
| --- | --- |
| Setup process | setup.md |
| Memory template | memory-template.md |
| Request patterns | api-patterns.md |
| Model routing | model-selection.md |
| Failures and recovery | troubleshooting.md |
Check GROQ_API_KEY first and use Authorization: Bearer $GROQ_API_KEY for every request. Use https://api.groq.com/openai/v1 as the base URL and confirm access with /models:

    curl -s https://api.groq.com/openai/v1/models \
      -H "Authorization: Bearer $GROQ_API_KEY" | jq '.data[0].id'
Begin with small prompts and explicit format instructions. Add complexity only after the baseline call is stable.
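A minimal baseline call can look like the sketch below. The model ID is an assumption for illustration; pick a live ID from /models instead of trusting it:

```shell
# Baseline chat call: small prompt, explicit format instruction, temperature 0.
# "llama-3.1-8b-instant" is an assumed model ID; select a real one from /models.
PAYLOAD=$(cat <<'EOF'
{
  "model": "llama-3.1-8b-instant",
  "messages": [
    {"role": "system", "content": "Answer in one short sentence."},
    {"role": "user", "content": "What does HTTP status 429 mean?"}
  ],
  "temperature": 0
}
EOF
)
curl -s https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" | jq -r '.choices[0].message.content'
```

Once this returns clean output reliably, layer in tool calls, longer context, or structured formats one at a time.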
Use separate model choices for:
- Fast interactive chat
- High-accuracy reasoning
- Speech transcription

Choose from live /models output instead of hardcoding assumptions.
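One way to route at runtime is a tiny helper that picks the first matching ID from the live list. The grep patterns and the example IDs below are assumptions about current naming, not guarantees; the real list must come from /models:

```shell
# pick_model: choose the first model ID matching a role pattern
# from a newline-separated list (normally the live /models output).
pick_model() {
  # $1 = newline-separated model IDs, $2 = pattern for the role
  printf '%s\n' "$1" | grep -- "$2" | head -n 1
}

# Live fetch (requires GROQ_API_KEY):
#   MODELS=$(curl -s https://api.groq.com/openai/v1/models \
#     -H "Authorization: Bearer $GROQ_API_KEY" | jq -r '.data[].id')
# Hypothetical list for illustration only:
MODELS='llama-3.1-8b-instant
llama-3.3-70b-versatile
whisper-large-v3'

CHAT_MODEL=$(pick_model "$MODELS" 'instant')    # fast interactive chat
REASONING_MODEL=$(pick_model "$MODELS" '70b')   # higher-accuracy reasoning
SPEECH_MODEL=$(pick_model "$MODELS" 'whisper')  # speech transcription
```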
For 429 and 5xx, retry with exponential backoff and capped attempts. If a model is overloaded, fail over to a compatible backup model and log the swap.
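The retry loop can be sketched in plain shell. The real request goes where the placeholder sits; the placeholder status only keeps the sketch runnable standalone:

```shell
# Exponential backoff for 429/5xx with capped attempts.
MAX_ATTEMPTS=4
delay=1
attempt=1
while [ "$attempt" -le "$MAX_ATTEMPTS" ]; do
  # Real call: status=$(curl -s -o /tmp/resp.json -w '%{http_code}' ...)
  status=200   # placeholder so the sketch runs without a live request
  case "$status" in
    429|5??)
      sleep "$delay"
      delay=$((delay * 2))        # 1s, 2s, 4s, ...
      attempt=$((attempt + 1))
      ;;
    *) break ;;                   # success or non-retryable error
  esac
done
```

On the final failure, swap to a compatible backup model and log the swap rather than retrying forever.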
If output feeds code execution or data writes, enforce JSON schema or strict parsing before acting. Reject malformed output early.
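A minimal strict-parsing gate, assuming jq is available (the document already uses it): jq -e exits nonzero when the input is not valid JSON or the required field is missing or null, so malformed output is rejected before anything acts on it. The sample OUTPUT is a hypothetical stand-in for model output:

```shell
# Reject malformed model output before acting on it.
OUTPUT='{"action": "write_file", "path": "notes.md"}'   # stand-in for model output

if action=$(printf '%s' "$OUTPUT" | jq -er '.action'); then
  echo "validated action: $action"
else
  echo "rejecting malformed output" >&2
fi
```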
Speech uploads have different failure modes than chat. Validate input format, check file size, and surface transcription confidence when available.
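A pre-flight sketch for uploads: validate existence and size locally, then call the transcription endpoint. The 25 MB cap and the whisper-large-v3 model ID are assumptions; confirm both against current Groq limits and the live /models list:

```shell
# Pre-flight checks for a speech upload.
check_audio() {
  [ -f "$1" ] || { echo "missing audio file: $1" >&2; return 1; }
  # Size cap is an assumed limit; verify the current documented maximum.
  [ "$(wc -c < "$1")" -le $((25 * 1024 * 1024)) ] || { echo "file too large: $1" >&2; return 1; }
}

AUDIO="meeting.wav"   # hypothetical input path
if check_audio "$AUDIO"; then
  curl -s https://api.groq.com/openai/v1/audio/transcriptions \
    -H "Authorization: Bearer $GROQ_API_KEY" \
    -F "file=@$AUDIO" \
    -F "model=whisper-large-v3"
fi
```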
Never store API keys in files. Keep request logs sanitized and avoid persisting full sensitive prompts unless the user explicitly asks.
- Using stale model IDs copied from old examples -> call /models and select available IDs at runtime.
- Sending giant prompts without truncation -> latency spikes and timeout risk.
- Ignoring 429 backoff guidance -> repeated failures under load.
- Mixing chat and transcription assumptions -> wrong endpoint and payload format.
- Trusting free-form text for automation -> parse and validate before executing.
All network traffic should be limited to these Groq endpoints for explicit inference tasks requested by the user.

| Endpoint | Data sent | Purpose |
| --- | --- | --- |
| https://api.groq.com/openai/v1/models | None (GET) | Discover available models |
| https://api.groq.com/openai/v1/chat/completions | Prompt messages and options | Chat completions |
| https://api.groq.com/openai/v1/audio/transcriptions | Audio file and transcription params | Speech-to-text |

No other data is sent externally.
Data that leaves your machine:
- Prompt content sent to Groq inference endpoints
- Audio content sent to Groq transcription endpoint when requested

Data that stays local:
- Workflow preferences in ~/groq-api/memory.md
- Optional local debug notes in ~/groq-api/logs/

This skill does NOT:
- Store GROQ_API_KEY in project files
- Access files outside ~/groq-api/ for persistence
- Call undeclared third-party endpoints
- Modify itself or other skills
By using this skill, prompts and optional audio content are sent to Groq. Only install if you trust Groq with that data.
Install with clawhub install <slug> if the user confirms:
- api: reusable REST patterns, auth, and error handling
- models: model comparison and selection heuristics
- ai: current AI landscape checks before implementation decisions
- fine-tuning: adaptation workflows when prompting is not enough
- langchain: orchestration patterns for multi-step LLM pipelines
If useful: clawhub star groq-api
Stay updated: clawhub sync
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.