Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Secure API key and secrets management for agent skills. Use this skill whenever a task requires authenticating with an external service, reading or writing A...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Why this skill exists: Snyk researchers found that 7.1% of all ClawHub skills instruct agents to handle API keys through the LLM context, making every secret an active exfiltration channel. This skill teaches the correct pattern.
A secret must never appear in:
- The LLM prompt or system context
- Claude's response or reasoning
- Logs, session exports, or .jsonl history files
- File artifacts created by the agent
- Error messages echoed back to the user

A secret must only flow through:
- process.env (injected by OpenClaw before the agent turn)
- The shell environment of a subprocess the agent spawns
- A secrets manager CLI (read at subprocess level, not piped back into context)
This is OpenClaw's native, secure path. Use it for any skill that needs an API key.
```yaml
---
name: my-service-skill
description: Interact with MyService API.
metadata: {"openclaw": {"requires": {"env": ["MY_SERVICE_API_KEY"]}, "primaryEnv": "MY_SERVICE_API_KEY"}}
---
```

The `requires.env` gate ensures the skill will not load if the key isn't present: no silent failures, no prompting the user to paste a key mid-conversation. The `primaryEnv` field links to `skills.entries.<n>.apiKey` in `openclaw.json`, so the user configures it once in their config file, never in chat.
## Authentication

The API key is available as `$MY_SERVICE_API_KEY` in the shell environment. Pass it to CLI tools or curl as an environment variable; never echo it or include it in any output returned to the user.
```bash
# CORRECT - key stays in environment, never in command string visible to LLM
MY_SERVICE_API_KEY="$MY_SERVICE_API_KEY" curl -s \
  -H "Authorization: Bearer $MY_SERVICE_API_KEY" \
  https://api.myservice.com/v1/data
```

Never instruct the agent to do this:

```bash
# WRONG - key is visible in LLM context, command history, and logs
curl -H "Authorization: Bearer sk-abc123realkeyhere" https://api.myservice.com/
```
For production setups or team environments, read secrets from a manager at subprocess level.
| Manager | CLI | Env var pattern |
|---|---|---|
| macOS Keychain | `security find-generic-password -w` | N/A |
| 1Password CLI | `op read op://vault/item/field` | `OP_SERVICE_ACCOUNT_TOKEN` |
| Doppler | `doppler run --` | `DOPPLER_TOKEN` |
| HashiCorp Vault | `vault kv get -field=value` | `VAULT_TOKEN` |
| Bitwarden CLI | `bw get password item-name` | `BW_SESSION` |
Create a `scripts/run-with-secret.sh` in your skill:

```bash
#!/usr/bin/env bash
# Fetches the secret at subprocess level - never echoes to stdout
SECRET=$(security find-generic-password -s "my-service-api-key" -w 2>/dev/null)
if [ -z "$SECRET" ]; then
  echo "ERROR: Secret 'my-service-api-key' not found in keychain." >&2
  exit 1
fi
export MY_SERVICE_API_KEY="$SECRET"
exec "$@"
```

The agent runs `bash {baseDir}/scripts/run-with-secret.sh <actual-command>`: the secret is fetched and injected entirely outside the LLM's view.
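The keychain call above is macOS-only. A portable variant of the same pattern makes the fetch command pluggable, so it works with any manager CLI from the table; `SECRET_FETCH_CMD` and its default value are illustrative stand-ins, not an OpenClaw convention:

```shell
#!/usr/bin/env bash
# Portable sketch of the wrapper: the fetch command is configurable, e.g.
# SECRET_FETCH_CMD='op read op://vault/item/field' or 'vault kv get -field=value ...'
# The default 'printf dummy-value' is only so the sketch runs without a manager.
set -euo pipefail
FETCH_CMD="${SECRET_FETCH_CMD:-printf dummy-value}"
SECRET="$(eval "$FETCH_CMD")"
if [ -z "$SECRET" ]; then
  echo "ERROR: secret fetch returned an empty value." >&2
  exit 1
fi
export MY_SERVICE_API_KEY="$SECRET"
exec "$@"   # the child process inherits the key; nothing was printed to stdout
```

The shape is identical to the keychain script: fetch, validate, export, `exec` - the secret never passes through the LLM's context.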
If the user hasn't configured a key yet, guide them through setup without asking for the key in chat.
To use this skill, add your API key to `~/.openclaw/openclaw.json`:

```
skills:
  entries:
    my-service:
      apiKey: "your-key-here"
```

Or set it as an environment variable before starting OpenClaw:

```bash
export MY_SERVICE_API_KEY="your-key-here"
```

Do NOT paste your key into this chat: it will be logged.
Never respond with: "Please share your API key so I can help you set it up."
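Once the user has configured the key out-of-band, the skill can confirm setup without ever reading the value into context. A minimal sketch that reports only presence and length, never the value itself:

```shell
# Sketch: verify the key is configured without printing it
if [ -n "${MY_SERVICE_API_KEY:-}" ]; then
  echo "MY_SERVICE_API_KEY is set (${#MY_SERVICE_API_KEY} characters)"
else
  echo "MY_SERVICE_API_KEY is not set; add it to openclaw.json and restart OpenClaw" >&2
fi
```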
When asked to review a SKILL.md for credential safety, check for these patterns:
| Pattern | Why it's dangerous |
|---|---|
| Instruction to paste key into chat | Key goes into LLM context + session logs |
| `echo $API_KEY` or `print(api_key)` in instructions | Output captured in context |
| Key interpolated into a string returned to user | Exposed in response artifact |
| `cat ~/.env` or reading raw env files | Entire env dumped into context |
| Key stored in a file the agent creates | Creates a static credential artifact |
| Instructions tell agent to "remember" the key | Key persists across context window |
| Pattern | Risk |
|---|---|
| No `requires.env` gate in frontmatter | Skill silently fails or user is prompted |
| Logging command output without filtering | May capture keys in error messages |
| Using `set -x` in shell scripts | Echoes all commands including key values |
| Passing key as a positional argument | Visible in `ps aux` on the host |
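Several of these red flags are easy to grep for locally before a formal review. A rough sketch of such a pre-check; the pattern list is illustrative, not the official `clawhub audit` rule set:

```shell
# Sketch: flag red-flag patterns in a SKILL.md before review
audit_skill() {
  local file="$1" hits=0
  for pattern in \
      'paste .*key' \
      'echo \$[A-Z_]*(KEY|TOKEN|SECRET)' \
      'set -x' \
      'cat .*\.env'; do
    if grep -qE "$pattern" "$file"; then
      echo "FLAG: pattern '$pattern' found in $file"
      hits=$((hits + 1))
    fi
  done
  [ "$hits" -eq 0 ]   # succeeds only when nothing was flagged
}

# Example: a SKILL.md that tells the agent to echo the key should be flagged
printf 'Run: echo $MY_KEY to verify.\n' > /tmp/demo-skill.md
audit_skill /tmp/demo-skill.md || echo "audit failed: fix before publishing"
```

This is a coarse filter, not a substitute for reading the skill: a clean grep pass only means none of these literal patterns appeared.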
Patterns that indicate safe handling:
- `requires.env` in frontmatter
- Key accessed only as `$ENV_VAR` in shell, never echoed
- Subprocess scripts that fetch and inject without returning to context
- Error messages that say "key not found" without printing the value
- Output filtered through sed/grep before returning to agent
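The last point, filtering output before it returns to the agent, can be sketched as a small redaction pipe; the token shapes matched here (`Bearer <token>`, `sk-` prefixes) are assumptions, not a complete rule set:

```shell
# Sketch: mask anything shaped like a bearer token before output
# reaches the agent's context
redact() {
  sed -E 's/(Bearer +)[A-Za-z0-9._-]+/\1[REDACTED]/g; s/sk-[A-Za-z0-9]+/[REDACTED]/g'
}

printf 'Authorization: Bearer sk-abc123realkeyhere\n' | redact
# -> Authorization: Bearer [REDACTED]
```

Any command whose output might echo a credential (verbose curl, error dumps) can be piped through such a filter before the agent sees it.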
Run through this checklist before putting any skill on ClawHub:

- [ ] Does the skill ever ask the user to paste a secret into the conversation?
- [ ] Does the skill ever echo, print, log, or return a secret value?
- [ ] Does the skill read a `.env` file and dump its contents?
- [ ] Does the skill store a secret in a file artifact?
- [ ] Are all API key references gated with `requires.env` in frontmatter?
- [ ] Do error messages avoid reflecting credential values?
- [ ] Does any shell script use `set -x` (which would expose key values)?
- [ ] Would running `clawhub audit {skill-name}` pass?

If any box is unchecked, do not publish until fixed.
```
# UNSAFE - never write instructions like these:
"Ask the user for their OpenAI API key and use it to call the API."
"Set the Authorization header to Bearer {user_api_key}."
"Store the API key in a variable and use it throughout the session."

# SAFE - write instructions like these:
"The API key is injected as $OPENAI_API_KEY via environment - use it directly."
"Run: OPENAI_API_KEY=$OPENAI_API_KEY curl ..."
"If $OPENAI_API_KEY is not set, print an error and exit - do not ask the user."
```
- `references/env-injection-examples.md`: Full worked examples for popular APIs (OpenAI, Anthropic, GitHub, Stripe, Slack)
- `references/audit-checklist.md`: Printable audit checklist for skill authors and reviewers