Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Test and optimize prompts for cost, token use, and performance, with detailed reports from single-shot queries across multiple providers and models.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Prompt cost testing with singleshot
```shell
brew tap vincentzhangz/singleshot
brew install singleshot
```

Or:

```shell
cargo install singleshot
```
- Testing new prompts before OpenClaw implementation
- Benchmarking prompt variations for token efficiency
- Comparing model performance and costs
- Validating prompt outputs before production
Always use -d (detail) and -r (report) flags for efficiency analysis:

```shell
# Basic test with full metrics
singleshot chat -p "Your prompt" -P openai -d -r report.md

# Test with config file
singleshot chat -l config.md -d -r report.md

# Compare providers
singleshot chat -p "Test" -P openai -m gpt-4o-mini -d -r openai.md
singleshot chat -p "Test" -P anthropic -m claude-sonnet-4-20250514 -d -r anthropic.md

# Batch test variations
for config in *.md; do
  singleshot chat -l "$config" -d -r "report-${config%.md}.md"
done
```
```shell
singleshot chat -p "Your prompt" -P openai -d -r baseline.md
cat baseline.md
```
```shell
# Create optimized version, test, and compare
cat > optimized.md << 'EOF'
---provider---
openai
---model---
gpt-4o-mini
---max_tokens---
200
---system---
Expert. Be concise.
---prompt---
Your optimized prompt
EOF

singleshot chat -l optimized.md -d -r optimized-report.md

# Compare metrics
echo "Baseline:" && grep -E "(Tokens|Cost)" baseline.md
echo "Optimized:" && grep -E "(Tokens|Cost)" optimized-report.md
```
Test with cheaper models first:

```shell
singleshot chat -p "Test" -P openai -m gpt-4o-mini -d -r report.md
```

Reduce tokens:

- Shorten system prompts
- Use --max-tokens to limit output
- Add "be concise" to the system prompt

Test locally (free):

```shell
singleshot chat -p "Test" -P ollama -m llama3.2 -d -r report.md
```
```shell
# Step 1: Baseline (verbose)
singleshot chat \
  -p "How do I write a Rust function to add two numbers?" \
  -s "You are an expert Rust programmer with 10 years experience" \
  -P openai -d -r v1.md

# Step 2: Read metrics
cat v1.md
# Expected: ~130 input tokens, ~400 output tokens

# Step 3: Optimized version
singleshot chat \
  -p "Rust function: add(a: i32, b: i32) -> i32" \
  -s "Rust expert. Code only." \
  -P openai --max-tokens 100 -d -r v2.md

# Step 4: Compare
echo "=== COMPARISON ==="
grep "Total Cost" v1.md v2.md
grep "Total Tokens" v1.md v2.md
```
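Once you have two reports, the savings can be computed mechanically instead of eyeballing the grep output. A minimal sketch, assuming each report contains a `Total Cost: $0.004500`-style line (the exact report format may differ by singleshot version, so mock files stand in for real v1.md/v2.md here):

```shell
# Hypothetical helper: pull the first "Total Cost" figure out of a report.
# The "Total Cost: $0.004500" line format is an assumption; adjust the
# pattern to whatever your singleshot version actually writes.
cost() { grep -m1 'Total Cost' "$1" | grep -oE '[0-9]+\.[0-9]+'; }

# Mock reports standing in for real v1.md / v2.md output:
printf 'Total Cost: $0.004500\n' > mock-v1.md
printf 'Total Cost: $0.001500\n' > mock-v2.md

baseline=$(cost mock-v1.md)
optimized=$(cost mock-v2.md)

# Percent saved; awk handles the float arithmetic that sh lacks
awk -v b="$baseline" -v o="$optimized" \
  'BEGIN { printf "Saved %.1f%%\n", (b - o) / b * 100 }'
```

Point `cost` at your actual report files once you have confirmed the line format they use.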
```shell
# Test with full details
singleshot chat -p "prompt" -P openai -d -r report.md

# Extract metrics
grep -E "(Input|Output|Total)" report.md

# Compare reports
diff report1.md report2.md

# Vision test
singleshot chat -p "Describe" -i image.jpg -P openai -d -r report.md

# List models
singleshot models -P openai

# Test connection
singleshot ping -P openai
```
```shell
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENROUTER_API_KEY="sk-or-..."
```
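Before spending a paid request, it can help to confirm the relevant key is actually exported. A small sketch (the `check_key` helper is invented for illustration; the variable names match the exports above):

```shell
# Hypothetical preflight check: report whether the API key for a given
# provider is present in the environment before making a paid request.
check_key() {
  case "$1" in
    openai)     var=OPENAI_API_KEY ;;
    anthropic)  var=ANTHROPIC_API_KEY ;;
    openrouter) var=OPENROUTER_API_KEY ;;
    *) echo "unknown provider: $1"; return 2 ;;
  esac
  eval "val=\${$var:-}"
  if [ -n "$val" ]; then
    echo "$1: key set"
  else
    echo "$1: missing $var"
  fi
}
```

Run `check_key openai` before a batch loop; pair it with `singleshot ping -P openai` to confirm the key is valid, not merely present.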
- Always use -d for detailed token metrics
- Always use -r to save reports
- Always cat reports to analyze metrics
- Test variations and compare costs
- Set --max-tokens to control costs
- Use gpt-4o-mini for testing (cheaper)
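The first habits above can be baked into a tiny wrapper so no run ever misses its flags. `ss` is an invented name, not part of singleshot:

```shell
# Hypothetical wrapper around `singleshot chat` that always passes -d and -r,
# so every run produces detailed metrics and a saved report.
ss() {
  prompt=$1
  provider=$2
  out=${3:-report.md}   # default report name if none given
  singleshot chat -p "$prompt" -P "$provider" -d -r "$out"
}
```

Usage: `ss "Your prompt" openai baseline.md`, then `cat baseline.md` to review the metrics.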
- No metrics: ensure the -d flag is used
- No report file: ensure the -r flag is used
- High costs: switch to gpt-4o-mini or Ollama
- Connection issues: run `singleshot ping -P <provider>`
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.