Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
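The manual-import steps above can be sketched as a small helper. This is an illustration, not part of the package: the archive filename is a placeholder (use whatever you downloaded from Yavira), and it assumes the package is a zip archive containing SKILL.md, per the requirements.

```python
# Sketch of the manual-import extraction step. The archive name is a
# placeholder; substitute the file you actually downloaded from Yavira.
import pathlib
import zipfile

def extract_skill(archive: str, dest: str) -> str:
    """Extract the skill package and return the path to its SKILL.md."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    # The primary doc is SKILL.md, per the package requirements above.
    skill_md = next(pathlib.Path(dest).rglob("SKILL.md"))
    return str(skill_md)
```

From there, point your coding agent at the returned SKILL.md path.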
Pay-per-request AI model access via Bitcoin Lightning using prepaid spend tokens. Query Claude and GPT models without API keys. Deterministic, budget-control...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Pay-per-use access to 19 AI models across 5 providers via Bitcoin Lightning micropayments. No API keys. No subscriptions. No accounts. Pay sats, get inference.
- Accessing AI models without provider API keys
- Autonomous agent inference with Lightning payments
- Comparing responses across multiple providers
- Low-cost inference via open models (Llama 4, Mistral, DeepSeek)
- Vision tasks (Pixtral)
- Code generation (Codestral, Devstral)
- Reasoning tasks (Magistral)
| Model | Provider | Type |
|---|---|---|
| claude-opus-4-5-20251101 | Anthropic | Chat |
| gpt-4-turbo | OpenAI | Chat |
| meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | Together.ai | Chat |
| meta-llama/Llama-3.3-70B-Instruct-Turbo | Together.ai | Chat |
| meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | Together.ai | Chat |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | Together.ai | Chat |
| deepseek-ai/DeepSeek-V3 | Together.ai | Chat |
| mistral-large-latest | Mistral | Chat |
| mistral-medium-latest | Mistral | Chat |
| mistral-small-latest | Mistral | Chat |
| open-mistral-nemo | Mistral | Chat |
| codestral-latest | Mistral | Code |
| devstral-latest | Mistral | Agentic Code |
| pixtral-large-latest | Mistral | Vision |
| magistral-medium-latest | Mistral | Reasoning |
| gemini-2.5-flash | Google | Chat |
| gemini-2.5-pro | Google | Chat |
| gemini-3-flash-preview | Google | Chat |
| gemini-3-pro-preview | Google | Chat |
```shell
# 1. Top up at lightningprox.com/topup → pay Lightning invoice, get token
# 2. Use token directly
curl -X POST https://lightningprox.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-Spend-Token: $LIGHTNINGPROX_SPEND_TOKEN" \
  -d '{
    "model": "claude-opus-4-5-20251101",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 1000
  }'
```
```shell
# 1. Send request without token → get invoice
curl -X POST https://lightningprox.com/v1/messages \
  -H "Content-Type: application/json" \
  -d '{"model": "gemini-2.5-flash", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 100}'
# 2. Pay the Lightning invoice returned
# 3. Retry with X-Payment-Hash header
```
```shell
npm install lightningprox-openai
```

```javascript
// Before: import OpenAI from 'openai'
import OpenAI from 'lightningprox-openai'

const client = new OpenAI({ apiKey: process.env.LIGHTNINGPROX_SPEND_TOKEN })

// Everything else stays identical:
const response = await client.chat.completions.create({
  model: 'claude-opus-4-5-20251101',
  messages: [{ role: 'user', content: 'Hello' }]
})
```

Two lines change. Nothing else does.
```shell
curl https://lightningprox.com/api/capabilities
```
| Permission | Scope | Reason |
|---|---|---|
| Network | lightningprox.com | API calls for AI inference |
| Env Read | LIGHTNINGPROX_SPEND_TOKEN | Authentication for prepaid requests |
LightningProx is operated by LPX Digital Group LLC at lightningprox.com. Payment = authentication. No data is stored beyond request logs. No accounts, no KYC.