Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Unified LLM Gateway - One API for 70+ AI models. Route to GPT, Claude, Gemini, Qwen, Deepseek, Grok and more with a single API key.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Unified LLM Gateway for autonomous agents. Powered by AIsa. One API key. 70+ models. OpenAI-compatible. Replace 100+ API keys with one. Access GPT-4, Claude-3, Gemini, Qwen, Deepseek, Grok, and more through a unified, OpenAI-compatible API.
"Chat with GPT-4 for reasoning, switch to Claude for creative writing"
"Compare responses from GPT-4, Claude, and Gemini for the same question"
"Analyze this image with GPT-4o - what objects are in it?"
"Route simple queries to fast/cheap models, complex queries to GPT-4"
"If GPT-4 fails, automatically try Claude, then Gemini"
| Feature | LLM Router | Direct APIs |
|---|---|---|
| API Keys | 1 | 10+ |
| SDK Compatibility | OpenAI SDK | Multiple SDKs |
| Billing | Unified | Per-provider |
| Model Switching | Change string | Code rewrite |
| Fallback Routing | Built-in | DIY |
| Cost Tracking | Unified | Fragmented |
| Family | Developer | Example Models |
|---|---|---|
| GPT | OpenAI | gpt-4.1, gpt-4o, gpt-4o-mini, o1, o1-mini, o3-mini |
| Claude | Anthropic | claude-3-5-sonnet, claude-3-opus, claude-3-sonnet |
| Gemini | Google | gemini-2.0-flash, gemini-1.5-pro, gemini-1.5-flash |
| Qwen | Alibaba | qwen-max, qwen-plus, qwen2.5-72b-instruct |
| Deepseek | Deepseek | deepseek-chat, deepseek-coder, deepseek-v3, deepseek-r1 |
| Grok | xAI | grok-2, grok-beta |

Note: Model availability may vary. Check marketplace.aisa.one/pricing for the full list of currently available models and pricing.
```shell
export AISA_API_KEY="your-key"
```
POST https://api.aisa.one/v1/chat/completions

Request

```shell
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    "temperature": 0.7,
    "max_tokens": 1000
  }'
```

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier (e.g., gpt-4.1, claude-3-sonnet) |
| messages | array | Yes | Conversation messages |
| temperature | number | No | Randomness (0-2, default: 1) |
| max_tokens | integer | No | Maximum response tokens |
| stream | boolean | No | Enable streaming (default: false) |
| top_p | number | No | Nucleus sampling (0-1) |
| frequency_penalty | number | No | Frequency penalty (-2 to 2) |
| presence_penalty | number | No | Presence penalty (-2 to 2) |
| stop | string/array | No | Stop sequences |

Message Format

```json
{
  "role": "user|assistant|system",
  "content": "message text or array for multimodal"
}
```

Response

```json
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "gpt-4.1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing uses..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 200,
    "total_tokens": 250,
    "cost": 0.0025
  }
}
```
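The request and response shapes above can be exercised offline. The helpers below are a minimal sketch (the function names are illustrative, not part of the shipped client) for assembling a chat/completions body and pulling the reply out of a response dict of the documented shape:

```python
import json

def build_chat_request(model, user_message, system=None, **options):
    """Assemble a JSON body for POST /v1/chat/completions."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages, **options}

def extract_reply(response):
    """Pull the assistant text and total token count out of a completion response."""
    choice = response["choices"][0]
    return choice["message"]["content"], response["usage"]["total_tokens"]

body = build_chat_request(
    "gpt-4.1",
    "Explain quantum computing in simple terms.",
    system="You are a helpful assistant.",
    temperature=0.7,
    max_tokens=1000,
)
print(json.dumps(body, indent=2))
```

The same body can then be sent with any HTTP client; only the extraction of `choices[0].message.content` depends on the response shape documented above.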
```shell
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-sonnet",
    "messages": [{"role": "user", "content": "Write a poem about AI."}],
    "stream": true
  }'
```

Streaming returns Server-Sent Events (SSE):

```
data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":"In"}}]}
data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":" circuits"}}]}
...
data: [DONE]
```
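Consuming that stream amounts to reading `data:` lines, decoding each JSON chunk, and stopping at the `[DONE]` sentinel. A minimal sketch, assuming each event arrives as a single `data:` line in the delta format shown above (`iter_sse_text` is an illustrative name):

```python
import json

def iter_sse_text(lines):
    """Yield content deltas from 'data: {...}' SSE lines; stop at [DONE]."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]

sample = [
    'data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":"In"}}]}',
    'data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":" circuits"}}]}',
    'data: [DONE]',
]
print("".join(iter_sse_text(sample)))  # In circuits
```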
Analyze images by passing image URLs or base64 data:

```shell
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "What is in this image?"},
          {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]
      }
    ]
  }'
```
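The nested multimodal `content` array is easy to get wrong by hand. A small helper (hypothetical, not part of the shipped client) that produces exactly the shape used in the curl example:

```python
def vision_message(prompt, image_url):
    """Build a multimodal user message mixing text and an image URL."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = vision_message("What is in this image?", "https://example.com/image.jpg")
```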
Enable tools/functions for structured outputs:

```shell
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    "functions": [
      {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
          },
          "required": ["location"]
        }
      }
    ],
    "function_call": "auto"
  }'
```
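In the OpenAI-compatible response shape, a model-requested call comes back as a `function_call` object on the assistant message, with `arguments` as a JSON-encoded string that must be decoded before use. A sketch of unpacking it, under that assumption (`parse_function_call` is an illustrative name):

```python
import json

def parse_function_call(response):
    """Return (name, args_dict) if the model requested a function, else None."""
    message = response["choices"][0]["message"]
    call = message.get("function_call")
    if call is None:
        return None  # the model answered directly instead
    # arguments is a JSON string, not a dict, in the OpenAI-compatible format
    return call["name"], json.loads(call["arguments"])
```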
For Gemini models, you can also use the native format:

POST https://api.aisa.one/v1/models/{model}:generateContent

```shell
curl -X POST "https://api.aisa.one/v1/models/gemini-2.0-flash:generateContent" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": [{"text": "Explain machine learning."}]
      }
    ],
    "generationConfig": {
      "temperature": 0.7,
      "maxOutputTokens": 1000
    }
  }'
```
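The same native `generateContent` body can be built programmatically. `gemini_request` below is an illustrative helper mirroring the curl payload above, nothing more:

```python
def gemini_request(text, temperature=0.7, max_output_tokens=1000):
    """Build the native Gemini generateContent request body."""
    return {
        "contents": [{"role": "user", "parts": [{"text": text}]}],
        "generationConfig": {
            "temperature": temperature,
            "maxOutputTokens": max_output_tokens,
        },
    }

body = gemini_request("Explain machine learning.")
```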
No installation required - the client uses only the Python standard library.
```shell
# Basic completion
python3 {baseDir}/scripts/llm_router_client.py chat --model gpt-4.1 --message "Hello, world!"

# With system prompt
python3 {baseDir}/scripts/llm_router_client.py chat --model claude-3-sonnet --system "You are a poet" --message "Write about the moon"

# Streaming
python3 {baseDir}/scripts/llm_router_client.py chat --model gpt-4o --message "Tell me a story" --stream

# Multi-turn conversation
python3 {baseDir}/scripts/llm_router_client.py chat --model qwen-max --messages '[{"role":"user","content":"Hi"},{"role":"assistant","content":"Hello!"},{"role":"user","content":"How are you?"}]'

# Vision analysis
python3 {baseDir}/scripts/llm_router_client.py vision --model gpt-4o --image "https://example.com/image.jpg" --prompt "Describe this image"

# List supported models
python3 {baseDir}/scripts/llm_router_client.py models

# Compare models
python3 {baseDir}/scripts/llm_router_client.py compare --models "gpt-4.1,claude-3-sonnet,gemini-2.0-flash" --message "What is 2+2?"
```
```python
from llm_router_client import LLMRouterClient

client = LLMRouterClient()  # Uses AISA_API_KEY env var

# Simple chat
response = client.chat(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response["choices"][0]["message"]["content"])

# With options
response = client.chat(
    model="claude-3-sonnet",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain relativity."}
    ],
    temperature=0.7,
    max_tokens=500
)

# Streaming
for chunk in client.chat_stream(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a story."}]
):
    print(chunk, end="", flush=True)

# Vision
response = client.vision(
    model="gpt-4o",
    image_url="https://example.com/image.jpg",
    prompt="What's in this image?"
)

# Compare models
results = client.compare_models(
    models=["gpt-4.1", "claude-3-sonnet", "gemini-2.0-flash"],
    message="Explain quantum computing"
)
for model, result in results.items():
    print(f"{model}: {result['response'][:100]}...")
```
Use cheaper models for simple tasks:

```python
def smart_route(message: str) -> str:
    # Simple queries -> fast/cheap model
    if len(message) < 50:
        model = "gpt-3.5-turbo"
    # Complex reasoning -> powerful model
    else:
        model = "gpt-4.1"
    return client.chat(model=model, messages=[{"role": "user", "content": message}])
```
Automatic fallback on failure:

```python
def chat_with_fallback(message: str) -> str:
    models = ["gpt-4.1", "claude-3-sonnet", "gemini-2.0-flash"]
    for model in models:
        try:
            return client.chat(model=model, messages=[{"role": "user", "content": message}])
        except Exception:
            continue
    raise Exception("All models failed")
```
Compare model outputs:

```python
results = client.compare_models(
    models=["gpt-4.1", "claude-3-opus"],
    message="Analyze this quarterly report..."
)

# Log for analysis
for model, result in results.items():
    log_response(model=model, latency=result["latency"], cost=result["cost"])
```
Choose the best model for each task:

```python
MODEL_MAP = {
    "code": "deepseek-coder",
    "creative": "claude-3-opus",
    "fast": "gpt-3.5-turbo",
    "vision": "gpt-4o",
    "chinese": "qwen-max",
    "reasoning": "gpt-4.1"
}

def route_by_task(task_type: str, message: str) -> str:
    model = MODEL_MAP.get(task_type, "gpt-4.1")
    return client.chat(model=model, messages=[{"role": "user", "content": message}])
```
Errors return JSON with an `error` field:

```json
{
  "error": {
    "code": "model_not_found",
    "message": "Model 'xyz' is not available"
  }
}
```

Common error codes:

- 401 - Invalid or missing API key
- 402 - Insufficient credits
- 404 - Model not found
- 429 - Rate limit exceeded
- 500 - Server error
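A practical consequence of these codes: 429 and 500 are transient and worth retrying with backoff, while 401, 402, and 404 indicate problems a retry cannot fix. A minimal sketch of that policy (names and the attempt limit are illustrative choices, not part of the gateway):

```python
RETRYABLE = {429, 500}  # rate limit and server errors are transient

def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient failures; auth/credit/model errors are permanent."""
    return status_code in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt: int, base: float = 1.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... before retry attempt n."""
    return base * (2 ** attempt)
```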
- Use streaming for long responses to improve UX
- Set max_tokens to control costs
- Implement fallback for production reliability
- Cache responses for repeated queries
- Monitor usage via response metadata
- Use appropriate models - don't use GPT-4 for simple tasks
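The caching advice can be sketched as a thin wrapper around any chat callable, keyed on the full request. `CachedChat` is illustrative, not part of the shipped client, and an in-memory dict is only suitable for a single process:

```python
import hashlib
import json

class CachedChat:
    """Wrap a chat callable with an in-memory cache keyed on model + messages + options."""

    def __init__(self, chat_fn):
        self.chat_fn = chat_fn
        self.cache = {}
        self.hits = 0

    def chat(self, model, messages, **options):
        # Canonical JSON of the whole request makes a stable cache key
        key = hashlib.sha256(
            json.dumps({"model": model, "messages": messages, **options},
                       sort_keys=True).encode()
        ).hexdigest()
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        result = self.chat_fn(model=model, messages=messages, **options)
        self.cache[key] = result
        return result
```

Note the trade-off: caching only pays off for deterministic, repeated queries; with nonzero temperature the cached answer is simply the first one you happened to get.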
Just change the base URL and key:

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AISA_API_KEY"],
    base_url="https://api.aisa.one/v1"
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
Token-based pricing varies by model. Check marketplace.aisa.one/pricing for current rates.

| Model Family | Approximate Cost |
|---|---|
| GPT-4.1 / GPT-4o | ~$0.01 / 1K tokens |
| Claude-3-Sonnet | ~$0.01 / 1K tokens |
| Gemini-2.0-Flash | ~$0.001 / 1K tokens |
| Qwen-Max | ~$0.005 / 1K tokens |
| DeepSeek-V3 | ~$0.002 / 1K tokens |

Every response includes usage.cost and usage.credits_remaining.
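Since every response carries a usage block, cost can also be estimated client-side before or after a call. The sketch below uses the approximate table rates above (illustrative numbers; the gateway's own `usage.cost` field is authoritative):

```python
# Approximate per-1K-token rates from the table above; check
# marketplace.aisa.one/pricing for current numbers.
RATES_PER_1K = {
    "gpt-4.1": 0.01,
    "claude-3-sonnet": 0.01,
    "gemini-2.0-flash": 0.001,
    "qwen-max": 0.005,
    "deepseek-v3": 0.002,
}

def estimate_cost(model: str, usage: dict) -> float:
    """Approximate request cost in dollars from a response's usage block."""
    return usage["total_tokens"] / 1000 * RATES_PER_1K[model]

usage = {"prompt_tokens": 50, "completion_tokens": 200, "total_tokens": 250}
print(estimate_cost("gpt-4.1", usage))  # 0.0025
```

This matches the `"cost": 0.0025` shown in the example response earlier (250 tokens at ~$0.01 per 1K).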
1. Sign up at aisa.one
2. Get your API key from the dashboard
3. Add credits (pay-as-you-go)
4. Set the environment variable: `export AISA_API_KEY="your-key"`
See API Reference for complete endpoint documentation.