Introducing LLM Gateway
One API for 180+ models across 60+ providers. Route requests, track costs, and switch models without changing your code.

LLM Gateway is an open-source API gateway that sits between your apps and LLM providers. One integration gives you access to 180+ models from 60+ providers—and the visibility to control costs.
- Route: Switch between OpenAI, Anthropic, Google, and 60+ other providers without changing your code
- Manage: One dashboard for all your API keys—no more scattered credentials
- Observe: Track every request's cost, latency, and token usage in real time
- Optimize: Compare models side-by-side to find the best price-to-performance ratio
Why LLM Gateway?
If you've built with multiple LLM providers, you know the pain: different SDKs, scattered API keys, no unified view of what you're spending. LLM Gateway gives you a single API that works with any provider—and a dashboard that shows exactly where your money goes.
One Compatible Endpoint
Already using OpenAI's SDK? Keep your code. Just change the base URL:
```shell
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }'
```
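The same request from Python, using only the standard library. This is an illustrative sketch: the endpoint path and payload shape mirror the curl example above, and `build_request` is a hypothetical helper, not part of any SDK. It builds the request without sending it, so you can inspect exactly what goes over the wire:

```python
import json
import os
import urllib.request

# Gateway base URL, taken from the curl example above.
BASE_URL = "https://api.llmgateway.io/v1"

def build_request(model: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request.

    Only the base URL differs from a direct provider call; the payload
    shape is unchanged.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('LLM_GATEWAY_API_KEY', '')}",
        },
        method="POST",
    )
```

To actually send it, pass the result to `urllib.request.urlopen` (or swap in any HTTP client you already use); switching providers then means changing only the `"model"` string.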
See Every Request, Every Dollar
Every API call is tracked with:
- Cost per request — Know exactly what each prompt costs
- Latency breakdowns — See response times by model and provider
- Error rates — Spot reliability issues before they hit production
- Token usage — Track input and output tokens across all requests
No more guessing where your AI spend goes. Compare models head-to-head and make data-driven decisions.
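To make the per-request cost idea concrete, here is a small illustrative calculation. The prices below are hypothetical placeholders, and `request_cost` is not part of the gateway's API; it just shows how token counts and per-million-token prices combine into a dollar figure:

```python
def request_cost(
    input_tokens: int,
    output_tokens: int,
    input_price_per_million: float,
    output_price_per_million: float,
) -> float:
    """Dollar cost of one request from token counts and per-1M-token prices."""
    return (
        input_tokens / 1_000_000 * input_price_per_million
        + output_tokens / 1_000_000 * output_price_per_million
    )

# Example with made-up prices ($2.50 / 1M input, $10.00 / 1M output):
# 1,000 input tokens and 500 output tokens cost $0.0075.
cost = request_cost(1_000, 500, 2.50, 10.00)
```

Multiplied across thousands of requests, small per-token price differences between models add up quickly, which is why side-by-side comparison matters.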
Ready to try it? Get started free — no credit card required.