Migrate from OpenRouter
Step-by-step guide to migrate from OpenRouter to LLM Gateway with minimal code changes
LLM Gateway provides a drop-in replacement for OpenRouter with OpenAI-compatible endpoints. Since both services follow the OpenAI API format, migration requires minimal changes to your existing code.
Quick Migration
Replace your OpenRouter configuration with LLM Gateway's:
```diff
- const baseURL = "https://openrouter.ai/api/v1";
- const apiKey = process.env.OPENROUTER_API_KEY;
+ const baseURL = "https://api.llmgateway.io/v1";
+ const apiKey = process.env.LLM_GATEWAY_API_KEY;
```
Why Migrate to LLM Gateway?
Both OpenRouter and LLM Gateway route requests to many LLM providers through a single OpenAI-compatible API. Here's how they compare:
| Feature | OpenRouter | LLM Gateway |
|---|---|---|
| OpenAI-compatible API | Yes | Yes |
| Multiple providers | Yes (300+ models) | Yes |
| Native AI SDK provider | Yes | Yes |
| Response caching | Yes (prompt caching) | Yes |
| Analytics dashboard | Via third-party integrations | Built-in |
| Cost tracking | Yes | Yes (per-request detail) |
| Provider key management | Yes (BYOK) | Yes (Pro) |
| Self-hosting option | No | Yes (AGPLv3) |
| Required request headers | Authorization plus HTTP-Referer/X-Title | Authorization only |
| Anthropic-compatible API | No | Yes (/v1/messages) |
For a detailed feature-by-feature comparison, see LLM Gateway vs OpenRouter.
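If you have existing code written against Anthropic's Messages API, the `/v1/messages` row above means you can often point it at LLM Gateway directly. A minimal sketch, assuming the endpoint accepts the standard Anthropic Messages request shape and the same Bearer authentication used throughout this guide:

```typescript
// Sketch of the Anthropic-compatible endpoint. Assumes /v1/messages
// accepts the standard Anthropic Messages request shape and the same
// Bearer token auth as the OpenAI-compatible endpoint.
const response = await fetch("https://api.llmgateway.io/v1/messages", {
	method: "POST",
	headers: {
		Authorization: `Bearer ${process.env.LLM_GATEWAY_API_KEY}`,
		"Content-Type": "application/json",
	},
	body: JSON.stringify({
		model: "anthropic/claude-3-5-sonnet-20241022",
		max_tokens: 1024, // required by the Anthropic Messages format
		messages: [{ role: "user", content: "Hello!" }],
	}),
});
const data = await response.json();
```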
Migration Steps
1. Get Your LLM Gateway API Key
Sign up at llmgateway.io/signup and create an API key from your dashboard.
2. Update Environment Variables
```bash
# Remove OpenRouter credentials
# OPENROUTER_API_KEY=sk-or-...

# Add LLM Gateway credentials
export LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
```
3. Update Your Code
Using fetch/axios
```typescript
// Before (OpenRouter)
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
	method: "POST",
	headers: {
		Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
		"Content-Type": "application/json",
		"HTTP-Referer": "https://your-site.com",
		"X-Title": "Your App Name",
	},
	body: JSON.stringify({
		model: "anthropic/claude-3-5-sonnet",
		messages: [{ role: "user", content: "Hello!" }],
	}),
});

// After (LLM Gateway)
const response = await fetch("https://api.llmgateway.io/v1/chat/completions", {
	method: "POST",
	headers: {
		Authorization: `Bearer ${process.env.LLM_GATEWAY_API_KEY}`,
		"Content-Type": "application/json",
	},
	body: JSON.stringify({
		model: "anthropic/claude-3-5-sonnet-20241022",
		messages: [{ role: "user", content: "Hello!" }],
	}),
});
```
Using OpenAI SDK
```typescript
import OpenAI from "openai";

// Before (OpenRouter)
const client = new OpenAI({
	baseURL: "https://openrouter.ai/api/v1",
	apiKey: process.env.OPENROUTER_API_KEY,
	defaultHeaders: {
		"HTTP-Referer": "https://your-site.com",
		"X-Title": "Your App Name",
	},
});

// After (LLM Gateway)
const client = new OpenAI({
	baseURL: "https://api.llmgateway.io/v1",
	apiKey: process.env.LLM_GATEWAY_API_KEY,
});

// Usage remains the same
const completion = await client.chat.completions.create({
	model: "anthropic/claude-3-5-sonnet-20241022",
	messages: [{ role: "user", content: "Hello!" }],
});
```
Using Vercel AI SDK
Both OpenRouter and LLM Gateway have native AI SDK providers, making migration straightforward:
```typescript
import { generateText } from "ai";

// Before (OpenRouter AI SDK Provider)
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
	apiKey: process.env.OPENROUTER_API_KEY,
});

const { text } = await generateText({
	model: openrouter("gpt-5.2"),
	prompt: "Hello!",
});

// After (LLM Gateway AI SDK Provider)
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

const llmgateway = createLLMGateway({
	apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text } = await generateText({
	model: llmgateway("gpt-5.2"),
	prompt: "Hello!",
});
```
Model Name Mapping
Most model names are compatible, but here are some common mappings:
| OpenRouter Model | LLM Gateway Model |
|---|---|
| gpt-5.2 | gpt-5.2 or openai/gpt-5.2 |
| claude-opus-4-5-20251101 | claude-opus-4-5-20251101 or anthropic/claude-opus-4-5-20251101 |
| gemini/gemini-3-flash-preview | gemini-3-flash-preview or google-ai-studio/gemini-3-flash-preview |
| bedrock/claude-opus-4-5-20251101 | claude-opus-4-5-20251101 or aws-bedrock/claude-opus-4-5-20251101 |
Check the models page for the full list of available models.
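Either spelling from the table should work with the OpenAI SDK client from step 3. A short sketch, assuming (per the table) that the bare name and the provider-prefixed name resolve to the same model:

```typescript
// Both spellings from the mapping table should resolve to the same model;
// the provider-prefixed form makes the routing explicit.
const bare = await client.chat.completions.create({
	model: "gpt-5.2",
	messages: [{ role: "user", content: "Hello!" }],
});

const prefixed = await client.chat.completions.create({
	model: "openai/gpt-5.2",
	messages: [{ role: "user", content: "Hello!" }],
});
```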
Streaming Support
LLM Gateway supports streaming responses identically to OpenRouter:
```typescript
const stream = await client.chat.completions.create({
	model: "anthropic/claude-3-5-sonnet-20241022",
	messages: [{ role: "user", content: "Write a story" }],
	stream: true,
});

for await (const chunk of stream) {
	process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```
Additional Benefits
After migrating to LLM Gateway, you get access to:
- Response Caching: Automatic caching for identical requests to reduce costs (see the sketch after this list)
- Detailed Analytics: Per-request cost tracking, latency metrics, and usage patterns
- Provider Key Management: Use your own API keys for providers (Pro plan)
- Self-Hosting: Deploy LLM Gateway on your own infrastructure
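Caching requires no configuration. One quick way to observe it is to time two identical requests with the OpenAI SDK client from step 3. A rough sketch, assuming repeated identical requests are served from the cache:

```typescript
// Rough sketch: if identical requests are cached, the second call
// should come back noticeably faster than the first.
async function timedRequest(): Promise<number> {
	const start = Date.now();
	await client.chat.completions.create({
		model: "anthropic/claude-3-5-sonnet-20241022",
		messages: [{ role: "user", content: "What is 2 + 2?" }],
	});
	return Date.now() - start;
}

const first = await timedRequest();
const second = await timedRequest(); // expected cache hit
console.log(`first request: ${first}ms, second request: ${second}ms`);
```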
Full Comparison
Want to see a detailed breakdown of all features? Check out our LLM Gateway vs OpenRouter comparison page.
Need Help?
- Browse available models at llmgateway.io/models
- Read the API documentation
- Contact support at [email protected]