GPT-5.4 and GPT-5.4 Pro Now Available

Access OpenAI's most capable models — GPT-5.4 for complex professional work and GPT-5.4 Pro for smarter, more precise responses — with 1.05M-token context windows and reasoning support.


OpenAI's GPT-5.4 and GPT-5.4 Pro are now available on LLM Gateway. Both models feature a massive 1,050,000-token context window, 128K max output tokens, and reasoning support.

New Models

GPT-5.4 — Frontier Model for Professional Work

```
openai/gpt-5.4
```
  • Context Window: 1,050,000 tokens
  • Max Output: 128,000 tokens
  • Pricing: $2.50 per 1M input tokens / $15.00 per 1M output tokens
  • Cached Input: $0.25 per 1M tokens
  • Knowledge Cutoff: August 31, 2025
  • Reasoning support with effort levels: none (default), low, medium, high, xhigh
  • Vision, tool calling, web search, and JSON output support
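To see how the input, cached-input, and output rates above combine, here is a small cost estimator. The rates are the ones listed on this page; the helper function and the example token counts are purely illustrative.

```typescript
// Rough cost estimate for a GPT-5.4 call. Rates are USD per 1M tokens,
// taken from the pricing listed above; cached input tokens are billed
// at the discounted cached rate instead of the full input rate.
const RATES = {
  input: 2.5, // $ per 1M input tokens
  cachedInput: 0.25, // $ per 1M cached input tokens
  output: 15.0, // $ per 1M output tokens
};

function estimateCost(
  inputTokens: number,
  cachedInputTokens: number,
  outputTokens: number,
): number {
  // Tokens served from the cache are not billed at the fresh input rate.
  const freshInput = inputTokens - cachedInputTokens;
  return (
    (freshInput * RATES.input +
      cachedInputTokens * RATES.cachedInput +
      outputTokens * RATES.output) /
    1_000_000
  );
}

// e.g. 100k input tokens (40k of them cached) plus 2k output tokens:
console.log(estimateCost(100_000, 40_000, 2_000).toFixed(4)); // → 0.1900
```

Prompt caching makes repeated large-context requests substantially cheaper: the 40k cached tokens above cost $0.01 instead of $0.10.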

GPT-5.4 Pro — Smarter and More Precise

```
openai/gpt-5.4-pro
```
  • Context Window: 1,050,000 tokens
  • Max Output: 128,000 tokens
  • Pricing: $30.00 per 1M input tokens / $180.00 per 1M output tokens
  • Knowledge Cutoff: August 31, 2025
  • Uses more compute for consistently better answers on tough problems
  • Reasoning support with effort levels: medium, high, xhigh
  • Available via Responses API only
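Since GPT-5.4 Pro is served through the Responses API rather than Chat Completions, a request body looks roughly like the following. This is a sketch that assumes the gateway mirrors OpenAI's Responses API shape (an `input` field and a `reasoning.effort` setting) at a `/v1/responses` path; check the gateway docs for the exact endpoint and schema.

```typescript
// Sketch of a Responses-style request for GPT-5.4 Pro. The field names
// (input, reasoning.effort) follow OpenAI's Responses API; the gateway's
// endpoint path and schema here are assumptions to verify against its docs.
const body = {
  model: "openai/gpt-5.4-pro",
  input: "Review this contract for ambiguous indemnification language.",
  reasoning: { effort: "high" }, // Pro accepts medium, high, or xhigh
};

const request = new Request("https://api.llmgateway.io/v1/responses", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(body),
});

console.log(new URL(request.url).pathname); // → /v1/responses
```

Note that requesting GPT-5.4 Pro without a `reasoning` block would be invalid under the effort levels listed above, since the model has no `none` or `low` mode.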

Getting Started

Both models are available immediately through the unified API:

```shell
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.4",
    "messages": [{"role": "user", "content": "Hello GPT-5.4!"}]
  }'
```
Or with the AI SDK provider:

```typescript
import { llmgateway } from "@llmgateway/ai-sdk-provider";
import { generateText } from "ai";

const { text } = await generateText({
  model: llmgateway("openai/gpt-5.4"),
  prompt: "Analyze this complex document...",
});
```
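With a 1,050,000-token window, most documents fit in a single request, but a cheap pre-flight check can catch the ones that don't. The sketch below uses the common ~4-characters-per-token heuristic, which is an approximation, not an exact tokenizer; the function and its threshold logic are illustrative.

```typescript
// Rough pre-flight check that a prompt fits GPT-5.4's context window,
// leaving headroom for the response. The 4-chars-per-token ratio is a
// heuristic; use a real tokenizer when the estimate is borderline.
const CONTEXT_WINDOW = 1_050_000; // tokens, per the model specs above
const MAX_OUTPUT = 128_000; // reserve room for the full output budget

function fitsContext(prompt: string, reservedOutput = MAX_OUTPUT): boolean {
  const estimatedTokens = Math.ceil(prompt.length / 4);
  return estimatedTokens + reservedOutput <= CONTEXT_WINDOW;
}

console.log(fitsContext("a".repeat(1_000_000))); // ~250k tokens → true
console.log(fitsContext("a".repeat(4_000_000))); // ~1M tokens → false
```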