Gemini 3 Pro Preview: 20% Off Launch Discount
Google's latest Gemini 3 Pro Preview is now available with an exclusive 20% launch discount, featuring 1M context window and prompt caching.

We're excited to announce support for Gemini 3 Pro Preview from Google with an exclusive 20% launch discount!
🎯 New Model Available
Gemini 3 Pro Preview - Next-Generation AI Model
Model ID: gemini-3-pro-preview
Context Window: 1,000,000 tokens (1M)
Max Output: 65,000 tokens
Pricing with 20% OFF:
- Input: ~~$2.00~~ $1.60 per 1M tokens (20% off)
- Output: ~~$12.00~~ $9.60 per 1M tokens (20% off)
- Cached Input: ~~$0.20~~ $0.16 per 1M tokens (20% off)
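As a rough, hypothetical example of what the discounted rates mean in practice: a single request that sends 200,000 input tokens and returns 2,000 output tokens would cost about 0.2 × $1.60 + 0.002 × $9.60 ≈ $0.34 (ignoring any cached-input savings).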
Providers: Available on both Google AI Studio and Google Vertex AI
✨ Features
Gemini 3 Pro Preview comes with comprehensive capabilities:
✅ Streaming - Real-time response streaming
✅ Vision - Advanced image understanding
✅ Tools - Function calling support
✅ JSON Output - Structured output mode
✅ Prompt Caching - Save up to 90% on repeated prompts
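For example, the JSON output mode listed above can be requested through the gateway's OpenAI-style chat completions API. The snippet below is a sketch that assumes the standard `response_format` parameter is passed through to the model; the full getting-started examples follow in the next section.

```bash
# Ask Gemini 3 Pro Preview for structured JSON output
# (assumes the gateway forwards the OpenAI-style response_format field)
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google-ai-studio/gemini-3-pro-preview",
    "response_format": {"type": "json_object"},
    "messages": [{"role": "user", "content": "Return a JSON object listing three common uses of machine learning."}]
  }'
```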
🚀 Getting Started
Using Google AI Studio
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google-ai-studio/gemini-3-pro-preview",
    "messages": [{"role": "user", "content": "Explain machine learning"}]
  }'
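To receive tokens as they are generated, streaming can be enabled on the same endpoint. This is a minimal sketch assuming the standard OpenAI-style `"stream": true` flag is forwarded to the provider; with OpenAI-compatible APIs the response then typically arrives as a sequence of `data:` chunks rather than a single JSON body.

```bash
# Stream the response; -N disables curl's output buffering so chunks print as they arrive
curl -N -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google-ai-studio/gemini-3-pro-preview",
    "stream": true,
    "messages": [{"role": "user", "content": "Explain machine learning"}]
  }'
```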
Using Google Vertex AI
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google-vertex/gemini-3-pro-preview",
    "messages": [{"role": "user", "content": "Explain machine learning"}]
  }'
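Function calling uses the same chat completions format. The sketch below declares a hypothetical `get_weather` tool using the OpenAI-style `tools` array, under the assumption that the gateway forwards tool definitions to Gemini; the model's reply would then contain a tool call for your application to execute.

```bash
# Declare a hypothetical get_weather function the model may choose to call
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google-vertex/gemini-3-pro-preview",
    "messages": [{"role": "user", "content": "What is the weather in Berlin right now?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```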
🎁 Why Gemini 3 Pro Preview?
- Massive Context: 1M token context window for complex tasks
- Large Output: Up to 65k output tokens
- Cost Efficient: 20% discount on all token types
- Smart Caching: Significant savings with prompt caching
- Multimodal: Text and vision capabilities included
- Dual Providers: Choose between AI Studio or Vertex AI
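Finally, the multimodal capability noted above can be exercised by sending mixed text-and-image content in a single message. This sketch assumes the gateway accepts OpenAI-style `image_url` content parts; the image URL is a placeholder.

```bash
# Send a text prompt together with an image for the model to describe
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google-ai-studio/gemini-3-pro-preview",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image in one sentence"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
      ]
    }]
  }'
```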