Cerebras: Ultra-Fast Inference with 6 New Models

New Cerebras provider with six high-performance models, including GPT-OSS 120B and Qwen 3, now available through LLM Gateway.

We're excited to announce support for Cerebras as a new provider in LLM Gateway, offering ultra-fast, high-throughput inference with six powerful models.

Cerebras is available via the LLM Gateway with the provider ID cerebras. Learn more about the Cerebras inference platform at cerebras.ai.

🎯 New Cerebras Models

The lineup includes GPT-OSS 120B and Qwen 3, alongside four more high-throughput Cerebras models.
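
To discover the Cerebras models your key can reach programmatically, here is a minimal sketch with the official openai npm package. It assumes LLM Gateway exposes an OpenAI-style /v1/models listing (an assumption based on the OpenAI-compatible chat API shown below) and filters model IDs by the cerebras/ provider prefix:

import OpenAI from "openai";

// Assumption: the gateway exposes the standard OpenAI-style /v1/models route.
// The cerebras/ prefix mirrors the provider ID used elsewhere in this post.
const client = new OpenAI({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
  baseURL: "https://api.llmgateway.io/v1",
});

for await (const model of client.models.list()) {
  if (model.id.startsWith("cerebras/")) {
    console.log(model.id);
  }
}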

🚀 Getting Started with Cerebras

All Cerebras models are available via the OpenAI-compatible chat completions API:

curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cerebras/gpt-oss-120b",
    "messages": [{"role": "user", "content": "Explain how Cerebras inference works"}]
  }'
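
Because the endpoint is OpenAI-compatible, existing OpenAI SDKs should also work once pointed at the gateway. A minimal sketch using the official openai npm package, reusing the same model ID and API key as the curl example above (the baseURL override is the only gateway-specific piece, and is an assumption rather than documented configuration):

import OpenAI from "openai";

// Point the standard OpenAI client at the LLM Gateway base URL.
const client = new OpenAI({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
  baseURL: "https://api.llmgateway.io/v1",
});

const completion = await client.chat.completions.create({
  model: "cerebras/gpt-oss-120b",
  messages: [{ role: "user", content: "Explain how Cerebras inference works" }],
});

console.log(completion.choices[0].message.content);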

Try Cerebras models in the Playground 🚀

Get started now 🚀
