o4-mini
Compact reasoning model with strong performance and efficient inference.
The LLM Gateway routes each request to the best provider able to handle your prompt size and parameters.
openai/o4-mini
azure/o4-mini
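The routing behavior above can be sketched as a simple provider-selection step: given the two provider IDs in the listing, pick the first one whose limits can handle the request. This is a minimal illustrative sketch, not the gateway's actual implementation; the `max_input_tokens` values are assumptions chosen for the example.

```python
# Hypothetical sketch of gateway-style routing: choose the first provider
# whose context limit can handle the prompt. Provider IDs match the listing
# above; the token limits are illustrative assumptions, not published specs.

PROVIDERS = [
    {"id": "openai/o4-mini", "max_input_tokens": 200_000},
    {"id": "azure/o4-mini", "max_input_tokens": 200_000},
]

def route(prompt_tokens: int, providers=PROVIDERS) -> str:
    """Return the ID of the first provider able to handle the prompt size."""
    for provider in providers:
        if prompt_tokens <= provider["max_input_tokens"]:
            return provider["id"]
    raise ValueError("no provider can handle this prompt size")
```

For example, `route(1_000)` would select `openai/o4-mini`, while a prompt exceeding every provider's limit raises an error so the caller can truncate or reject it.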