Mimo V2 Flash

Mimo V2 Flash is a 309B-parameter mixture-of-experts (MoE) reasoning model optimized for high-speed inference and agentic workflows.

Model ID: mimo-v2-flash
Status: stable
Context: 256,000 tokens
Input: from $0.08/M tokens
Output: from $0.24/M tokens
Supports streaming


All Providers for Mimo V2 Flash

LLM Gateway routes each request to the provider best able to handle your prompt size and parameters.
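As a rough sketch of how a call through the gateway might look, the snippet below builds a streaming chat request body for this model. It assumes the gateway accepts an OpenAI-style chat-completions payload; the exact endpoint and field names may differ, so treat this as illustrative only.

```python
import json

def build_chat_request(prompt: str, stream: bool = True) -> dict:
    """Assemble a chat request body for mimo-v2-flash.

    Assumes an OpenAI-compatible payload shape (messages list, stream flag);
    this is an assumption about the gateway, not confirmed by the listing.
    """
    return {
        "model": "mimo-v2-flash",
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # the model listing indicates streaming support
    }

body = build_chat_request("Summarize this ticket in one sentence.")
print(json.dumps(body, indent=2))
```

With `stream` set to true, responses would arrive as incremental chunks rather than a single completed message.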

CanopyWave
  Context: 256k
  Input:  $0.08 /M tokens
  Cached: $0.04 /M tokens
  Output: $0.24 /M tokens
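To see how these per-million-token rates translate into a per-request bill, here is a small cost estimator using the CanopyWave rates above. The token counts in the example are made up for illustration.

```python
# Per-million-token USD rates for mimo-v2-flash via CanopyWave (from the table above)
RATES = {"input": 0.08, "cached": 0.04, "output": 0.24}

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request.

    Cached input tokens are billed at the cheaper cached rate; the remaining
    input tokens at the full input rate.
    """
    fresh = input_tokens - cached_tokens
    return (fresh * RATES["input"]
            + cached_tokens * RATES["cached"]
            + output_tokens * RATES["output"]) / 1_000_000

# Example: 10k-token prompt with 4k tokens cached, 2k-token completion
print(f"${request_cost(10_000, 2_000, cached_tokens=4_000):.5f}")  # → $0.00112
```

Cache hits halve the input cost ($0.04 vs. $0.08 per million tokens), so prompts with large shared prefixes, common in agentic loops, are noticeably cheaper to re-run.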