mimo-v2-flash
Xiaomi's high-efficiency inference model with a hybrid architecture, 3 multi-token prediction (MTP) layers for 2.5-3.7x faster inference, and a 256K context window.