MiniMax-01 is a model family that combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion total parameters, with 45.9 billion parameters activated per inference, and can handle a context of up to 4 million tokens. The text model adopts a hybrid architecture that combines Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE). The image model adopts the "ViT-MLP-LLM" framework and is trained on top of the text model. To read more about the release, see: https://www.minimaxi.com/en/news/minimax-01-series-2
| Spec | Value |
|---|---|
| Context | 1000K tokens |
| Max Output | 1000K tokens |
| Parameters | 456B total (45.9B activated) |
| Input Modalities | Text, Image |
| Output Modalities | Text |
| Input Price | $0.200 |
| Output Price | $1.10 |
| Platform | Input ($/1M tokens) | Output ($/1M tokens) |
|---|---|---|
| OpenRouter | $0.200 | $1.10 |
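Since the model is available through OpenRouter, a request can be sent to its OpenAI-compatible chat completions endpoint. The sketch below builds such a request with only the standard library; the model slug `minimax/minimax-01` and the endpoint URL are assumptions here, so check OpenRouter's model page for the exact identifier before use.

```python
# Minimal sketch of calling MiniMax-01 through OpenRouter's
# OpenAI-compatible chat completions API. The model slug
# "minimax/minimax-01" is an assumption; verify it on OpenRouter.
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "minimax/minimax-01") -> urllib.request.Request:
    """Build the HTTP POST request; sending it requires OPENROUTER_API_KEY."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        OPENROUTER_URL, data=json.dumps(payload).encode(), headers=headers
    )

req = build_request("Summarize Lightning Attention in one sentence.")
# Uncomment to actually send the request (billed at the rates in the table above):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because OpenRouter mirrors the OpenAI request schema, the same payload works with any OpenAI-compatible client library by pointing its base URL at `https://openrouter.ai/api/v1`.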
Data sourced from official provider APIs and documentation
Last updated: Mar 24, 2026