ERNIE-4.5-VL-424B-A47B is a multimodal Mixture-of-Experts (MoE) model from Baidu’s ERNIE 4.5 series, with 424B total parameters and 47B activated per token. It is trained jointly on text and image data using a heterogeneous MoE architecture with modality-isolated routing, enabling high-fidelity cross-modal reasoning, image understanding, and long-context generation (up to 131K tokens). Post-trained with SFT, DPO, UPO, and RLVR, the model supports both “thinking” and non-thinking inference modes. Designed for vision-language tasks in English and Chinese, it is optimized for efficient scaling and supports 4-bit and 8-bit quantization.
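For orientation, here is a minimal sketch of how the model might be queried through OpenRouter’s OpenAI-compatible chat API (OpenRouter is the platform listed in the pricing table below). The model slug `baidu/ernie-4.5-vl-424b-a47b`, the `OPENROUTER_API_KEY` environment variable, and the image URL are illustrative assumptions, not identifiers confirmed by this page.

```python
# Minimal sketch: multimodal request via OpenRouter's OpenAI-compatible
# endpoint. The model slug below is an assumption -- verify it against
# OpenRouter's model list before use.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="baidu/ernie-4.5-vl-424b-a47b",  # assumed slug
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    max_tokens=512,  # well under the model's 16K output cap
)
print(response.choices[0].message.content)
```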
| Spec | Value |
|---|---|
| Context | 123K tokens |
| Max Output | 16K tokens |
| Parameters | 424B total (47B active per token) |
| Input Modalities | Text, Image |
| Output Modalities | Text |
| Platform | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| OpenRouter | $0.42 | $1.25 |
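To make the per-token rates concrete, here is a small worked estimate. It assumes the listed prices are USD per 1M tokens (OpenRouter’s usual billing unit); the token counts are arbitrary illustrative values.

```python
# Hypothetical cost estimate; assumes prices are USD per 1M tokens.
input_tokens, output_tokens = 10_000, 1_000
cost = (input_tokens / 1e6) * 0.42 + (output_tokens / 1e6) * 1.25
print(f"${cost:.5f}")  # ≈ $0.00545
```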
Data sourced from official provider APIs and documentation
Last updated: Mar 24, 2026