The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts (MoE) design for higher inference efficiency. Compared to the Qwen3 series, these models deliver a marked improvement on both pure-text and multimodal tasks, keeping response latency low while balancing inference speed against overall quality.
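As a rough illustration of how a hybrid block of this kind fits together, here is a minimal sketch that pairs a linear-attention layer (kernelized with an elu+1 feature map, an assumption) with a top-1 sparse MoE feed-forward. The dimensions, expert count, and routing scheme are illustrative placeholders, not Qwen3.5 internals.

```python
# Minimal sketch of a hybrid block: linear attention + sparse MoE feed-forward.
# All hyperparameters and the elu+1 feature map are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAttention(nn.Module):
    """Kernelized attention: O = phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1)."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (batch, seq, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1          # positive feature map
        kv = torch.einsum("bsd,bse->bde", k, v)    # O(seq) summary instead of O(seq^2)
        z = 1.0 / (torch.einsum("bsd,bd->bs", q, k.sum(dim=1)) + 1e-6)
        return self.out(torch.einsum("bsd,bde,bs->bse", q, kv, z))

class SparseMoE(nn.Module):
    """Top-1 routed feed-forward: each token activates a single expert."""
    def __init__(self, dim, n_experts=4, hidden=256):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):
        flat = x.reshape(-1, x.shape[-1])
        gate = self.router(flat).softmax(dim=-1)
        top = gate.argmax(dim=-1)
        out = torch.zeros_like(flat)
        for i, expert in enumerate(self.experts):
            mask = top == i
            if mask.any():                         # run only the tokens routed here
                out[mask] = gate[mask, i:i + 1] * expert(flat[mask])
        return out.reshape(x.shape)

class HybridBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.attn, self.moe = LinearAttention(dim), SparseMoE(dim)
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        x = x + self.attn(self.n1(x))              # linear-attention sublayer
        return x + self.moe(self.n2(x))            # sparse MoE sublayer

x = torch.randn(2, 16, 64)                         # (batch, seq, dim)
print(HybridBlock()(x).shape)                      # torch.Size([2, 16, 64])
```

The point of the sketch is the efficiency argument: the attention cost grows linearly with sequence length because keys and values are folded into a fixed-size summary, and the MoE runs only one expert per token rather than all of them.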
Input: —
Output: —
Context: 1000K
Max Output: 66K
Parameters: —
Input Modalities: —
Output Modalities: —
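The limits above map directly onto request parameters. Below is a hedged sketch of a call against an OpenAI-compatible chat endpoint; the base URL and model id are placeholders, not confirmed values, and the only grounded numbers are the 1000K context window and the 66K output cap listed above.

```python
# Hypothetical request illustrating the listed limits: the prompt (including
# images) must fit the 1000K-token context window, and max_tokens must stay
# within the 66K output cap (the exact cap, e.g. 65,536 vs 66,000, may differ).
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_KEY")  # placeholder endpoint

resp = client.chat.completions.create(
    model="qwen3.5-vl-flash",        # hypothetical model id
    max_tokens=66_000,               # bounded by the 66K max-output limit
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```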
Data sourced from official provider APIs and documentation
Last updated: Mar 17, 2026