Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model has 480 billion total parameters, of which 35 billion are active per forward pass (8 of 160 experts). Pricing on the Alibaba endpoints varies by context length: requests exceeding 128K input tokens are billed at the higher tier.
| Spec | Value |
|---|---|
| Context | 262K |
| Max Output | — |
| Parameters | 480B |
| Input Modalities | Text |
| Output Modalities | Text |
| Input Price | $0.220 / M tokens |
| Output Price | $1.00 / M tokens |
| Platform | Input (per M tokens) | Output (per M tokens) |
|---|---|---|
| OpenRouter | $0.220 | $1.00 |
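The tiered pricing described above can be sketched as a small cost estimator. This is a minimal sketch, not an official billing formula: it assumes tier selection applies to the whole request once input exceeds the 128K threshold, and the higher-tier rates (`high_input_rate`, `high_output_rate`) are hypothetical parameters, since the source does not list them.

```python
def estimate_cost(input_tokens, output_tokens,
                  base_input_rate=0.22,    # $ per M input tokens (lower tier, from the table)
                  base_output_rate=1.00,   # $ per M output tokens (lower tier, from the table)
                  high_input_rate=None,    # hypothetical: higher-tier rates not listed in the source
                  high_output_rate=None,
                  threshold=128_000):
    """Estimate request cost in USD under whole-request tier selection:
    if input exceeds the threshold, the higher-tier rates (when given)
    apply to the entire request."""
    if input_tokens > threshold and high_input_rate is not None:
        in_rate, out_rate = high_input_rate, high_output_rate
    else:
        in_rate, out_rate = base_input_rate, base_output_rate
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# 100K input + 10K output tokens at the lower tier:
print(round(estimate_cost(100_000, 10_000), 3))  # 0.032
```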
Data sourced from official provider APIs and documentation
Last updated: Mar 17, 2026