The AMD Instinct MI300X is AMD's flagship AI accelerator, with an industry-leading 192 GB of HBM3 memory. Built on the CDNA 3 architecture, it targets large language model training and inference workloads.
VRAM: 192 GB
Memory type: HBM3
Memory bandwidth: 5,300 GB/s
TDP: 750 W
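As a rough illustration of what these numbers imply (a back-of-envelope sketch, not a vendor benchmark): single-stream autoregressive decoding is typically memory-bandwidth bound, so an upper bound on tokens per second is memory bandwidth divided by the bytes of weights read per generated token. The model sizes and bytes-per-parameter figures below are illustrative assumptions, not part of the spec.

```python
# Roofline sketch: memory-bound decode implies
#   tokens/sec <= bandwidth / bytes_of_weights_read_per_token.
# Model sizes and bytes/param below are illustrative assumptions.

BANDWIDTH_GB_S = 5300  # MI300X HBM3 bandwidth, GB/s

def max_decode_tokens_per_s(params_billion: float, bytes_per_param: float) -> float:
    """Upper bound on single-stream decode speed, ignoring KV cache and overhead."""
    weight_gb = params_billion * bytes_per_param  # GB of weights read per token
    return BANDWIDTH_GB_S / weight_gb

# A 70B-parameter model at FP16 (2 bytes/param): 5300 / 140 ~ 37.9 tokens/s ceiling
print(round(max_decode_tokens_per_s(70, 2), 1))
# The same model at INT8 (1 byte/param): 5300 / 70 ~ 75.7 tokens/s ceiling
print(round(max_decode_tokens_per_s(70, 1), 1))
```

Real throughput falls well below this ceiling and batched inference changes the picture entirely, but the bound shows why bandwidth, not just capacity, drives the interactive-inference numbers.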
Large Language Models: training and inference for models like GPT-4 and Llama 70B+
Deep Learning Training: high-performance training for neural networks
High-Throughput Inference: optimized for batched inference workloads
Enterprise Deployment: designed for 24/7 datacenter operations
Flagship AI accelerator with 192GB HBM3. Competes with NVIDIA H100.
Model-fit estimates assume INT8 quantization; actual fit depends on framework and batch size.
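That caveat can be made concrete with a simple memory check (a sketch under stated assumptions: INT8 weights at 1 byte per parameter plus a flat 10% overhead for activations and KV cache; the overhead figure and model sizes are illustrative, and real usage varies with framework and batch size):

```python
# Back-of-envelope check of whether a quantized model fits in 192 GB of HBM3.
# Assumptions (illustrative): INT8 = 1 byte/param, flat 10% overhead for
# activations/KV cache. Real fit depends on framework and batch size.

HBM_GB = 192

def fits_in_hbm(params_billion: float, bytes_per_param: float = 1.0,
                overhead: float = 0.10) -> bool:
    needed_gb = params_billion * bytes_per_param * (1 + overhead)
    return needed_gb <= HBM_GB

print(fits_in_hbm(70))                        # 70B at INT8, ~77 GB -> True
print(fits_in_hbm(180))                       # 180B at INT8, ~198 GB -> False
print(fits_in_hbm(70, bytes_per_param=2.0))   # 70B at FP16, ~154 GB -> True
```

Note that a 70B model fits even unquantized at FP16, which is why the 192 GB capacity is the headline figure for single-GPU serving of large models.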
Added Jan 27, 2026
Last updated: Jan 27, 2026