A liquid-cooled flagship CDNA 4 accelerator with 288 GB of HBM3e, 8 TB/s of memory bandwidth, and a 1400 W power envelope for full-rack deployments. Competes directly with NVIDIA's H200 and B200 on memory capacity and bandwidth.
VRAM: 288 GB
Memory: HBM3e
Bandwidth: 8 TB/s (8000 GB/s)
TDP: 1400 W
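As an illustration of what the bandwidth figure implies for inference, a common back-of-envelope is that single-stream decode is memory-bound: every generated token must stream the full weight set from HBM, so throughput is capped at bandwidth divided by model size. The sketch below uses the spec value above; the model sizes are illustrative assumptions, not benchmarks.

```python
# Rough, bandwidth-bound upper limit on batch-1 decode throughput:
# tokens/s <= memory_bandwidth / model_bytes. This ignores compute,
# KV-cache traffic, and kernel overheads, so real throughput is lower.

BANDWIDTH_GBS = 8000  # GB/s, from the spec above


def max_decode_tokens_per_s(params_billions: float, bytes_per_param: float) -> float:
    """Upper bound on memory-bound decode throughput at batch size 1."""
    model_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes/GB
    return BANDWIDTH_GBS / model_gb


# A hypothetical 70B-parameter model at FP16 (2 bytes/param) is 140 GB,
# giving a ceiling of 8000 / 140 ≈ 57 tokens/s per stream.
print(round(max_decode_tokens_per_s(70, 2)))  # -> 57
```

Batching amortizes the weight streaming across requests, which is why the card is positioned for batched inference rather than single-stream latency.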
Large Language Models: training and inference for models like GPT-4, Llama 70B+
Deep Learning Training: high-performance training for neural networks
Distributed Training: multi-node training with fast interconnects
High-Throughput Inference: optimized for batched inference workloads
Spec verification recommended: these numbers reflect AMD's Advancing AI 2025 announcement; double-check them against the published MI355X datasheet before relying on them for production sizing.
Estimates based on INT8 quantization. Actual fit depends on framework and batch size.
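To make the INT8 fit estimate concrete, the sketch below checks whether a model's weights fit in the 288 GB of HBM3e at 1 byte per parameter. The 20% overhead factor for KV cache and activations is an assumption for illustration only; as noted above, actual fit depends on framework, sequence length, and batch size.

```python
# Back-of-envelope VRAM fit check at INT8 (1 byte per parameter).
# The overhead factor is a hypothetical allowance for KV cache and
# activations, not a measured value.

VRAM_GB = 288  # HBM3e capacity, from the spec above


def fits_in_vram(params_billions: float, bytes_per_param: float = 1.0,
                 overhead: float = 0.2) -> bool:
    """True if estimated memory need fits on a single card."""
    needed_gb = params_billions * bytes_per_param * (1 + overhead)
    return needed_gb <= VRAM_GB


print(fits_in_vram(70))   # 70B at INT8: ~84 GB needed -> True
print(fits_in_vram(405))  # 405B at INT8: ~486 GB -> False on one card
```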
Added Apr 30, 2026
Last updated: Apr 30, 2026