Air-cooled HGX variant of the Blackwell architecture with 192 GB HBM3e memory. Lower power envelope than B200 (700 W vs 1000 W) at roughly 78% of the compute, suited to upgrading existing HGX H100 chassis.
VRAM: 192 GB
Memory: HBM3e
Bandwidth: 8,000 GB/s (8 TB/s)
TDP: 700 W
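The bandwidth figure sets a hard ceiling on single-stream LLM decode speed: each generated token must stream the full weight set from HBM. A minimal sketch of that ceiling, assuming a hypothetical 70B-parameter model quantized to INT8 (1 byte per parameter) — illustrative numbers, not measurements:

```python
BANDWIDTH_GB_S = 8_000  # B100 HBM3e bandwidth from the spec above


def max_decode_tokens_per_s(params_billion: float,
                            bytes_per_param: float = 1.0) -> float:
    """Bandwidth-limited upper bound for single-stream token generation:
    tokens/s <= bandwidth / model_size_in_bytes."""
    model_gb = params_billion * bytes_per_param  # 1e9 params * 1 B = 1 GB
    return BANDWIDTH_GB_S / model_gb


# ~114 tokens/s ceiling for a 70B model at INT8; real throughput is
# lower once compute, KV-cache reads, and kernel overheads are counted.
print(round(max_decode_tokens_per_s(70)))
```

Batched inference amortizes the weight traffic across many sequences, which is why the high-throughput use case below favors large batch sizes.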
Large Language Models
Training and inference for models like GPT-4, Llama 70B+
Deep Learning Training
High-performance training for neural networks
Distributed Training
Multi-node training with fast interconnects
High-Throughput Inference
Optimized for batched inference workloads
Compute throughput shown with 2:4 structured sparsity. Drop-in replacement for HGX H100 8-GPU baseboards.
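2:4 structured sparsity means every aligned group of 4 weights contains at most 2 nonzero values, the pattern the tensor cores exploit to reach the sparse-throughput figures quoted above. A minimal validity check, assuming a flat list of weights (a sketch, not the library's own verifier):

```python
def is_2_to_4_sparse(weights) -> bool:
    """True if every aligned group of 4 weights has at most 2 nonzeros,
    i.e. the tensor satisfies the 2:4 structured-sparsity pattern."""
    if len(weights) % 4 != 0:
        return False  # pattern is defined on aligned groups of 4
    return all(
        sum(1 for w in weights[i:i + 4] if w != 0) <= 2
        for i in range(0, len(weights), 4)
    )


print(is_2_to_4_sparse([0.5, 0.0, -1.2, 0.0, 0.0, 0.3, 0.0, 0.7]))  # True
print(is_2_to_4_sparse([0.5, 0.1, -1.2, 0.0]))  # False: 3 nonzeros in group
```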
Estimates based on INT8 quantization. Actual fit depends on framework and batch size.
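A back-of-envelope version of that estimate: INT8 weights take 1 byte per parameter, plus headroom for activations, KV cache, and framework buffers. The 20% overhead factor below is an assumption for illustration; as noted above, the real fit depends on framework and batch size.

```python
VRAM_GB = 192  # B100 memory capacity from the spec above


def fits_in_vram(params_billion: float, overhead: float = 1.2) -> bool:
    """Rough check that an INT8-quantized model fits in VRAM.
    overhead (assumed 20%) covers activations, KV cache, and buffers."""
    weights_gb = params_billion * 1.0  # INT8: 1 byte per parameter
    return weights_gb * overhead <= VRAM_GB


print(fits_in_vram(70))   # True: ~84 GB with overhead
print(fits_in_vram(180))  # False: ~216 GB exceeds 192 GB
```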
Added Apr 30, 2026
Last updated: Apr 30, 2026