The H100 SXM is NVIDIA's flagship datacenter GPU for AI training, built on the Hopper architecture with 4th-generation Tensor Cores. With 80 GB of HBM3 and 900 GB/s of NVLink bandwidth, it excels at large-scale distributed training.
VRAM: 80 GB
Memory type: HBM3
Memory bandwidth: 3,350 GB/s
TDP: 700 W
Large Language Models: training and inference for models like GPT-4 and Llama 70B+
Deep Learning Training: high-performance training for neural networks
Distributed Training: multi-node training over fast interconnects
High-Throughput Inference: optimized for batched inference workloads
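To give a feel for why NVLink bandwidth matters for the distributed-training use case above, here is a back-of-the-envelope sketch of gradient synchronization time using the standard ring all-reduce cost model (2·(n−1)/n of the buffer crosses each link). The parameter sizes and overhead-free assumption are illustrative, not measured figures.

```python
def allreduce_seconds(grad_bytes: float, n_gpus: int, bw_bytes_per_s: float) -> float:
    """Approximate ring all-reduce time: each GPU transfers
    2*(n-1)/n of the gradient buffer over its link.
    Ignores latency and protocol overhead (simplifying assumption)."""
    return 2 * (n_gpus - 1) / n_gpus * grad_bytes / bw_bytes_per_s

# Illustrative example: a 70B-parameter model with FP16 gradients
# (~140 GB) synchronized across 8 GPUs at 900 GB/s per link.
t = allreduce_seconds(140e9, n_gpus=8, bw_bytes_per_s=900e9)
print(f"per-step all-reduce: ~{t:.3f} s")
```

On slower interconnects (e.g. tens of GB/s over Ethernet) the same transfer stretches into multiple seconds per step, which is why high-bandwidth links dominate multi-node scaling.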
Model-fit estimates assume INT8 quantization; actual fit depends on the framework's memory overhead and batch size.
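The fit estimate described above can be sketched in a few lines. The overhead factor below is an assumption standing in for activation and KV-cache memory, which the note says varies by framework and batch size; the function name and default are hypothetical, not from any library.

```python
def fits_in_vram(params_billion: float, vram_gb: float = 80,
                 bytes_per_param: float = 1.0, overhead: float = 1.2) -> bool:
    """Rough check for whether INT8-quantized weights fit on one GPU.
    INT8 weights take ~1 byte per parameter; `overhead` is an assumed
    multiplier for activations / KV cache, which varies widely."""
    needed_gb = params_billion * bytes_per_param * overhead
    return needed_gb <= vram_gb

# Example: with a 1.2x overhead assumption, a 65B model fits in 80 GB
# but a 70B model does not.
print(fits_in_vram(65), fits_in_vram(70))
```

Raising the overhead factor, batch size, or context length shrinks the largest model that fits, which is why the estimate is only a starting point.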
Added Jan 25, 2026
Last updated: Jan 25, 2026