The H100 PCIe brings the Hopper architecture to standard server form factors with 80 GB of HBM2e memory (the SXM variant uses HBM3), making it a fit for enterprises that need high AI performance without SXM infrastructure.
VRAM: 80 GB
Memory: HBM2e
Bandwidth: 2,000 GB/s (2 TB/s)
TDP: 350 W
Large Language Models: training and inference for models like GPT-4 and Llama 70B+
Deep Learning Training: high-performance training for neural networks
Distributed Training: multi-node training with fast interconnects
High-Throughput Inference: optimized for batched inference workloads (see the sketch after this list)
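To see why batching is the high-throughput path: at batch size 1, decoding is memory-bandwidth bound, since every generated token streams the full weight set from HBM. A minimal sketch using the 2,000 GB/s figure above and an assumed INT8-quantized 70B-parameter model (the constants are illustrative, not measured):

```python
# Bandwidth-bound decode ceiling for single-stream LLM inference.
# All constants are illustrative assumptions, not measurements.

BANDWIDTH_GBPS = 2000    # H100 PCIe memory bandwidth (GB/s), from the specs above
PARAMS_B = 70            # a Llama-70B-class model (assumption)
BYTES_PER_PARAM = 1      # INT8 quantization

weight_gb = PARAMS_B * BYTES_PER_PARAM   # ~70 GB of weights resident in HBM

# At batch size 1, each generated token reads the full weight set once,
# so bandwidth divided by weight bytes bounds tokens per second.
ceiling_tok_s = BANDWIDTH_GBPS / weight_gb
print(f"batch-1 decode ceiling: ~{ceiling_tok_s:.0f} tokens/s")

# Batching amortizes those weight reads across concurrent requests,
# which is why batched inference delivers far higher aggregate throughput.
```

At batch size 1 this works out to roughly 29 tokens/s; larger batches reuse each weight read across many requests, raising aggregate throughput until compute or KV-cache capacity becomes the limit.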
Form factor: PCIe
Note: model-fit estimates assume INT8 quantization; actual fit depends on framework and batch size.
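For concreteness, a minimal sketch of the kind of fit estimate that note describes, assuming 1 byte per parameter at INT8 and a hypothetical 1.1x overhead factor for KV cache and activations (both assumptions, not measurements):

```python
# Back-of-envelope VRAM fit check at INT8, per the note above.
# OVERHEAD is a hypothetical headroom factor for KV cache and
# activations; real usage varies by framework and batch size.

VRAM_GB = 80             # H100 PCIe capacity
BYTES_PER_PARAM = 1      # INT8
OVERHEAD = 1.1           # assumed, not measured

def fits(params_billions: float) -> bool:
    """True if an INT8-quantized model of this size should fit in 80 GB."""
    needed_gb = params_billions * BYTES_PER_PARAM * OVERHEAD
    return needed_gb <= VRAM_GB

for size in (7, 13, 34, 70):
    print(f"{size}B @ INT8: {'fits' if fits(size) else 'too large'}")
```

Under these assumptions a 70B model fits with little headroom, which is why framework choice and batch size (which drive KV-cache and activation usage) decide the actual outcome.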
Last updated: Jan 25, 2026