NVIDIA® H100 SXM

The NVIDIA H100 SXM is an industry-standard GPU for AI training and inference. It is built on the Hopper architecture in the SXM5 form factor, which allows a higher power limit and full NVLink bandwidth compared with the PCIe variant.

Availability

Available on Rackrr through our marketplace providers. Check the platform for real-time pricing and availability.

Specifications

GPU Model: NVIDIA® H100 SXM5
Architecture: NVIDIA Hopper
CUDA Cores: 16,896
Tensor Cores: 528 (4th generation)
VRAM: 80 GB
VRAM Type: HBM3
Memory Bandwidth: 3.35 TB/s
TDP (Thermal Design Power): 700 W
Form Factor: SXM5
Interconnect: NVLink (900 GB/s)
FP8 Performance: 3,958 TFLOPS (with sparsity)
FP16 Performance: 1,979 TFLOPS (with sparsity)
Deep Learning Frameworks: PyTorch, TensorFlow, JAX
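As a rough illustration of what these numbers mean in practice, the sketch below derives two back-of-envelope figures from the table: the arithmetic intensity at which FP16 work becomes compute-bound rather than bandwidth-bound, and the time to stream the full 80 GB of HBM3 once. The constants come from this spec sheet; everything else is simple arithmetic, not a benchmark.

```python
# Back-of-envelope numbers from the spec table above (TFLOPS figures on the
# sheet are with sparsity; dense throughput is roughly half).
FP16_TFLOPS = 1979   # peak FP16 Tensor Core throughput, with sparsity
MEM_BW_TBPS = 3.35   # HBM3 memory bandwidth
VRAM_GB = 80

# Arithmetic intensity (FLOPs per byte moved) above which an FP16 kernel is
# limited by compute rather than memory bandwidth.
ridge_point = FP16_TFLOPS / MEM_BW_TBPS
print(f"FP16 ridge point: ~{ridge_point:.0f} FLOPs/byte")

# Time to read the entire 80 GB of VRAM once at peak bandwidth.
sweep_ms = VRAM_GB / (MEM_BW_TBPS * 1000) * 1000
print(f"Full-VRAM sweep: ~{sweep_ms:.1f} ms")
```

Kernels well below the ridge point (most memory-bound inference workloads) see the 3.35 TB/s bandwidth, not the headline TFLOPS.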

Use Cases

  • Large-scale AI model training
  • Large language model (LLM) inference
  • High-performance computing (HPC)
  • Multi-node distributed training
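For the LLM inference use case, a quick weights-only sizing check shows when a model fits in the 80 GB of VRAM on a single card and when it needs multiple GPUs. The model sizes below are illustrative assumptions, and the estimate ignores KV cache and activation memory, which add real overhead on top of the weights.

```python
# Rough VRAM sizing for LLM inference: weights only, ignoring KV cache and
# activations. 1 billion parameters at N bytes each is ~N GB.
VRAM_GB = 80  # H100 SXM5 HBM3 capacity

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB."""
    return params_billion * bytes_per_param

# Illustrative model sizes (assumptions, not benchmarks).
for params in (7, 13, 70):
    fp16 = weights_gb(params, 2)  # FP16: 2 bytes per parameter
    fp8 = weights_gb(params, 1)   # FP8: 1 byte per parameter
    verdict = "fits" if fp8 <= VRAM_GB else "needs multiple GPUs"
    print(f"{params}B params: FP16 ~{fp16:.0f} GB, FP8 ~{fp8:.0f} GB ({verdict} in FP8)")
```

A 70B-parameter model at FP16 (~140 GB of weights) already exceeds one card, which is where the 900 GB/s NVLink interconnect and multi-node training come in.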