NVIDIA® H100 NVL

The NVIDIA H100 NVL pairs two H100 GPUs via an NVLink bridge in a PCIe form factor, delivering 188 GB of combined HBM3 memory for large-model inference.

Availability

Available on Rackrr through our marketplace providers. Check the platform for real-time pricing and availability.

Specifications

GPU Model: NVIDIA® H100 NVL
Architecture: NVIDIA Hopper
CUDA Cores: 16,896
Tensor Cores: 528 (4th generation)
VRAM: 94 GB per GPU
VRAM Type: HBM3
Memory Bandwidth: 3.9 TB/s
TDP (Thermal Design Power): 400 W
Form Factor: PCIe
Interconnect: NVLink Bridge (600 GB/s)
FP8 Performance: 3,958 TFLOPS
FP16 Performance: 1,979 TFLOPS
Deep Learning Frameworks: PyTorch, TensorFlow, JAX
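As a rough illustration of why the 188 GB of pooled memory matters, the sketch below estimates the weight footprint of a large model at FP16 and FP8 precision. This is a back-of-the-envelope calculation only (real deployments also need headroom for the KV cache, activations, and framework overhead), and the 70B parameter count is just an example:

```python
H100_NVL_GB = 188  # combined HBM3 across the NVLink-bridged pair


def weights_gb(num_params_billion: float, bytes_per_param: float) -> float:
    """Approximate model-weight memory in GB (1 GB = 1e9 bytes)."""
    return num_params_billion * 1e9 * bytes_per_param / 1e9


# Example: a 70B-parameter model at different precisions
for precision, bpp in [("FP16", 2), ("FP8", 1)]:
    need = weights_gb(70, bpp)
    fits = "fits" if need < H100_NVL_GB else "does not fit"
    print(f"70B @ {precision}: ~{need:.0f} GB of weights -> {fits} in {H100_NVL_GB} GB")
```

At FP16 the weights alone take about 140 GB, which exceeds a single GPU's 94 GB but fits comfortably in the pair's combined 188 GB; this is the scenario the NVL configuration targets.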

Use Cases

  • LLM inference with large context windows
  • Generative AI inference
  • AI model fine-tuning
  • High-performance computing (HPC)
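For the large-context-window use case above, KV-cache growth is often what consumes the extra memory as context length increases. The sketch below shows the standard sizing formula; the model shape used (80 layers, 8 grouped-query KV heads, head dimension 128, FP16 cache) is an assumed Llama-2-70B-like configuration for illustration, not a spec of this product:

```python
def kv_cache_gb(seq_len: int, num_layers: int, num_kv_heads: int,
                head_dim: int, batch_size: int = 1,
                bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GB: two tensors (K and V) per layer,
    each of shape [batch, kv_heads, seq_len, head_dim]."""
    elems = 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size
    return elems * bytes_per_elem / 1e9


# Assumed 70B-class shape: 80 layers, 8 KV heads (GQA), head_dim 128, FP16
for ctx in (8_192, 32_768, 131_072):
    print(f"context {ctx:>7}: ~{kv_cache_gb(ctx, 80, 8, 128):.1f} GB KV cache per sequence")
```

The cache scales linearly with context length and batch size, so the memory left over after loading the weights directly bounds how many long-context requests can be served concurrently.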