# NVIDIA® H100 NVL
The NVIDIA H100 NVL pairs two H100 GPUs via NVLink Bridge in a PCIe form factor, delivering 188 GB of combined HBM3 memory for large model inference.
## Availability
Available on Rackrr through our marketplace providers. Check the platform for real-time pricing and availability.
## Specifications
| Component | Details |
|---|---|
| GPU Model | NVIDIA® H100 NVL |
| Architecture | NVIDIA Hopper |
| CUDA Cores | 16,896 (per GPU) |
| Tensor Cores | 528 (4th Gen, per GPU) |
| VRAM | 94 GB per GPU (188 GB per pair) |
| VRAM Type | HBM3 |
| Memory Bandwidth | 3.9 TB/s (per GPU) |
| TDP (Thermal Design Power) | 350–400W (configurable, per GPU) |
| Form Factor | PCIe |
| Interconnect | NVLink Bridge (600 GB/s) |
| FP8 Performance | 3,958 TFLOPS (per GPU, with sparsity) |
| FP16 Performance | 1,979 TFLOPS (per GPU, with sparsity) |
| Deep Learning Frameworks | PyTorch, TensorFlow, JAX |
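A quick way to reason about the 188 GB pooled capacity is weight-memory arithmetic: parameter count times bytes per parameter, minus whatever you reserve for activations and KV cache. The sketch below is a back-of-the-envelope estimate, not an NVIDIA-provided sizing tool; the 20% reserve is an assumed default, and real headroom depends on framework, batch size, and context length.

```python
def max_inference_params(vram_gb: float, bytes_per_param: float,
                         reserve_frac: float = 0.2) -> float:
    """Rough upper bound on parameters that fit in VRAM for inference.

    reserve_frac sets aside a fraction of memory for activations,
    KV cache, and framework overhead (0.2 is an assumed default).
    """
    usable_bytes = vram_gb * 1e9 * (1.0 - reserve_frac)
    return usable_bytes / bytes_per_param

# 188 GB pair, FP16 weights (2 bytes/param), weights only: ~94B params
print(max_inference_params(188, 2, reserve_frac=0.0) / 1e9)  # 94.0
# FP8 weights (1 byte/param) with 20% reserved: roughly 150B params
print(max_inference_params(188, 1) / 1e9)
```

This is why the NVL pair is pitched at large-model inference: a 70B-class model in FP16, or well past 100B in FP8, fits on a single two-card unit without model-parallel sharding across nodes.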
## Use Cases
- LLM inference with large context windows
- Generative AI inference
- AI model fine-tuning
- High-performance computing (HPC)
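For the long-context inference case, the dominant cost beyond the weights is the KV cache, which grows linearly with sequence length. The sketch below estimates it from a model's layer and attention shape; the 70B-class configuration (80 layers, 8 grouped-query KV heads, head dimension 128, FP16 cache) is a hypothetical example, not a spec of any particular model.

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """KV-cache size for one sequence: a K and a V tensor per layer,
    each of shape (seq_len, n_kv_heads, head_dim)."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len

# Hypothetical 70B-class config at a 128K-token context, FP16 cache:
per_seq = kv_cache_bytes(80, 8, 128, 131_072)
print(per_seq / 2**30)  # 40.0  (GiB per sequence)
```

At roughly 40 GiB of cache per 128K-token sequence on top of the model weights, a context window this large is comfortable on the pair's 188 GB but would not fit alongside a 70B FP16 model on a single 80 GB card.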