# NVIDIA® H100 SXM
The NVIDIA H100 SXM is a flagship data-center GPU for AI training and inference, built on the Hopper architecture in the SXM5 form factor for maximum per-GPU performance and NVLink bandwidth.
## Availability
Available on Rackrr through our marketplace providers. Check the platform for real-time pricing and availability.
## Specifications
| Component | Details |
|---|---|
| GPU Model | NVIDIA® H100 SXM5 |
| Architecture | NVIDIA Hopper |
| CUDA Cores | 16,896 |
| Tensor Cores | 528 (4th Gen) |
| VRAM | 80 GB |
| VRAM Type | HBM3 |
| Memory Bandwidth | 3.35 TB/s |
| TDP (Thermal Design Power) | Up to 700 W |
| Form Factor | SXM5 |
| Interconnect | NVLink (900 GB/s) |
| FP8 Performance | 3,958 TFLOPS (with sparsity) |
| FP16 Performance | 1,979 TFLOPS (with sparsity) |
| Deep Learning Frameworks | PyTorch, TensorFlow, JAX |
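As a rough sanity check on how the compute and memory figures above relate, the ratio of peak throughput to memory bandwidth gives the arithmetic intensity a kernel needs to be compute-bound rather than memory-bound, and the bandwidth alone caps memory-bound LLM inference throughput. A minimal sketch using the table's numbers (the variable names and the 70B-parameter model are illustrative assumptions, not part of the spec):

```python
# Back-of-envelope roofline figures for the H100 SXM, taken from the
# specification table above. All helper names are illustrative.

FP8_TFLOPS = 3958     # peak FP8 throughput (with sparsity), TFLOPS
FP16_TFLOPS = 1979    # peak FP16 throughput (with sparsity), TFLOPS
MEM_BW_TBPS = 3.35    # HBM3 memory bandwidth, TB/s

def balance_point(tflops: float, bw_tbps: float) -> float:
    """FLOPs per byte needed for a kernel to be compute-bound."""
    return tflops / bw_tbps  # (TFLOP/s) / (TB/s) = FLOPs per byte

fp8_ai = balance_point(FP8_TFLOPS, MEM_BW_TBPS)    # ~1181 FLOPs/byte
fp16_ai = balance_point(FP16_TFLOPS, MEM_BW_TBPS)  # ~591 FLOPs/byte

# Memory-bound LLM decoding estimate: generating one token streams the
# full weights through HBM once, so bandwidth caps tokens per second.
weights_gb = 70  # assumed: a 70B-parameter model at FP8 (1 byte/param)
tokens_per_s = MEM_BW_TBPS * 1000 / weights_gb  # ~48 tokens/s upper bound
```

Real kernels fall well short of these peaks, so treat the results as upper bounds for capacity planning, not predicted throughput.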
## Use Cases
- Large-scale AI model training
- Large language model (LLM) inference
- High-performance computing (HPC)
- Multi-node distributed training
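For the distributed-training use case, the table's 900 GB/s NVLink figure lets you estimate how long per-step gradient synchronization takes within one NVLink domain. A hedged sketch assuming an idealized ring all-reduce, where each GPU sends and receives 2(N−1)/N of the payload (the model size and GPU count are illustrative assumptions):

```python
# Rough estimate of gradient all-reduce time inside one NVLink domain,
# using the table's 900 GB/s per-GPU NVLink bandwidth. The 7B-parameter
# model and 8-GPU node are illustrative assumptions.

def ring_allreduce_seconds(payload_gb: float, n_gpus: int, bw_gbps: float) -> float:
    """Ideal ring all-reduce: each GPU moves 2*(N-1)/N of the payload."""
    traffic_gb = 2 * (n_gpus - 1) / n_gpus * payload_gb
    return traffic_gb / bw_gbps

# Example: 7B parameters of BF16 gradients (2 bytes each) = 14 GB,
# synchronized across 8 GPUs at 900 GB/s.
t = ring_allreduce_seconds(payload_gb=14, n_gpus=8, bw_gbps=900)  # ~0.027 s
```

This ignores latency, protocol overhead, and compute/communication overlap, so it is a lower bound on sync time rather than a benchmark result.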