# NVIDIA® A100 80GB SXM4
The NVIDIA A100 is a versatile data center GPU based on the Ampere architecture, widely adopted for AI training, inference, and HPC workloads.
## Availability
Available on Rackrr through our marketplace providers. Check the platform for real-time pricing and availability.
## Specifications
| Component | Details |
|---|---|
| GPU Model | NVIDIA® A100 80GB SXM4 |
| Architecture | NVIDIA Ampere |
| CUDA Cores | 6,912 |
| Tensor Cores | 432 (3rd Gen) |
| VRAM | 80 GB |
| VRAM Type | HBM2e |
| Memory Bandwidth | 2,039 GB/s (≈2.0 TB/s) |
| TDP (Thermal Design Power) | 400W |
| Form Factor | SXM4 |
| Interconnect | NVLink (600 GB/s) |
| FP16 Tensor Core Performance | 312 TFLOPS (dense; 624 TFLOPS with sparsity) |
| TF32 Tensor Core Performance | 156 TFLOPS (dense; 312 TFLOPS with sparsity) |
| Deep Learning Frameworks | PyTorch, TensorFlow, JAX |
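The FP16 Tensor Core figure can be sanity-checked from the core count in the table. A minimal sketch: the 1,410 MHz boost clock and the per-Tensor-Core FMA rate are assumptions taken from public Ampere documentation, not values in the table above.

```python
# Rough peak FP16 Tensor Core throughput for the A100 (dense, no sparsity).
TENSOR_CORES = 432        # from the spec table
BOOST_CLOCK_HZ = 1.41e9   # assumed 1,410 MHz boost clock (not in the table)
FMA_PER_CLOCK = 256       # assumed FP16 FMAs per Tensor Core per clock (Ampere)
FLOPS_PER_FMA = 2         # one multiply plus one add

peak_tflops = TENSOR_CORES * BOOST_CLOCK_HZ * FMA_PER_CLOCK * FLOPS_PER_FMA / 1e12
print(f"{peak_tflops:.0f} TFLOPS")  # ≈ 312 TFLOPS, matching the table
```

The same exercise at the TF32 rate (half the FMAs per clock) reproduces the 156 TFLOPS row.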
## Use Cases
- AI model training and inference
- Natural language processing
- High-performance computing (HPC)
- Data analytics and scientific computing
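For the workloads above, the ratio of peak compute to memory bandwidth gives a quick roofline-style rule of thumb for whether a kernel will be compute-bound or memory-bound on this card. A minimal sketch using the spec-table numbers:

```python
# Break-even arithmetic intensity for the A100 80GB at dense FP16 rates.
PEAK_FP16_FLOPS = 312e12  # dense FP16 Tensor Core rate from the spec table
MEM_BANDWIDTH = 2.0e12    # bytes/s, from the spec table

break_even = PEAK_FP16_FLOPS / MEM_BANDWIDTH
print(f"{break_even:.0f} FLOPs/byte")  # kernels below this ratio are memory-bound
```

Large matrix multiplications in training comfortably exceed this intensity; elementwise and small-batch inference ops often do not, which is where the 80 GB of HBM2e and its bandwidth matter more than peak TFLOPS.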