
NVIDIA® A100 80GB SXM4

The NVIDIA A100 is a versatile data center GPU based on the Ampere architecture, widely adopted for AI training, inference, and HPC workloads.

Availability

Available on Rackrr through our marketplace providers. Check the platform for real-time pricing and availability.

Specifications

GPU Model: NVIDIA® A100 80GB SXM4
Architecture: NVIDIA Ampere
CUDA Cores: 6,912
Tensor Cores: 432 (3rd generation)
VRAM: 80 GB
VRAM Type: HBM2e
Memory Bandwidth: 2.0 TB/s
TDP (Thermal Design Power): 400 W
Form Factor: SXM4
Interconnect: NVLink (600 GB/s)
FP16 Tensor Core Performance: 312 TFLOPS (dense)
TF32 Tensor Core Performance: 156 TFLOPS (dense)
Deep Learning Frameworks: PyTorch, TensorFlow, JAX
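The ratio between the compute and memory-bandwidth figures above determines whether a given kernel runs at compute speed or at memory speed. A minimal roofline-style sketch using the FP16 Tensor Core and bandwidth numbers from the table (the function and constant names are illustrative, not from any library):

```python
# Roofline-style arithmetic from the spec-sheet figures above.
PEAK_FP16_FLOPS = 312e12  # 312 TFLOPS, FP16 Tensor Core (dense)
PEAK_BANDWIDTH = 2.0e12   # 2.0 TB/s HBM2e bandwidth, in bytes/s

def attainable_flops(arithmetic_intensity: float) -> float:
    """Peak FLOP/s attainable at a given arithmetic intensity (FLOPs per byte)."""
    return min(PEAK_FP16_FLOPS, arithmetic_intensity * PEAK_BANDWIDTH)

# Ridge point: below this intensity a kernel is bandwidth-bound,
# above it the kernel can saturate the Tensor Cores.
RIDGE = PEAK_FP16_FLOPS / PEAK_BANDWIDTH  # 156 FLOPs per byte
```

By this estimate, low-intensity work such as elementwise ops (well under 1 FLOP per byte) runs at memory speed on this card, while large dense matrix multiplies sit above the ridge and can approach peak Tensor Core throughput.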

Use Cases

  • AI model training and inference
  • Natural language processing
  • High-performance computing (HPC)
  • Data analytics and scientific computing
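For the training and inference use cases above, the 80 GB of VRAM is often the binding constraint. A rough sizing sketch, assuming decimal gigabytes and counting only per-parameter state (activations and framework overhead are ignored; the function name and bytes-per-parameter figures are illustrative assumptions, not NVIDIA guidance):

```python
A100_VRAM_GB = 80  # per the specifications above

def weights_fit(num_params: float, bytes_per_param: float,
                vram_gb: float = A100_VRAM_GB) -> bool:
    """True if the per-parameter footprint fits in VRAM.

    Common rough figures: 2 bytes/param for FP16 inference weights,
    ~16 bytes/param for mixed-precision Adam training state.
    """
    return num_params * bytes_per_param / 1e9 <= vram_gb
```

Under these assumptions, a 7B-parameter model in FP16 (14 GB of weights) fits comfortably for inference, while a 70B-parameter model (140 GB) or full mixed-precision training of the 7B model (~112 GB of optimizer state) would need multiple GPUs, which is where the 600 GB/s NVLink interconnect comes in.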