NVIDIA ships the A100 Ampere GPU with 312 TFLOPS of dense FP16 Tensor Core compute and 40GB of HBM2. It becomes the workhorse of hyperscale AI training clusters.
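The 312 TFLOPS figure can be sanity-checked with back-of-the-envelope arithmetic. The SM count, Tensor Core count, per-clock FMA rate, and boost clock below are drawn from NVIDIA's published A100 specifications, not from this post, so treat this as a rough sketch rather than an official derivation:

```python
# Assumed A100 specs (per NVIDIA's Ampere whitepaper): 108 SMs,
# 4 third-generation Tensor Cores per SM, 256 FP16 FMAs per
# Tensor Core per clock, 1410 MHz boost clock.
sms = 108
tensor_cores_per_sm = 4
fp16_fma_per_core_per_clock = 256
boost_clock_hz = 1.41e9

# Each FMA counts as 2 floating-point operations (multiply + add).
ops_per_clock = sms * tensor_cores_per_sm * fp16_fma_per_core_per_clock * 2
tflops = ops_per_clock * boost_clock_hz / 1e12
print(f"{tflops:.0f} TFLOPS")  # → 312 TFLOPS
```

Note this is the dense Tensor Core rate; with 2:4 structured sparsity the marketed figure doubles to 624 TFLOPS, and plain (non-Tensor-Core) FP16 is far lower.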