Description
NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform.
The Most Powerful End-to-End AI and HPC Data Center Platform
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to rapidly deliver real-world results and deploy solutions into production at scale.
Deep Learning Training
NVIDIA A100 Tensor Cores with Tensor Float 32 (TF32) provide up to 20X higher performance over NVIDIA Volta with zero code changes, plus an additional 2X boost with automatic mixed precision and FP16. When combined with NVIDIA® NVLink®, NVIDIA NVSwitch™, PCIe Gen4, NVIDIA® InfiniBand®, and the NVIDIA Magnum IO™ SDK, it’s possible to scale to thousands of A100 GPUs.
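TF32 reaches that speedup by keeping FP32’s 8-bit exponent (same dynamic range) while shortening the 23-bit mantissa to 10 bits. A minimal Python sketch of that precision trade-off (an illustration, not NVIDIA code; real Tensor Cores round to nearest, while this simply truncates):

```python
import struct

def tf32_round(x: float) -> float:
    """Simulate TF32 storage: keep FP32's sign and 8-bit exponent,
    truncate the 23-bit mantissa down to TF32's 10 bits."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~0x1FFF  # clear the low 13 mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(tf32_round(1.0))             # exactly representable: unchanged
print(abs(tf32_round(0.1) - 0.1))  # small error from the shorter mantissa
```

Values with short binary expansions survive unchanged; others pick up an error on the order of 2^-10 of their magnitude, which is why most training workloads tolerate TF32 without code changes.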
High-Performance Computing
NVIDIA A100 introduces double-precision (FP64) Tensor Cores, delivering the biggest leap in HPC performance since the introduction of GPUs. Combined with 80GB of the fastest GPU memory, researchers can reduce a 10-hour, double-precision simulation to under four hours on A100. HPC applications can also leverage TF32 to achieve up to 11X higher throughput for single-precision, dense matrix-multiply operations.
High-Performance Data Analytics
Data scientists need to be able to analyze, visualize, and turn massive datasets into insights, but scale-out solutions are often bogged down by datasets scattered across multiple servers. Accelerated servers with A100 provide the compute power, large memory capacity, and high memory bandwidth to tackle these workloads at scale.
Enterprise-Ready Utilization
A100 with MIG maximizes the utilization of GPU-accelerated infrastructure. With MIG, an A100 GPU can be partitioned into as many as seven independent instances, giving multiple users access to GPU acceleration. With A100 40GB, each MIG instance can be allocated up to 5GB, and with A100 80GB’s increased memory capacity, that size is doubled to 10GB.
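Partitioning is managed through `nvidia-smi`. A sketch of the typical flow (the profile ID below is an assumption that varies by GPU model; verify against the `-lgip` output on your own system):

```shell
# Enable MIG mode on GPU 0 (may require draining workloads or a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports, with sizes and counts
sudo nvidia-smi mig -lgip

# Create two GPU instances plus their default compute instances
# (ID 19 is the smallest 1g profile on A100 -- confirm with -lgip first)
sudo nvidia-smi mig -cgi 19,19 -C

# Confirm the new MIG devices are visible
nvidia-smi -L
```

Each resulting MIG device appears to CUDA applications as its own GPU with isolated memory and compute, so one tenant cannot starve another.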
Specifications:
GPU Model | NVIDIA A100 |
Stream Processors | 6912 CUDA Cores |
Boost Clock Speed | 1410 MHz |
Base Clock Speed | 1065 MHz |
Floating Point Performance | Double Precision (FP64 Tensor Core): 19.5 TFLOPS; Single Precision (TF32 Tensor Core): 156 TFLOPS; Half Precision (FP16 Tensor Core): 312 TFLOPS |
Interface | PCI Express 4.0 x16 |
Supported APIs | CUDA, OpenCL, DirectCompute, OpenACC |
OS Compatibility | Windows, Linux (as of March 2023; check with the manufacturer for the most up-to-date compatibility) |
Memory Speed | 1512 MHz (3024 Mbps effective) |
Memory Configuration | 80 GB |
Memory Type | HBM2e |
Memory Interface Width | 5120-Bit |
Memory Bandwidth | 1935 GB/s |
Power Requirements
Max Power Consumption | 300 W |
PCI Power Connectors | 1 x 8-Pin |
Expansion Slot Compatibility | Full Height |
Height | 4.13″ / 104.86 mm |
Length | 10.5″ / 267.7 mm |
Width | Dual-Slot |
Cooler Type | Passive Heatsink |
Weight | 2.6 lb / 1170 g |
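The memory bandwidth figure in the table above can be sanity-checked from the interface width and data rate: HBM2e transfers data on both clock edges, so a 1512 MHz memory clock yields 3024 Mbps per pin across the 5120-bit bus.

```python
bus_width_bits = 5120
data_rate_mbps = 3024  # 1512 MHz clock, double data rate

# bytes/second = (bits per transfer / 8) * transfers per second
bandwidth_gbps = bus_width_bits / 8 * data_rate_mbps * 1e6 / 1e9
print(f"{bandwidth_gbps:.0f} GB/s")  # 1935 GB/s, matching the spec
```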