NVIDIA DGX GB200 NVL72
Full rack · Shipping · Blackwell · 2025
NVIDIA's flagship liquid-cooled rack. 72 Blackwell GPUs + 36 Grace CPUs connected via NVLink 5.0 in a single 72-GPU domain. Designed for trillion-parameter model training. The most powerful commercially available AI system, requiring liquid cooling infrastructure and 120+ kW per rack.
GPUs per system: 72 (720 FP8 PFLOPS aggregate)
Total HBM: 13.8 TB
Host memory: 17.3 TB
Interconnect: NVLink 5.0, 130 TB/s aggregate bandwidth
Networking: 3,200 Gbps
Storage: 120 TB NVMe SSD
Form factor: Full rack (42U)
Weight: 1,360 kg
Rack units: 42U
Performance
Manufacturer datasheet values · aggregate system compute
| Precision | PFLOPS |
|---|---|
| FP4 | 1,440 |
| FP8 | 720 |
| FP16 | 360 |
| BF16 | 360 |
| Training effective | 540 |
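The datasheet figures follow the usual tensor-core scaling pattern: each halving of precision doubles aggregate throughput. A quick sanity check of the table above, using only values stated on this page:

```python
# Aggregate rack-level compute from the datasheet table above (PFLOPS).
pflops = {"FP4": 1440, "FP8": 720, "FP16": 360, "BF16": 360}

# Each halving of bit width doubles throughput on Blackwell tensor cores.
assert pflops["FP4"] == 2 * pflops["FP8"]
assert pflops["FP8"] == 2 * pflops["FP16"]
assert pflops["FP16"] == pflops["BF16"]

# Implied per-GPU FP8 throughput for the 72-GPU rack.
per_gpu_tflops = pflops["FP8"] / 72 * 1000
print(per_gpu_tflops, "FP8 TFLOPS per GPU")  # 10000.0
```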
Power and cooling
Thermal envelope · cooling requirements · efficiency
Rack power: 120 kW
Per GPU: 1,000 W
Cooling: liquid
PUE estimate: 1.1
Power draw relative to tracked systems: 120 kW (tracked maximum: 2,500 kW)
Efficiency: 6 FP8 PFLOPS per kW (average across all tracked systems: 4.81)
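The efficiency figure is reproducible from the rack's own numbers; a minimal check using only values stated on this page:

```python
rack_power_kw = 120   # rack power, from the power section above
fp8_pflops = 720      # aggregate FP8 compute, from the performance table
fleet_avg = 4.81      # average PFLOPS/kW across all tracked systems (this page)

efficiency = fp8_pflops / rack_power_kw
print(f"{efficiency:.0f} FP8 PFLOPS per kW")            # 6
print(f"{efficiency / fleet_avg - 1:.0%} above average")  # 25% above average
```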
TCO analysis
Hardware amortized over 3 years · power at $0.05/kWh
List price: $3,000,000
Per GPU effective: $41,667
Cost per GPU per month: $1,157
TCO per PFLOPS per year: $1,469
PFLOPS per kW: 6
84% below the average TCO of $9,046/PFLOPS/year across all tracked systems
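The TCO figures above can be rederived from the stated assumptions (hardware amortized over 3 years, power at $0.05/kWh, PUE of 1.1 applied to the 120 kW rack draw). Note the per-GPU monthly figure covers hardware amortization only:

```python
# Inputs, all stated elsewhere on this page.
list_price = 3_000_000   # system list price, USD
gpus = 72
years = 3
rack_kw = 120
pue = 1.1
price_per_kwh = 0.05
fp8_pflops = 720

per_gpu = list_price / gpus                    # $41,667 effective per GPU
gpu_month = per_gpu / (years * 12)             # ~$1,157/GPU/month (hardware only)

# Annual power cost: rack draw, grossed up by PUE, 8,760 hours/year.
power_per_year = rack_kw * pue * 8760 * price_per_kwh   # ~$57,816
tco_per_year = list_price / years + power_per_year
tco_per_pflops = tco_per_year / fp8_pflops     # ~$1,469/PFLOPS/year
```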
Available from
Dell · HPE · Supermicro · Lenovo
Known deployments
Disclosed in press releases, SEC filings, and conference talks
| Quantity | Source |
|---|---|
| tens of thousands of GPUs | Microsoft Build 2025 |
| 65,000 B200 GPUs | Oracle Q4 2025 earnings |
| large-scale deployment | CoreWeave IPO S-1 |
| Colossus cluster expansion | xAI Memphis datacenter |

Sources
Every data point on this page is reproducible
Other AI systems
Compare across the system landscape
8960x Google TPU v5p · 8,100 PFLOPS · Pod/cluster · Shipping
72x NVIDIA B300 · 1,440 PFLOPS · Full rack · Announced
256x Google TPU v6e · 230 PFLOPS · Pod/cluster · Shipping
32x Microsoft Maia 100 · 96 PFLOPS · Full rack · Ramping
8x NVIDIA B200 · 80 PFLOPS · Server node · Shipping
8x AMD MI325X · 48 PFLOPS · Server node · Shipping