NVIDIA DGX B200
Server node · Shipping · Blackwell · 2025
8-GPU Blackwell node for enterprises that don't need the full NVL72 rack. Air-cooled, with fifth-generation NVLink as the GPU interconnect. The workhorse for inference at scale and moderate training workloads; it fits existing datacenter infrastructure without liquid cooling.
| GPUs per system | 8 |
| System compute | 80 FP8 PFLOPS |
| Total HBM | 1.5 TB |
| Host memory | 2 TB |
| Interconnect | NVLink 5 · 14.4 TB/s aggregate |
| Networking | 400 Gbps |
| Storage | 30 TB NVMe SSD |
| Form factor | 8U node |
| Weight | 160 kg |
| Rack units | 8U |
Performance
Manufacturer datasheet values · aggregate system compute
| FP4 PFLOPS | 160 |
| FP8 PFLOPS | 80 |
| FP16 PFLOPS | 40 |
| BF16 PFLOPS | 40 |
| Training effective PFLOPS | 60 |
Power and cooling
Thermal envelope · cooling requirements · efficiency
| Rack power | 14.3 kW |
| Per GPU | 1,000 W |
| Cooling | air |
| PUE estimate | 1.3 |
Power draw relative to tracked systems: 14.3 kW / 2,500 kW max
5.59 FP8 PFLOPS per kW · average across all systems is 4.81
TCO analysis
Hardware amortized over 3 years · power at $0.05/kWh
| List price | $400,000 |
| Per GPU effective | $50,000 |
| Cost per GPU per month | $1,389 |
| TCO per PFLOPS per year | $1,768 |
| PFLOPS per kW | 5.59 |
80% below the average TCO of $9,046/PFLOPS/year across all tracked systems
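The TCO figures above follow from the stated assumptions. A minimal sketch reproducing the arithmetic (assuming, as this script does, that the energy cost is scaled by the page's PUE estimate of 1.3):

```python
# Reproduce the TCO figures on this page. Assumptions: 3-year straight-line
# amortization, power at $0.05/kWh, and energy scaled by the page's PUE
# estimate of 1.3 (an assumption about how the page computes energy cost).
LIST_PRICE_USD = 400_000
GPUS = 8
AMORTIZATION_YEARS = 3
POWER_KW = 14.3
PUE = 1.3
ELECTRICITY_USD_PER_KWH = 0.05
FP8_PFLOPS = 80
HOURS_PER_YEAR = 8_760

hardware_per_year = LIST_PRICE_USD / AMORTIZATION_YEARS            # $133,333
energy_per_year = POWER_KW * PUE * HOURS_PER_YEAR * ELECTRICITY_USD_PER_KWH
tco_per_pflops_year = (hardware_per_year + energy_per_year) / FP8_PFLOPS
per_gpu_month = LIST_PRICE_USD / GPUS / (AMORTIZATION_YEARS * 12)
pflops_per_kw = FP8_PFLOPS / POWER_KW

print(round(tco_per_pflops_year))   # → 1768
print(round(per_gpu_month))         # → 1389
print(round(pflops_per_kw, 2))      # → 5.59
```

The hardware term dominates: at $0.05/kWh the annual energy bill (~$8,100 with PUE) is about 6% of the amortized hardware cost.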
Available from
Dell · HPE · Supermicro · Lenovo
Known deployments
Disclosed in press releases, SEC filings, and conference talks
| Quantity | broad availability |
| Source | NVIDIA Q1 2026 earnings |
Sources
Every data point on this page is reproducible
Other AI systems
Compare across the system landscape
| 8960x Google TPU v5p | 8100 PFLOPS | Pod / cluster | Shipping |
| 72x NVIDIA B300 | 1440 PFLOPS | Full rack | Announced |
| 72x NVIDIA B200 | 720 PFLOPS | Full rack | Shipping |
| 256x Google TPU v6e | 230 PFLOPS | Pod / cluster | Shipping |
| 32x Microsoft Maia 100 | 96 PFLOPS | Full rack | Ramping |
| 8x AMD MI325X | 48 PFLOPS | Server node | Shipping |