
NVIDIA HGX H100 8-GPU

Server node · Shipping · Hopper · 2023

The system that launched the AI infrastructure boom. 8x H100 SXM GPUs connected via NVLink 4.0. Still the most widely deployed AI training system globally. Air-cooled, fits standard datacenter racks. The reference design that Dell, HPE, Supermicro, and Lenovo all build around.

8 GPUs per system · 32 FP8 PFLOPS
GPU model: NVIDIA H100 SXM
GPU count: 8x
CPU model: Dual Intel Xeon (2x)
Memory type: HBM3
Total HBM: 0.64 TB
Host memory: 2 TB
Interconnect: NVLink 4.0 (7.2 TB/s)
Networking: 400 Gbps
Storage: 30 TB NVMe SSD
Form factor: 8U node
Weight: 140 kg
Rack units: 8U

Manufacturer datasheet values · aggregate system compute

FP4 PFLOPS: TBD
FP8 PFLOPS: 32
FP16 PFLOPS: 16
BF16 PFLOPS: 16
Training effective PFLOPS: 12

Thermal envelope · cooling requirements · efficiency

Rack power: 10.2 kW
Per GPU: 700 W
Cooling: air
PUE estimate: 1.3
Power draw relative to tracked systems: 10.2 kW / 2,500 kW max
Efficiency: 3.14 FP8 PFLOPS per kW (average across all tracked systems: 4.81)
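The efficiency figure follows directly from the compute and power values listed on this page; a minimal check:

```python
# Power efficiency: aggregate FP8 compute per kW of rack power.
# Both inputs are taken from this page's spec tables.
fp8_pflops = 32.0   # aggregate FP8 PFLOPS for the 8-GPU system
rack_kw = 10.2      # rack power draw in kW

pflops_per_kw = fp8_pflops / rack_kw
print(round(pflops_per_kw, 2))  # → 3.14
```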

Hardware amortized over 3 years · power at $0.05/kWh

List price: $300,000
Per GPU effective: $37,500
Cost per GPU per month: $1,042
TCO per PFLOPS per year: $3,306
PFLOPS per kW: 3.14
63% below the average TCO of $9,046/PFLOPS/year across all tracked systems
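The cost figures can be reproduced from the stated assumptions (3-year hardware amortization, power at $0.05/kWh, with the PUE estimate applied to rack power as facility overhead); a sketch:

```python
# Reproduce this page's TCO figures from its listed inputs.
list_price = 300_000      # system list price, USD
amort_years = 3           # hardware amortized over 3 years
rack_kw = 10.2            # rack power draw, kW
pue = 1.3                 # PUE estimate (facility overhead multiplier)
price_per_kwh = 0.05      # electricity price, USD/kWh
fp8_pflops = 32           # aggregate FP8 compute
hours_per_year = 8760

hardware_per_year = list_price / amort_years                      # $100,000
power_per_year = rack_kw * pue * hours_per_year * price_per_kwh   # ~$5,808
tco_per_year = hardware_per_year + power_per_year                 # ~$105,808

print(round(tco_per_year / fp8_pflops))            # → 3306  ($/PFLOPS/year)
print(round(list_price / 8 / (amort_years * 12)))  # → 1042  ($/GPU/month)
```

Note that power is a small fraction of the total here: at $0.05/kWh the 3-year electricity bill is under 6% of the hardware cost.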
Available from: NVIDIA, Dell, HPE, Supermicro, Lenovo

Disclosed in press releases, SEC filings, and conference talks

Quantity: 600,000+ H100 GPUs · Source: Meta Q4 2025 earnings
Quantity: hundreds of thousands · Source: Microsoft FY2025 10-K
Quantity: A3 instances · Source: Google Cloud blog

Every data point on this page is reproducible
