DGX GB300 NVL72
The NVIDIA DGX GB300 NVL72 is NVIDIA's rack-scale AI system.
Basic
The DGX GB300 NVL72 is a rack-scale AI compute system from NVIDIA. It packs 72 Blackwell Ultra GPUs and 36 Grace CPUs into a single liquid-cooled rack joined by one NVLink domain. Announced in 2025.
Deep
The DGX GB300 NVL72 is a rack-scale AI system. Configuration: 72 Blackwell Ultra GPUs paired with 36 Grace CPUs, roughly 20 TB of HBM3e across the rack (288 GB per GPU), connected as a single NVLink domain. Announced by NVIDIA in 2025. Deployed in frontier datacenters by hyperscalers and specialized AI clouds. BenchGecko tracks this system on /systems/nvidia-dgx-gb300-nvl72 with TCO, power, and deployment signals.
Expert
DGX GB300 NVL72 specifications: 72× Blackwell Ultra GPUs, roughly 20 TB of aggregate HBM3e, all GPUs in one 72-way NVLink domain; system pricing is not publicly listed but sits in the millions of dollars per rack. System-level design choices (NVLink domain size, CPU-GPU ratio, network fabric topology) drive realizable throughput well beyond what aggregate FLOPS suggest. Real-world utilization typically lands at 40-70% of peak due to memory bandwidth bottlenecks and network contention. TCO per PFLOP-year is the operative investment metric for large-scale buyers.
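The TCO-per-PFLOP-year metric named above can be sketched with a back-of-envelope calculation. Every number below (price, power draw, electricity cost, peak FLOPS, amortization window) is an illustrative assumption, not a vendor or BenchGecko figure:

```python
def tco_per_pflop_year(
    capex_usd: float,          # assumed system purchase price
    power_kw: float,           # assumed average rack power draw
    usd_per_kwh: float,        # blended electricity + cooling cost
    peak_pflops: float,        # aggregate low-precision peak
    utilization: float,        # realizable fraction of peak (0.4-0.7 typical)
    amortization_years: float = 4.0,
) -> float:
    """Dollars per delivered PFLOP-year over the amortization window."""
    hours_per_year = 24 * 365
    opex_per_year = power_kw * hours_per_year * usd_per_kwh
    annual_cost = capex_usd / amortization_years + opex_per_year
    delivered_pflops = peak_pflops * utilization  # utilization discounts peak
    return annual_cost / delivered_pflops

# Example with placeholder inputs (NOT official GB300 NVL72 figures):
cost = tco_per_pflop_year(
    capex_usd=3_500_000, power_kw=120, usd_per_kwh=0.10,
    peak_pflops=1000, utilization=0.55,
)
```

Note how the utilization term dominates: at 40% versus 70% of peak, the same rack delivers nearly half the compute per dollar, which is why buyers price systems on delivered rather than peak FLOPS.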
Depending on why you're here
- DGX GB300 NVL72: 72× Blackwell Ultra GPU · ~20 TB HBM3e
- 36 Grace CPUs · announced 2025
- NVIDIA · tracked on /systems/nvidia-dgx-gb300-nvl72
- DGX GB300 NVL72 is what hyperscalers buy to serve frontier models
- Priced in the millions of dollars per rack · volume pricing undisclosed
- Most usage is via API rental · raw system purchase is hyperscaler territory
- DGX GB300 NVL72 orders telegraph hyperscaler AI capex intent
- System-level sales drive NVIDIA AI revenue
- Watch NVL / rack-scale order book as a leading indicator
- DGX GB300 NVL72 is a rack-sized AI supercomputer
- Packs 72 GPUs and 36 CPUs in one liquid-cooled rack
- Costs millions of dollars · hyperscalers buy them by the thousand
DGX GB300 NVL72 is a significant generational step up from the GB200 NVL72. Every new system reshuffles the cost-per-FLOPS leaderboard.
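The leaderboard reshuffle can be illustrated with a minimal ranking by dollars per peak PFLOPS. The prices and FLOPS figures below are placeholders chosen for illustration, not tracked BenchGecko data:

```python
# Hypothetical entries: a previous-generation rack vs. a new one.
systems = {
    "prev-gen rack": {"price_usd": 3_000_000, "peak_pflops": 720},
    "new-gen rack": {"price_usd": 3_500_000, "peak_pflops": 1_100},
}

# Rank by cost per peak PFLOPS: lower is better.
leaderboard = sorted(
    systems.items(),
    key=lambda kv: kv[1]["price_usd"] / kv[1]["peak_pflops"],
)

for name, spec in leaderboard:
    dollars_per_pflops = spec["price_usd"] / spec["peak_pflops"]
    print(f"{name}: ${dollars_per_pflops:,.0f} per peak PFLOPS")
```

In this sketch the newer rack costs more per unit but still wins on cost per FLOPS, which is the reshuffling effect the text describes.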