HBM · Volume production · 2024
High Bandwidth Memory 3e (Enhanced)
The dominant HBM generation for 2024-2026 AI training. An enhanced version of HBM3 with a higher per-pin data rate (up to 9.6 Gbps). Powers the NVIDIA B200, GB200, and GB300, AMD's MI325X/MI355X, and hyperscaler ASICs. SK hynix leads production, Samsung is still working through yield issues, and Micron is validated for NVIDIA.
Bandwidth · 1180 GB/s per stack
Interface · 1024-bit
Max height · 12-hi
Capacity per stack · 24-36 GB
Process · 1b-nm DRAM
Voltage · 1.1 V
AI chips using HBM3e · 8
Suppliers · 3
AI share of demand · 90%
Price per GB · $25
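These headline figures follow from the interface arithmetic. Below is a minimal sketch in Python; the 24 Gb (3 GB) die density and the ~9.2 Gb/s pin rate implied by the 1180 GB/s figure are inferences from the numbers above, not quoted specs.

    # Back-of-envelope HBM3e per-stack math from the stats above.
    # Assumption: 1024-bit interface, 24 Gb (3 GB) DRAM dies.

    def stack_bandwidth_gbps(pin_rate_gbps, bus_width_bits=1024):
        """Per-stack bandwidth in GB/s: pin rate (Gb/s) x bus width / 8 bits per byte."""
        return pin_rate_gbps * bus_width_bits / 8

    def stack_capacity_gb(stack_height, die_gb=3.0):
        """Per-stack capacity in GB: stacked die count x die density."""
        return stack_height * die_gb

    print(stack_bandwidth_gbps(9.6))   # 1228.8 GB/s at the 9.6 Gb/s top bin
    print(stack_bandwidth_gbps(9.2))   # 1177.6 GB/s, i.e. the ~1180 GB/s headline
    print(stack_capacity_gb(8))        # 24.0 GB for an 8-hi stack
    print(stack_capacity_gb(12))       # 36.0 GB for a 12-hi stack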
Suppliers
3 companies producing HBM3e
SK hynix · Volume production · Market share 53%
Industry-leading yields; 8-hi and 12-hi shipping in volume
Samsung · Ramping · Market share 30%
Yield issues on 8-hi stacking; catching up
Micron · Volume production · Market share 17%
8-hi validated by NVIDIA; 12-hi sampling
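The share figures above can be rolled into a quick concentration check. The Herfindahl-Hirschman index below is an added illustration, not a number from the source.

    # HBM3e supplier shares as data, plus a Herfindahl-Hirschman index (HHI)
    # to quantify how concentrated the three-supplier market is.

    market_share = {"SK hynix": 0.53, "Samsung": 0.30, "Micron": 0.17}
    assert abs(sum(market_share.values()) - 1.0) < 1e-9  # the three cover the market

    hhi = sum(share ** 2 for share in market_share.values())
    print(f"HHI = {hhi:.2f}")  # ~0.40, i.e. highly concentrated supply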
Demand breakdown
Severe shortage · demand exceeds supply through 2026
AI · 90%
Networking · 5%
HPC · 5%
Sector detail
AI training · 65%
NVIDIA Blackwell + AMD CDNA4 consume majority of output
AI inference · 20%
Growing as reasoning models require more memory
HPC / scientific · 10%
National labs, weather modeling, molecular simulation
Networking · 5%
High-end switching ASICs (Broadcom)
Pricing
Average selling price · year-over-year trend
Price per GB · $25
YoY change · +45%
Trend · Rising
HBM3e ASP is up 45% year over year, driven by AI training demand
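Two quick derivations from these pricing figures. The 192 GB capacity in the second line is taken from the chip table below; neither number is a quoted contract price.

    # Arithmetic on the HBM3e pricing stats above (illustrative only).

    asp_per_gb = 25.0   # current price per GB
    yoy_change = 0.45   # +45% year over year

    prior_year_asp = asp_per_gb / (1 + yoy_change)
    print(f"Implied prior-year ASP: ${prior_year_asp:.2f}/GB")      # ~$17.24/GB

    # HBM3e content cost for a 192 GB accelerator at the list $/GB figure.
    print(f"192 GB of HBM3e at $25/GB: ${192 * asp_per_gb:,.0f}")   # $4,800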
AI chips using HBM3e
8 chips from the /hardware section
Memory 192 GB · BW 8 TB/s · Stacks 8
Memory 384 GB · BW 16 TB/s · Stacks 16
Memory 192 GB · BW 7.37 TB/s · Stacks 8
Memory 576 GB · BW 16 TB/s · Stacks 24
Memory 288 GB · BW 8 TB/s · Stacks 8
Memory 288 GB · BW 8 TB/s · Stacks 12
Memory 256 GB · BW 6 TB/s · Stacks 8
Memory 141 GB · BW 4.8 TB/s · Stacks 6
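Dividing each configuration by its stack count recovers the per-stack figures from the key stats. A small sketch follows; it assumes every listed stack is populated identically, which is an inference rather than a stated spec.

    # Per-stack capacity and bandwidth derived from the chip table above.
    # Most rows work out to ~24 GB (8-hi) or 36 GB (12-hi) per stack.

    chips = [  # (total HBM3e in GB, total bandwidth in TB/s, stack count)
        (192, 8.0, 8), (384, 16.0, 16), (192, 7.37, 8), (576, 16.0, 24),
        (288, 8.0, 8), (288, 8.0, 12), (256, 6.0, 8), (141, 4.8, 6),
    ]

    for mem_gb, bw_tbps, stacks in chips:
        print(f"{mem_gb:>3} GB over {stacks:>2} stacks -> "
              f"{mem_gb / stacks:5.1f} GB and {bw_tbps / stacks * 1000:6.0f} GB/s per stack")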