DDR5
DDR5 is a memory technology used throughout AI systems, serving as the host memory that feeds accelerators.
Basic
DDR5 delivers 38.4–51.2 GB/s per 64-bit channel (4800–6400 MT/s) with typical module capacities of 16–64 GB. Primary manufacturers: SK Hynix, Samsung, Micron. Used in: AI servers and workstations as host memory.
Deep
DDR5 is the fifth-generation double data rate SDRAM standard (JEDEC JESD79-5) for server, PC, and AI/HPC workloads. Per-channel bandwidth is 38.4 GB/s at the base 4800 MT/s data rate, scaling to 51.2 GB/s at 6400 MT/s; each DIMM is organized as two independent 32-bit subchannels, with on-die ECC and a 1.1 V operating voltage. Main manufacturers are SK Hynix, Samsung, and Micron. The standard was published in July 2020, and DDR5 now serves as host memory across a range of AI systems.
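The per-channel figures above follow directly from the data rate; a quick sketch (function name is ours, not from any library):

```python
# DDR5 per-channel bandwidth: a standard channel is 64 bits (8 bytes) wide
# and transfers once per MT, so GB/s = data rate in MT/s * 8 bytes / 1000.

def ddr5_channel_bw_gb_s(mt_per_s: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth of one DDR5 channel at the given transfer rate."""
    return mt_per_s * bus_bytes / 1000

for rate in (4800, 5600, 6400):
    print(rate, ddr5_channel_bw_gb_s(rate))  # 38.4, 44.8, 51.2 GB/s
```

Multiply by the channel count to get platform-level host bandwidth: a 12-channel server at 4800 MT/s tops out near 460 GB/s.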
Expert
DDR5 provides 38.4–51.2 GB/s per channel (4800–6400 MT/s) at up to 64 Gb per die; 8- to 12-channel server platforms reach roughly 300–600 GB/s of aggregate host bandwidth. Commercial yield varies by manufacturer: SK Hynix, Samsung, Micron. Supply allocation follows long-term agreements with major GPU and accelerator vendors. Memory bandwidth is the dominant constraint for LLM inference: the low arithmetic intensity of decode-phase attention and GEMV kernels means memory-bound kernels gate throughput on all current accelerators.
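To see why memory-bound decode gates throughput, here is a back-of-envelope roofline sketch. The model size and bandwidth are illustrative assumptions, not tied to any specific product:

```python
# Batch-1 LLM decode is memory-bound: generating each token streams the
# full weight set through memory, so tokens/s <= bandwidth / weight_bytes.

def decode_tokens_per_sec(n_params: float, bytes_per_param: float,
                          mem_bw_gb_s: float) -> float:
    """Upper bound on batch-1 decode throughput for a memory-bound model."""
    weight_bytes = n_params * bytes_per_param
    return mem_bw_gb_s * 1e9 / weight_bytes

# Hypothetical 7B-parameter model in fp16 (2 bytes/param) against a
# 12-channel DDR5-4800 host (~460.8 GB/s aggregate):
host_bw = 12 * 38.4  # GB/s
print(decode_tokens_per_sec(7e9, 2, host_bw))  # ceiling of roughly 33 tokens/s
```

The same arithmetic explains the premium on high-bandwidth memory: throughput scales linearly with GB/s until compute becomes the bottleneck.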
Depending on why you're here
- DDR5: 38.4–51.2 GB/s per channel, typically 16–64 GB per DIMM
- Made by SK Hynix, Samsung, Micron
- Used as host memory in modern AI systems
- More bandwidth = faster LLM inference on memory-bound workloads
- Systems with newer memory (DDR5) hit better throughput per dollar
- Check which chips use DDR5 on /memory/ddr5
- DDR5 supply is tightly allocated · drives chip pricing power
- SK Hynix / Samsung capture most of the margin
- Watch memory shortage news as a leading indicator of GPU pricing
- DDR5 is ultra-fast memory for AI systems
- Without it, models can't feed data to the GPU fast enough
- Limited supply is part of why AI hardware is expensive
DDR5 supply allocation is the hidden variable behind every GPU shortage. When SK Hynix sneezes, NVIDIA catches cold.