DDR6
DDR6 is the next-generation server DRAM standard · 8.8-17.6 Gbps · used in host CPUs that feed AI GPUs · JEDEC finalized spec in 2025.
Basic
DDR6 is the successor to DDR5 for mainstream DRAM. Speeds start at 8.8 Gbps and scale to 17.6 Gbps (vs DDR5's 4.8-7.2 Gbps). Used in server CPUs (Intel Granite Rapids-AP, AMD Turin and later EPYC generations) feeding GPU clusters. First modules expected late 2026; mass adoption 2027-28. Not to be confused with HBM4, the GPU-attached memory · DDR6 is host-CPU memory.
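The headline numbers can be sanity-checked with back-of-envelope arithmetic: per-pin data rate times bus width gives peak theoretical bandwidth per channel. A minimal sketch (theoretical peak only; sustained bandwidth is always lower):

```python
# Peak theoretical bandwidth of one memory channel:
# per-pin data rate (Gbps) * bus width (bits) / 8 -> GB/s.
def peak_channel_gbs(data_rate_gbps: float, bus_width_bits: int = 64) -> float:
    return data_rate_gbps * bus_width_bits / 8

# DDR5 top bin vs. the DDR6 range quoted above (64-bit channel)
ddr5_top  = peak_channel_gbs(7.2)   # ~57.6 GB/s
ddr6_low  = peak_channel_gbs(8.8)   # ~70.4 GB/s
ddr6_high = peak_channel_gbs(17.6)  # ~140.8 GB/s
```

So even DDR6's entry speed beats DDR5's top bin per channel, and the top bin roughly 2.4× it.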
Deep
DDR6 innovations: higher bus clock, increased burst length, on-die ECC default, and sub-channel architecture (two sub-channels per 64-bit channel) for better bandwidth utilization. Server DDR6 modules: RDIMM / LRDIMM / MCR-DIMM variants. Power-per-bit improved ~20% vs DDR5. CXL 3.0 integration allows DDR6 pools to attach to multiple CPUs.
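The sub-channel idea is easiest to see as burst arithmetic: splitting a 64-bit channel into two narrower sub-channels means each one uses a longer burst to deliver the same 64-byte cache line, letting the two halves serve independent requests. A sketch, assuming 32-bit sub-channels (as DDR5 already uses) and a 64-byte line; exact DDR6 widths and burst lengths are per the final spec:

```python
# Bytes delivered per burst = bus width in bytes * burst length.
def bytes_per_burst(width_bits: int, burst_length: int) -> int:
    return (width_bits // 8) * burst_length

# One full-width 64-bit channel at BL8 moves a 64-byte cache line...
full_channel = bytes_per_burst(64, 8)   # 64 bytes
# ...and so does one 32-bit sub-channel at BL16 -- but now there are
# two sub-channels that can service two requests concurrently.
sub_channel  = bytes_per_burst(32, 16)  # 64 bytes
```

Same line size per access, double the number of independent accesses in flight: that is where the "better bandwidth utilization" comes from.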
Expert
DDR6 matters for AI mostly on the CPU side · host memory for training pipelines, model loading, data preprocessing. LLM training is often constrained by CPU→GPU data-pipeline bandwidth · faster DDR6 eases this. Chinese memory makers (CXMT) face sanctions-driven delays on DDR6; US, Korean, and Japanese production dominates. Pricing at launch: ~2× DDR5 premium per GB, normalizing by 2027-28.
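The data-pipeline claim reduces to a simple bound: if a preprocessed batch must be read out of host DRAM before the copy to the GPU, host-memory bandwidth sets a floor on staging time. A rough sketch with illustrative (not measured) numbers and a hypothetical 8 GiB batch:

```python
# Lower bound on time to read one training batch out of host DRAM,
# given channel bandwidth in GB/s. Numbers are illustrative only.
def staging_time_s(batch_bytes: float, mem_bw_gbs: float) -> float:
    return batch_bytes / (mem_bw_gbs * 1e9)

batch = 8 * 1024**3                    # hypothetical 8 GiB batch
t_ddr5 = staging_time_s(batch, 57.6)   # one DDR5-7200 channel
t_ddr6 = staging_time_s(batch, 140.8)  # one DDR6-17600 channel
```

Per channel, the DDR6 floor is ~2.4× lower; in practice multiple channels and the PCIe/NVLink link share the budget, but the direction of the effect is the same.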
Depending on why you're here
- JEDEC DDR6 standard · 8.8-17.6 Gbps
- Sub-channel architecture · on-die ECC
- Mass adoption 2027-28
- Host-CPU memory for AI training pipelines
- Accelerates CPU→GPU data feed
- DDR6 servers available late 2026 · premium pricing
- Memory cycle driver · Samsung, SK Hynix, Micron main winners
- Chinese CXMT sanctions push share to Korean/Japanese makers
- CXL 3.0 unlocks memory pooling economics
- The next, faster version of computer memory
- For regular computers that feed AI chips · different from AI chip memory
- Reaching the mass market around 2027
DDR6 is the CPU-side memory upgrade AI training waits on · faster host memory unlocks data-pipeline bottlenecks.