
DDR6

DDR6 is the next-generation server DRAM standard · 8.8-17.6 Gbps · used in host CPUs that feed AI GPUs · JEDEC finalized spec in 2025.


Level 1

DDR6 is the successor to DDR5 for mainstream DRAM. Speeds start at 8.8 Gbps and scale to 17.6 Gbps (vs DDR5's 4.8-7.2 Gbps). Used in server CPUs (Intel Granite Rapids-AP, AMD's Turin-generation EPYC) feeding GPU clusters. First modules expected late 2026; mass adoption 2027-28. Not to be confused with HBM4 (the GPU-attached memory) · DDR6 is for CPUs.

Level 2

DDR6 innovations: higher bus clock, increased burst length, on-die ECC by default, and a sub-channel architecture (two sub-channels per 64-bit channel) for better bandwidth utilization. Server DDR6 modules come in RDIMM / LRDIMM / MCR-DIMM variants. Power per bit improves ~20% vs DDR5. CXL 3.0 integration allows DDR6 pools to attach to multiple CPUs.
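The data rates above translate directly into per-channel bandwidth. A minimal sketch of the arithmetic (illustrative only, not from the JEDEC spec; assumes a standard 64-bit channel, which sub-channels subdivide but do not widen):

```python
# Illustrative arithmetic: peak theoretical bandwidth of one 64-bit
# DDR channel. Sub-channels split the channel but keep total width.

def channel_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = 64) -> float:
    """Peak bandwidth in GB/s for one memory channel."""
    return data_rate_gbps * bus_width_bits / 8  # bits/s -> bytes/s

low = channel_bandwidth_gbs(8.8)    # entry DDR6 bin -> 70.4 GB/s
high = channel_bandwidth_gbs(17.6)  # top DDR6 bin   -> 140.8 GB/s
ddr5 = channel_bandwidth_gbs(7.2)   # top mainstream DDR5 for comparison

print(low, high, ddr5)
```

Even the entry DDR6 bin exceeds DDR5's top mainstream bin per channel; the top bin roughly 2.4×'s it.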

Level 3

DDR6 matters for AI mostly on the CPU side · host memory for training pipelines, model loading, data preprocessing. LLM training is often constrained by CPU→GPU data-pipeline bandwidth · faster DDR6 eases this. Chinese memory makers (CXMT) face sanctions-driven delays on DDR6; US + Korean + Japanese production dominates. Pricing at launch: ~2× DDR5 premium per GB, normalizing by 2027-28.
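A back-of-envelope sketch of why host memory bandwidth matters for the CPU→GPU feed. The channel count and PCIe figure are assumptions for illustration, not claims from the text (12 channels per socket; ~63 GB/s usable per PCIe 5.0 x16 link, one direction):

```python
# Hedged back-of-envelope: how many full-rate GPU links could one
# socket's DRAM bandwidth saturate? Assumptions (not from the text):
CHANNELS = 12          # assumed memory channels per server socket
PCIE5_X16_GBS = 63.0   # assumed usable GB/s per PCIe 5.0 x16 link

def gpus_feedable(data_rate_gbps: float, channels: int = CHANNELS) -> float:
    """PCIe 5.0 x16 links one socket's DRAM could saturate,
    ignoring CPU overhead and all other memory traffic."""
    host_bw = data_rate_gbps * 64 / 8 * channels  # GB/s, all channels
    return host_bw / PCIE5_X16_GBS

print(gpus_feedable(7.2))   # DDR5-7200 baseline
print(gpus_feedable(17.6))  # top DDR6 bin
```

Under these assumptions the top DDR6 bin more than doubles the number of GPU links a host socket can keep busy · the "eases the pipeline" claim in bandwidth terms.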

The takeaway for you
If you are a
Researcher
  • JEDEC DDR6 standard · 8.8-17.6 Gbps
  • Sub-channel architecture · on-die ECC
  • Mass adoption 2027-28
If you are a
Builder
  • Host-CPU memory for AI training pipelines
  • Accelerates CPU→GPU data feed
  • DDR6 servers available late 2026 · premium pricing
If you are a
Investor
  • Memory cycle driver · Samsung, SK Hynix, Micron main winners
  • Chinese CXMT sanctions push share to Korean/Japanese makers
  • CXL 3.0 unlocks memory pooling economics
If you are a
Curious · Normie
  • The next, faster generation of computer memory
  • For the regular computers that feed AI chips · different from the memory on the AI chips themselves
  • Reaching the mass market in 2027-28
Gecko's take

DDR6 is the CPU-side memory upgrade AI training is waiting on · faster host memory relieves data-pipeline bottlenecks.

DDR6 is CPU-attached DRAM. HBM4 is GPU-attached on-package memory. Different markets, different tech, complementary.