Chips
Every AI accelerator · H100, B200, MI300X, TPU, Trainium.
Top terms
Trainium 3 · AWS's third-gen custom training chip · 2× perf/watt vs Trainium 2 · powering Anthropic's training clusters on AWS.
Inferentia 3 · AWS's third-gen inference chip · optimized for serving LLMs at scale with HBM3 and NeuronCore v3.
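Both AWS chips are programmed through the Neuron SDK rather than CUDA. As a rough illustration (not taken from this page), here is a minimal sketch of the trace-and-compile flow torch-neuronx uses on current Trainium/Inferentia instances; the toy model and shapes are placeholders, and whether the third-gen parts keep exactly this API is an assumption.

```python
import torch
import torch_neuronx  # AWS Neuron SDK; preinstalled on Trn/Inf EC2 DLAMIs

# Toy stand-in for a model; a real workload would load an LLM here.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).eval()

example = torch.rand(1, 1024)

# Ahead-of-time compilation for NeuronCores; the same flow targets both
# training-oriented (Trn) and inference-oriented (Inf) instance types.
neuron_model = torch_neuronx.trace(model, example)

with torch.no_grad():
    out = neuron_model(example)  # executes on the Neuron device
print(out.shape)
```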
AMD's next-generation data-center GPU · announced late 2025 · positioned against the H200 and Blackwell with HBM4.
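On the software side, AMD's data-center GPUs run PyTorch through ROCm, where the accelerator surfaces under the standard torch.cuda API (HIP backend). A minimal detection sketch, assuming a ROCm build of PyTorch; the device name in the comment is only an example, not a claim about this unreleased part.

```python
import torch

# ROCm builds of PyTorch reuse the torch.cuda namespace via HIP,
# so no AMD-specific API is needed to find the accelerator.
if torch.cuda.is_available():
    print("HIP runtime:", torch.version.hip)  # non-None on ROCm builds
    print(torch.cuda.get_device_name(0))      # e.g. "AMD Instinct MI300X"
    props = torch.cuda.get_device_properties(0)
    print(f"{props.total_memory / 2**30:.0f} GiB of device memory")
else:
    print("No ROCm-visible GPU found")
```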
Maia 200 · Microsoft's second-gen custom AI chip · successor to Maia 100 · targets Azure OpenAI inference at scale.