CS-3
The Cerebras CS-3 is Cerebras's wafer-scale appliance AI system.
Basic
The CS-3 is an appliance AI compute system from Cerebras. It is built around a single Wafer-Scale Engine 3 (WSE-3) with roughly 900,000 AI cores and delivers 125 PFLOPS of peak AI performance. Announced in March 2024.
Deep
The CS-3 is a wafer-scale AI system. Configuration: 1× WSE-3 wafer-scale processor, 44 GB of on-wafer SRAM (expandable to petabyte-class external memory via MemoryX), 125 PFLOPS peak AI performance. Manufactured by Cerebras, shipping since 2024. Deployed in frontier datacenters by specialized AI clouds and research institutions. BenchGecko tracks this system on /systems/cerebras-cs3 with TCO, power, and deployment signals.
Expert
CS-3 specifications: 1× WSE-3 wafer-scale processor, 125 PFLOPS peak AI performance, 44 GB on-wafer SRAM; system pricing is not publicly disclosed (reportedly in the low single-digit millions per unit). System-level design choices (external memory capacity via MemoryX, SwarmX fabric topology, cluster size) drive realizable throughput well beyond aggregate FLOPS. Real-world utilization typically 40-70% of peak due to memory and interconnect bottlenecks. TCO per PFLOP-year is the operative investment metric for large-scale buyers.
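The TCO-per-PFLOP-year metric above can be sketched numerically. Every input in this snippet (price, power draw, electricity cost, opex ratio, utilization) is a hypothetical placeholder, not Cerebras pricing; only the structure of the calculation is the point.

```python
# Illustrative TCO-per-PFLOP-year calculation for an AI system.
# All numeric inputs below are hypothetical placeholders.

def tco_per_pflop_year(
    capex_usd: float,        # system purchase price
    power_kw: float,         # average draw at the wall, incl. cooling
    usd_per_kwh: float,      # blended electricity cost
    opex_ratio: float,       # yearly non-power opex as fraction of capex
    peak_pflops: float,      # vendor peak figure
    utilization: float,      # realizable fraction of peak (typ. 0.4-0.7)
    lifetime_years: float = 4.0,
) -> float:
    hours = 24 * 365 * lifetime_years
    power_cost = power_kw * usd_per_kwh * hours
    opex = capex_usd * opex_ratio * lifetime_years
    total_cost = capex_usd + power_cost + opex
    effective_pflops = peak_pflops * utilization
    return total_cost / (effective_pflops * lifetime_years)

# Hypothetical example: $2.5M system, 23 kW, $0.10/kWh, 10% yearly opex,
# 125 PFLOPS peak at 50% realized utilization over 4 years.
cost = tco_per_pflop_year(2.5e6, 23, 0.10, 0.10, 125, 0.5)
print(f"${cost:,.0f} per PFLOP-year")
```

Note how the utilization derate dominates the ranking: halving realized utilization doubles the effective cost per PFLOP-year, which is why buyers weight realizable throughput over aggregate peak FLOPS.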
Depending on why you're here
- ·CS-3: 1× WSE-3 · 125 PFLOPS peak
- ·44 GB on-wafer SRAM · released 2024
- ·Cerebras · tracked on /systems/cerebras-cs3
- ·CS-3 is what specialized AI clouds and labs buy to serve frontier models
- ·Pricing undisclosed · reportedly low single-digit millions per unit
- ·Most usage is via API rental · raw system purchase is large-buyer territory
- ·CS-3 orders telegraph large-scale AI capex intent
- ·System-level sales drive Cerebras AI revenue
- ·Watch the CS-3 / Condor Galaxy order book as a leading indicator
- ·CS-3 is a wafer-scale AI supercomputer in a single box
- ·One dinner-plate-sized chip doing the work of dozens of GPUs
- ·Costs millions · large buyers deploy them in clusters of dozens
CS-3 roughly doubles the performance of the previous-generation CS-2. Every new system reshuffles the cost-per-FLOPS leaderboard.
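The leaderboard reshuffle can be illustrated with a minimal capex-per-peak-PFLOPS comparison. The prices and peak figures in this sketch are hypothetical stand-ins, chosen only to show the shape of the computation.

```python
# Sketch of a cost-per-PFLOPS leaderboard across system generations.
# All prices and peak figures below are hypothetical illustrations.

systems = {
    "prev-gen": {"price_usd": 2.0e6, "peak_pflops": 60},
    "CS-3":     {"price_usd": 2.5e6, "peak_pflops": 125},
}

# Rank systems by capex per peak PFLOPS, cheapest first.
leaderboard = sorted(
    ((name, s["price_usd"] / s["peak_pflops"]) for name, s in systems.items()),
    key=lambda pair: pair[1],
)

for name, usd_per_pflops in leaderboard:
    print(f"{name}: ${usd_per_pflops:,.0f} per peak PFLOPS")
```

Under these placeholder numbers, the new generation wins despite a higher sticker price, because peak throughput grows faster than cost; that is the dynamic the closing sentence describes.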