Ascend 910C
Ascend 910C is Huawei's current-generation AI accelerator with on-package HBM; headline throughput and capacity figures have not been officially published.
Basic
The Ascend 910C is a recent AI chip from Huawei. Huawei has not published an official datasheet, so its FP8 throughput, HBM capacity, memory bandwidth, and power draw remain unconfirmed.
Deep
The Ascend 910C is Huawei's AI accelerator; the fabrication process has not been officially confirmed. Peak FP8 and FP16 throughput, the HBM capacity and bandwidth, and the TDP likewise lack published figures, and numbers reported by third parties vary. With no confirmed per-chip price, comparisons against other accelerators on the /hardware leaderboard rest on estimates.
Expert
The Ascend 910C (Huawei) is fabricated on a process node Huawei has not officially disclosed; FP8 throughput, HBM capacity, aggregate memory bandwidth, and TDP are likewise unpublished. System integration varies: hyperscaler dense racks (NVL72, HGX 8-GPU baseboards) versus rentable 8-GPU nodes. Throughput per watt and memory bandwidth per watt are the dominant figures of merit for LLM inference; raw peak TFLOPS matters more for training-bound workloads.
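A back-of-envelope roofline sketch shows why bandwidth per watt dominates for LLM decoding. All numbers below are hypothetical placeholders, not official Ascend 910C specs (Huawei has not published them):

```python
# Hypothetical accelerator figures (NOT official Ascend 910C specs;
# no public datasheet exists, so these are illustrative placeholders).
peak_tflops_fp8 = 700.0    # assumed peak FP8 throughput, TFLOPS
hbm_bandwidth_tbps = 3.0   # assumed HBM bandwidth, TB/s
tdp_watts = 350.0          # assumed TDP, watts

# Machine balance: FLOPs available per byte of memory traffic.
machine_balance = (peak_tflops_fp8 * 1e12) / (hbm_bandwidth_tbps * 1e12)

# Per-watt figures of merit.
tflops_per_watt = peak_tflops_fp8 / tdp_watts
gbps_per_watt = hbm_bandwidth_tbps * 1000 / tdp_watts

# Decode-phase LLM inference streams the full weight set once per token:
# roughly one multiply-add (2 FLOPs) per FP8 weight byte, so the workload's
# arithmetic intensity (~2 FLOPs/byte) sits far below machine balance.
decode_intensity = 2.0
bandwidth_bound = decode_intensity < machine_balance
```

Whenever the workload's intensity sits below machine balance, tokens/sec tracks memory bandwidth, not peak TFLOPS, which is why bandwidth per watt is the figure to compare.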
Depending on why you're here
- Ascend 910C: Huawei AI accelerator · FP8 throughput and HBM capacity not officially published
- Process node, TDP, and release date unconfirmed · reported figures vary by source
- Built by Huawei · track specs vs alternatives on the /hardware page
- Use Ascend 910C for training or inference workloads sized to its HBM capacity
- Real-world throughput usually lands at 40-70% of peak TFLOPS
- Rentable through cloud providers · see /hardware/ascend-910c for provider availability
- Ascend 910C is Huawei's bet on the current accelerator generation
- Supply constrained by HBM availability and foundry capacity
- Per-chip pricing is not publicly disclosed, which obscures Huawei's margin position
- Ascend 910C is a specialized chip for running AI models, made by Huawei
- Packs high-bandwidth memory (HBM) so big models can fit
- Cloud providers rent these out by the hour for training and serving models
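The 40-70%-of-peak rule of thumb above converts a peak number into a sustained-throughput band. The peak figure here is a hypothetical placeholder, not a published Ascend 910C spec:

```python
def effective_tflops(peak_tflops: float, utilization: float) -> float:
    """Sustained throughput given a peak figure and a utilization fraction."""
    if not 0.0 < utilization <= 1.0:
        raise ValueError("utilization must be in (0, 1]")
    return peak_tflops * utilization

# Hypothetical peak (no official 910C figure is public):
peak = 700.0
low = effective_tflops(peak, 0.40)   # pessimistic end of the band
high = effective_tflops(peak, 0.70)  # optimistic end of the band
# roughly 280 to 490 TFLOPS sustained under the assumed peak
```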
Ascend 910C competes on a different axis than raw FLOPS · check real-world tokens/sec per dollar · cross-compare on /hardware.
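A minimal tokens/sec-per-dollar comparison, using made-up throughput and rental numbers (no measured 910C figures are assumed):

```python
def tokens_per_dollar(tokens_per_second: float, hourly_rate_usd: float) -> float:
    """Tokens generated per dollar of hourly rental cost."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / hourly_rate_usd

# Hypothetical example: a node serving 5,000 tok/s rented at $12/hour.
tokens_per_dollar(5000, 12.0)  # 1,500,000 tokens per dollar
```

The same formula applied to each accelerator's measured serving throughput and rental rate gives the cross-comparison axis the /hardware page refers to.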