Compare · ModelsLive · head to head
DeepSeek V3.2 Speciale vs Qwen3.5 397B A17B vs MiMo-V2-Flash
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Qwen3.5 397B A17B wins 8 of 11 shared benchmarks, leading in speed, knowledge, and coding.
Category leads
speed · Qwen3.5 397B A17B
math · DeepSeek V3.2 Speciale
knowledge · Qwen3.5 397B A17B
language · DeepSeek V3.2 Speciale
coding · Qwen3.5 397B A17B
arena · Qwen3.5 397B A17B
Hype vs Reality
Attention vs performance
DeepSeek V3.2 Speciale
#4 by perf · #5 by attention
Qwen3.5 397B A17B
#3 by perf · no signal
MiMo-V2-Flash
#9 by perf · #12 by attention
Best value
MiMo-V2-Flash
3.9x better value than DeepSeek V3.2 Speciale
| Model | Value | Blended price |
|---|---|---|
| DeepSeek V3.2 Speciale | 97.8 pts/$ | $0.80/M |
| Qwen3.5 397B A17B | 57.4 pts/$ | $1.36/M |
| MiMo-V2-Flash | 385.8 pts/$ | $0.19/M |
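The pts/$ figures are consistent with a simple derivation. A minimal sketch, under two assumptions inferred by back-solving the displayed numbers (neither is stated on the page): the blended price is the plain mean of input and output $/M, and value divides a composite benchmark score by that blended price.

```python
# Hedged sketch of the "pts/$" value metric shown above.
# Assumptions (inferred, not stated on the page):
#   * blended $/M = mean of input and output price per 1M tokens
#   * value (pts/$) = composite score / blended $/M
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens (simple input/output average)."""
    return (input_per_m + output_per_m) / 2

def value_pts_per_dollar(score: float, input_per_m: float, output_per_m: float) -> float:
    """Benchmark points bought per blended dollar."""
    return score / blended_price(input_per_m, output_per_m)

# MiMo-V2-Flash: $0.09 in / $0.29 out -> $0.19/M blended, matching the card.
print(round(blended_price(0.09, 0.29), 2))  # 0.19
# Back-solving from the card, MiMo's implied composite score is
# roughly 385.8 pts/$ x $0.19/M ~= 73.3 (a derived figure, not published).
```

The same mean reproduces the other cards: (0.40 + 1.20) / 2 = $0.80/M for DeepSeek V3.2 Speciale and (0.39 + 2.34) / 2 ≈ $1.36/M for Qwen3.5 397B A17B.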
Vendor risk
Mixed exposure
One or more vendors flagged
DeepSeek
$3.4B · Tier 1
Alibaba (Qwen)
$293.0B · Tier 1
Xiaomi (MiMo)
private · undisclosed
Head to head
11 benchmarks · 3 models
| Benchmark | Leader | Margin |
|---|---|---|
| Artificial Analysis · Agentic Index | Qwen3.5 397B A17B | +7.1 |
| Artificial Analysis · Coding Index | Qwen3.5 397B A17B | +3.4 |
| Artificial Analysis · Quality Index | Qwen3.5 397B A17B | +3.6 |
| OpenCompass · AIME2025 | DeepSeek V3.2 Speciale | +3.1 |
| OpenCompass · GPQA-Diamond | Qwen3.5 397B A17B | +1.7 |
| OpenCompass · HLE | DeepSeek V3.2 Speciale | +1.1 |
| OpenCompass · IFEval | DeepSeek V3.2 Speciale | +0.2 |
| OpenCompass · LiveCodeBenchV6 | Qwen3.5 397B A17B | +2.1 |
| OpenCompass · MMLU-Pro | Qwen3.5 397B A17B | +2.1 |
| Chatbot Arena Elo · Coding | Qwen3.5 397B A17B | +49.5 |
| Chatbot Arena Elo · Overall | Qwen3.5 397B A17B | +55.7 |

Per-model scores for every benchmark are in the full benchmark table below.
Full benchmark table
| Benchmark | DeepSeek V3.2 Speciale | Qwen3.5 397B A17B | MiMo-V2-Flash |
|---|---|---|---|
| Artificial Analysis · Agentic Index | — | 55.8 | 48.8 |
| Artificial Analysis · Coding Index | 37.9 | 41.3 | 33.5 |
| Artificial Analysis · Quality Index | 29.4 | 45.0 | 41.5 |
| OpenCompass · AIME2025 | 96.0 | 92.3 | 92.9 |
| OpenCompass · GPQA-Diamond | 86.7 | 88.4 | 82.1 |
| OpenCompass · HLE | 28.6 | 27.5 | 20.5 |
| OpenCompass · IFEval | 91.7 | 91.5 | 89.5 |
| OpenCompass · LiveCodeBenchV6 | 80.9 | 83.0 | 71.8 |
| OpenCompass · MMLU-Pro | 85.5 | 87.6 | 83.1 |
| Chatbot Arena Elo · Coding | — | 1386.1 | 1336.5 |
| Chatbot Arena Elo · Overall | — | 1447.7 | 1392.1 |
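The 8-of-11 headline can be rechecked directly from this table. A sketch (the short model keys are mine; DeepSeek's Agentic Index entry is treated as unscored, since the head-to-head margin there is +7.1 over MiMo, implying DeepSeek was excluded from that comparison):

```python
# Recount per-model benchmark wins from the full table above.
# DeepSeek is left out of rows where it has no score (Agentic Index
# and both Chatbot Arena rows).
scores = {
    "Agentic Index":   {"Qwen3.5": 55.8, "MiMo": 48.8},
    "Coding Index":    {"DeepSeek": 37.9, "Qwen3.5": 41.3, "MiMo": 33.5},
    "Quality Index":   {"DeepSeek": 29.4, "Qwen3.5": 45.0, "MiMo": 41.5},
    "AIME2025":        {"DeepSeek": 96.0, "Qwen3.5": 92.3, "MiMo": 92.9},
    "GPQA-Diamond":    {"DeepSeek": 86.7, "Qwen3.5": 88.4, "MiMo": 82.1},
    "HLE":             {"DeepSeek": 28.6, "Qwen3.5": 27.5, "MiMo": 20.5},
    "IFEval":          {"DeepSeek": 91.7, "Qwen3.5": 91.5, "MiMo": 89.5},
    "LiveCodeBenchV6": {"DeepSeek": 80.9, "Qwen3.5": 83.0, "MiMo": 71.8},
    "MMLU-Pro":        {"DeepSeek": 85.5, "Qwen3.5": 87.6, "MiMo": 83.1},
    "Arena Coding":    {"Qwen3.5": 1386.1, "MiMo": 1336.5},
    "Arena Overall":   {"Qwen3.5": 1447.7, "MiMo": 1392.1},
}

wins: dict[str, int] = {}
for row in scores.values():
    leader = max(row, key=row.get)  # highest score among models present
    wins[leader] = wins.get(leader, 0) + 1

print(wins)  # {'Qwen3.5': 8, 'DeepSeek': 3}
```

MiMo-V2-Flash takes no outright wins, which matches its absence from the category leads.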
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens (~82 books) | $6.00 |
| Qwen3.5 397B A17B | $0.39 | $2.34 | 262K tokens (~131 books) | $8.78 |
| MiMo-V2-Flash | $0.09 | $0.29 | 262K tokens (~131 books) | $1.40 |
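The projected $/mo column is reproducible under one assumption, inferred by back-solving the table rather than stated on the page: the 10M monthly tokens split 75% input / 25% output. A sketch:

```python
# Projected monthly cost at 10M tokens/month, assuming a 75/25
# input/output token split (an inference from the table, not a stated rule).
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    input_tokens_m = total_m * input_share
    output_tokens_m = total_m - input_tokens_m
    return input_tokens_m * input_per_m + output_tokens_m * output_per_m

print(monthly_cost(0.40, 1.20))  # 6.0 -> DeepSeek V3.2 Speciale's $6.00
print(monthly_cost(0.39, 2.34))  # ~8.78 -> Qwen3.5 397B A17B
print(monthly_cost(0.09, 0.29))  # ~1.40 -> MiMo-V2-Flash
```

A 50/50 split would not reproduce the column (e.g. 5 × $0.40 + 5 × $1.20 = $8.00, not $6.00), which is why the 75/25 assumption is used here.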