DeepSeek V3.2 Speciale vs Qwen3.5 397B A17B
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Qwen3.5 397B A17B wins 6 of 9 shared benchmarks, leading in the speed, knowledge, and coding categories.
Category leads
- Speed: Qwen3.5 397B A17B
- Math: DeepSeek V3.2 Speciale
- Knowledge: Qwen3.5 397B A17B
- Language: DeepSeek V3.2 Speciale
- Coding: Qwen3.5 397B A17B
Hype vs Reality
Attention vs performance:

- DeepSeek V3.2 Speciale: #4 by performance · #5 by attention
- Qwen3.5 397B A17B: #3 by performance · no attention signal
Best value
DeepSeek V3.2 Speciale offers 1.7x better value than Qwen3.5 397B A17B.

| Model | Value | Blended price |
|---|---|---|
| DeepSeek V3.2 Speciale | 97.8 pts/$ | $0.80/M |
| Qwen3.5 397B A17B | 57.4 pts/$ | $1.36/M |
Vendor risk
Mixed exposure · one or more vendors flagged.

- DeepSeek: $3.4B · Tier 1
- Alibaba (Qwen): $293.0B · Tier 1
Head to head
9 benchmarks · 2 models
Scores below are DeepSeek V3.2 Speciale vs Qwen3.5 397B A17B.

- Artificial Analysis · Agentic Index: Qwen3.5 397B A17B leads by +55.8 (0.0 vs 55.8)
- Artificial Analysis · Coding Index: Qwen3.5 397B A17B leads by +3.4 (37.9 vs 41.3)
- Artificial Analysis · Quality Index: Qwen3.5 397B A17B leads by +15.6 (29.4 vs 45.0)
- OpenCompass · AIME2025: DeepSeek V3.2 Speciale leads by +3.7 (96.0 vs 92.3)
- OpenCompass · GPQA-Diamond: Qwen3.5 397B A17B leads by +1.7 (86.7 vs 88.4)
- OpenCompass · HLE: DeepSeek V3.2 Speciale leads by +1.1 (28.6 vs 27.5)
- OpenCompass · IFEval: DeepSeek V3.2 Speciale leads by +0.2 (91.7 vs 91.5)
- OpenCompass · LiveCodeBenchV6: Qwen3.5 397B A17B leads by +2.1 (80.9 vs 83.0)
- OpenCompass · MMLU-Pro: Qwen3.5 397B A17B leads by +2.1 (85.5 vs 87.6)
Full benchmark table
| Benchmark | DeepSeek V3.2 Speciale | Qwen3.5 397B A17B |
|---|---|---|
| Artificial Analysis · Agentic Index | 0.0 | 55.8 |
| Artificial Analysis · Coding Index | 37.9 | 41.3 |
| Artificial Analysis · Quality Index | 29.4 | 45.0 |
| OpenCompass · AIME2025 | 96.0 | 92.3 |
| OpenCompass · GPQA-Diamond | 86.7 | 88.4 |
| OpenCompass · HLE | 28.6 | 27.5 |
| OpenCompass · IFEval | 91.7 | 91.5 |
| OpenCompass · LiveCodeBenchV6 | 80.9 | 83.0 |
| OpenCompass · MMLU-Pro | 85.5 | 87.6 |
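The "6 of 9" headline can be checked directly from the table above. A minimal tally, assuming every benchmark is higher-is-better (consistent with the "leads by" deltas shown on this page):

```python
# Tally head-to-head wins from the shared-benchmark table.
scores = {  # benchmark: (DeepSeek V3.2 Speciale, Qwen3.5 397B A17B)
    "AA Agentic Index": (0.0, 55.8),
    "AA Coding Index": (37.9, 41.3),
    "AA Quality Index": (29.4, 45.0),
    "AIME2025": (96.0, 92.3),
    "GPQA-Diamond": (86.7, 88.4),
    "HLE": (28.6, 27.5),
    "IFEval": (91.7, 91.5),
    "LiveCodeBenchV6": (80.9, 83.0),
    "MMLU-Pro": (85.5, 87.6),
}

deepseek_wins = sum(d > q for d, q in scores.values())
qwen_wins = sum(q > d for d, q in scores.values())
print(deepseek_wins, qwen_wins)  # 3 and 6 of 9 shared benchmarks
```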
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens (~82 books) | $6.00 |
| Qwen3.5 397B A17B | $0.39 | $2.34 | 262K tokens (~131 books) | $8.78 |
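The pricing and "Best value" figures on this page are mutually consistent under a few simple formulas. The formulas below — a 1:1 input:output blend for the per-M price, a 3:1 input:output mix for the 10M-token monthly projection, and the mean of the six OpenCompass scores as the "pts" numerator — are assumptions reverse-engineered from the published numbers, not a documented methodology:

```python
# Hedged sketch: reproduce this page's pts/$, blended price, and projected
# monthly cost. The blend ratios and score definition are inferred, not given.

def value_stats(input_price, output_price, opencompass_scores):
    """Return (pts_per_dollar, blended_price_per_M, projected_monthly_usd)."""
    blended = (input_price + output_price) / 2        # assumed 1:1 blend
    mean_score = sum(opencompass_scores) / len(opencompass_scores)
    monthly = 7.5 * input_price + 2.5 * output_price  # 10M tokens, assumed 3:1 mix
    return mean_score / blended, blended, monthly

deepseek = value_stats(0.40, 1.20, [96.0, 86.7, 28.6, 91.7, 80.9, 85.5])
qwen = value_stats(0.39, 2.34, [92.3, 88.4, 27.5, 91.5, 83.0, 87.6])

print(f"DeepSeek V3.2 Speciale: {deepseek[0]:.1f} pts/$ · ${deepseek[1]:.2f}/M")
print(f"Qwen3.5 397B A17B:      {qwen[0]:.1f} pts/$ · ${qwen[1]:.2f}/M")
print(f"Value ratio: {deepseek[0] / qwen[0]:.1f}x")
```

Under these assumptions the outputs land on 97.8 vs 57.4 pts/$ (a 1.7x ratio) and monthly projections of $6.00 and $8.78, matching the figures above.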