Qwen3.5 397B A17B vs DeepSeek V3.2 Speciale
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Qwen3.5 397B A17B wins 6 of 9 shared benchmarks, leading in speed, knowledge, and coding.
Category leads
- speed · Qwen3.5 397B A17B
- math · DeepSeek V3.2 Speciale
- knowledge · Qwen3.5 397B A17B
- language · DeepSeek V3.2 Speciale
- coding · Qwen3.5 397B A17B
Hype vs Reality
Attention vs performance
- Qwen3.5 397B A17B: #3 by performance · no attention signal
- DeepSeek V3.2 Speciale: #4 by performance · #5 by attention
Best value
DeepSeek V3.2 Speciale · 1.7x better value than Qwen3.5 397B A17B

| Model | Value | Price |
|---|---|---|
| Qwen3.5 397B A17B | 57.4 pts/$ | $1.36/M |
| DeepSeek V3.2 Speciale | 97.8 pts/$ | $0.80/M |
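The 1.7x figure follows directly from the two pts/$ numbers above; a quick arithmetic check:

```python
# Value scores (pts/$) copied from the Best value table
qwen_value = 57.4      # Qwen3.5 397B A17B
deepseek_value = 97.8  # DeepSeek V3.2 Speciale

ratio = deepseek_value / qwen_value
print(f"DeepSeek V3.2 Speciale offers {ratio:.1f}x the value per dollar")
# prints: DeepSeek V3.2 Speciale offers 1.7x the value per dollar
```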
Vendor risk
Mixed exposure: one or more vendors flagged.
- Alibaba (Qwen): $293.0B · Tier 1
- DeepSeek: $3.4B · Tier 1
Head to head
9 benchmarks · 2 models
- Artificial Analysis · Agentic Index: Qwen3.5 397B A17B leads by +55.8 (55.8 vs 0.0)
- Artificial Analysis · Coding Index: Qwen3.5 397B A17B leads by +3.4 (41.3 vs 37.9)
- Artificial Analysis · Quality Index: Qwen3.5 397B A17B leads by +15.6 (45.0 vs 29.4)
- OpenCompass · AIME2025: DeepSeek V3.2 Speciale leads by +3.7 (96.0 vs 92.3)
- OpenCompass · GPQA-Diamond: Qwen3.5 397B A17B leads by +1.7 (88.4 vs 86.7)
- OpenCompass · HLE: DeepSeek V3.2 Speciale leads by +1.1 (28.6 vs 27.5)
- OpenCompass · IFEval: DeepSeek V3.2 Speciale leads by +0.2 (91.7 vs 91.5)
- OpenCompass · LiveCodeBenchV6: Qwen3.5 397B A17B leads by +2.1 (83.0 vs 80.9)
- OpenCompass · MMLU-Pro: Qwen3.5 397B A17B leads by +2.1 (87.6 vs 85.5)
Full benchmark table
| Benchmark | Qwen3.5 397B A17B | DeepSeek V3.2 Speciale |
|---|---|---|
| Artificial Analysis · Agentic Index | 55.8 | 0.0 |
| Artificial Analysis · Coding Index | 41.3 | 37.9 |
| Artificial Analysis · Quality Index | 45.0 | 29.4 |
| OpenCompass · AIME2025 | 92.3 | 96.0 |
| OpenCompass · GPQA-Diamond | 88.4 | 86.7 |
| OpenCompass · HLE | 27.5 | 28.6 |
| OpenCompass · IFEval | 91.5 | 91.7 |
| OpenCompass · LiveCodeBenchV6 | 83.0 | 80.9 |
| OpenCompass · MMLU-Pro | 87.6 | 85.5 |
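The 6-of-9 tally in the winner summary can be reproduced from this table; a minimal sketch with the scores copied verbatim:

```python
# Scores from the benchmark table: (Qwen3.5 397B A17B, DeepSeek V3.2 Speciale)
scores = {
    "Artificial Analysis · Agentic Index": (55.8, 0.0),
    "Artificial Analysis · Coding Index": (41.3, 37.9),
    "Artificial Analysis · Quality Index": (45.0, 29.4),
    "OpenCompass · AIME2025": (92.3, 96.0),
    "OpenCompass · GPQA-Diamond": (88.4, 86.7),
    "OpenCompass · HLE": (27.5, 28.6),
    "OpenCompass · IFEval": (91.5, 91.7),
    "OpenCompass · LiveCodeBenchV6": (83.0, 80.9),
    "OpenCompass · MMLU-Pro": (87.6, 85.5),
}

# Count the benchmarks where the first model's score is strictly higher
qwen_wins = sum(q > d for q, d in scores.values())
print(f"Qwen3.5 397B A17B wins {qwen_wins} of {len(scores)} shared benchmarks")
# prints: Qwen3.5 397B A17B wins 6 of 9 shared benchmarks
```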
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Qwen3.5 397B A17B | $0.39 | $2.34 | 262K tokens (~131 books) | $8.78 |
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens (~82 books) | $6.00 |
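The per-model $/M figures in the Best value section and the projected $/mo figures here are consistent with a 50/50 input/output price blend and a 3:1 input:output token mix, respectively. These mixes are inferred assumptions, not stated on the page; a sketch under those assumptions:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    # Assumed 50/50 blend of input and output $/M;
    # reproduces the $1.36/M and $0.80/M value-section figures (after rounding)
    return (input_per_m + output_per_m) / 2

def projected_monthly(input_per_m: float, output_per_m: float,
                      total_m_tokens: float = 10, input_share: float = 0.75) -> float:
    # Assumed 3:1 input:output token split over 10M tokens/mo;
    # reproduces the $8.78 and $6.00 projections above
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1 - input_share)
    return input_per_m * input_m + output_per_m * output_m

print(blended_price(0.39, 2.34))      # 1.365 -> shown as $1.36/M
print(projected_monthly(0.39, 2.34))  # 8.775 -> shown as $8.78/mo
print(projected_monthly(0.40, 1.20))  # 6.0   -> shown as $6.00/mo
```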
People also compared
- GPT-5 Chat vs Qwen3.5 397B A17B
- Claude Mythos Preview vs Qwen3.5 397B A17B
- DeepSeek V3.2 Speciale vs GPT-5 Chat
- Claude Mythos Preview vs DeepSeek V3.2 Speciale
- Claude Instant vs Qwen3.5 397B A17B
- Claude Instant vs DeepSeek V3.2 Speciale
- Qwen3.5 397B A17B vs Step 3.5 Flash
- DeepSeek V3.2 Speciale vs Step 3.5 Flash