DeepSeek V3.2 vs Qwen3 32B
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
DeepSeek V3.2 wins 8 of 8 shared benchmarks, with its largest leads in coding, arena, and math.
Category leads
DeepSeek V3.2 leads in all five categories: coding, arena, math, knowledge, and language.
Hype vs Reality
Attention vs performance
DeepSeek V3.2 · #82 by performance · no attention signal
Qwen3 32B · #48 by performance · no attention signal
Best value
Qwen3 32B · 2.2x better value than DeepSeek V3.2

| Model | Value (pts/$) | Blended price |
|---|---|---|
| DeepSeek V3.2 | 165.6 | $0.32/M |
| Qwen3 32B | 363.8 | $0.16/M |
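The 2.2x figure follows directly from the value scores above. A minimal sketch, assuming "pts/$" is a site-defined aggregate benchmark score divided by the blended per-million-token price:

```python
# Value comparison from the table above.
# Assumption: "pts/$" = aggregate benchmark score / blended $/M price;
# the exact aggregation is defined by the site, not derived here.
deepseek_value = 165.6  # pts/$ at $0.32/M blended
qwen_value = 363.8      # pts/$ at $0.16/M blended

ratio = qwen_value / deepseek_value
print(f"Qwen3 32B value advantage: {ratio:.1f}x")  # → 2.2x
```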
Vendor risk
Mixed exposure · one or more vendors flagged.

| Vendor | Valuation | Tier |
|---|---|---|
| DeepSeek | $3.4B | Tier 1 |
| Alibaba (Qwen) | $293.0B | Tier 1 |
Head to head
8 benchmarks · 2 models
| Benchmark | DeepSeek V3.2 | Qwen3 32B | Lead |
|---|---|---|---|
| Aider Polyglot | 74.2 | 40.0 | DeepSeek V3.2 +34.2 |
| Chatbot Arena Elo · Overall | 1424.4 | 1347.0 | DeepSeek V3.2 +77.4 |
| OpenCompass · AIME2025 | 93.0 | 70.3 | DeepSeek V3.2 +22.7 |
| OpenCompass · GPQA-Diamond | 84.6 | 67.3 | DeepSeek V3.2 +17.3 |
| OpenCompass · HLE | 23.2 | 8.5 | DeepSeek V3.2 +14.7 |
| OpenCompass · IFEval | 89.7 | 86.0 | DeepSeek V3.2 +3.7 |
| OpenCompass · LiveCodeBenchV6 | 75.4 | 57.6 | DeepSeek V3.2 +17.8 |
| OpenCompass · MMLU-Pro | 85.8 | 78.0 | DeepSeek V3.2 +7.8 |

Aider Polyglot measures how well AI models can edit code across multiple programming languages using the Aider coding-assistant framework.
Full benchmark table
| Benchmark | DeepSeek V3.2 | Qwen3 32B |
|---|---|---|
| Aider Polyglot | 74.2 | 40.0 |
| Chatbot Arena Elo · Overall | 1424.4 | 1347.0 |
| OpenCompass · AIME2025 | 93.0 | 70.3 |
| OpenCompass · GPQA-Diamond | 84.6 | 67.3 |
| OpenCompass · HLE | 23.2 | 8.5 |
| OpenCompass · IFEval | 89.7 | 86.0 |
| OpenCompass · LiveCodeBenchV6 | 75.4 | 57.6 |
| OpenCompass · MMLU-Pro | 85.8 | 78.0 |
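The 8-of-8 win count and the per-benchmark leads can be reproduced from the scores above; a quick sketch:

```python
# Scores copied from the benchmark table: (DeepSeek V3.2, Qwen3 32B).
scores = {
    "Aider Polyglot": (74.2, 40.0),
    "Chatbot Arena Elo · Overall": (1424.4, 1347.0),
    "OpenCompass · AIME2025": (93.0, 70.3),
    "OpenCompass · GPQA-Diamond": (84.6, 67.3),
    "OpenCompass · HLE": (23.2, 8.5),
    "OpenCompass · IFEval": (89.7, 86.0),
    "OpenCompass · LiveCodeBenchV6": (75.4, 57.6),
    "OpenCompass · MMLU-Pro": (85.8, 78.0),
}

# Count benchmarks where DeepSeek's score is higher, and compute leads.
wins = sum(1 for ds, qw in scores.values() if ds > qw)
leads = {name: round(ds - qw, 1) for name, (ds, qw) in scores.items()}

print(f"DeepSeek V3.2 wins {wins} of {len(scores)}")  # → wins 8 of 8
print(leads["Aider Polyglot"])                        # → 34.2
```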
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| DeepSeek V3.2 | $0.26 | $0.38 | 164K tokens (~82 books) | $2.90 |
| Qwen3 32B | $0.08 | $0.24 | 41K tokens (~20 books) | $1.20 |
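The projected monthly figures are consistent with a 75% input / 25% output token split at 10M tokens per month. That split is an inference from the numbers, not something the page states; a minimal sketch under that assumption:

```python
# Monthly cost projection at 10M tokens/month.
# Assumption (mine, inferred from the table): 75% input, 25% output.
def monthly_cost(input_per_m, output_per_m, total_m=10, input_share=0.75):
    """Cost in dollars for total_m million tokens at the given $/M rates."""
    blended = input_share * input_per_m + (1 - input_share) * output_per_m
    return round(total_m * blended, 2)

print(monthly_cost(0.26, 0.38))  # DeepSeek V3.2 → 2.9
print(monthly_cost(0.08, 0.24))  # Qwen3 32B    → 1.2
```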