Gemini 2.5 Pro vs DeepSeek V3.2 Speciale
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
DeepSeek V3.2 Speciale wins 6 of 9 shared benchmarks, leading in math, knowledge, and language.
Category leads
- speed · Gemini 2.5 Pro
- math · DeepSeek V3.2 Speciale
- knowledge · DeepSeek V3.2 Speciale
- language · DeepSeek V3.2 Speciale
- coding · DeepSeek V3.2 Speciale
Hype vs Reality
Attention vs performance
- Gemini 2.5 Pro: #59 by performance · no attention signal
- DeepSeek V3.2 Speciale: #4 by performance · #5 by attention
Best value
DeepSeek V3.2 Speciale offers 9.8x better value than Gemini 2.5 Pro.

| Model | Value | Blended price |
|---|---|---|
| Gemini 2.5 Pro | 10.0 pts/$ | $5.63/M |
| DeepSeek V3.2 Speciale | 97.8 pts/$ | $0.80/M |
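How these figures fit together: the blended price looks like the simple mean of the input and output prices from the pricing table below (it matches the $5.63/M and $0.80/M shown), and the 9.8x headline is the ratio of the two pts/$ values. The underlying points score is not listed in this section, so the pts/$ figures are taken as given. A minimal sketch under those assumptions:

```python
# Sketch: reproduce the best-value figures from the prices on this page.
# Assumption: "blended" price = mean of input and output prices per 1M tokens
# (matches the $5.63/M and $0.80/M shown); pts/$ values are copied as shown,
# since the underlying score is not listed in this section.
models = {
    "Gemini 2.5 Pro":         {"input": 1.25, "output": 10.00, "pts_per_dollar": 10.0},
    "DeepSeek V3.2 Speciale": {"input": 0.40, "output": 1.20,  "pts_per_dollar": 97.8},
}

for name, m in models.items():
    blended = (m["input"] + m["output"]) / 2   # 5.625 and 0.80 ($ per 1M tokens)
    print(name, blended, m["pts_per_dollar"], "pts/$")

ratio = (models["DeepSeek V3.2 Speciale"]["pts_per_dollar"]
         / models["Gemini 2.5 Pro"]["pts_per_dollar"])
print(f"Value ratio: {ratio:.1f}x")            # -> 9.8x
```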
Vendor risk
Mixed exposure: one or more vendors flagged.

- Google DeepMind · $4.00T · Tier 1
- DeepSeek · $3.4B · Tier 1
Head to head
9 benchmarks · 2 models
- Artificial Analysis · Agentic Index: Gemini 2.5 Pro leads by +32.7
- Artificial Analysis · Coding Index: DeepSeek V3.2 Speciale leads by +5.9
- Artificial Analysis · Quality Index: Gemini 2.5 Pro leads by +5.2
- OpenCompass · AIME2025: DeepSeek V3.2 Speciale leads by +7.3
- OpenCompass · GPQA-Diamond: DeepSeek V3.2 Speciale leads by +2.0
- OpenCompass · HLE: DeepSeek V3.2 Speciale leads by +7.5
- OpenCompass · IFEval: DeepSeek V3.2 Speciale leads by +1.7
- OpenCompass · LiveCodeBenchV6: DeepSeek V3.2 Speciale leads by +9.6
- OpenCompass · MMLU-Pro: Gemini 2.5 Pro leads by +0.3
Full benchmark table
| Benchmark | Gemini 2.5 Pro | DeepSeek V3.2 Speciale |
|---|---|---|
| Artificial Analysis · Agentic Index | 32.7 | 0.0 |
| Artificial Analysis · Coding Index | 31.9 | 37.9 |
| Artificial Analysis · Quality Index | 34.6 | 29.4 |
| OpenCompass · AIME2025 | 88.7 | 96.0 |
| OpenCompass · GPQA-Diamond | 84.7 | 86.7 |
| OpenCompass · HLE | 21.1 | 28.6 |
| OpenCompass · IFEval | 90.0 | 91.7 |
| OpenCompass · LiveCodeBenchV6 | 71.3 | 80.9 |
| OpenCompass · MMLU-Pro | 85.8 | 85.5 |
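The 6-of-9 win count quoted in the summary can be recomputed directly from this table; a minimal sketch (scores copied from the rows above, tuples ordered Gemini then DeepSeek):

```python
# Sketch: recount benchmark wins from the full benchmark table above.
scores = {
    "Artificial Analysis · Agentic Index": (32.7, 0.0),
    "Artificial Analysis · Coding Index":  (31.9, 37.9),
    "Artificial Analysis · Quality Index": (34.6, 29.4),
    "OpenCompass · AIME2025":              (88.7, 96.0),
    "OpenCompass · GPQA-Diamond":          (84.7, 86.7),
    "OpenCompass · HLE":                   (21.1, 28.6),
    "OpenCompass · IFEval":                (90.0, 91.7),
    "OpenCompass · LiveCodeBenchV6":       (71.3, 80.9),
    "OpenCompass · MMLU-Pro":              (85.8, 85.5),
}

gemini_wins = sum(1 for g, d in scores.values() if g > d)
deepseek_wins = sum(1 for g, d in scores.values() if d > g)
print(f"Gemini 2.5 Pro wins {gemini_wins}, DeepSeek V3.2 Speciale wins {deepseek_wins}")  # 3 vs 6
```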
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 2.5 Pro | $1.25 | $10.00 | 1.0M tokens (~524 books) | $34.38 |
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens (~82 books) | $6.00 |
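The projected monthly figure follows from the per-token prices. The page does not state how the 10M monthly tokens split between input and output, but a 3:1 input:output split (7.5M in, 2.5M out) reproduces the $34.38 and $6.00 figures exactly; a minimal sketch under that assumption:

```python
# Sketch: reproduce the projected $/mo column.
# Assumption: 10M tokens per month, split 3:1 input:output (not stated on the page,
# but this split matches the $34.38 and $6.00 figures shown above).

def monthly_cost(input_price: float, output_price: float,
                 total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Prices are $ per 1M tokens; total_tokens_m is millions of tokens per month."""
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1 - input_share)
    return input_m * input_price + output_m * output_price

print(monthly_cost(1.25, 10.00))  # 34.375 -> shown as $34.38 (Gemini 2.5 Pro)
print(monthly_cost(0.40, 1.20))   # 6.0    -> shown as $6.00  (DeepSeek V3.2 Speciale)
```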