DeepSeek V3.2 vs Gemini 2.5 Pro
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
DeepSeek V3.2 wins 11 of the 21 shared benchmarks (Gemini 2.5 Pro takes 9; OpenCompass MMLU-Pro is a tie). Leads in speed · coding · reasoning.
Category leads
| Category | Leader |
|---|---|
| speed | DeepSeek V3.2 |
| coding | DeepSeek V3.2 |
| reasoning | DeepSeek V3.2 |
| arena | DeepSeek V3.2 |
| knowledge | Gemini 2.5 Pro |
| math | DeepSeek V3.2 |
| language | Gemini 2.5 Pro |
Hype vs Reality
Attention vs performance
DeepSeek V3.2 · #82 by performance · no signal
Gemini 2.5 Pro · #59 by performance · no signal
Best value
DeepSeek V3.2 · 16.6x better value than Gemini 2.5 Pro

| Model | Value (pts/$) | Blended price |
|---|---|---|
| DeepSeek V3.2 | 165.6 | $0.32/M |
| Gemini 2.5 Pro | 10.0 | $5.63/M |
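The value figures follow from simple arithmetic over the pricing listed at the bottom of this page. A minimal sketch, assuming the blended price is a plain 50/50 average of input and output rates (an assumption that reproduces the $0.32/M and $5.63/M figures above; the pts/$ scores are taken as reported, since the dashboard does not specify their scoring basis):

```python
# Blended price and value ratio behind the "Best value" card.
# Assumption: blended $/M = mean of input and output rates, which
# reproduces the $0.32/M and $5.63/M figures shown above.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/M tokens under an assumed 50/50 input/output mix."""
    return (input_per_m + output_per_m) / 2

print(round(blended_price(0.26, 0.38), 2))   # DeepSeek V3.2  -> 0.32
print(round(blended_price(1.25, 10.00), 3))  # Gemini 2.5 Pro -> 5.625 ($5.63)

# pts/$ as reported by the dashboard:
print(round(165.6 / 10.0, 2))  # -> 16.56, shown as "16.6x better value"
```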
Vendor risk
Mixed exposure · one or more vendors flagged

| Vendor | Valuation | Tier |
|---|---|---|
| DeepSeek | $3.4B | Tier 1 |
| Google DeepMind | $4.00T | Tier 1 |
Head to head
21 benchmarks · 2 models
Artificial Analysis · Agentic Index
DeepSeek V3.2 leads by +20.2
DeepSeek V3.2 52.9 · Gemini 2.5 Pro 32.7
Artificial Analysis · Coding Index
DeepSeek V3.2 leads by +4.8
DeepSeek V3.2 36.7 · Gemini 2.5 Pro 31.9
Artificial Analysis · Quality Index
DeepSeek V3.2 leads by +7.1
DeepSeek V3.2 41.7 · Gemini 2.5 Pro 34.6
Aider Polyglot
Gemini 2.5 Pro leads by +8.9
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
DeepSeek V3.2 74.2 · Gemini 2.5 Pro 83.1
ARC-AGI
DeepSeek V3.2 leads by +16.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
DeepSeek V3.2 57.0 · Gemini 2.5 Pro 41.0
ARC-AGI-2
Gemini 2.5 Pro leads by +0.9
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
DeepSeek V3.2 4.0 · Gemini 2.5 Pro 4.9
Chatbot Arena Elo · Coding
DeepSeek V3.2 leads by +124.9
DeepSeek V3.2 1326.9 · Gemini 2.5 Pro 1202.0
Chatbot Arena Elo · Overall
Gemini 2.5 Pro leads by +23.8
DeepSeek V3.2 1424.4 · Gemini 2.5 Pro 1448.2
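As a rough guide, an Elo gap maps to an expected head-to-head win rate via the standard logistic formula used by Elo-style ratings; a minimal sketch, assuming the usual 400-point scale applies to these Arena scores:

```python
# Expected win probability for the higher-rated model under the
# standard Elo logistic model: E = 1 / (1 + 10**(-delta / 400)).

def elo_expected(delta: float) -> float:
    """Probability the higher-rated model wins a single matchup."""
    return 1.0 / (1.0 + 10 ** (-delta / 400))

print(f"{elo_expected(124.9):.1%}")  # coding gap: DeepSeek favored, ~67.2%
print(f"{elo_expected(23.8):.1%}")   # overall gap: Gemini favored, ~53.4%
```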
Chess Puzzles
Gemini 2.5 Pro leads by +6.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
DeepSeek V3.2 14.0 · Gemini 2.5 Pro 20.0
FrontierMath-2025-02-28-Private
DeepSeek V3.2 leads by +8.0
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
DeepSeek V3.2 22.1 · Gemini 2.5 Pro 14.1
FrontierMath-Tier-4-2025-07-01-Private
Gemini 2.5 Pro leads by +2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
DeepSeek V3.2 2.1 · Gemini 2.5 Pro 4.2
GPQA Diamond
Gemini 2.5 Pro leads by +2.5
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
DeepSeek V3.2 77.9 · Gemini 2.5 Pro 80.4
OpenCompass · AIME2025
DeepSeek V3.2 leads by +4.3
DeepSeek V3.2 93.0 · Gemini 2.5 Pro 88.7
OpenCompass · GPQA-Diamond
Gemini 2.5 Pro leads by +0.1
DeepSeek V3.2 84.6 · Gemini 2.5 Pro 84.7
OpenCompass · HLE
DeepSeek V3.2 leads by +2.1
DeepSeek V3.2 23.2 · Gemini 2.5 Pro 21.1
OpenCompass · IFEval
Gemini 2.5 Pro leads by +0.3
DeepSeek V3.2 89.7 · Gemini 2.5 Pro 90.0
OpenCompass · LiveCodeBenchV6
DeepSeek V3.2 leads by +4.1
DeepSeek V3.2 75.4 · Gemini 2.5 Pro 71.3
OpenCompass · MMLU-Pro
Tied at 85.8
DeepSeek V3.2 85.8 · Gemini 2.5 Pro 85.8
OTIS Mock AIME 2024-2025
DeepSeek V3.2 leads by +3.1
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
DeepSeek V3.2 87.8 · Gemini 2.5 Pro 84.7
SimpleQA Verified
Gemini 2.5 Pro leads by +28.5
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
DeepSeek V3.2 27.5 · Gemini 2.5 Pro 56.0
Terminal Bench
DeepSeek V3.2 leads by +7.0
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
DeepSeek V3.2 39.6 · Gemini 2.5 Pro 32.6
Full benchmark table
| Benchmark | DeepSeek V3.2 | Gemini 2.5 Pro |
|---|---|---|
| Artificial Analysis · Agentic Index | 52.9 | 32.7 |
| Artificial Analysis · Coding Index | 36.7 | 31.9 |
| Artificial Analysis · Quality Index | 41.7 | 34.6 |
| Aider Polyglot | 74.2 | 83.1 |
| ARC-AGI | 57.0 | 41.0 |
| ARC-AGI-2 | 4.0 | 4.9 |
| Chatbot Arena Elo · Coding | 1326.9 | 1202.0 |
| Chatbot Arena Elo · Overall | 1424.4 | 1448.2 |
| Chess Puzzles | 14.0 | 20.0 |
| FrontierMath-2025-02-28-Private | 22.1 | 14.1 |
| FrontierMath-Tier-4-2025-07-01-Private | 2.1 | 4.2 |
| GPQA Diamond | 77.9 | 80.4 |
| OpenCompass · AIME2025 | 93.0 | 88.7 |
| OpenCompass · GPQA-Diamond | 84.6 | 84.7 |
| OpenCompass · HLE | 23.2 | 21.1 |
| OpenCompass · IFEval | 89.7 | 90.0 |
| OpenCompass · LiveCodeBenchV6 | 75.4 | 71.3 |
| OpenCompass · MMLU-Pro | 85.8 | 85.8 |
| OTIS Mock AIME 2024–2025 | 87.8 | 84.7 |
| SimpleQA Verified | 27.5 | 56.0 |
| Terminal Bench | 39.6 | 32.6 |
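The headline win count can be reproduced directly from this table; a minimal sketch with the scores transcribed from above:

```python
# Tally head-to-head wins from the benchmark table above.
# Each entry: (benchmark, DeepSeek V3.2 score, Gemini 2.5 Pro score).
SCORES = [
    ("AA Agentic Index", 52.9, 32.7), ("AA Coding Index", 36.7, 31.9),
    ("AA Quality Index", 41.7, 34.6), ("Aider Polyglot", 74.2, 83.1),
    ("ARC-AGI", 57.0, 41.0), ("ARC-AGI-2", 4.0, 4.9),
    ("Arena Elo Coding", 1326.9, 1202.0), ("Arena Elo Overall", 1424.4, 1448.2),
    ("Chess Puzzles", 14.0, 20.0), ("FrontierMath Feb 2025", 22.1, 14.1),
    ("FrontierMath Tier 4", 2.1, 4.2), ("GPQA Diamond", 77.9, 80.4),
    ("OC AIME2025", 93.0, 88.7), ("OC GPQA-Diamond", 84.6, 84.7),
    ("OC HLE", 23.2, 21.1), ("OC IFEval", 89.7, 90.0),
    ("OC LiveCodeBenchV6", 75.4, 71.3), ("OC MMLU-Pro", 85.8, 85.8),
    ("OTIS Mock AIME", 87.8, 84.7), ("SimpleQA Verified", 27.5, 56.0),
    ("Terminal Bench", 39.6, 32.6),
]

deepseek = sum(d > g for _, d, g in SCORES)  # -> 11
gemini = sum(g > d for _, d, g in SCORES)    # -> 9
ties = sum(d == g for _, d, g in SCORES)     # -> 1 (OC MMLU-Pro)
print(f"DeepSeek {deepseek} · Gemini {gemini} · ties {ties}")
```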
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| DeepSeek V3.2 | $0.26 | $0.38 | 164K tokens (~82 books) | $2.90 |
| Gemini 2.5 Pro | $1.25 | $10.00 | 1.0M tokens (~524 books) | $34.38 |
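The projected monthly figures appear to assume a 3:1 input-to-output token split; that split is inferred from the displayed numbers, not documented. A sketch under that assumption, which reproduces the $2.90 and $34.38 figures:

```python
# Projected monthly cost at 10M tokens, assuming a 3:1 input:output split
# (7.5M input + 2.5M output). The split is an inference from the figures
# shown in the table, not a documented methodology.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    """Monthly cost in dollars for total_m million tokens."""
    return total_m * (input_share * input_per_m + (1 - input_share) * output_per_m)

print(f"${monthly_cost(0.26, 0.38):.2f}")   # DeepSeek V3.2  -> $2.90
print(f"${monthly_cost(1.25, 10.00):.2f}")  # Gemini 2.5 Pro -> $34.38 (34.375 rounded)
```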