
DeepSeek V3.2 Exp vs Gemini 2.5 Flash vs Claude Sonnet 4

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 2.5 Flash wins 6 of the 15 shared benchmarks, leading the knowledge and math categories.

Category leads
Coding · DeepSeek V3.2 Exp
Knowledge · Gemini 2.5 Flash
Agentic · DeepSeek V3.2 Exp
Reasoning · Claude Sonnet 4
Arena · DeepSeek V3.2 Exp
Math · Gemini 2.5 Flash
Hype vs Reality
DeepSeek V3.2 Exp · #80 by performance · no attention signal · QUIET
Gemini 2.5 Flash · #144 by performance · #14 by attention · OVERHYPED
Claude Sonnet 4 · #117 by performance · no attention signal · QUIET
Best value
DeepSeek V3.2 Exp · 5.5x better value than Gemini 2.5 Flash
DeepSeek V3.2 Exp · 156.5 pts/$ · $0.34 per 1M tokens
Gemini 2.5 Flash · 28.6 pts/$ · $1.40 per 1M tokens
Claude Sonnet 4 · 5.0 pts/$ · $9.00 per 1M tokens
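How these figures fit together: the per-million price shown for each model matches a simple 50/50 average of its input and output rates from the pricing table at the bottom of the page, and the 5.5x headline is the ratio of the two pts/$ figures. The pts numerator is the site's own composite performance score, taken here as given. A minimal sketch under that blended-price assumption:

```python
# Sketch: reproduce the "Best value" arithmetic from the published numbers.
# Assumption: the $/M figure is a simple 50/50 average of input and output
# prices; "pts" is the site's composite performance score, taken as given.

models = {
    # name: (input $/M, output $/M, pts per $)
    "DeepSeek V3.2 Exp": (0.27, 0.41, 156.5),
    "Gemini 2.5 Flash": (0.30, 2.50, 28.6),
    "Claude Sonnet 4": (3.00, 15.00, 5.0),
}

for name, (inp, out, pts_per_dollar) in models.items():
    blended = (inp + out) / 2  # 0.34, 1.40, 9.00 -- matches the card
    print(f"{name}: ${blended:.2f}/M blended, {pts_per_dollar} pts/$")

ratio = models["DeepSeek V3.2 Exp"][2] / models["Gemini 2.5 Flash"][2]
print(f"Value ratio vs Gemini 2.5 Flash: {ratio:.1f}x")  # ~5.5x
```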
Vendor risk
One or more vendors flagged.
DeepSeek · $3.4B · Tier 1 · Higher risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Anthropic · $380.0B · Tier 1 · Medium risk
Head to head
DeepSeek V3.2 Exp · Gemini 2.5 Flash · Claude Sonnet 4
Aider Polyglot
DeepSeek V3.2 Exp leads by +12.9
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
DeepSeek V3.2 Exp 74.2 · Gemini 2.5 Flash 47.1 · Claude Sonnet 4 61.3
Fiction.LiveBench
DeepSeek V3.2 Exp leads by +36.1
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
DeepSeek V3.2 Exp 83.3 · Gemini 2.5 Flash 47.2 · Claude Sonnet 4 46.9
The Agent Company
DeepSeek V3.2 Exp leads by +1.8
The Agent Company · tests AI agents on realistic corporate tasks like email management, code review, data analysis, and cross-tool workflows.
DeepSeek V3.2 Exp 42.9 · Gemini 2.5 Flash 41.1 · Claude Sonnet 4 33.1
WeirdML
Claude Sonnet 4 leads by +5.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
DeepSeek V3.2 Exp 39.5 · Gemini 2.5 Flash 41.0 · Claude Sonnet 4 46.1
ARC-AGI
Claude Sonnet 4 leads by +7.7
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 2.5 Flash 32.3 · Claude Sonnet 4 40.0
ARC-AGI-2
Claude Sonnet 4 leads by +3.4
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 2.5 Flash 2.5 · Claude Sonnet 4 5.9
Chatbot Arena Elo · Overall
DeepSeek V3.2 Exp leads by +11.8
DeepSeek V3.2 Exp 1422.8 · Gemini 2.5 Flash 1411.0
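For scale, an Elo gap this small implies a near coin-flip preference: under the standard Elo expectation formula, +11.8 points corresponds to roughly a 51.7% expected win rate in head-to-head votes. A minimal check:

```python
# Expected head-to-head win probability under the standard Elo model.
def elo_win_prob(rating_gap: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

# DeepSeek V3.2 Exp (1422.8) vs Gemini 2.5 Flash (1411.0)
print(f"{elo_win_prob(1422.8 - 1411.0):.1%}")  # -> 51.7%
```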
DeepResearch Bench
Claude Sonnet 4 leads by +18.6
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
Gemini 2.5 Flash 29.2 · Claude Sonnet 4 47.8
FrontierMath-2025-02-28-Private
Gemini 2.5 Flash leads by +0.7
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 2.5 Flash 4.8 · Claude Sonnet 4 4.1
FrontierMath-Tier-4-2025-07-01-Private
Gemini 2.5 Flash leads by +4.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 2.5 Flash 4.2 · Claude Sonnet 4 0.1
GeoBench
Gemini 2.5 Flash leads by +36.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Gemini 2.5 Flash 73.0 · Claude Sonnet 4 37.0
HLE
Gemini 2.5 Flash leads by +4.6
HLE (Humanity's Last Exam) · a reasoning benchmark designed to be the hardest public evaluation of AI. Questions span mathematics, physics, philosophy, and logic · curated to be at or beyond the frontier of human expert capability. Tested with and without tool augmentation. Claude Opus 4.7 scores 46.9% without tools and 54.7% with tools · making it one of the few benchmarks where the top score is below 60%.
Gemini 2.5 Flash 7.7 · Claude Sonnet 4 3.1
OTIS Mock AIME 2024-2025
Gemini 2.5 Flash leads by +1.9
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 2.5 Flash 73.0 · Claude Sonnet 4 71.1
SimpleBench
Claude Sonnet 4 leads by +5.2
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Gemini 2.5 Flash 29.4 · Claude Sonnet 4 34.6
VPCT
Gemini 2.5 Flash leads by +6.0
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Gemini 2.5 Flash 7.0 · Claude Sonnet 4 1.0
Full benchmark table
Benchmark · DeepSeek V3.2 Exp · Gemini 2.5 Flash · Claude Sonnet 4
Aider Polyglot · 74.2 · 47.1 · 61.3
Fiction.LiveBench · 83.3 · 47.2 · 46.9
The Agent Company · 42.9 · 41.1 · 33.1
WeirdML · 39.5 · 41.0 · 46.1
ARC-AGI · n/a · 32.3 · 40.0
ARC-AGI-2 · n/a · 2.5 · 5.9
Chatbot Arena Elo (Overall) · 1422.8 · 1411.0 · n/a
DeepResearch Bench · n/a · 29.2 · 47.8
FrontierMath (Feb 2025) · n/a · 4.8 · 4.1
FrontierMath Tier 4 (Jul 2025) · n/a · 4.2 · 0.1
GeoBench · n/a · 73.0 · 37.0
HLE · n/a · 7.7 · 3.1
OTIS Mock AIME 2024-2025 · n/a · 73.0 · 71.1
SimpleBench · n/a · 29.4 · 34.6
VPCT · n/a · 7.0 · 1.0
Benchmark descriptions appear with the head-to-head cards above; n/a marks models without a published score.
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
DeepSeek V3.2 Exp · $0.27 · $0.41 · 164K tokens (~82 books) · $3.05
Gemini 2.5 Flash · $0.30 · $2.50 · 1.0M tokens (~524 books) · $8.50
Claude Sonnet 4 · $3.00 · $15.00 · 1.0M tokens (~500 books) · $60.00
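The projected $/mo column is consistent with a 75/25 input/output split across the 10M monthly tokens (for DeepSeek: 7.5 × $0.27 + 2.5 × $0.41 = $3.05). The split is inferred from the published numbers, not a stated methodology; a sketch under that assumption:

```python
# Sketch: reproduce the projected $/mo column, assuming a 75/25
# input/output token split over 10M tokens per month (inferred, not stated).

PRICES = {  # $ per 1M tokens: (input, output)
    "DeepSeek V3.2 Exp": (0.27, 0.41),
    "Gemini 2.5 Flash": (0.30, 2.50),
    "Claude Sonnet 4": (3.00, 15.00),
}

def monthly_cost(inp: float, out: float, total_m: float = 10.0,
                 input_share: float = 0.75) -> float:
    """Cost in $ for total_m million tokens at the given input share."""
    return total_m * (input_share * inp + (1 - input_share) * out)

for name, (inp, out) in PRICES.items():
    print(f"{name}: ${monthly_cost(inp, out):.2f}/mo")
# -> $3.05, $8.50, $60.00, matching the table
```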