
Claude 3.7 Sonnet vs Gemini 2.5 Pro

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 2.5 Pro wins 21 of 23 shared benchmarks and leads in five of six categories: coding, reasoning, knowledge, math, and language. Claude 3.7 Sonnet leads only in agentic tasks.

Category leads
coding · Gemini 2.5 Pro
reasoning · Gemini 2.5 Pro
knowledge · Gemini 2.5 Pro
math · Gemini 2.5 Pro
language · Gemini 2.5 Pro
agentic · Claude 3.7 Sonnet
Hype vs Reality
Claude 3.7 Sonnet · #103 by performance · no hype signal (quiet)
Gemini 2.5 Pro · #61 by performance · no hype signal (quiet)
Best value
Gemini 2.5 Pro · 1.9x better value than Claude 3.7 Sonnet
Claude 3.7 Sonnet · 5.3 pts/$ · $9.00 per 1M tokens (blended)
Gemini 2.5 Pro · 10.0 pts/$ · $5.63 per 1M tokens (blended)
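As a rough check on these numbers, here is a minimal sketch. It assumes the $/M figure is a simple 50/50 blend of input and output prices and takes the page's own pts/$ values as given; both are assumptions, not something the page states.

```python
# Rough sanity check of the "Best value" figures.
# Assumptions (not stated above): the $/M price is a 50/50 blend of
# input and output prices, and pts/$ divides an average score by it.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """50/50 blend of input and output price per 1M tokens (assumed split)."""
    return (input_per_m + output_per_m) / 2

claude_blend = blended_price(3.00, 15.00)   # 9.00 $/M
gemini_blend = blended_price(1.25, 10.00)   # 5.625 $/M, shown as $5.63

# Using the page's own pts/$ figures for the value multiple:
claude_value = 5.3    # pts/$
gemini_value = 10.0   # pts/$
print(f"blended $/M: Claude {claude_blend:.2f}, Gemini {gemini_blend:.2f}")
print(f"value multiple: {gemini_value / claude_value:.1f}x")  # ~1.9x
```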
Vendor risk
Anthropic · $380.0B · Tier 1 · Medium risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Head to head
Claude 3.7 Sonnet · Gemini 2.5 Pro
Aider polyglot
Gemini 2.5 Pro leads by +18.2
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Claude 3.7 Sonnet 64.9 · Gemini 2.5 Pro 83.1
ARC-AGI
Gemini 2.5 Pro leads by +12.4
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude 3.7 Sonnet 28.6 · Gemini 2.5 Pro 41.0
ARC-AGI-2
Gemini 2.5 Pro leads by +4.0
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude 3.7 Sonnet 0.9 · Gemini 2.5 Pro 4.9
CadEval
Gemini 2.5 Pro leads by +10.0
CadEval · evaluates the ability to generate and reason about Computer-Aided Design code, testing spatial reasoning and engineering knowledge.
Claude 3.7 Sonnet 54.0 · Gemini 2.5 Pro 64.0
DeepResearch Bench
Gemini 2.5 Pro leads by +6.1
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
Claude 3.7 Sonnet 43.6 · Gemini 2.5 Pro 49.7
Fiction.LiveBench
Gemini 2.5 Pro leads by +8.4
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Claude 3.7 Sonnet 83.3 · Gemini 2.5 Pro 91.7
FrontierMath-2025-02-28-Private
Gemini 2.5 Pro leads by +10.0
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude 3.7 Sonnet 4.1 · Gemini 2.5 Pro 14.1
GeoBench
Gemini 2.5 Pro leads by +13.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Claude 3.7 Sonnet 68.0 · Gemini 2.5 Pro 81.0
GPQA diamond
Gemini 2.5 Pro leads by +7.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude 3.7 Sonnet 73.0 · Gemini 2.5 Pro 80.4
GSO-Bench
Gemini 2.5 Pro leads by +0.1
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Claude 3.7 Sonnet 3.8 · Gemini 2.5 Pro 3.9
HELM · GPQA
Gemini 2.5 Pro leads by +14.1
Claude 3.7 Sonnet 60.8 · Gemini 2.5 Pro 74.9
HELM · IFEval
Gemini 2.5 Pro leads by +0.6
Claude 3.7 Sonnet 83.4 · Gemini 2.5 Pro 84.0
HELM · MMLU-Pro
Gemini 2.5 Pro leads by +7.9
Claude 3.7 Sonnet 78.4 · Gemini 2.5 Pro 86.3
HELM · Omni-MATH
Gemini 2.5 Pro leads by +8.6
Claude 3.7 Sonnet 33.0 · Gemini 2.5 Pro 41.6
HELM · WildBench
Gemini 2.5 Pro leads by +4.3
Claude 3.7 Sonnet 81.4 · Gemini 2.5 Pro 85.7
HLE
Gemini 2.5 Pro leads by +14.3
HLE (Humanity's Last Exam) · a reasoning benchmark designed to be the hardest public evaluation of AI. Questions span mathematics, physics, philosophy, and logic · curated to be at or beyond the frontier of human expert capability. Tested with and without tool augmentation. Claude Opus 4.7 scores 46.9% without tools and 54.7% with tools · making it one of the few benchmarks where the top score is below 60%.
Claude 3.7 Sonnet 3.4 · Gemini 2.5 Pro 17.7
Lech Mazur Writing
Gemini 2.5 Pro leads by +4.9
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Claude 3.7 Sonnet 81.1 · Gemini 2.5 Pro 86.0
MATH level 5
Gemini 2.5 Pro leads by +4.4
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude 3.7 Sonnet 91.2 · Gemini 2.5 Pro 95.6
OTIS Mock AIME 2024-2025
Gemini 2.5 Pro leads by +27.0
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude 3.7 Sonnet 57.7 · Gemini 2.5 Pro 84.7
SimpleBench
Gemini 2.5 Pro leads by +19.2
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude 3.7 Sonnet 35.7 · Gemini 2.5 Pro 54.9
SWE-Bench verified
Claude 3.7 Sonnet leads by +3.4
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
Claude 3.7 Sonnet 61.0 · Gemini 2.5 Pro 57.6 (a rough patch-and-test harness sketch follows this head-to-head list)
The Agent Company
Claude 3.7 Sonnet leads by +0.6
The Agent Company · tests AI agents on realistic corporate tasks like email management, code review, data analysis, and cross-tool workflows.
Claude 3.7 Sonnet 30.9 · Gemini 2.5 Pro 30.3
VPCT
Gemini 2.5 Pro leads by +11.1
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Claude 3.7 Sonnet 8.5 · Gemini 2.5 Pro 19.6
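As a rough illustration of the SWE-bench Verified setup described above (apply a model-generated git patch, then run the repository's test suite), here is a hypothetical harness sketch. The repo path, patch file, and pytest command are illustrative assumptions, not the official SWE-bench tooling.

```python
# Hypothetical sketch of a SWE-bench-style check: apply a model-generated
# git patch to a repo checkout, then run the project's tests.
# Repo path, patch path, and test command are illustrative assumptions.
import subprocess

def passes_after_patch(repo_dir: str, patch_file: str) -> bool:
    # Apply the candidate patch; a failed apply counts as an unresolved task.
    apply = subprocess.run(
        ["git", "apply", patch_file], cwd=repo_dir,
        capture_output=True, text=True,
    )
    if apply.returncode != 0:
        return False
    # Run the repository's test suite; exit code 0 means the issue's
    # previously failing tests now pass.
    tests = subprocess.run(["python", "-m", "pytest", "-q"], cwd=repo_dir)
    return tests.returncode == 0

# Example with hypothetical paths:
# print(passes_after_patch("/tmp/django-checkout", "/tmp/model_patch.diff"))
```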
Full benchmark table
Benchmark | Claude 3.7 Sonnet | Gemini 2.5 Pro
Aider polyglot | 64.9 | 83.1
ARC-AGI | 28.6 | 41.0
ARC-AGI-2 | 0.9 | 4.9
CadEval | 54.0 | 64.0
DeepResearch Bench | 43.6 | 49.7
Fiction.LiveBench | 83.3 | 91.7
FrontierMath-2025-02-28-Private | 4.1 | 14.1
GeoBench | 68.0 | 81.0
GPQA diamond | 73.0 | 80.4
GSO-Bench | 3.8 | 3.9
HELM · GPQA | 60.8 | 74.9
HELM · IFEval | 83.4 | 84.0
HELM · MMLU-Pro | 78.4 | 86.3
HELM · Omni-MATH | 33.0 | 41.6
HELM · WildBench | 81.4 | 85.7
HLE | 3.4 | 17.7
Lech Mazur Writing | 81.1 | 86.0
MATH level 5 | 91.2 | 95.6
OTIS Mock AIME 2024-2025 | 57.7 | 84.7
SimpleBench | 35.7 | 54.9
SWE-Bench verified | 61.0 | 57.6
The Agent Company | 30.9 | 30.3
VPCT | 8.5 | 19.6
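The "21 of 23" headline in the winner summary can be tallied directly from this table; a small sketch of that count:

```python
# Tally head-to-head wins from the full benchmark table above
# (scores listed as: Claude 3.7 Sonnet, Gemini 2.5 Pro).
scores = {
    "Aider polyglot": (64.9, 83.1),
    "ARC-AGI": (28.6, 41.0),
    "ARC-AGI-2": (0.9, 4.9),
    "CadEval": (54.0, 64.0),
    "DeepResearch Bench": (43.6, 49.7),
    "Fiction.LiveBench": (83.3, 91.7),
    "FrontierMath-2025-02-28-Private": (4.1, 14.1),
    "GeoBench": (68.0, 81.0),
    "GPQA diamond": (73.0, 80.4),
    "GSO-Bench": (3.8, 3.9),
    "HELM GPQA": (60.8, 74.9),
    "HELM IFEval": (83.4, 84.0),
    "HELM MMLU-Pro": (78.4, 86.3),
    "HELM Omni-MATH": (33.0, 41.6),
    "HELM WildBench": (81.4, 85.7),
    "HLE": (3.4, 17.7),
    "Lech Mazur Writing": (81.1, 86.0),
    "MATH level 5": (91.2, 95.6),
    "OTIS Mock AIME 2024-2025": (57.7, 84.7),
    "SimpleBench": (35.7, 54.9),
    "SWE-Bench verified": (61.0, 57.6),
    "The Agent Company": (30.9, 30.3),
    "VPCT": (8.5, 19.6),
}

gemini_wins = sum(1 for claude, gemini in scores.values() if gemini > claude)
claude_wins = len(scores) - gemini_wins
print(f"Gemini 2.5 Pro wins {gemini_wins} of {len(scores)}")  # 21 of 23
print(f"Claude 3.7 Sonnet wins {claude_wins}")                # 2
```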
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Claude 3.7 Sonnet | $3.00 | $15.00 | 200K tokens (~100 books) | $60.00
Gemini 2.5 Pro | $1.25 | $10.00 | 1.0M tokens (~524 books) | $34.38
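The projected monthly column is consistent with a 10M-token month split roughly 75% input / 25% output; that split is inferred from the published figures, not stated in the table. A minimal sketch of the arithmetic:

```python
# Reconstruct the "projected $/mo" column under an assumed token mix.
# The 75% input / 25% output split is inferred from the published figures,
# not stated in the pricing table.
MONTHLY_TOKENS_M = 10.0   # 10M tokens per month
INPUT_SHARE = 0.75        # assumed share of input tokens
OUTPUT_SHARE = 1.0 - INPUT_SHARE

def projected_monthly(input_per_m: float, output_per_m: float) -> float:
    return MONTHLY_TOKENS_M * (INPUT_SHARE * input_per_m + OUTPUT_SHARE * output_per_m)

print(f"Claude 3.7 Sonnet: ${projected_monthly(3.00, 15.00):.2f}")  # $60.00
print(f"Gemini 2.5 Pro:    ${projected_monthly(1.25, 10.00):.2f}")  # $34.38
```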