Claude Sonnet 4.5 vs Gemini 2.5 Pro
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Sonnet 4.5 wins on 8/17 benchmarks
Claude Sonnet 4.5 wins 8 of 17 shared benchmarks (Gemini 2.5 Pro wins 8; 1 tie). Leads in reasoning · math · coding.
Category leads
reasoning · Claude Sonnet 4.5
knowledge · Gemini 2.5 Pro
math · Claude Sonnet 4.5
coding · Claude Sonnet 4.5
Hype vs Reality
Attention vs performance
Claude Sonnet 4.5 · #130 by performance · no attention signal
Gemini 2.5 Pro · #59 by performance · no attention signal
Best value
Gemini 2.5 Pro · 2.1x better value than Claude Sonnet 4.5
Claude Sonnet 4.5 · 4.7 pts/$ · $9.00/M blended
Gemini 2.5 Pro · 10.0 pts/$ · $5.63/M blended
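The value figures above are reproducible arithmetic. A minimal sketch, assuming the blended $/M price is a 50/50 input:output token mix (the page does not state the mix) and taking the published pts/$ figures as given:

```python
# Sketch of the value math shown above. Assumptions (not stated on the page):
# the blended $/M price uses a 50/50 input:output mix, and "pts" is the site's
# aggregate performance score, taken here as a given input.

def blended_price(input_per_m: float, output_per_m: float,
                  input_share: float = 0.5) -> float:
    """Blended $ per 1M tokens under an assumed input:output mix."""
    return input_share * input_per_m + (1 - input_share) * output_per_m

claude = blended_price(3.00, 15.00)   # -> 9.00, matches the $9.00/M shown
gemini = blended_price(1.25, 10.00)   # -> 5.625, shown rounded to $5.63/M

# Value ratio from the published pts/$ figures.
ratio = 10.0 / 4.7                    # -> ~2.13, shown as "2.1x better value"
print(f"Claude: ${claude:.2f}/M · Gemini: ${gemini:.2f}/M · {ratio:.1f}x")
```

The 50/50 assumption reproduces both blended prices exactly ($9.00, and $5.625 rounded to $5.63); the 2.1x headline is simply the ratio of the two pts/$ values.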
Vendor risk
Who is behind the model
Anthropic · $380.0B · Tier 1
Google DeepMind · $4.00T · Tier 1
Head to head
17 benchmarks · 2 models
ARC-AGI
Claude Sonnet 4.5 leads by +22.7
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Sonnet 4.5: 63.7 · Gemini 2.5 Pro: 41.0
ARC-AGI-2
Claude Sonnet 4.5 leads by +8.8
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Sonnet 4.5: 13.6 · Gemini 2.5 Pro: 4.9
Chess Puzzles
Gemini 2.5 Pro leads by +8.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Claude Sonnet 4.5: 12.0 · Gemini 2.5 Pro: 20.0
DeepResearch Bench
Claude Sonnet 4.5 leads by +2.9
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
Claude Sonnet 4.5: 52.6 · Gemini 2.5 Pro: 49.7
FrontierMath-2025-02-28-Private
Claude Sonnet 4.5 leads by +1.1
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Sonnet 4.5: 15.2 · Gemini 2.5 Pro: 14.1
FrontierMath-Tier-4-2025-07-01-Private
Tied at 4.2
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Sonnet 4.5: 4.2 · Gemini 2.5 Pro: 4.2
GPQA diamond
Gemini 2.5 Pro leads by +4.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Sonnet 4.5: 76.4 · Gemini 2.5 Pro: 80.4
GSO-Bench
Claude Sonnet 4.5 leads by +10.8
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Claude Sonnet 4.5: 14.7 · Gemini 2.5 Pro: 3.9
HLE
Gemini 2.5 Pro leads by +8.3
HLE (Humanity's Last Exam) · crowdsourced expert-level questions designed to be among the hardest possible challenges for AI systems across all domains.
Claude Sonnet 4.5: 9.4 · Gemini 2.5 Pro: 17.7
MATH level 5
Claude Sonnet 4.5 leads by +2.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude Sonnet 4.5: 97.7 · Gemini 2.5 Pro: 95.6
OTIS Mock AIME 2024-2025
Gemini 2.5 Pro leads by +6.9
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Sonnet 4.5: 77.8 · Gemini 2.5 Pro: 84.7
SimpleBench
Gemini 2.5 Pro leads by +9.7
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Sonnet 4.5: 45.2 · Gemini 2.5 Pro: 54.9
SimpleQA Verified
Gemini 2.5 Pro leads by +32.4
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Sonnet 4.5: 23.6 · Gemini 2.5 Pro: 56.0
SWE-Bench verified
Claude Sonnet 4.5 leads by +13.7
SWE-Bench Verified · a human-validated subset of SWE-Bench, testing the ability to resolve real-world GitHub issues with working code patches.
Claude Sonnet 4.5: 71.3 · Gemini 2.5 Pro: 57.6
Terminal Bench
Claude Sonnet 4.5 leads by +13.9
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Claude Sonnet 4.5: 46.5 · Gemini 2.5 Pro: 32.6
VPCT
Gemini 2.5 Pro leads by +9.9
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Claude Sonnet 4.5: 9.7 · Gemini 2.5 Pro: 19.6
WeirdML
Gemini 2.5 Pro leads by +6.3
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Sonnet 4.5: 47.7 · Gemini 2.5 Pro: 54.0
Full benchmark table
| Benchmark | Claude Sonnet 4.5 | Gemini 2.5 Pro |
|---|---|---|
| ARC-AGI | 63.7 | 41.0 |
| ARC-AGI-2 | 13.6 | 4.9 |
| Chess Puzzles | 12.0 | 20.0 |
| DeepResearch Bench | 52.6 | 49.7 |
| FrontierMath-2025-02-28-Private | 15.2 | 14.1 |
| FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 4.2 |
| GPQA diamond | 76.4 | 80.4 |
| GSO-Bench | 14.7 | 3.9 |
| HLE | 9.4 | 17.7 |
| MATH level 5 | 97.7 | 95.6 |
| OTIS Mock AIME 2024-2025 | 77.8 | 84.7 |
| SimpleBench | 45.2 | 54.9 |
| SimpleQA Verified | 23.6 | 56.0 |
| SWE-Bench verified | 71.3 | 57.6 |
| Terminal Bench | 46.5 | 32.6 |
| VPCT | 9.7 | 19.6 |
| WeirdML | 47.7 | 54.0 |
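The win count in the summary at the top can be re-derived from this table. A minimal sketch, with the scores transcribed from above:

```python
# Tally head-to-head wins from the table; reproduces the 8 / 8 / 1 split
# behind the winner summary at the top of the page.

scores = {  # benchmark: (Claude Sonnet 4.5, Gemini 2.5 Pro)
    "ARC-AGI": (63.7, 41.0), "ARC-AGI-2": (13.6, 4.9),
    "Chess Puzzles": (12.0, 20.0), "DeepResearch Bench": (52.6, 49.7),
    "FrontierMath-2025-02-28-Private": (15.2, 14.1),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 4.2),
    "GPQA diamond": (76.4, 80.4), "GSO-Bench": (14.7, 3.9),
    "HLE": (9.4, 17.7), "MATH level 5": (97.7, 95.6),
    "OTIS Mock AIME 2024-2025": (77.8, 84.7), "SimpleBench": (45.2, 54.9),
    "SimpleQA Verified": (23.6, 56.0), "SWE-Bench verified": (71.3, 57.6),
    "Terminal Bench": (46.5, 32.6), "VPCT": (9.7, 19.6), "WeirdML": (47.7, 54.0),
}

claude_wins = sum(c > g for c, g in scores.values())  # 8
gemini_wins = sum(g > c for c, g in scores.values())  # 8
ties = sum(c == g for c, g in scores.values())        # 1 (FrontierMath Tier 4)
print(claude_wins, gemini_wins, ties)
```

The tally is 8 wins for Claude Sonnet 4.5, 8 for Gemini 2.5 Pro, and 1 tie, which is what the corrected winner summary reports.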
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Sonnet 4.5 | $3.00 | $15.00 | 1.0M tokens (~500 books) | $60.00 |
| Gemini 2.5 Pro | $1.25 | $10.00 | 1.0M tokens (~524 books) | $34.38 |
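The projected monthly figures follow from the per-token prices. The page does not state the assumed token mix, but a 75% input / 25% output split at 10M tokens per month reproduces both numbers exactly, so that mix is assumed in this sketch:

```python
# Sketch of how the "Projected $/mo" column appears to be derived.
# Assumption: 75% input / 25% output at 10M tokens/month (inferred, not
# stated on the page; it reproduces both figures exactly).

def monthly_cost(input_per_m: float, output_per_m: float,
                 tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Monthly cost for tokens_m million tokens under an assumed mix."""
    blended = input_share * input_per_m + (1 - input_share) * output_per_m
    return blended * tokens_m

print(monthly_cost(3.00, 15.00))   # 60.0   -> $60.00 for Claude Sonnet 4.5
print(monthly_cost(1.25, 10.00))   # 34.375 -> $34.38 for Gemini 2.5 Pro
```

Note that this projection mix (75/25) differs from the 50/50 blend behind the $/M figures in the Best value section; adjust input_share to match your own workload before comparing costs.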