Claude Sonnet 4 vs Claude Sonnet 4.5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Sonnet 4.5 wins 16/16 benchmarks
Claude Sonnet 4.5 wins 16 of 16 shared benchmarks. Leads in reasoning · coding · knowledge.
Category leads
reasoning · Claude Sonnet 4.5
coding · Claude Sonnet 4.5
knowledge · Claude Sonnet 4.5
math · Claude Sonnet 4.5
agentic · Claude Sonnet 4.5
Hype vs Reality
Attention vs performance
Claude Sonnet 4
#115 by perf · no signal
Claude Sonnet 4.5
#130 by perf · no signal
Best value
Claude Sonnet 4
1.1x better value than Claude Sonnet 4.5
Claude Sonnet 4
5.0 pts/$
$9.00/M
Claude Sonnet 4.5
4.7 pts/$
$9.00/M
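How the pts/$ figures relate to price: the page does not publish its exact formula, so the sketch below is a minimal guess. It assumes a 50/50 input/output blend (which matches the $9.00/M shown for the $3/$15 list prices) and uses illustrative aggregate scores chosen only to reproduce the displayed 5.0 and 4.7 pts/$; the page's actual aggregate metric is not stated.

```python
# Sketch of the "Best value" math. Assumptions: a 50/50 input/output blend
# (reproduces the $9.00/M figure from the $3/$15 list prices) and hypothetical
# aggregate benchmark scores picked only to match the displayed pts/$ values.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens, assuming an even input/output split."""
    return (input_per_m + output_per_m) / 2

def value_score(aggregate_score: float, input_per_m: float, output_per_m: float) -> float:
    """Benchmark points per dollar of blended price."""
    return aggregate_score / blended_price(input_per_m, output_per_m)

# Both models list $3.00 input / $15.00 output -> $9.00/M blended.
sonnet_4   = value_score(45.0, 3.00, 15.00)   # ~5.0 pts/$ if the aggregate were ~45.0
sonnet_4_5 = value_score(42.5, 3.00, 15.00)   # ~4.7 pts/$ if the aggregate were ~42.5
print(f"{sonnet_4:.1f} pts/$ vs {sonnet_4_5:.1f} pts/$ -> {sonnet_4 / sonnet_4_5:.1f}x")
```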
Vendor risk
Who is behind the model
Anthropic
$380.0B · Tier 1
Anthropic
$380.0B · Tier 1
Head to head
16 benchmarks · 2 models
Claude Sonnet 4 · Claude Sonnet 4.5
ARC-AGI
Claude Sonnet 4.5 leads by +23.7
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Sonnet 4
40.0
Claude Sonnet 4.5
63.7
ARC-AGI-2
Claude Sonnet 4.5 leads by +7.7
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Sonnet 4
5.9
Claude Sonnet 4.5
13.6
Cybench
Claude Sonnet 4.5 leads by +25.0
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Claude Sonnet 4
35.0
Claude Sonnet 4.5
60.0
DeepResearch Bench
Claude Sonnet 4.5 leads by +4.8
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
Claude Sonnet 4
47.8
Claude Sonnet 4.5
52.6
FrontierMath-2025-02-28-Private
Claude Sonnet 4.5 leads by +11.1
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Sonnet 4
4.1
Claude Sonnet 4.5
15.2
FrontierMath-Tier-4-2025-07-01-Private
Claude Sonnet 4.5 leads by +4.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Sonnet 4
0.1
Claude Sonnet 4.5
4.2
GPQA diamond
Claude Sonnet 4.5 leads by +4.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Sonnet 4
72.3
Claude Sonnet 4.5
76.4
GSO-Bench
Claude Sonnet 4.5 leads by +9.8
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Claude Sonnet 4
4.9
Claude Sonnet 4.5
14.7
HLE
Claude Sonnet 4.5 leads by +6.3
HLE (Humanity's Last Exam) · crowdsourced expert-level questions designed to be among the hardest possible challenges for AI systems across all domains.
Claude Sonnet 4
3.1
Claude Sonnet 4.5
9.4
MATH level 5
Claude Sonnet 4.5 leads by +13.3
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude Sonnet 4
84.4
Claude Sonnet 4.5
97.7
OSWorld
Claude Sonnet 4.5 leads by +19.0
OSWorld · tests AI agents on real-world computer tasks across operating systems, including web browsing, file management, and application use.
Claude Sonnet 4
43.9
Claude Sonnet 4.5
62.9
OTIS Mock AIME 2024-2025
Claude Sonnet 4.5 leads by +6.7
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Sonnet 4
71.1
Claude Sonnet 4.5
77.8
SimpleBench
Claude Sonnet 4.5 leads by +10.6
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Sonnet 4
34.6
Claude Sonnet 4.5
45.2
SWE-Bench Verified (Bash Only)
Claude Sonnet 4.5 leads by +5.7
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
Claude Sonnet 4
64.9
Claude Sonnet 4.5
70.6
VPCT
Claude Sonnet 4.5 leads by +8.7
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Claude Sonnet 4
1.0
Claude Sonnet 4.5
9.7
WeirdML
Claude Sonnet 4.5 leads by +1.6
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Sonnet 4
46.1
Claude Sonnet 4.5
47.7
Full benchmark table
| Benchmark | Claude Sonnet 4 | Claude Sonnet 4.5 |
|---|---|---|
| ARC-AGI | 40.0 | 63.7 |
| ARC-AGI-2 | 5.9 | 13.6 |
| Cybench | 35.0 | 60.0 |
| DeepResearch Bench | 47.8 | 52.6 |
| FrontierMath-2025-02-28-Private | 4.1 | 15.2 |
| FrontierMath-Tier-4-2025-07-01-Private | 0.1 | 4.2 |
| GPQA diamond | 72.3 | 76.4 |
| GSO-Bench | 4.9 | 14.7 |
| HLE | 3.1 | 9.4 |
| MATH level 5 | 84.4 | 97.7 |
| OSWorld | 43.9 | 62.9 |
| OTIS Mock AIME 2024-2025 | 71.1 | 77.8 |
| SimpleBench | 34.6 | 45.2 |
| SWE-Bench Verified (Bash Only) | 64.9 | 70.6 |
| VPCT | 1.0 | 9.7 |
| WeirdML | 46.1 | 47.7 |
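The "leads by" deltas and the 16-of-16 tally follow directly from the scores above; a minimal sketch that recomputes both from the table:

```python
# Recompute each benchmark lead and the winner tally from the table above.
# Scores are (Claude Sonnet 4, Claude Sonnet 4.5), taken verbatim from the page.
scores = {
    "ARC-AGI": (40.0, 63.7),
    "ARC-AGI-2": (5.9, 13.6),
    "Cybench": (35.0, 60.0),
    "DeepResearch Bench": (47.8, 52.6),
    "FrontierMath-2025-02-28-Private": (4.1, 15.2),
    "FrontierMath-Tier-4-2025-07-01-Private": (0.1, 4.2),
    "GPQA diamond": (72.3, 76.4),
    "GSO-Bench": (4.9, 14.7),
    "HLE": (3.1, 9.4),
    "MATH level 5": (84.4, 97.7),
    "OSWorld": (43.9, 62.9),
    "OTIS Mock AIME 2024-2025": (71.1, 77.8),
    "SimpleBench": (34.6, 45.2),
    "SWE-Bench Verified (Bash Only)": (64.9, 70.6),
    "VPCT": (1.0, 9.7),
    "WeirdML": (46.1, 47.7),
}

wins = 0
for name, (v4, v45) in scores.items():
    lead = v45 - v4
    wins += lead > 0
    print(f"{name}: Claude Sonnet 4.5 leads by {lead:+.1f}")
print(f"Claude Sonnet 4.5 wins {wins} of {len(scores)} shared benchmarks")
```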
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Sonnet 4 | $3.00 | $15.00 | 1.0M tokens (~500 books) | $60.00 |
| Claude Sonnet 4.5 | $3.00 | $15.00 | 1.0M tokens (~500 books) | $60.00 |
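The projected $/mo column follows from the per-token prices once a monthly volume and an input/output split are fixed. The page states 10M tokens but not the split; the sketch below assumes 75% input / 25% output, which reproduces the $60.00 figure. That split is an assumption, not something the page states.

```python
# Sketch of the "Projected $/mo" column. The page states 10M tokens/month but
# not the input/output split; 75% input / 25% output is assumed here because
# it reproduces the displayed $60.00.

def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Monthly cost in dollars for tokens_m million tokens at the given split."""
    input_cost = tokens_m * input_share * input_per_m
    output_cost = tokens_m * (1 - input_share) * output_per_m
    return input_cost + output_cost

# Both models list $3.00 input / $15.00 output per 1M tokens.
print(projected_monthly_cost(3.00, 15.00))  # -> 60.0
```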