o3 vs Claude Sonnet 4.5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Sonnet 4.5 wins 10 of 17 shared benchmarks · leads in reasoning, coding, and agentic.
Category leads
reasoning · Claude Sonnet 4.5
knowledge · o3
math · o3
coding · Claude Sonnet 4.5
agentic · Claude Sonnet 4.5
Hype vs Reality
Attention vs performance
o3 · #69 by performance · no attention signal
Claude Sonnet 4.5 · #132 by performance · no attention signal
Best value
o3 · 2.3x better value than Claude Sonnet 4.5
o3 · 11.0 pts/$ · $5.00/M (blended)
Claude Sonnet 4.5 · 4.7 pts/$ · $9.00/M (blended)
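How pts/$ is computed isn't spelled out on the page. Below is a minimal sketch of one consistent reading, assuming the $/M figure is a blended rate (the mean of the input and output prices from the pricing table, which reproduces $5.00/M and $9.00/M exactly) and that points are an aggregate benchmark score; `blended_price` and `value_score` are hypothetical helpers, not the site's actual formula.

```python
# Hedged sketch: one plausible reading of the "Best value" numbers.
# Assumption: blended $/M = mean of input and output rates; pts/$ =
# aggregate benchmark score / blended $/M. The page's exact aggregate
# is not stated, so only the prices and the ratio are checked here.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Mean of the input and output $/1M-token rates."""
    return (input_per_m + output_per_m) / 2

def value_score(aggregate_points: float, price_per_m: float) -> float:
    """Benchmark points per blended dollar."""
    return aggregate_points / price_per_m

print(blended_price(2.00, 8.00))   # 5.0  -> o3's $5.00/M
print(blended_price(3.00, 15.00))  # 9.0  -> Claude Sonnet 4.5's $9.00/M
print(11.0 / 4.7)                  # ~2.34 -> the "2.3x better value" callout
```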
Vendor risk
Who is behind each model
OpenAI · $840.0B · Tier 1
Anthropic · $380.0B · Tier 1
Head to head
17 benchmarks · 2 models
ARC-AGI
Claude Sonnet 4.5 leads by +2.9
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
o3 60.8 · Claude Sonnet 4.5 63.7
ARC-AGI-2
Claude Sonnet 4.5 leads by +7.1
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
o3 6.5 · Claude Sonnet 4.5 13.6
DeepResearch Bench
Claude Sonnet 4.5 leads by +6.0
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
o3 46.6 · Claude Sonnet 4.5 52.6
FrontierMath-2025-02-28-Private
o3 leads by +3.5
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
o3 18.7 · Claude Sonnet 4.5 15.2
FrontierMath-Tier-4-2025-07-01-Private
Claude Sonnet 4.5 leads by +2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
o3 2.1 · Claude Sonnet 4.5 4.2
GPQA diamond
Claude Sonnet 4.5 leads by +0.6
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
o3 75.8 · Claude Sonnet 4.5 76.4
GSO-Bench
Claude Sonnet 4.5 leads by +5.9
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
o3 8.8 · Claude Sonnet 4.5 14.7
HLE
o3 leads by +6.9
HLE (Humanity's Last Exam) · a reasoning benchmark designed to be the hardest public evaluation of AI. Questions span mathematics, physics, philosophy, and logic · curated to be at or beyond the frontier of human expert capability. Tested with and without tool augmentation. Claude Opus 4.7 scores 46.9% without tools and 54.7% with tools · making it one of the few benchmarks where the top score is below 60%.
o3 16.3 · Claude Sonnet 4.5 9.4
MATH level 5
o3 leads by +0.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
o3 97.8 · Claude Sonnet 4.5 97.7
OSWorld
Claude Sonnet 4.5 leads by +39.9
OSWorld · tests AI agents on real-world computer tasks across operating systems, including web browsing, file management, and application use.
o3 23.0 · Claude Sonnet 4.5 62.9
OTIS Mock AIME 2024-2025
o3 leads by +6.1
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
o3 83.9 · Claude Sonnet 4.5 77.8
SimpleBench
Claude Sonnet 4.5 leads by +1.5
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
o3 43.7 · Claude Sonnet 4.5 45.2
SimpleQA Verified
o3 leads by +29.4
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
o3 53.0 · Claude Sonnet 4.5 23.6
SWE-Bench verified
Claude Sonnet 4.5 leads by +9.0
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
o3 62.3 · Claude Sonnet 4.5 71.3
SWE-Bench Verified (Bash Only)
Claude Sonnet 4.5 leads by +12.2
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
o3 58.4 · Claude Sonnet 4.5 70.6
VPCT
o3 leads by +18.3
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
o3 28.0 · Claude Sonnet 4.5 9.7
WeirdML
o3 leads by +4.7
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
o3 52.4 · Claude Sonnet 4.5 47.7
Full benchmark table
| Benchmark | o3 | Claude Sonnet 4.5 |
|---|---|---|
| ARC-AGI | 60.8 | 63.7 |
| ARC-AGI-2 | 6.5 | 13.6 |
| DeepResearch Bench | 46.6 | 52.6 |
| FrontierMath-2025-02-28-Private | 18.7 | 15.2 |
| FrontierMath-Tier-4-2025-07-01-Private | 2.1 | 4.2 |
| GPQA diamond | 75.8 | 76.4 |
| GSO-Bench | 8.8 | 14.7 |
| HLE | 16.3 | 9.4 |
| MATH level 5 | 97.8 | 97.7 |
| OSWorld | 23.0 | 62.9 |
| OTIS Mock AIME 2024-2025 | 83.9 | 77.8 |
| SimpleBench | 43.7 | 45.2 |
| SimpleQA Verified | 53.0 | 23.6 |
| SWE-Bench verified | 62.3 | 71.3 |
| SWE-Bench Verified (Bash Only) | 58.4 | 70.6 |
| VPCT | 28.0 | 9.7 |
| WeirdML | 52.4 | 47.7 |
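As a sanity check on the winner summary and the per-benchmark margins, here is a minimal sketch that re-derives both from the table above; the `scores` dict simply transcribes the rows.

```python
# Re-derive the head-to-head tally and margins from the table above.
# The 10-of-17 split for Claude Sonnet 4.5 falls out of a straight
# row-by-row comparison; margins match the "leads by +X" callouts.

scores = {  # benchmark: (o3, claude_sonnet_4_5)
    "ARC-AGI": (60.8, 63.7),
    "ARC-AGI-2": (6.5, 13.6),
    "DeepResearch Bench": (46.6, 52.6),
    "FrontierMath-2025-02-28-Private": (18.7, 15.2),
    "FrontierMath-Tier-4-2025-07-01-Private": (2.1, 4.2),
    "GPQA diamond": (75.8, 76.4),
    "GSO-Bench": (8.8, 14.7),
    "HLE": (16.3, 9.4),
    "MATH level 5": (97.8, 97.7),
    "OSWorld": (23.0, 62.9),
    "OTIS Mock AIME 2024-2025": (83.9, 77.8),
    "SimpleBench": (43.7, 45.2),
    "SimpleQA Verified": (53.0, 23.6),
    "SWE-Bench verified": (62.3, 71.3),
    "SWE-Bench Verified (Bash Only)": (58.4, 70.6),
    "VPCT": (28.0, 9.7),
    "WeirdML": (52.4, 47.7),
}

sonnet_wins = sum(1 for o3, sonnet in scores.values() if sonnet > o3)
print(f"Claude Sonnet 4.5 wins {sonnet_wins} of {len(scores)}")  # 10 of 17

for name, (o3, sonnet) in scores.items():
    leader = "Claude Sonnet 4.5" if sonnet > o3 else "o3"
    print(f"{name}: {leader} leads by {abs(sonnet - o3):+.1f}")
```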
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| o3 | $2.00 | $8.00 | 200K tokens (~100 books) | $35.00 |
| Claude Sonnet 4.5 | $3.00 | $15.00 | 1.0M tokens (~500 books) | $60.00 |
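The projected monthly figures are consistent with a 3:1 input:output token split at 10M tokens per month (7.5M input, 2.5M output). The page doesn't state the mix, so the split below is an inference, but it reproduces both $35.00 and $60.00 exactly.

```python
# Sketch of the projected-monthly-cost arithmetic. Assumption: a 3:1
# input:output split at 10M tokens/month (7.5M input + 2.5M output);
# the page does not state the split, but this ratio reproduces both
# projected figures exactly.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_tokens_m: float = 10.0,
                 input_frac: float = 0.75) -> float:
    """Cost in dollars for total_tokens_m million tokens at the given split."""
    input_tokens = total_tokens_m * input_frac
    output_tokens = total_tokens_m * (1 - input_frac)
    return input_tokens * input_per_m + output_tokens * output_per_m

print(monthly_cost(2.00, 8.00))   # 35.0 -> o3
print(monthly_cost(3.00, 15.00))  # 60.0 -> Claude Sonnet 4.5
```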