
Claude Sonnet 4.5 vs GPT-5 Nano

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Claude Sonnet 4.5 wins 11 of 12 shared benchmarks. Leads in reasoning · math · knowledge · coding.

Category leads
reasoning · Claude Sonnet 4.5
math · Claude Sonnet 4.5
knowledge · Claude Sonnet 4.5
coding · Claude Sonnet 4.5
Hype vs Reality
Claude Sonnet 4.5 · #130 by perf · no signal · QUIET
GPT-5 Nano · #112 by perf · no signal · QUIET
Best value
GPT-5 Nano · 43.0x better value than Claude Sonnet 4.5
Claude Sonnet 4.5 · 4.7 pts/$ · $9.00/M
GPT-5 Nano · 201.3 pts/$ · $0.23/M
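
A minimal sketch of where these numbers line up, assuming the $/M figure is a 50/50 blend of input and output list price (that weighting reproduces the $9.00 and $0.23 shown, though the page doesn't state it) and taking the pts/$ scores as given:

```python
# Sketch: reproduce the blended $/M and the value ratio shown above.
# Assumption: $/M is a 50/50 blend of input and output price per 1M tokens.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """50/50 input/output blend, in $ per 1M tokens (assumed weighting)."""
    return 0.5 * input_per_m + 0.5 * output_per_m

claude_blend = blended_price(3.00, 15.00)  # -> 9.00  (matches $9.00/M)
nano_blend = blended_price(0.05, 0.40)     # -> 0.225 (displayed as $0.23/M)

# Value ratio from the listed pts/$ figures: 201.3 / 4.7 ≈ 42.8, shown
# as 43.0x on the page, which presumably rounds from unrounded scores.
print(f"Claude Sonnet 4.5 blend: ${claude_blend:.2f}/M")
print(f"GPT-5 Nano blend:        ${nano_blend:.3f}/M")
print(f"Value ratio:             {201.3 / 4.7:.1f}x")
```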
Vendor risk
Anthropic · $380.0B · Tier 1 · Medium risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Claude Sonnet 4.5 · GPT-5 Nano
ARC-AGI
Claude Sonnet 4.5 leads by +43.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Sonnet 4.5 · 63.7
GPT-5 Nano · 20.7
ARC-AGI-2
Claude Sonnet 4.5 leads by +11.0
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Sonnet 4.5 · 13.6
GPT-5 Nano · 2.6
FrontierMath-2025-02-28-Private
Claude Sonnet 4.5 leads by +6.9
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Sonnet 4.5 · 15.2
GPT-5 Nano · 8.3
FrontierMath-Tier-4-2025-07-01-Private
Claude Sonnet 4.5 leads by +2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Sonnet 4.5 · 4.2
GPT-5 Nano · 2.1
GPQA diamond
Claude Sonnet 4.5 leads by +17.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Sonnet 4.5 · 76.4
GPT-5 Nano · 59.3
MATH level 5
Claude Sonnet 4.5 leads by +2.5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude Sonnet 4.5 · 97.7
GPT-5 Nano · 95.2
OTIS Mock AIME 2024-2025
GPT-5 Nano leads by +3.3
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Sonnet 4.5 · 77.8
GPT-5 Nano · 81.1
SimpleQA Verified
Claude Sonnet 4.5 leads by +11.4
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Sonnet 4.5 · 23.6
GPT-5 Nano · 12.2
SWE-Bench Verified (Bash Only)
Claude Sonnet 4.5 leads by +35.8
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
Claude Sonnet 4.5 · 70.6
GPT-5 Nano · 34.8
Terminal Bench
Claude Sonnet 4.5 leads by +35.0
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Claude Sonnet 4.5 · 46.5
GPT-5 Nano · 11.5
VPCT
Claude Sonnet 4.5 leads by +3.9
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Claude Sonnet 4.5 · 9.7
GPT-5 Nano · 5.8
WeirdML
Claude Sonnet 4.5 leads by +9.6
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Sonnet 4.5 · 47.7
GPT-5 Nano · 38.1
Full benchmark table
Benchmark · Claude Sonnet 4.5 · GPT-5 Nano
ARC-AGI · 63.7 · 20.7
ARC-AGI-2 · 13.6 · 2.6
FrontierMath-2025-02-28-Private · 15.2 · 8.3
FrontierMath-Tier-4-2025-07-01-Private · 4.2 · 2.1
GPQA diamond · 76.4 · 59.3
MATH level 5 · 97.7 · 95.2
OTIS Mock AIME 2024-2025 · 77.8 · 81.1
SimpleQA Verified · 23.6 · 12.2
SWE-Bench Verified (Bash Only) · 70.6 · 34.8
Terminal Bench · 46.5 · 11.5
VPCT · 9.7 · 5.8
WeirdML · 47.7 · 38.1
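
The per-benchmark leads and the 11-of-12 tally in the winner summary follow from simple subtraction over this table; a minimal sketch:

```python
# Per-benchmark deltas and win tally from the full benchmark table.
SCORES = {  # benchmark: (Claude Sonnet 4.5, GPT-5 Nano)
    "ARC-AGI": (63.7, 20.7),
    "ARC-AGI-2": (13.6, 2.6),
    "FrontierMath-2025-02-28-Private": (15.2, 8.3),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 2.1),
    "GPQA diamond": (76.4, 59.3),
    "MATH level 5": (97.7, 95.2),
    "OTIS Mock AIME 2024-2025": (77.8, 81.1),
    "SimpleQA Verified": (23.6, 12.2),
    "SWE-Bench Verified (Bash Only)": (70.6, 34.8),
    "Terminal Bench": (46.5, 11.5),
    "VPCT": (9.7, 5.8),
    "WeirdML": (47.7, 38.1),
}

claude_wins = 0
for name, (claude, nano) in SCORES.items():
    delta = claude - nano
    leader = "Claude Sonnet 4.5" if delta > 0 else "GPT-5 Nano"
    claude_wins += delta > 0
    print(f"{name}: {leader} leads by {abs(delta):+.1f}")

print(f"Claude Sonnet 4.5 wins {claude_wins} of {len(SCORES)} shared benchmarks")
```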
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Claude Sonnet 4.5 · $3.00 · $15.00 · 1.0M tokens (~500 books) · $60.00
GPT-5 Nano · $0.05 · $0.40 · 400K tokens (~200 books) · $1.38
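
A sketch of the projected monthly figures, assuming the 10M tokens split 75% input / 25% output (an assumption: it reproduces the $60.00 and $1.38 shown, but the page doesn't state its weighting):

```python
# Projected monthly cost at 10M tokens, assuming a 75/25 input/output split.
# The split is an assumption chosen to reproduce the figures in the table.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10.0,
                 input_share: float = 0.75) -> float:
    """Projected monthly cost; prices are $ per 1M tokens."""
    in_tokens = total_m_tokens * input_share
    out_tokens = total_m_tokens * (1 - input_share)
    return in_tokens * input_per_m + out_tokens * output_per_m

print(f"Claude Sonnet 4.5: ${monthly_cost(3.00, 15.00):.2f}/mo")  # $60.00
print(f"GPT-5 Nano:        ${monthly_cost(0.05, 0.40):.2f}/mo")   # $1.38
```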