Claude Sonnet 4.6 vs Claude Opus 4.5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Sonnet 4.6 wins 8 of 13 shared benchmarks, with category leads in reasoning, arena, knowledge, and math; Claude Opus 4.5 leads in coding.
Category leads
reasoning · Claude Sonnet 4.6
arena · Claude Sonnet 4.6
knowledge · Claude Sonnet 4.6
math · Claude Sonnet 4.6
coding · Claude Opus 4.5
Hype vs Reality
Attention vs performance
Claude Sonnet 4.6 · #102 by performance · #18 by attention
Claude Opus 4.5 · #111 by performance · no attention signal
Best value
Claude Sonnet 4.6 · 1.7x better value than Claude Opus 4.5
Claude Sonnet 4.6 · 5.3 pts/$ · $9.00/M
Claude Opus 4.5 · 3.0 pts/$ · $15.00/M
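A minimal sketch of how the value figures fit together, assuming the $/M column is the simple mean of the input and output list prices and that "pts" is the site's own aggregate score (its formula is not published here):

```python
# Sketch of the value metric, under two assumptions: the $/M figure is the
# mean of input and output list prices, and "pts" is an aggregate benchmark
# score whose exact construction the page does not state.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Mean of input and output price per 1M tokens (assumed blend)."""
    return (input_per_m + output_per_m) / 2

sonnet_blend = blended_price(3.00, 15.00)   # $9.00/M, matches the card above
opus_blend = blended_price(5.00, 25.00)     # $15.00/M

sonnet_value, opus_value = 5.3, 3.0         # pts/$ as reported on the card
print(f"value ratio: {sonnet_value / opus_value:.2f}x")  # ~1.77x, shown as 1.7x
```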
Vendor risk
Who is behind the model
Anthropic (both models) · $380.0B · Tier 1
Head to head
13 benchmarks · 2 models
Claude Sonnet 4.6 · Claude Opus 4.5
ARC-AGI
Claude Sonnet 4.6 leads by +6.5
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Sonnet 4.6
86.5
Claude Opus 4.5
80.0
ARC-AGI-2
Claude Sonnet 4.6 leads by +22.8
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Sonnet 4.6
60.4
Claude Opus 4.5
37.6
Chatbot Arena Elo · Coding
Claude Sonnet 4.6 leads by +55.8
Chatbot Arena Elo · Coding · crowdsourced pairwise human preference votes on coding prompts, aggregated into Elo-style ratings.
Claude Sonnet 4.6
1521.0
Claude Opus 4.5
1465.2
Chatbot Arena Elo · Overall
Claude Opus 4.5 leads by +5.5
Chatbot Arena Elo · Overall · crowdsourced pairwise human preference votes across all prompt types, aggregated into Elo-style ratings.
Claude Sonnet 4.6
1462.2
Claude Opus 4.5
1467.7
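For context, an Elo gap maps to an expected head-to-head preference rate via the standard Elo formula; this conversion is general background, not something the page reports.

```python
# Expected preference rate implied by an Elo rating gap (standard Elo formula).

def elo_win_prob(rating_gap: float) -> float:
    """Probability the higher-rated model is preferred, given its Elo lead."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

print(f"coding  (+55.8 Elo): {elo_win_prob(55.8):.1%}")  # ~58% expected preference
print(f"overall (+5.5 Elo):  {elo_win_prob(5.5):.1%}")   # ~50.8%, essentially a tie
```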
Chess Puzzles
Claude Sonnet 4.6 leads by +1.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Claude Sonnet 4.6
13.0
Claude Opus 4.5
12.0
FrontierMath-2025-02-28-Private
Claude Sonnet 4.6 leads by +11.7
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Sonnet 4.6
32.4
Claude Opus 4.5
20.7
FrontierMath-Tier-4-2025-07-01-Private
Claude Sonnet 4.6 leads by +4.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Sonnet 4.6
8.3
Claude Opus 4.5
4.2
GPQA diamond
Claude Sonnet 4.6 leads by +1.8
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Sonnet 4.6
83.2
Claude Opus 4.5
81.4
OTIS Mock AIME 2024-2025
Claude Opus 4.5 leads by +0.3
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Sonnet 4.6
85.8
Claude Opus 4.5
86.1
PostTrainBench
Claude Opus 4.5 leads by +0.9
Claude Sonnet 4.6
16.4
Claude Opus 4.5
17.3
SimpleQA Verified
Claude Opus 4.5 leads by +12.8
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Sonnet 4.6
29.0
Claude Opus 4.5
41.8
SWE-Bench Verified
Claude Opus 4.5 leads by +1.5
SWE-Bench Verified · a human-validated subset of SWE-Bench measuring whether models can resolve real GitHub issues with patches that pass the repository's test suite.
Claude Sonnet 4.6
75.2
Claude Opus 4.5
76.7
WeirdML
Claude Sonnet 4.6 leads by +2.4
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Sonnet 4.6
66.1
Claude Opus 4.5
63.7
Full benchmark table
| Benchmark | Claude Sonnet 4.6 | Claude Opus 4.5 |
|---|---|---|
| ARC-AGI | 86.5 | 80.0 |
| ARC-AGI-2 | 60.4 | 37.6 |
| Chatbot Arena Elo · Coding | 1521.0 | 1465.2 |
| Chatbot Arena Elo · Overall | 1462.2 | 1467.7 |
| Chess Puzzles | 13.0 | 12.0 |
| FrontierMath-2025-02-28-Private | 32.4 | 20.7 |
| FrontierMath-Tier-4-2025-07-01-Private | 8.3 | 4.2 |
| GPQA diamond | 83.2 | 81.4 |
| OTIS Mock AIME 2024-2025 | 85.8 | 86.1 |
| PostTrainBench | 16.4 | 17.3 |
| SimpleQA Verified | 29.0 | 41.8 |
| SWE-Bench Verified | 75.2 | 76.7 |
| WeirdML | 66.1 | 63.7 |
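A quick sketch that reproduces the 8-of-13 headline from the table above, assuming higher is better on every listed metric (which matches how the per-benchmark leads are assigned):

```python
# Tally head-to-head wins from the shared-benchmark table above.
# Scores are (Claude Sonnet 4.6, Claude Opus 4.5); higher is assumed better.

scores = {
    "ARC-AGI": (86.5, 80.0),
    "ARC-AGI-2": (60.4, 37.6),
    "Chatbot Arena Elo · Coding": (1521.0, 1465.2),
    "Chatbot Arena Elo · Overall": (1462.2, 1467.7),
    "Chess Puzzles": (13.0, 12.0),
    "FrontierMath-2025-02-28-Private": (32.4, 20.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (8.3, 4.2),
    "GPQA diamond": (83.2, 81.4),
    "OTIS Mock AIME 2024-2025": (85.8, 86.1),
    "PostTrainBench": (16.4, 17.3),
    "SimpleQA Verified": (29.0, 41.8),
    "SWE-Bench Verified": (75.2, 76.7),
    "WeirdML": (66.1, 63.7),
}

sonnet_wins = sum(s > o for s, o in scores.values())
print(f"Claude Sonnet 4.6 wins {sonnet_wins} of {len(scores)}")  # 8 of 13
```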
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Sonnet 4.6 | $3.00 | $15.00 | 1.0M tokens (~500 books) | $60.00 |
| Claude Opus 4.5 | $5.00 | $25.00 | 200K tokens (~100 books) | $100.00 |
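The projected monthly figures are consistent with a 3:1 input-to-output token split over 10M tokens per month; that split is inferred from the arithmetic, not stated on the page. A minimal sketch:

```python
# Reproduce the projected monthly cost, assuming the 3:1 input:output split
# implied by the published figures (the page does not state the split).

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10.0, input_share: float = 0.75) -> float:
    """Dollar cost for total_m_tokens million tokens in a month."""
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1 - input_share)
    return input_m * input_per_m + output_m * output_per_m

print(monthly_cost(3.00, 15.00))   # 60.0  -> Claude Sonnet 4.6
print(monthly_cost(5.00, 25.00))   # 100.0 -> Claude Opus 4.5
```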