
Claude Opus 4.6 vs Claude Sonnet 4.6

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Claude Opus 4.6 wins 13 of 13 shared benchmarks. Leads in all five categories: reasoning · arena · knowledge · math · coding.

Category leads
reasoning · Claude Opus 4.6
arena · Claude Opus 4.6
knowledge · Claude Opus 4.6
math · Claude Opus 4.6
coding · Claude Opus 4.6
Hype vs Reality
Claude Opus 4.6 · #54 by perf · #4 by attention · DESERVED
Claude Sonnet 4.6 · #102 by perf · #18 by attention · UNDERRATED
Best value
Claude Sonnet 4.6 · 1.4x better value than Claude Opus 4.6
Claude Opus 4.6 · 3.8 pts/$ · $15.00/M
Claude Sonnet 4.6 · 5.3 pts/$ · $9.00/M
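The 1.4x figure follows directly from the displayed points-per-dollar values. A minimal sketch (variable names are illustrative; "pts" is the site's own aggregate benchmark score, which is not recomputed here):

```python
# Value ratio from the displayed pts/$ figures, not from raw benchmark scores.
opus_pts_per_dollar = 3.8    # Claude Opus 4.6 at $15.00/M
sonnet_pts_per_dollar = 5.3  # Claude Sonnet 4.6 at $9.00/M

ratio = sonnet_pts_per_dollar / opus_pts_per_dollar
print(round(ratio, 1))  # 1.4
```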
Vendor risk
Anthropic (vendor of both models) · $380.0B · Tier 1 · Medium risk
Head to head

ARC-AGI · Claude Opus 4.6 leads by +7.5 (94.0 vs 86.5)
The original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.

ARC-AGI-2 · Claude Opus 4.6 leads by +8.8 (69.2 vs 60.4)
The second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.

Chatbot Arena Elo · Coding · Claude Opus 4.6 leads by +21.9 (1542.9 vs 1521.0)

Chatbot Arena Elo · Overall · Claude Opus 4.6 leads by +34.4 (1496.6 vs 1462.2)

Chess Puzzles · Claude Opus 4.6 leads by +4.0 (17.0 vs 13.0)
Tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.

FrontierMath-2025-02-28-Private · Claude Opus 4.6 leads by +8.3 (40.7 vs 32.4)
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.

FrontierMath-Tier-4-2025-07-01-Private · Claude Opus 4.6 leads by +14.6 (22.9 vs 8.3)
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.

GPQA diamond · Claude Opus 4.6 leads by +4.2 (87.4 vs 83.2)
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.

OTIS Mock AIME 2024-2025 · Claude Opus 4.6 leads by +8.7 (94.4 vs 85.8)
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.

PostTrainBench · Claude Opus 4.6 leads by +6.7 (23.2 vs 16.4)

SimpleQA Verified · Claude Opus 4.6 leads by +17.5 (46.5 vs 29.0)
Short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.

SWE-Bench verified · Claude Opus 4.6 leads by +3.5 (78.7 vs 75.2)

WeirdML · Claude Opus 4.6 leads by +11.8 (77.9 vs 66.1)
Tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
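Each "leads by" delta is simply the difference of the two displayed scores (for OTIS and PostTrainBench the published delta differs by 0.1 from the displayed scores, so the site likely subtracts unrounded values). A sketch assuming nothing beyond the scores shown:

```python
# Recompute head-to-head leads from the displayed benchmark scores
# (a few representative rows; values copied from the table above).
scores = {
    "ARC-AGI": (94.0, 86.5),
    "GPQA diamond": (87.4, 83.2),
    "SWE-Bench verified": (78.7, 75.2),
}

for name, (opus, sonnet) in scores.items():
    print(f"{name}: Claude Opus 4.6 leads by +{opus - sonnet:.1f}")
```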
Full benchmark table

Benchmark                               Claude Opus 4.6   Claude Sonnet 4.6
ARC-AGI                                 94.0              86.5
ARC-AGI-2                               69.2              60.4
Chatbot Arena Elo · Coding              1542.9            1521.0
Chatbot Arena Elo · Overall             1496.6            1462.2
Chess Puzzles                           17.0              13.0
FrontierMath-2025-02-28-Private         40.7              32.4
FrontierMath-Tier-4-2025-07-01-Private  22.9              8.3
GPQA diamond                            87.4              83.2
OTIS Mock AIME 2024-2025                94.4              85.8
PostTrainBench                          23.2              16.4
SimpleQA Verified                       46.5              29.0
SWE-Bench verified                      78.7              75.2
WeirdML                                 77.9              66.1
Pricing · per 1M tokens · projected $/mo at 10M tokens

Model              Input   Output   Context                    Projected $/mo
Claude Opus 4.6    $5.00   $25.00   1.0M tokens (~500 books)   $100.00
Claude Sonnet 4.6  $3.00   $15.00   1.0M tokens (~500 books)   $60.00
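The projected-$/mo column is consistent with a 10M-token month split 75% input / 25% output; that split is inferred from the table's own numbers, not stated by the page. A sketch under that assumption:

```python
# Reproduce the "projected $/mo at 10M tokens" column under an assumed
# 75% input / 25% output token mix (inferred from the figures, not stated).
def projected_monthly_cost(input_per_m, output_per_m,
                           total_m_tokens=10.0, input_frac=0.75):
    input_m = total_m_tokens * input_frac          # millions of input tokens
    output_m = total_m_tokens * (1 - input_frac)   # millions of output tokens
    return input_m * input_per_m + output_m * output_per_m

print(projected_monthly_cost(5.00, 25.00))  # Claude Opus 4.6   -> 100.0
print(projected_monthly_cost(3.00, 15.00))  # Claude Sonnet 4.6 -> 60.0
```

A different input/output mix would shift both projections, but the 75/25 split reproduces both rows exactly.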