Claude 3.7 Sonnet vs Claude Opus 4.6
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Opus 4.6 wins 10 of 10 shared benchmarks · leads in reasoning, coding, math, and knowledge.
Category leads
reasoning · Claude Opus 4.6
coding · Claude Opus 4.6
math · Claude Opus 4.6
knowledge · Claude Opus 4.6
Hype vs Reality
Attention vs performance
Claude 3.7 Sonnet
#101 by performance · no attention signal
Claude Opus 4.6
#54 by performance · #4 by attention
Best value
Claude 3.7 Sonnet
1.4x better value than Claude Opus 4.6
Claude 3.7 Sonnet
5.3 pts/$
$9.00/M
Claude Opus 4.6
3.8 pts/$
$15.00/M
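The blended $/M figures match the simple average of each model's input and output prices from the pricing table below, and the 1.4x multiple is the ratio of the two pts/$ scores. A minimal sketch of that arithmetic, taking the page's composite pts/$ scores as given rather than recomputing them:

```python
# Sketch of the "Best value" arithmetic. The pts/$ scores are the
# page's reported composites, taken as given; only the price math
# is reproduced here.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/M as the simple average of input and output price."""
    return (input_per_m + output_per_m) / 2

print(blended_price(3.00, 15.00))   # 9.0  -> Claude 3.7 Sonnet, $9.00/M
print(blended_price(5.00, 25.00))   # 15.0 -> Claude Opus 4.6, $15.00/M

sonnet_pts_per_dollar = 5.3
opus_pts_per_dollar = 3.8
print(f"{sonnet_pts_per_dollar / opus_pts_per_dollar:.1f}x")  # 1.4x
```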
Vendor risk
Who is behind the model
Both models: Anthropic · $380.0B · Tier 1
Head to head
10 benchmarks · 2 models
Claude 3.7 Sonnet · Claude Opus 4.6
ARC-AGI
Claude Opus 4.6 leads by +65.4
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude 3.7 Sonnet
28.6
Claude Opus 4.6
94.0
ARC-AGI-2
Claude Opus 4.6 leads by +68.3
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude 3.7 Sonnet
0.9
Claude Opus 4.6
69.2
Cybench
Claude Opus 4.6 leads by +73.0
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Claude 3.7 Sonnet
20.0
Claude Opus 4.6
93.0
FrontierMath-2025-02-28-Private
Claude Opus 4.6 leads by +36.6
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude 3.7 Sonnet
4.1
Claude Opus 4.6
40.7
GPQA diamond
Claude Opus 4.6 leads by +14.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude 3.7 Sonnet
73.0
Claude Opus 4.6
87.4
GSO-Bench
Claude Opus 4.6 leads by +29.5
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Claude 3.7 Sonnet
3.8
Claude Opus 4.6
33.3
HLE
Claude Opus 4.6 leads by +27.7
HLE (Humanity's Last Exam) · crowdsourced expert-level questions designed to be among the hardest possible challenges for AI systems across all domains.
Claude 3.7 Sonnet
3.4
Claude Opus 4.6
31.1
OTIS Mock AIME 2024-2025
Claude Opus 4.6 leads by +36.7
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude 3.7 Sonnet
57.7
Claude Opus 4.6
94.4
SimpleBench
Claude Opus 4.6 leads by +25.4
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude 3.7 Sonnet
35.7
Claude Opus 4.6
61.1
SWE-Bench Verified
Claude Opus 4.6 leads by +17.7
SWE-Bench Verified · a human-validated subset of SWE-Bench, testing whether models can resolve real GitHub issues from open-source Python repositories.
Claude 3.7 Sonnet
61.0
Claude Opus 4.6
78.7
Full benchmark table
| Benchmark | Claude 3.7 Sonnet | Claude Opus 4.6 |
|---|---|---|
| ARC-AGI | 28.6 | 94.0 |
| ARC-AGI-2 | 0.9 | 69.2 |
| Cybench | 20.0 | 93.0 |
| FrontierMath-2025-02-28-Private | 4.1 | 40.7 |
| GPQA diamond | 73.0 | 87.4 |
| GSO-Bench | 3.8 | 33.3 |
| HLE | 3.4 | 31.1 |
| OTIS Mock AIME 2024–2025 | 57.7 | 94.4 |
| SimpleBench | 35.7 | 61.1 |
| SWE-Bench Verified | 61.0 | 78.7 |
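Every "leads by" margin above is a plain score difference, and the winner summary tallies them. A short sketch reproducing both from the table:

```python
# Recompute each head-to-head margin and the win tally from the
# benchmark table. Score pairs are (Claude 3.7 Sonnet, Claude Opus 4.6).
SCORES = {
    "ARC-AGI": (28.6, 94.0),
    "ARC-AGI-2": (0.9, 69.2),
    "Cybench": (20.0, 93.0),
    "FrontierMath-2025-02-28-Private": (4.1, 40.7),
    "GPQA diamond": (73.0, 87.4),
    "GSO-Bench": (3.8, 33.3),
    "HLE": (3.4, 31.1),
    "OTIS Mock AIME 2024-2025": (57.7, 94.4),
    "SimpleBench": (35.7, 61.1),
    "SWE-Bench Verified": (61.0, 78.7),
}

opus_wins = sum(opus > sonnet for sonnet, opus in SCORES.values())
for name, (sonnet, opus) in SCORES.items():
    print(f"{name}: Claude Opus 4.6 leads by {opus - sonnet:+.1f}")
print(f"Claude Opus 4.6 wins {opus_wins} of {len(SCORES)} shared benchmarks")
```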
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude 3.7 Sonnet | $3.00 | $15.00 | 200K tokens (~100 books) | $60.00 |
| Claude Opus 4.6 | $5.00 | $25.00 | 1.0M tokens (~500 books) | $100.00 |
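The projected $/mo column is consistent with the 10M monthly tokens splitting 75/25 between input and output; that split is inferred from the numbers, not stated on the page. A sketch under that assumption:

```python
# Projected monthly cost at 10M tokens, assuming a 75/25
# input/output split (inferred from the table, not stated).

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    input_m = total_m * input_share
    return input_m * input_per_m + (total_m - input_m) * output_per_m

print(f"Claude 3.7 Sonnet: ${monthly_cost(3.00, 15.00):.2f}/mo")  # $60.00
print(f"Claude Opus 4.6:   ${monthly_cost(5.00, 25.00):.2f}/mo")  # $100.00
```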