
Claude Opus 4 vs Claude Opus 4.6

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Claude Opus 4.6 wins 12 of 12 shared benchmarks. Leads in reasoning · coding · math.

Category leads
reasoning · Claude Opus 4.6
coding · Claude Opus 4.6
math · Claude Opus 4.6
knowledge · Claude Opus 4.6
Hype vs Reality
Claude Opus 4
#131 by perf · no signal
QUIET
Claude Opus 4.6
#54 by perf · #4 by attention
DESERVED
Best value
Claude Opus 4.6 · 4.1x better value than Claude Opus 4
Claude Opus 4
0.9 pts/$
$45.00/M
Claude Opus 4.6
3.8 pts/$
$15.00/M
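
The value figures imply a simple division. Below is a minimal sketch using only the numbers on the card; treating the $/M rate as a blended per-1M-token price is an assumption (it happens to equal the average of the input and output prices in the pricing table further down), and the aggregate score behind pts/$ is not shown on this page, so it is not recomputed here.

```python
# Sketch of the "Best value" arithmetic from the card figures above.
# Assumption: $/M is a blended per-1M-token rate and pts/$ is an aggregate
# performance score divided by that rate; only the displayed values are used.

cards = {
    "Claude Opus 4":   {"pts_per_dollar": 0.9, "blended_usd_per_m": 45.00},
    "Claude Opus 4.6": {"pts_per_dollar": 3.8, "blended_usd_per_m": 15.00},
}

ratio = cards["Claude Opus 4.6"]["pts_per_dollar"] / cards["Claude Opus 4"]["pts_per_dollar"]
print(f"Claude Opus 4.6 value ratio: {ratio:.1f}x")
# Prints ~4.2x from the rounded card values; the page's 4.1x presumably comes
# from unrounded pts/$ figures.
```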
Vendor risk
Anthropic · $380.0B · Tier 1 · Medium risk (same vendor for both models)
Head to head
Claude Opus 4 · Claude Opus 4.6
ARC-AGI
Claude Opus 4.6 leads by +58.3
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Opus 4
35.7
Claude Opus 4.6
94.0
ARC-AGI-2
Claude Opus 4.6 leads by +60.6
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Opus 4
8.6
Claude Opus 4.6
69.2
Cybench
Claude Opus 4.6 leads by +55.0
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Claude Opus 4
38.0
Claude Opus 4.6
93.0
FrontierMath-2025-02-28-Private
Claude Opus 4.6 leads by +36.2
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4
4.5
Claude Opus 4.6
40.7
FrontierMath-Tier-4-2025-07-01-Private
Claude Opus 4.6 leads by +18.7
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Opus 4
4.2
Claude Opus 4.6
22.9
GPQA diamond
Claude Opus 4.6 leads by +19.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4
68.3
Claude Opus 4.6
87.4
GSO-Bench
Claude Opus 4.6 leads by +26.4
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Claude Opus 4
6.9
Claude Opus 4.6
33.3
HLE
Claude Opus 4.6 leads by +24.9
HLE (Humanity's Last Exam) · crowdsourced expert-level questions designed to be among the hardest possible challenges for AI systems across all domains.
Claude Opus 4
6.2
Claude Opus 4.6
31.1
OTIS Mock AIME 2024-2025
Claude Opus 4.6 leads by +30.0
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4
64.4
Claude Opus 4.6
94.4
SimpleBench
Claude Opus 4.6 leads by +10.6
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Opus 4
50.6
Claude Opus 4.6
61.1
SWE-Bench verified
Claude Opus 4.6 leads by +8.1
SWE-Bench Verified · a human-validated subset of SWE-Bench in which models resolve real GitHub issues by generating patches that must pass the repository's test suite.
Claude Opus 4
70.7
Claude Opus 4.6
78.7
WeirdML
Claude Opus 4.6 leads by +34.5
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4
43.4
Claude Opus 4.6
77.9
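
As a cross-check, here is a minimal sketch that recomputes each lead and the win tally from the scores listed above. A few deltas come out 0.1 away from the card values (GPQA diamond, SimpleBench, SWE-Bench verified), which suggests the page rounds its leads from more precise underlying scores.

```python
# Recompute per-benchmark leads and the head-to-head win count from the
# scores shown above. Benchmark names abbreviated where the page uses
# longer identifiers.

scores = {  # benchmark: (Claude Opus 4, Claude Opus 4.6)
    "ARC-AGI": (35.7, 94.0),
    "ARC-AGI-2": (8.6, 69.2),
    "Cybench": (38.0, 93.0),
    "FrontierMath (Feb 2025)": (4.5, 40.7),
    "FrontierMath Tier 4 (Jul 2025)": (4.2, 22.9),
    "GPQA diamond": (68.3, 87.4),
    "GSO-Bench": (6.9, 33.3),
    "HLE": (6.2, 31.1),
    "OTIS Mock AIME 2024-2025": (64.4, 94.4),
    "SimpleBench": (50.6, 61.1),
    "SWE-Bench verified": (70.7, 78.7),
    "WeirdML": (43.4, 77.9),
}

wins = 0
for name, (opus4, opus46) in scores.items():
    lead = opus46 - opus4
    wins += lead > 0
    print(f"{name}: Claude Opus 4.6 leads by {lead:+.1f}")

print(f"Claude Opus 4.6 wins {wins} of {len(scores)} shared benchmarks")
```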
Full benchmark table
Benchmark · Claude Opus 4 · Claude Opus 4.6
ARC-AGI · 35.7 · 94.0
ARC-AGI-2 · 8.6 · 69.2
Cybench · 38.0 · 93.0
FrontierMath-2025-02-28-Private · 4.5 · 40.7
FrontierMath-Tier-4-2025-07-01-Private · 4.2 · 22.9
GPQA diamond · 68.3 · 87.4
GSO-Bench · 6.9 · 33.3
HLE · 6.2 · 31.1
OTIS Mock AIME 2024-2025 · 64.4 · 94.4
SimpleBench · 50.6 · 61.1
SWE-Bench verified · 70.7 · 78.7
WeirdML · 43.4 · 77.9
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Claude Opus 4 · $15.00 · $75.00 · 200K tokens (~100 books) · $300.00
Claude Opus 4.6 · $5.00 · $25.00 · 1.0M tokens (~500 books) · $100.00
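
The projected monthly cost column follows directly from the per-token prices once a traffic mix is assumed. The page does not state the input/output split; the sketch below assumes a 3:1 input-to-output mix, since that reproduces the $300.00 and $100.00 figures exactly.

```python
# Sketch of the "Projected $/mo at 10M tokens" column.
# Assumption: a 3:1 input-to-output token split (not stated on the page,
# but consistent with the projected figures shown).

PRICING = {  # $ per 1M tokens: (input, output)
    "Claude Opus 4":   (15.00, 75.00),
    "Claude Opus 4.6": (5.00, 25.00),
}

def projected_monthly_cost(model: str, total_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Estimate monthly spend for a given token volume (in millions)."""
    input_price, output_price = PRICING[model]
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1 - input_share)
    return input_m * input_price + output_m * output_price

for model in PRICING:
    print(f"{model}: ${projected_monthly_cost(model):.2f}/mo")
# Claude Opus 4: $300.00/mo
# Claude Opus 4.6: $100.00/mo
```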