Claude Opus 4.1 vs Claude Opus 4
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Opus 4.1 leads on 8 of 11 benchmarks
Claude Opus 4.1 wins 8 of the 11 shared benchmarks, ties one, and trails on two. It leads in coding · knowledge · math · reasoning.
Category leads
coding · Claude Opus 4.1
knowledge · Claude Opus 4.1
math · Claude Opus 4.1
reasoning · Claude Opus 4.1
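The tally above can be reproduced from the head-to-head scores listed further down. A minimal sketch in Python, assuming higher is better on every benchmark (true for all 11 shown here), with scores transcribed from this page:

```python
# Scores transcribed from the head-to-head section below:
# (Claude Opus 4.1, Claude Opus 4). Higher is better on every benchmark.
scores = {
    "Cybench": (42.0, 38.0),
    "DeepResearch Bench": (49.7, 49.0),
    "FrontierMath-2025-02-28-Private": (7.2, 4.5),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 4.2),
    "GPQA diamond": (69.7, 68.3),
    "HLE": (7.1, 6.2),
    "OTIS Mock AIME 2024-2025": (68.9, 64.4),
    "SimpleBench": (52.0, 50.6),
    "SWE-Bench verified": (73.3, 70.7),
    "VPCT": (2.5, 7.0),
    "WeirdML": (42.8, 43.4),
}

wins_41 = sum(a > b for a, b in scores.values())
wins_4 = sum(b > a for a, b in scores.values())
ties = sum(a == b for a, b in scores.values())
print(f"Opus 4.1: {wins_41} wins · Opus 4: {wins_4} wins · ties: {ties}")
# -> Opus 4.1: 8 wins · Opus 4: 2 wins · ties: 1
```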
Hype vs Reality
Attention vs performance
Claude Opus 4.1 · #137 by performance · no attention signal
Claude Opus 4 · #133 by performance · no attention signal
Best value
Effectively a tie · identical pricing yields identical value (1.0x)
Claude Opus 4.1 · 0.9 pts/$ · $45.00/M
Claude Opus 4 · 0.9 pts/$ · $45.00/M
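The page does not document how the pts/$ figure is computed. A minimal sketch under two stated assumptions, that "pts" is the unweighted mean of the benchmark scores below and that the blended $/M is the simple average of input and output price, lands close to the figure shown:

```python
# Benchmark scores for Claude Opus 4.1, transcribed from the table below.
scores = [42.0, 49.7, 7.2, 4.2, 69.7, 7.1, 68.9, 52.0, 73.3, 2.5, 42.8]

input_price = 15.00   # $ per 1M input tokens (from the pricing table)
output_price = 75.00  # $ per 1M output tokens

# Assumption: the blended $45.00/M shown is the unweighted mean of
# input and output price. The page does not state its formula.
blended = (input_price + output_price) / 2        # -> 45.0

mean_score = sum(scores) / len(scores)            # -> ~38.1
print(f"{mean_score / blended:.2f} pts/$ at ${blended:.2f}/M")
# -> 0.85 pts/$ at $45.00/M; the page rounds to 0.9, so its exact
#    score aggregation likely differs (e.g. category weighting).
```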
Vendor risk
Who is behind the model
Both models are from Anthropic ($380.0B · Tier 1), so vendor risk is identical.
Head to head
11 benchmarks · 2 models
Claude Opus 4.1 vs Claude Opus 4
Cybench
Claude Opus 4.1 leads by +4.0
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Claude Opus 4.1
42.0
Claude Opus 4
38.0
DeepResearch Bench
Claude Opus 4.1 leads by +0.7
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
Claude Opus 4.1
49.7
Claude Opus 4
49.0
FrontierMath-2025-02-28-Private
Claude Opus 4.1 leads by +2.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4.1
7.2
Claude Opus 4
4.5
FrontierMath-Tier-4-2025-07-01-Private
Tied at 4.2
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Opus 4.1
4.2
Claude Opus 4
4.2
GPQA diamond
Claude Opus 4.1 leads by +1.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4.1
69.7
Claude Opus 4
68.3
HLE
Claude Opus 4.1 leads by +0.8
HLE (Humanity's Last Exam) · a reasoning benchmark designed to be the hardest public evaluation of AI. Questions span mathematics, physics, philosophy, and logic, curated to be at or beyond the frontier of human expert capability. Tested with and without tool augmentation, it remains one of the few benchmarks where even the top score is below 60%.
Claude Opus 4.1
7.1
Claude Opus 4
6.2
OTIS Mock AIME 2024-2025
Claude Opus 4.1 leads by +4.5
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4.1
68.9
Claude Opus 4
64.4
SimpleBench
Claude Opus 4.1 leads by +1.4
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Opus 4.1
52.0
Claude Opus 4
50.6
SWE-Bench verified
Claude Opus 4.1 leads by +2.7
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. It remains the most-cited evaluation for code-generation capability.
Claude Opus 4.1
73.3
Claude Opus 4
70.7
VPCT
Claude Opus 4 leads by +4.5
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Claude Opus 4.1
2.5
Claude Opus 4
7.0
WeirdML
Claude Opus 4 leads by +0.6
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4.1
42.8
Claude Opus 4
43.4
Full benchmark table
| Benchmark | Claude Opus 4.1 | Claude Opus 4 |
|---|---|---|
| Cybench | 42.0 | 38.0 |
| DeepResearch Bench | 49.7 | 49.0 |
| FrontierMath-2025-02-28-Private | 7.2 | 4.5 |
| FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 4.2 |
| GPQA diamond | 69.7 | 68.3 |
| HLE | 7.1 | 6.2 |
| OTIS Mock AIME 2024-2025 | 68.9 | 64.4 |
| SimpleBench | 52.0 | 50.6 |
| SWE-Bench verified | 73.3 | 70.7 |
| VPCT | 2.5 | 7.0 |
| WeirdML | 42.8 | 43.4 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Opus 4.1 | $15.00 | $75.00 | 200K tokens (~100 books) | $300.00 |
| Claude Opus 4 | $15.00 | $75.00 | 200K tokens (~100 books) | $300.00 |
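The projected $/mo depends on the assumed input/output mix, which the page does not state. A minimal sketch, assuming a 3:1 input:output split of the 10M monthly tokens, which reproduces the $300.00 figure shown:

```python
# Pricing from the table above ($ per 1M tokens).
input_price = 15.00
output_price = 75.00

# Assumption: 10M tokens/month split 3:1 input:output.
# The page does not state its mix; this split reproduces the $300 figure.
input_tokens_m = 7.5   # millions of input tokens
output_tokens_m = 2.5  # millions of output tokens

monthly = input_tokens_m * input_price + output_tokens_m * output_price
print(f"Projected: ${monthly:.2f}/mo")  # -> Projected: $300.00/mo
```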