Claude Opus 4.6 (Fast) vs Claude Opus 4.6 vs Claude Sonnet 4.6
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Opus 4.6 wins 11 of 18 shared benchmarks, leading in reasoning, knowledge, and math.
Category leads
- arena · Claude Opus 4.6 (Fast)
- speed · Claude Opus 4.6 (Fast)
- reasoning · Claude Opus 4.6
- knowledge · Claude Opus 4.6
- math · Claude Opus 4.6
- agentic · Claude Opus 4.6 (Fast)
- coding · Claude Opus 4.6
Hype vs Reality
Attention vs performance
- Claude Opus 4.6 (Fast) · #122 by performance · no attention signal
- Claude Opus 4.6 · #56 by performance · #4 by attention
- Claude Sonnet 4.6 · #104 by performance · #18 by attention
Best value
Claude Sonnet 4.6 · 1.4x better value than Claude Opus 4.6
- Claude Opus 4.6 (Fast) · 0.5 pts/$ · $90.00/M
- Claude Opus 4.6 · 3.8 pts/$ · $15.00/M
- Claude Sonnet 4.6 · 5.3 pts/$ · $9.00/M
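The blended $/M figures above match a simple average of each model's input and output price per million tokens (see the pricing table at the bottom of this page), and the "1.4x better value" headline is the ratio of the two published pts/$ scores. A minimal sketch under those assumptions; the exact performance score behind pts/$ is not shown here, so the published values are used as-is:

```python
# Sketch of how the "Best value" figures relate to each other.
# Assumption: blended $/M = average of input and output price per 1M tokens
# (this reproduces the $90.00 / $15.00 / $9.00 shown above).

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Average of input and output price per 1M tokens."""
    return (input_per_m + output_per_m) / 2

models = {
    #                          input $/M, output $/M, published pts/$
    "Claude Opus 4.6 (Fast)": (30.00, 150.00, 0.5),
    "Claude Opus 4.6":        ( 5.00,  25.00, 3.8),
    "Claude Sonnet 4.6":      ( 3.00,  15.00, 5.3),
}

for name, (inp, out, pts_per_dollar) in models.items():
    print(f"{name}: ${blended_price(inp, out):.2f}/M blended, {pts_per_dollar} pts/$")

# Ratio behind the "1.4x better value" headline (Sonnet vs Opus):
print(round(models["Claude Sonnet 4.6"][2] / models["Claude Opus 4.6"][2], 1))  # -> 1.4
```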
Vendor risk
Who is behind the model
All three models come from the same vendor · Anthropic · $380.0B · Tier 1
Head to head
18 benchmarks · 3 models
Claude Opus 4.6 (Fast) · Claude Opus 4.6 · Claude Sonnet 4.6
Chatbot Arena Elo · Coding
Claude Opus 4.6 (Fast) leads by +3.3
Claude Opus 4.6 (Fast)
1546.2
Claude Opus 4.6
1542.9
Claude Sonnet 4.6
1521.0
Chatbot Arena Elo · Overall
Claude Opus 4.6 (Fast) leads by +6.2
Claude Opus 4.6 (Fast)
1502.8
Claude Opus 4.6
1496.6
Claude Sonnet 4.6
1462.2
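An Elo gap maps to an expected head-to-head win rate via the standard logistic Elo formula (a sketch, assuming the usual 400-point scale used by Chatbot Arena):

```python
# Expected win probability implied by an Elo rating gap (standard 400-point scale).
def elo_win_prob(delta: float) -> float:
    return 1.0 / (1.0 + 10 ** (-delta / 400))

print(f"{elo_win_prob(6.2):.3f}")   # Opus 4.6 (Fast) vs Opus 4.6   -> ~0.509
print(f"{elo_win_prob(40.6):.3f}")  # Opus 4.6 (Fast) vs Sonnet 4.6 -> ~0.558
```

A +6.2 lead is roughly a 51% expected win rate, so the Fast and standard Opus variants are effectively tied on overall Arena preference, while the ~40-point gap to Sonnet is a more meaningful difference.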
Artificial Analysis · Agentic Index
Claude Opus 4.6 (Fast) leads by +4.6
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Claude Opus 4.6 (Fast)
67.6
Claude Sonnet 4.6
63.0
Artificial Analysis · Coding Index
Claude Sonnet 4.6 leads by +2.8
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Claude Opus 4.6 (Fast)
48.1
Claude Sonnet 4.6
50.9
Artificial Analysis · Quality Index
Claude Opus 4.6 (Fast) leads by +1.2
Claude Opus 4.6 (Fast)
53.0
Claude Sonnet 4.6
51.7
ARC-AGI
Claude Opus 4.6 leads by +7.5
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Opus 4.6
94.0
Claude Sonnet 4.6
86.5
ARC-AGI-2
Claude Opus 4.6 leads by +8.8
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Opus 4.6
69.2
Claude Sonnet 4.6
60.4
Chess Puzzles
Claude Opus 4.6 leads by +4.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Claude Opus 4.6
17.0
Claude Sonnet 4.6
13.0
FrontierMath-2025-02-28-Private
Claude Opus 4.6 leads by +8.3
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4.6
40.7
Claude Sonnet 4.6
32.4
FrontierMath-Tier-4-2025-07-01-Private
Claude Opus 4.6 leads by +14.6
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Opus 4.6
22.9
Claude Sonnet 4.6
8.3
GPQA diamond
Claude Opus 4.6 leads by +4.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4.6
87.4
Claude Sonnet 4.6
83.2
OTIS Mock AIME 2024-2025
Claude Opus 4.6 leads by +8.7
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4.6
94.4
Claude Sonnet 4.6
85.8
PostTrainBench
Claude Opus 4.6 leads by +6.7
Claude Opus 4.6
23.2
Claude Sonnet 4.6
16.4
SWE Atlas · Codebase QnA
Claude Opus 4.6 (Fast) leads by +2.1
Claude Opus 4.6 (Fast)
33.3
Claude Sonnet 4.6
31.2
SWE Atlas · Test Writing
Claude Opus 4.6 (Fast) leads by +4.9
Claude Opus 4.6 (Fast)
36.7
Claude Sonnet 4.6
31.8
SimpleQA Verified
Claude Opus 4.6 leads by +17.5
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Opus 4.6
46.5
Claude Sonnet 4.6
29.0
SWE-Bench verified
Claude Opus 4.6 leads by +3.5
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench and remains the most-cited evaluation for code-generation capability.
Claude Opus 4.6
78.7
Claude Sonnet 4.6
75.2
WeirdML
Claude Opus 4.6 leads by +11.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4.6
77.9
Claude Sonnet 4.6
66.1
Full benchmark table
| Benchmark | Claude Opus 4.6 (Fast) | Claude Opus 4.6 | Claude Sonnet 4.6 |
|---|---|---|---|
| Chatbot Arena Elo · Coding | 1546.2 | 1542.9 | 1521.0 |
| Chatbot Arena Elo · Overall | 1502.8 | 1496.6 | 1462.2 |
| Artificial Analysis · Agentic Index | 67.6 | — | 63.0 |
| Artificial Analysis · Coding Index | 48.1 | — | 50.9 |
| Artificial Analysis · Quality Index | 53.0 | — | 51.7 |
| ARC-AGI | — | 94.0 | 86.5 |
| ARC-AGI-2 | — | 69.2 | 60.4 |
| Chess Puzzles | — | 17.0 | 13.0 |
| FrontierMath-2025-02-28-Private | — | 40.7 | 32.4 |
| FrontierMath-Tier-4-2025-07-01-Private | — | 22.9 | 8.3 |
| GPQA diamond | — | 87.4 | 83.2 |
| OTIS Mock AIME 2024-2025 | — | 94.4 | 85.8 |
| PostTrainBench | — | 23.2 | 16.4 |
| SWE Atlas · Codebase QnA | 33.3 | — | 31.2 |
| SWE Atlas · Test Writing | 36.7 | — | 31.8 |
| SimpleQA Verified | — | 46.5 | 29.0 |
| SWE-Bench verified | — | 78.7 | 75.2 |
| WeirdML | — | 77.9 | 66.1 |
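The "wins 11 of 18 shared benchmarks" summary at the top can be reproduced directly from this table. A minimal sketch, with scores copied from the table above; the winner of each row is taken to be the highest score among the models that were evaluated on it, since every benchmark here is higher-is-better:

```python
# Tally per-model wins across the 18 benchmarks in the table above.
# None marks a benchmark the model was not evaluated on ("—" in the table).

BENCHMARKS = {
    #                               (Opus 4.6 (Fast), Opus 4.6, Sonnet 4.6)
    "Chatbot Arena Elo · Coding":   (1546.2, 1542.9, 1521.0),
    "Chatbot Arena Elo · Overall":  (1502.8, 1496.6, 1462.2),
    "AA · Agentic Index":           (67.6,   None,   63.0),
    "AA · Coding Index":            (48.1,   None,   50.9),
    "AA · Quality Index":           (53.0,   None,   51.7),
    "ARC-AGI":                      (None,   94.0,   86.5),
    "ARC-AGI-2":                    (None,   69.2,   60.4),
    "Chess Puzzles":                (None,   17.0,   13.0),
    "FrontierMath (Feb 2025)":      (None,   40.7,   32.4),
    "FrontierMath Tier 4":          (None,   22.9,    8.3),
    "GPQA diamond":                 (None,   87.4,   83.2),
    "OTIS Mock AIME 2024-2025":     (None,   94.4,   85.8),
    "PostTrainBench":               (None,   23.2,   16.4),
    "SWE Atlas · Codebase QnA":     (33.3,   None,   31.2),
    "SWE Atlas · Test Writing":     (36.7,   None,   31.8),
    "SimpleQA Verified":            (None,   46.5,   29.0),
    "SWE-Bench verified":           (None,   78.7,   75.2),
    "WeirdML":                      (None,   77.9,   66.1),
}

MODELS = ["Claude Opus 4.6 (Fast)", "Claude Opus 4.6", "Claude Sonnet 4.6"]

wins = {m: 0 for m in MODELS}
for scores in BENCHMARKS.values():
    # Winner = highest score among the models that ran this benchmark.
    _, winner = max((s, m) for m, s in zip(MODELS, scores) if s is not None)
    wins[winner] += 1

for m in MODELS:
    print(f"{m}: {wins[m]} of {len(BENCHMARKS)}")
# Claude Opus 4.6 (Fast): 6 of 18
# Claude Opus 4.6: 11 of 18
# Claude Sonnet 4.6: 1 of 18
```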
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Opus 4.6 (Fast) | $30.00 | $150.00 | 1.0M tokens (~500 books) | $600.00 |
| Claude Opus 4.6 | $5.00 | $25.00 | 1.0M tokens (~500 books) | $100.00 |
| Claude Sonnet 4.6 | $3.00 | $15.00 | 1.0M tokens (~500 books) | $60.00 |
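The projected monthly figures are consistent with 10M tokens per month split 3:1 between input and output; the split is an inference from the $600 / $100 / $60 projections, not a stated methodology. A sketch under that assumption:

```python
# Sketch: projected monthly cost at 10M tokens/month.
# Assumption: tokens split 3:1 input:output, inferred from the projections above.

def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_m_tokens: float = 10.0,
                           input_share: float = 0.75) -> float:
    input_cost = total_m_tokens * input_share * input_per_m
    output_cost = total_m_tokens * (1 - input_share) * output_per_m
    return input_cost + output_cost

print(projected_monthly_cost(30.00, 150.00))  # Claude Opus 4.6 (Fast) -> 600.0
print(projected_monthly_cost(5.00, 25.00))    # Claude Opus 4.6        -> 100.0
print(projected_monthly_cost(3.00, 15.00))    # Claude Sonnet 4.6      -> 60.0
```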