Claude Opus 4.6 (Fast) vs GPT-5.3-Codex vs Claude Sonnet 4.6
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Opus 4.6 (Fast) wins 5 of 10 shared benchmarks
Leads in the agentic and arena categories.
Category leads
Speed · GPT-5.3-Codex
Agentic · Claude Opus 4.6 (Fast)
Arena · Claude Opus 4.6 (Fast)
Knowledge · GPT-5.3-Codex
Coding · Claude Sonnet 4.6
Hype vs Reality
Attention vs performance
Claude Opus 4.6 (Fast) · #122 by performance · no attention signal
GPT-5.3-Codex · #86 by performance · no attention signal
Claude Sonnet 4.6 · #104 by performance · #18 by attention
Best value
GPT-5.3-Codex · 1.3x better value than Claude Sonnet 4.6
Claude Opus 4.6 (Fast) · 0.5 pts/$ · $90.00/M
GPT-5.3-Codex · 6.6 pts/$ · $7.88/M
Claude Sonnet 4.6 · 5.3 pts/$ · $9.00/M
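The $/M figures above match a 50/50 blend of the input and output prices listed in the pricing table at the bottom of the page, and pts/$ appears to divide a benchmark score by that blended price. A minimal sketch of the arithmetic, assuming that blend; `blended_price` and `points_per_dollar` are illustrative names, and the exact quality score behind pts/$ is not stated on this page:

```python
def blended_price(input_price: float, output_price: float) -> float:
    """50/50 blend of input and output prices per 1M tokens.

    Reproduces the $/M figures above:
    (30.00 + 150.00) / 2 = 90.00, (1.75 + 14.00) / 2 = 7.875 (~7.88),
    (3.00 + 15.00) / 2 = 9.00.
    """
    return (input_price + output_price) / 2


def points_per_dollar(quality_score: float, blended: float) -> float:
    """Value metric: benchmark points per blended dollar per 1M tokens."""
    return quality_score / blended
```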
Vendor risk
Who is behind the model
Claude Opus 4.6 (Fast) · Anthropic · $380.0B · Tier 1
GPT-5.3-Codex · OpenAI · $840.0B · Tier 1
Claude Sonnet 4.6 · Anthropic · $380.0B · Tier 1
Head to head
10 benchmarks · 3 models
Claude Opus 4.6 (Fast) · GPT-5.3-Codex · Claude Sonnet 4.6
Artificial Analysis · Agentic Index
Claude Opus 4.6 (Fast) leads by +4.6
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Claude Opus 4.6 (Fast)
67.6
GPT-5.3-Codex
62.2
Claude Sonnet 4.6
63.0
Artificial Analysis · Coding Index
GPT-5.3-Codex leads by +2.2
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Claude Opus 4.6 (Fast)
48.1
GPT-5.3-Codex
53.1
Claude Sonnet 4.6
50.9
Artificial Analysis · Quality Index
GPT-5.3-Codex leads by +1.0
Claude Opus 4.6 (Fast)
53.0
GPT-5.3-Codex
54.0
Claude Sonnet 4.6
51.7
SWE Atlas · Codebase QnA
Claude Opus 4.6 (Fast) leads by +0.7
Claude Opus 4.6 (Fast)
33.3
GPT-5.3-Codex
32.6
Claude Sonnet 4.6
31.2
Chatbot Arena Elo · Coding
Claude Opus 4.6 (Fast) leads by +25.2
Claude Opus 4.6 (Fast)
1546.2
Claude Sonnet 4.6
1521.0
Chatbot Arena Elo · Overall
Claude Opus 4.6 (Fast) leads by +40.5
Claude Opus 4.6 (Fast)
1502.8
Claude Sonnet 4.6
1462.2
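Elo gaps are easier to read as expected win rates. Under the standard Elo expected-score formula (ties ignored), the +40.5 overall gap corresponds to roughly a 56% head-to-head win rate and the +25.2 coding gap to roughly 54%. A minimal sketch; `elo_win_probability` is an illustrative name:

```python
def elo_win_probability(elo_gap: float) -> float:
    """Expected win rate implied by an Elo rating gap
    (standard Elo expected-score formula, ties ignored)."""
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))


# elo_win_probability(40.5) ≈ 0.558  (Overall gap above)
# elo_win_probability(25.2) ≈ 0.536  (Coding gap above)
```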
PostTrainBench
GPT-5.3-Codex leads by +1.3
GPT-5.3-Codex
17.8
Claude Sonnet 4.6
16.4
SWE Atlas · Test Writing
Claude Opus 4.6 (Fast) leads by +4.9
Claude Opus 4.6 (Fast)
36.7
Claude Sonnet 4.6
31.8
SWE-bench Verified
Claude Sonnet 4.6 leads by +0.4
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
GPT-5.3-Codex
74.8
Claude Sonnet 4.6
75.2
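As background on what a SWE-bench-style score measures, here is a rough pass/fail sketch, not the official harness; the repository path, patch file, and `pytest` test command are illustrative:

```python
import subprocess


def evaluate_patch(repo_dir: str, patch_path: str) -> bool:
    """SWE-bench-style check (sketch only): apply the model's git patch
    to the repository, then run the test suite. The task counts as
    resolved only if the patch applies cleanly and the tests pass."""
    applied = subprocess.run(["git", "apply", patch_path], cwd=repo_dir)
    if applied.returncode != 0:
        return False  # patch does not apply cleanly
    tests = subprocess.run(["pytest", "-q"], cwd=repo_dir)
    return tests.returncode == 0
```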
WeirdML
GPT-5.3-Codex leads by +13.2
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-5.3-Codex
79.3
Claude Sonnet 4.6
66.1
Full benchmark table
| Benchmark | Claude Opus 4.6 (Fast) | GPT-5.3-Codex | Claude Sonnet 4.6 |
|---|---|---|---|
| Artificial Analysis · Agentic Index | 67.6 | 62.2 | 63.0 |
| Artificial Analysis · Coding Index | 48.1 | 53.1 | 50.9 |
| Artificial Analysis · Quality Index | 53.0 | 54.0 | 51.7 |
| SWE Atlas · Codebase QnA | 33.3 | 32.6 | 31.2 |
| Chatbot Arena Elo · Coding | 1546.2 | — | 1521.0 |
| Chatbot Arena Elo · Overall | 1502.8 | — | 1462.2 |
| PostTrainBench | — | 17.8 | 16.4 |
| SWE Atlas · Test Writing | 36.7 | — | 31.8 |
| SWE-bench Verified | — | 74.8 | 75.2 |
| WeirdML | — | 79.3 | 66.1 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Opus 4.6 (Fast) | $30.00 | $150.00 | 1.0M tokens (~500 books) | $600.00 |
| GPT-5.3-Codex | $1.75 | $14.00 | 400K tokens (~200 books) | $48.13 |
| Claude Sonnet 4.6 | $3.00 | $15.00 | 1.0M tokens (~500 books) | $60.00 |
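The projected $/mo column is consistent with 10M tokens per month split 75% input / 25% output; that split is an assumption inferred from the listed figures, and `projected_monthly_cost` is an illustrative name:

```python
def projected_monthly_cost(input_price: float, output_price: float,
                           total_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Monthly spend from per-1M-token prices, assuming a 75/25
    input/output token split (inferred, not stated on the page)."""
    input_tokens_m = total_tokens_m * input_share
    output_tokens_m = total_tokens_m * (1 - input_share)
    return input_tokens_m * input_price + output_tokens_m * output_price


# projected_monthly_cost(30.00, 150.00)  -> 600.00
# projected_monthly_cost(1.75, 14.00)    -> 48.125 (~48.13)
# projected_monthly_cost(3.00, 15.00)    -> 60.00
```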