Claude Opus 4.5 vs Claude 3.5 Sonnet vs gpt-oss-120b
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Opus 4.5 wins on 14/23 benchmarks
Claude Opus 4.5 wins 14 of the 23 shared benchmarks and leads six of the eight category groups: arena, knowledge, safety, reasoning, coding, and agentic. A sketch of how that tally is derived follows the category leads.
Category leads
arena · Claude Opus 4.5
knowledge · Claude Opus 4.5
math · gpt-oss-120b
safety · Claude Opus 4.5
reasoning · Claude Opus 4.5
coding · Claude Opus 4.5
agentic · Claude Opus 4.5
language · Claude 3.5 Sonnet
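The win count is a per-benchmark comparison of shared scores. A minimal sketch of how the 14/23 tally can be reproduced from the full benchmark table at the bottom of this page, assuming higher scores are better on every benchmark (the `scores` dict is illustrative and only partially filled in):

```python
# Minimal sketch: tally head-to-head wins from per-benchmark scores.
# Assumes higher is better everywhere; None marks a missing score.
scores = {
    "Chatbot Arena Elo · Overall": {"Claude Opus 4.5": 1467.7, "Claude 3.5 Sonnet": 1371.4, "gpt-oss-120b": 1353.8},
    "GPQA diamond":                {"Claude Opus 4.5": 81.4,   "Claude 3.5 Sonnet": 38.7,   "gpt-oss-120b": 67.7},
    "OTIS Mock AIME 2024-2025":    {"Claude Opus 4.5": 86.1,   "Claude 3.5 Sonnet": 6.4,    "gpt-oss-120b": 88.9},
    "Chess Puzzles":               {"Claude Opus 4.5": 12.0,   "Claude 3.5 Sonnet": None,   "gpt-oss-120b": 20.0},
    # ... remaining 19 benchmarks omitted for brevity
}

wins: dict[str, int] = {}
for bench, row in scores.items():
    scored = {model: s for model, s in row.items() if s is not None}
    if len(scored) < 2:          # nothing to compare on this benchmark
        continue
    leader = max(scored, key=scored.get)
    wins[leader] = wins.get(leader, 0) + 1

print(wins)  # with all 23 rows filled in, Claude Opus 4.5 leads 14 of them
```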
Hype vs Reality
Attention vs performance
Claude Opus 4.5 · #113 by performance · no signal
Claude 3.5 Sonnet · #129 by performance · no signal
gpt-oss-120b · #108 by performance · no signal
Best value
gpt-oss-120b · 141.5x better value than Claude Opus 4.5
Claude Opus 4.5 · 3.0 pts/$ · $15.00/M
Claude 3.5 Sonnet · — · no price listed
gpt-oss-120b · 428.3 pts/$ · $0.11/M
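The $/M figures above appear to be blended prices (the midpoint of each model's input and output rates from the pricing table below), and pts/$ divides an aggregate performance score by that blended price. A minimal sketch of the arithmetic under that assumption; the aggregate score behind "pts" is not published on this page, so only the ratio of the displayed pts/$ values is reproduced:

```python
# Hedged sketch: blended $/M assumed to be the midpoint of input and
# output price per 1M tokens; pts/$ = aggregate score / blended price.
def blended_price(input_per_m: float, output_per_m: float) -> float:
    return (input_per_m + output_per_m) / 2

print(blended_price(5.00, 25.00))   # Claude Opus 4.5 -> 15.00, matches the card
print(blended_price(0.04, 0.18))    # gpt-oss-120b    -> 0.11,  matches the card

# Value multiple from the displayed (rounded) pts/$ figures:
print(round(428.3 / 3.0, 1))        # -> 142.8; the card's 141.5x presumably
                                    #    uses unrounded pts/$ values
```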
Vendor risk
Who is behind the model
Claude Opus 4.5 · Anthropic · $380.0B · Tier 1
Claude 3.5 Sonnet · Anthropic · $380.0B · Tier 1
gpt-oss-120b · OpenAI · $840.0B · Tier 1
Head to head
23 benchmarks · 3 models
Claude Opus 4.5 · Claude 3.5 Sonnet · gpt-oss-120b
Chatbot Arena Elo · Overall
Claude Opus 4.5 leads by +96.4
Claude Opus 4.5
1467.7
Claude 3.5 Sonnet
1371.4
gpt-oss-120b
1353.8
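An Elo gap translates into an expected head-to-head win rate via the standard logistic Elo formula; this conversion is general Elo math, not something the arena publishes for these specific ratings:

```python
# Expected win probability implied by an Elo rating difference
# (standard logistic Elo formula).
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

print(round(elo_win_prob(1467.7, 1371.4), 3))  # vs Claude 3.5 Sonnet -> ~0.635
print(round(elo_win_prob(1467.7, 1353.8), 3))  # vs gpt-oss-120b      -> ~0.658
```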
GPQA diamond
Claude Opus 4.5 leads by +13.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4.5
81.4
Claude 3.5 Sonnet
38.7
gpt-oss-120b
67.7
OTIS Mock AIME 2024-2025
gpt-oss-120b leads by +2.8
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4.5
86.1
Claude 3.5 Sonnet
6.4
gpt-oss-120b
88.9
Fortress
Claude Opus 4.5 leads by +0.6
Claude Opus 4.5
13.6
Claude 3.5 Sonnet
13.0
gpt-oss-120b
8.2
SimpleBench
Claude Opus 4.5 leads by +41.4
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Opus 4.5
54.4
Claude 3.5 Sonnet
13.0
gpt-oss-120b
6.5
WeirdML
Claude Opus 4.5 leads by +15.5
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4.5
63.7
Claude 3.5 Sonnet
31.0
gpt-oss-120b
48.2
Aider polyglot
Claude 3.5 Sonnet leads by +9.8
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Claude 3.5 Sonnet
51.6
gpt-oss-120b
41.8
APEX-Agents
Claude Opus 4.5 leads by +13.7
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Claude Opus 4.5
18.4
gpt-oss-120b
4.7
Chess Puzzles
gpt-oss-120b leads by +8.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Claude Opus 4.5
12.0
gpt-oss-120b
20.0
Cybench
Claude Opus 4.5 leads by +64.5
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Claude Opus 4.5
82.0
Claude 3.5 Sonnet
17.5
FrontierMath-2025-02-28-Private
Claude Opus 4.5 leads by +19.7
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4.5
20.7
Claude 3.5 Sonnet
1.0
FrontierMath-Tier-4-2025-07-01-Private
Claude Opus 4.5 leads by +4.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Opus 4.5
4.2
Claude 3.5 Sonnet
0.1
GeoBench
Claude Opus 4.5 leads by +13.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Claude Opus 4.5
75.0
Claude 3.5 Sonnet
62.0
GSO-Bench
Claude Opus 4.5 leads by +21.9
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Claude Opus 4.5
26.5
Claude 3.5 Sonnet
4.6
HELM · GPQA
gpt-oss-120b leads by +11.9
Claude 3.5 Sonnet
56.5
gpt-oss-120b
68.4
HELM · IFEval
Claude 3.5 Sonnet leads by +2.0
Claude 3.5 Sonnet
85.6
gpt-oss-120b
83.6
HELM · MMLU-Pro
gpt-oss-120b leads by +1.8
Claude 3.5 Sonnet
77.7
gpt-oss-120b
79.5
HELM · Omni-MATH
gpt-oss-120b leads by +41.2
Claude 3.5 Sonnet
27.6
gpt-oss-120b
68.8
HELM · WildBench
gpt-oss-120b leads by +5.3
Claude 3.5 Sonnet
79.2
gpt-oss-120b
84.5
Lech Mazur Writing
Claude 3.5 Sonnet leads by +3.0
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Claude 3.5 Sonnet
80.3
gpt-oss-120b
77.3
SimpleQA Verified
Claude Opus 4.5 leads by +27.9
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Opus 4.5
41.8
gpt-oss-120b
13.9
SWE-Bench Verified (Bash Only)
Claude Opus 4.5 leads by +48.4
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
Claude Opus 4.5
74.4
gpt-oss-120b
26.0
Terminal Bench
Claude Opus 4.5 leads by +44.4
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency.
Claude Opus 4.5
63.1
gpt-oss-120b
18.7
Full benchmark table
| Benchmark | Claude Opus 4.5 | Claude 3.5 Sonnet | gpt-oss-120b |
|---|---|---|---|
| Chatbot Arena Elo · Overall | 1467.7 | 1371.4 | 1353.8 |
| GPQA diamond | 81.4 | 38.7 | 67.7 |
| OTIS Mock AIME 2024-2025 | 86.1 | 6.4 | 88.9 |
| Fortress | 13.6 | 13.0 | 8.2 |
| SimpleBench | 54.4 | 13.0 | 6.5 |
| WeirdML | 63.7 | 31.0 | 48.2 |
| Aider polyglot | — | 51.6 | 41.8 |
| APEX-Agents | 18.4 | — | 4.7 |
| Chess Puzzles | 12.0 | — | 20.0 |
| Cybench | 82.0 | 17.5 | — |
| FrontierMath-2025-02-28-Private | 20.7 | 1.0 | — |
| FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 0.1 | — |
| GeoBench | 75.0 | 62.0 | — |
| GSO-Bench | 26.5 | 4.6 | — |
| HELM · GPQA | — | 56.5 | 68.4 |
| HELM · IFEval | — | 85.6 | 83.6 |
| HELM · MMLU-Pro | — | 77.7 | 79.5 |
| HELM · Omni-MATH | — | 27.6 | 68.8 |
| HELM · WildBench | — | 79.2 | 84.5 |
| Lech Mazur Writing | — | 80.3 | 77.3 |
| SimpleQA Verified | 41.8 | — | 13.9 |
| SWE-Bench Verified (Bash Only) | 74.4 | — | 26.0 |
| Terminal Bench | 63.1 | — | 18.7 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Opus 4.5 | $5.00 | $25.00 | 200K tokens (~100 books) | $100.00 |
| Claude 3.5 Sonnet | — | — | — | — |
| gpt-oss-120b | $0.04 | $0.18 | 131K tokens (~66 books) | $0.74 |
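The projected $/mo column is consistent with a roughly 3:1 input-to-output split of the 10M monthly tokens (7.5M input, 2.5M output reproduces the $100.00 figure for Claude Opus 4.5 and approximately the $0.74 for gpt-oss-120b); the split itself is an assumption, not stated on the page. A sketch:

```python
# Hedged sketch of the "projected $/mo" arithmetic, assuming a 3:1
# input:output token split over 10M monthly tokens (assumed split).
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    return (total_m * input_share * input_per_m
            + total_m * (1 - input_share) * output_per_m)

print(monthly_cost(5.00, 25.00))  # Claude Opus 4.5 -> 100.0, matches the table
print(monthly_cost(0.04, 0.18))   # gpt-oss-120b    -> 0.75, table shows 0.74
```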