o3 vs gpt-oss-120b
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
o3 wins 13 of 14 benchmarks
o3 wins 13 of 14 shared benchmarks, leading in coding, knowledge, and language.
Category leads
coding: o3 · knowledge: o3 · language: o3 · math: o3 · reasoning: o3
Hype vs Reality
Attention vs performance
o3 · #67 by performance · no signal
gpt-oss-120b · #106 by performance · no signal
Best value
gpt-oss-120b · 37.1x better value than o3
o3 · 11.0 pts/$ · $5.00/M
gpt-oss-120b · 409.6 pts/$ · $0.11/M
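A minimal sketch of how these figures line up, assuming the $/M shown is a simple average of the input and output prices from the pricing table below and the value multiple is the ratio of the two pts/$ scores (the ~37.2x computed here vs the card's 37.1x is a rounding difference in the underlying numbers):

```python
# Sketch: reproduce the "Best value" figures from the listed prices and pts/$ scores.
# Assumption: the blended $/M shown is a simple average of input and output price,
# and "x better value" is the ratio of the two pts-per-dollar scores.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Average of input and output price per 1M tokens."""
    return (input_per_m + output_per_m) / 2

o3_price = blended_price(2.00, 8.00)    # -> 5.00 $/M, matches the card
oss_price = blended_price(0.04, 0.19)   # -> 0.115 $/M, shown as $0.11/M

o3_value, oss_value = 11.0, 409.6       # pts/$ as listed above
print(f"value multiple: {oss_value / o3_value:.1f}x")  # ~37.2x (card shows 37.1x)
```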
Vendor risk
Who is behind the model
o3 · OpenAI · $840.0B · Tier 1
gpt-oss-120b · OpenAI · $840.0B · Tier 1
Head to head
14 benchmarks · 2 models
Aider polyglot
o3 leads by +39.5
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
o3 81.3 · gpt-oss-120b 41.8
Fiction.LiveBench
o3 leads by +44.5
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
o3 88.9 · gpt-oss-120b 44.4
GPQA diamond
o3 leads by +8.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
o3 75.8 · gpt-oss-120b 67.7
HELM · GPQA
o3 leads by +6.9
o3 75.3 · gpt-oss-120b 68.4
HELM · IFEval
o3 leads by +3.3
o3 86.9 · gpt-oss-120b 83.6
HELM · MMLU-Pro
o3 leads by +6.4
o3 85.9 · gpt-oss-120b 79.5
HELM · Omni-MATH
o3 leads by +2.6
o3 71.4 · gpt-oss-120b 68.8
HELM · WildBench
o3 leads by +1.6
o3 86.1 · gpt-oss-120b 84.5
Lech Mazur Writing
o3 leads by +6.6
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
o3 83.9 · gpt-oss-120b 77.3
OTIS Mock AIME 2024-2025
gpt-oss-120b leads by +5.0
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
o3 83.9 · gpt-oss-120b 88.9
SimpleBench
o3 leads by +37.2
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
o3 43.7 · gpt-oss-120b 6.5
SimpleQA Verified
o3 leads by +39.1
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
o3 53.0 · gpt-oss-120b 13.9
SWE-Bench Verified (Bash Only)
o3 leads by +32.4
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
o3 58.4 · gpt-oss-120b 26.0
WeirdML
o3 leads by +4.3
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
o3 52.4 · gpt-oss-120b 48.2
Full benchmark table
| Benchmark | o3 | gpt-oss-120b |
|---|---|---|
| Aider polyglot | 81.3 | 41.8 |
| Fiction.LiveBench | 88.9 | 44.4 |
| GPQA diamond | 75.8 | 67.7 |
| HELM · GPQA | 75.3 | 68.4 |
| HELM · IFEval | 86.9 | 83.6 |
| HELM · MMLU-Pro | 85.9 | 79.5 |
| HELM · Omni-MATH | 71.4 | 68.8 |
| HELM · WildBench | 86.1 | 84.5 |
| Lech Mazur Writing | 83.9 | 77.3 |
| OTIS Mock AIME 2024-2025 | 83.9 | 88.9 |
| SimpleBench | 43.7 | 6.5 |
| SimpleQA Verified | 53.0 | 13.9 |
| SWE-Bench Verified (Bash Only) | 58.4 | 26.0 |
| WeirdML | 52.4 | 48.2 |
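As a quick sanity check, the 13-of-14 tally from the winner summary can be reproduced directly from this table:

```python
# Sketch: tally head-to-head wins from the full benchmark table above.
scores = {  # benchmark: (o3, gpt-oss-120b)
    "Aider polyglot": (81.3, 41.8),
    "Fiction.LiveBench": (88.9, 44.4),
    "GPQA diamond": (75.8, 67.7),
    "HELM · GPQA": (75.3, 68.4),
    "HELM · IFEval": (86.9, 83.6),
    "HELM · MMLU-Pro": (85.9, 79.5),
    "HELM · Omni-MATH": (71.4, 68.8),
    "HELM · WildBench": (86.1, 84.5),
    "Lech Mazur Writing": (83.9, 77.3),
    "OTIS Mock AIME 2024-2025": (83.9, 88.9),
    "SimpleBench": (43.7, 6.5),
    "SimpleQA Verified": (53.0, 13.9),
    "SWE-Bench Verified (Bash Only)": (58.4, 26.0),
    "WeirdML": (52.4, 48.2),
}
o3_wins = sum(o3 > oss for o3, oss in scores.values())
print(f"o3 wins {o3_wins} of {len(scores)} shared benchmarks")  # 13 of 14
```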
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| o3 | $2.00 | $8.00 | 200K tokens (~100 books) | $35.00 |
| gpt-oss-120b | $0.04 | $0.19 | 131K tokens (~66 books) | $0.77 |
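A short sketch of the monthly projection, assuming the 10M tokens split 75% input / 25% output; that split reproduces the listed $35.00 and $0.77 within rounding, though the exact split used by the page is not stated:

```python
# Sketch: projected monthly cost at 10M tokens.
# Assumption: 75% input / 25% output token split (not stated by the page),
# which matches the $35.00 (o3) and ~$0.77 (gpt-oss-120b) projections above.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10.0, input_share: float = 0.75) -> float:
    input_tokens = total_m_tokens * input_share        # millions of input tokens
    output_tokens = total_m_tokens * (1 - input_share)  # millions of output tokens
    return input_tokens * input_per_m + output_tokens * output_per_m

print(monthly_cost(2.00, 8.00))   # 35.0   -> o3
print(monthly_cost(0.04, 0.19))   # 0.775  -> gpt-oss-120b (shown as $0.77)
```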