# GPT-4.1 vs gpt-oss-120b
Side by side: benchmarks, pricing, and signals you can act on.
## Winner summary
GPT-4.1 wins 7 of 12 shared benchmarks, leading in coding, knowledge, and language.
## Category leads
- Coding: GPT-4.1
- Knowledge: GPT-4.1
- Language: GPT-4.1
- Math: gpt-oss-120b
- Reasoning: GPT-4.1
## Hype vs reality
Attention vs. performance:
- GPT-4.1: #121 by performance · no attention signal
- gpt-oss-120b: #106 by performance · no attention signal
## Best value
gpt-oss-120b delivers 47.3x better value than GPT-4.1:
- GPT-4.1: 8.7 pts/$ at $5.00 per 1M tokens
- gpt-oss-120b: 409.6 pts/$ at $0.11 per 1M tokens
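A minimal sketch of how a "pts/$" metric like the one above can be computed, assuming it is a model's mean benchmark score divided by its blended price per 1M tokens. The score inputs (43.5 and 45.06) are back-solved from the displayed figures, so treat them as illustrative, not official.

```python
def points_per_dollar(score: float, price_per_m_tokens: float) -> float:
    """Benchmark points earned per dollar of blended token spend."""
    return score / price_per_m_tokens

# Back-solved illustrative inputs, reproducing the displayed figures:
gpt41 = points_per_dollar(43.5, 5.00)    # ~8.7 pts/$
oss = points_per_dollar(45.06, 0.11)     # ~409.6 pts/$
ratio = oss / gpt41                      # ~47x, matching the ~47.3x headline
```

The headline ratio is just the quotient of the two pts/$ values; small discrepancies (47.1x vs 47.3x) come from rounding in the displayed inputs.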
## Vendor risk
Both models come from the same vendor, OpenAI ($840.0B valuation, Tier 1), so vendor risk is identical for either choice.
## Head to head
12 benchmarks · 2 models
Aider Polyglot: GPT-4.1 leads by +10.6 (GPT-4.1 52.4, gpt-oss-120b 41.8).
Measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Fiction.LiveBench: GPT-4.1 leads by +19.5 (GPT-4.1 63.9, gpt-oss-120b 44.4).
A continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
GPQA Diamond: gpt-oss-120b leads by +11.8 (GPT-4.1 55.9, gpt-oss-120b 67.7).
Graduate-Level Google-Proof QA (Diamond set): expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
HELM · GPQA: gpt-oss-120b leads by +2.5 (GPT-4.1 65.9, gpt-oss-120b 68.4).
HELM · IFEval: GPT-4.1 leads by +0.2 (GPT-4.1 83.8, gpt-oss-120b 83.6).
HELM · MMLU-Pro: GPT-4.1 leads by +1.6 (GPT-4.1 81.1, gpt-oss-120b 79.5).
HELM · Omni-MATH: gpt-oss-120b leads by +21.7 (GPT-4.1 47.1, gpt-oss-120b 68.8).
HELM · WildBench: GPT-4.1 leads by +0.9 (GPT-4.1 85.4, gpt-oss-120b 84.5).
OTIS Mock AIME 2024–2025: gpt-oss-120b leads by +50.6 (GPT-4.1 38.3, gpt-oss-120b 88.9).
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
SimpleBench: GPT-4.1 leads by +5.9 (GPT-4.1 12.4, gpt-oss-120b 6.5).
Tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
SWE-Bench Verified (Bash Only): GPT-4.1 leads by +13.6 (GPT-4.1 39.6, gpt-oss-120b 26.0).
A curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
WeirdML: gpt-oss-120b leads by +9.2 (GPT-4.1 39.0, gpt-oss-120b 48.2).
Tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
## Full benchmark table
| Benchmark | GPT-4.1 | gpt-oss-120b |
|---|---|---|
| Aider Polyglot | 52.4 | 41.8 |
| Fiction.LiveBench | 63.9 | 44.4 |
| GPQA Diamond | 55.9 | 67.7 |
| HELM · GPQA | 65.9 | 68.4 |
| HELM · IFEval | 83.8 | 83.6 |
| HELM · MMLU-Pro | 81.1 | 79.5 |
| HELM · Omni-MATH | 47.1 | 68.8 |
| HELM · WildBench | 85.4 | 84.5 |
| OTIS Mock AIME 2024–2025 | 38.3 | 88.9 |
| SimpleBench | 12.4 | 6.5 |
| SWE-Bench Verified (Bash Only) | 39.6 | 26.0 |
| WeirdML | 39.0 | 48.2 |
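The 7-of-12 headline can be re-derived mechanically from the scores above. A minimal sketch, with the scores transcribed from this page (tuple order is GPT-4.1, gpt-oss-120b):

```python
# Recompute each model's win count across the 12 shared benchmarks.
scores = {
    "Aider Polyglot": (52.4, 41.8),
    "Fiction.LiveBench": (63.9, 44.4),
    "GPQA Diamond": (55.9, 67.7),
    "HELM GPQA": (65.9, 68.4),
    "HELM IFEval": (83.8, 83.6),
    "HELM MMLU-Pro": (81.1, 79.5),
    "HELM Omni-MATH": (47.1, 68.8),
    "HELM WildBench": (85.4, 84.5),
    "OTIS Mock AIME 2024-2025": (38.3, 88.9),
    "SimpleBench": (12.4, 6.5),
    "SWE-Bench Verified (Bash Only)": (39.6, 26.0),
    "WeirdML": (39.0, 48.2),
}
gpt41_wins = sum(a > b for a, b in scores.values())   # 7
oss_wins = sum(b > a for a, b in scores.values())     # 5
```

Note that raw win counts weight a +0.2 edge on IFEval the same as a +50.6 gap on the mock AIME, so they are a blunt summary of the table.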
## Pricing
Per 1M tokens · projected $/mo at 10M tokens.
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4.1 | $2.00 | $8.00 | 1.0M tokens (~524 books) | $35.00 |
| gpt-oss-120b | $0.04 | $0.19 | 131K tokens (~66 books) | $0.77 |
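The projected monthly figures are consistent with a 3:1 input:output token split, although the page does not state its assumption. A sketch under that assumed split:

```python
def projected_monthly_cost(total_m_tokens: float,
                           input_price: float,
                           output_price: float,
                           input_share: float = 0.75) -> float:
    """Monthly spend for a token volume (in millions of tokens) split
    between input and output. The 3:1 input:output default is an
    assumption; it happens to reproduce both figures in the table."""
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1 - input_share)
    return input_m * input_price + output_m * output_price

gpt41_monthly = projected_monthly_cost(10, 2.00, 8.00)   # 35.0 -> $35.00/mo
oss_monthly = projected_monthly_cost(10, 0.04, 0.19)     # 0.775, shown as $0.77/mo
```

A heavier output share (e.g. long generations) shifts costs toward the output price, so real bills depend strongly on the actual split.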