GPT-4.1 Mini vs GPT-5 Mini
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5 Mini wins 14 of 14 benchmarks
GPT-5 Mini takes every shared benchmark, with leads in reasoning, knowledge, and math.
Category leads
reasoning: GPT-5 Mini · knowledge: GPT-5 Mini · math: GPT-5 Mini · language: GPT-5 Mini · coding: GPT-5 Mini
Hype vs Reality
Attention vs performance
GPT-4.1 Mini: #116 by performance · no attention signal
GPT-5 Mini: #63 by performance · no attention signal
Best value
GPT-5 Mini · 1.1x better value than GPT-4.1 Mini
GPT-4.1 Mini: 44.5 pts/$ at $1.00/M
GPT-5 Mini: 49.8 pts/$ at $1.13/M
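The pts/$ and $/M figures above are not broken down on the page. Below is a minimal sketch of arithmetic that reproduces them, assuming the blended price is a simple 50/50 average of input and output prices (which matches the $1.00/M and $1.13/M shown); the overall performance scores are not published here, so the values used are back-calculated from the listed pts/$ figures purely for illustration.

```python
def blended_price(input_per_m: float, output_per_m: float, input_share: float = 0.5) -> float:
    """Blended $ per 1M tokens for a given input/output token mix (assumed 50/50)."""
    return input_share * input_per_m + (1 - input_share) * output_per_m


def points_per_dollar(perf_score: float, price_per_m: float) -> float:
    """Value metric: performance points per blended dollar."""
    return perf_score / price_per_m


gpt41_mini_price = blended_price(0.40, 1.60)  # -> 1.0, shown as $1.00/M
gpt5_mini_price = blended_price(0.25, 2.00)   # -> 1.125, shown as $1.13/M

# Hypothetical overall scores, back-calculated from the published 44.5 and 49.8 pts/$:
gpt41_mini_score, gpt5_mini_score = 44.5, 56.0

ratio = points_per_dollar(gpt5_mini_score, gpt5_mini_price) / points_per_dollar(
    gpt41_mini_score, gpt41_mini_price
)
print(f"{ratio:.2f}x")  # ~1.12x, rounded on the page to "1.1x better value"
```

Under these assumptions the ratio works out to roughly 1.12, which the page rounds to 1.1x; a different input/output mix would shift the blended prices and therefore the ratio.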
Vendor risk
Who is behind each model
GPT-4.1 Mini: OpenAI · $840.0B · Tier 1
GPT-5 Mini: OpenAI · $840.0B · Tier 1
Head to head
14 benchmarks · 2 models
ARC-AGI
GPT-5 Mini leads by +50.8
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-4.1 Mini: 3.5 · GPT-5 Mini: 54.3

ARC-AGI-2
GPT-5 Mini leads by +4.3
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-4.1 Mini: 0.1 · GPT-5 Mini: 4.4

Fiction.LiveBench
GPT-5 Mini leads by +25.0
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
GPT-4.1 Mini: 44.4 · GPT-5 Mini: 69.4

FrontierMath-2025-02-28-Private
GPT-5 Mini leads by +22.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4.1 Mini: 4.5 · GPT-5 Mini: 27.2

GPQA diamond
GPT-5 Mini leads by +12.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4.1 Mini: 54.5 · GPT-5 Mini: 66.7

HELM · GPQA
GPT-5 Mini leads by +14.2
GPT-4.1 Mini: 61.4 · GPT-5 Mini: 75.6

HELM · IFEval
GPT-5 Mini leads by +2.3
GPT-4.1 Mini: 90.4 · GPT-5 Mini: 92.7

HELM · MMLU-Pro
GPT-5 Mini leads by +5.2
GPT-4.1 Mini: 78.3 · GPT-5 Mini: 83.5

HELM · Omni-MATH
GPT-5 Mini leads by +23.1
GPT-4.1 Mini: 49.1 · GPT-5 Mini: 72.2

HELM · WildBench
GPT-5 Mini leads by +1.7
GPT-4.1 Mini: 83.8 · GPT-5 Mini: 85.5

MATH level 5
GPT-5 Mini leads by +10.6
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4.1 Mini: 87.3 · GPT-5 Mini: 97.8

OTIS Mock AIME 2024-2025
GPT-5 Mini leads by +42.0
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4.1 Mini: 44.7 · GPT-5 Mini: 86.7

SWE-Bench Verified (Bash Only)
GPT-5 Mini leads by +35.9
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
GPT-4.1 Mini: 23.9 · GPT-5 Mini: 59.8

WeirdML
GPT-5 Mini leads by +15.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4.1 Mini: 37.6 · GPT-5 Mini: 52.7
Full benchmark table
| Benchmark | GPT-4.1 Mini | GPT-5 Mini |
|---|---|---|
| ARC-AGI | 3.5 | 54.3 |
| ARC-AGI-2 | 0.1 | 4.4 |
| Fiction.LiveBench | 44.4 | 69.4 |
| FrontierMath-2025-02-28-Private | 4.5 | 27.2 |
| GPQA diamond | 54.5 | 66.7 |
| HELM · GPQA | 61.4 | 75.6 |
| HELM · IFEval | 90.4 | 92.7 |
| HELM · MMLU-Pro | 78.3 | 83.5 |
| HELM · Omni-MATH | 49.1 | 72.2 |
| HELM · WildBench | 83.8 | 85.5 |
| MATH level 5 | 87.3 | 97.8 |
| OTIS Mock AIME 2024-2025 | 44.7 | 86.7 |
| SWE-Bench Verified (Bash Only) | 23.9 | 59.8 |
| WeirdML | 37.6 | 52.7 |
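The per-benchmark leads and the 14-of-14 win count in the summary are simple differences over the shared scores. Below is a small sketch that recomputes them from the table above; note that two of the headline deltas (+22.8 for FrontierMath and +10.6 for MATH level 5) were evidently computed from unrounded scores, so recomputing from the rounded table values gives 22.7 and 10.5.

```python
# Recompute the per-benchmark leads and the win count from the table above.
# Scores are copied verbatim, ordered as (GPT-4.1 Mini, GPT-5 Mini).
scores = {
    "ARC-AGI": (3.5, 54.3),
    "ARC-AGI-2": (0.1, 4.4),
    "Fiction.LiveBench": (44.4, 69.4),
    "FrontierMath-2025-02-28-Private": (4.5, 27.2),
    "GPQA diamond": (54.5, 66.7),
    "HELM · GPQA": (61.4, 75.6),
    "HELM · IFEval": (90.4, 92.7),
    "HELM · MMLU-Pro": (78.3, 83.5),
    "HELM · Omni-MATH": (49.1, 72.2),
    "HELM · WildBench": (83.8, 85.5),
    "MATH level 5": (87.3, 97.8),
    "OTIS Mock AIME 2024-2025": (44.7, 86.7),
    "SWE-Bench Verified (Bash Only)": (23.9, 59.8),
    "WeirdML": (37.6, 52.7),
}

wins = 0
for name, (gpt41_mini, gpt5_mini) in scores.items():
    delta = gpt5_mini - gpt41_mini
    wins += delta > 0
    print(f"{name}: GPT-5 Mini leads by +{delta:.1f}")

print(f"GPT-5 Mini wins {wins} of {len(scores)} shared benchmarks")
```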
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4.1 Mini | $0.40 | $1.60 | 1.0M tokens (~524 books) | $7.00 |
| GPT-5 Mini | $0.25 | $2.00 | 400K tokens (~200 books) | $6.88 |
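The projected $/mo column is not broken down on the page. Below is a minimal sketch, assuming the projection uses roughly a 75% input / 25% output token mix at 10M total tokens per month, which reproduces the $7.00 and $6.88 figures; the actual split used by the page is not stated.

```python
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Projected monthly spend in dollars for a given token volume and input/output mix."""
    input_m = total_tokens_m * input_share          # millions of input tokens
    output_m = total_tokens_m * (1 - input_share)   # millions of output tokens
    return input_m * input_per_m + output_m * output_per_m


print(monthly_cost(0.40, 1.60))  # GPT-4.1 Mini -> 7.0, shown as $7.00/mo
print(monthly_cost(0.25, 2.00))  # GPT-5 Mini   -> 6.875, shown as $6.88/mo
```

A different input/output mix would shift both projections, so treat the 75/25 split as an assumption rather than a published parameter.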