GPT-4.1 Nano vs GPT-5 Nano
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5 Nano wins on 12/13 benchmarks
Across the 13 shared benchmarks, GPT-5 Nano takes 12; its leads span reasoning, knowledge, and math, and GPT-4.1 Nano's only win is HELM · WildBench.
Category leads
All five category leads go to GPT-5 Nano: reasoning · knowledge · math · language · coding
Hype vs Reality
Attention vs performance
GPT-4.1 Nano · #166 by performance · no attention signal
GPT-5 Nano · #112 by performance · no attention signal
Best value
GPT-5 Nano
1.4x better value than GPT-4.1 Nano
GPT-4.1 Nano · 140.8 pts/$ · $0.25/M
GPT-5 Nano · 201.3 pts/$ · $0.23/M
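The $/M figures line up with a simple 1:1 average of the input and output prices listed in the pricing table below, and the 1.4x headline is just the ratio of the two pts/$ values. A minimal sketch of that arithmetic, assuming the 1:1 blend (the pts/$ numerator is the site's own aggregate performance score, not reproduced here):

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/M tokens, assuming a simple 1:1 input:output token mix."""
    return (input_per_m + output_per_m) / 2

# Per-1M-token prices from the pricing table below.
print(blended_price(0.10, 0.40))  # 0.25  -> GPT-4.1 Nano's $0.25/M
print(blended_price(0.05, 0.40))  # 0.225 -> GPT-5 Nano's $0.23/M (rounded)

# Value ratio from the listed pts/$ figures.
print(201.3 / 140.8)  # ~1.43 -> the "1.4x better value" headline
```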
Vendor risk
Who is behind each model
Both models: OpenAI · $840.0B · Tier 1
Head to head
13 benchmarks · 2 models
ARC-AGI
GPT-5 Nano leads by +20.6
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-4.1 Nano 0.1 · GPT-5 Nano 20.7
ARC-AGI-2
GPT-5 Nano leads by +2.5
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-4.1 Nano 0.1 · GPT-5 Nano 2.6
Fiction.LiveBench
GPT-5 Nano leads by +19.4
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
GPT-4.1 Nano 25.0 · GPT-5 Nano 44.4
FrontierMath-2025-02-28-Private
GPT-5 Nano leads by +7.3
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4.1 Nano 1.0 · GPT-5 Nano 8.3
GPQA diamond
GPT-5 Nano leads by +27.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4.1 Nano 31.9 · GPT-5 Nano 59.3
HELM · GPQA
GPT-5 Nano leads by +17.2
GPT-4.1 Nano 50.7 · GPT-5 Nano 67.9
HELM · IFEval
GPT-5 Nano leads by +8.9
GPT-4.1 Nano 84.3 · GPT-5 Nano 93.2
HELM · MMLU-Pro
GPT-5 Nano leads by +22.8
GPT-4.1 Nano 55.0 · GPT-5 Nano 77.8
HELM · Omni-MATH
GPT-5 Nano leads by +18.0
GPT-4.1 Nano 36.7 · GPT-5 Nano 54.7
HELM · WildBench
GPT-4.1 Nano leads by +0.5
GPT-4.1 Nano 81.1 · GPT-5 Nano 80.6
MATH level 5
GPT-5 Nano leads by +25.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4.1 Nano 70.0 · GPT-5 Nano 95.2
OTIS Mock AIME 2024-2025
GPT-5 Nano leads by +52.3
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4.1 Nano 28.8 · GPT-5 Nano 81.1
WeirdML
GPT-5 Nano leads by +19.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4.1 Nano 19.0 · GPT-5 Nano 38.1
Full benchmark table
| Benchmark | GPT-4.1 Nano | GPT-5 Nano |
|---|---|---|
| ARC-AGI | 0.1 | 20.7 |
| ARC-AGI-2 | 0.1 | 2.6 |
| Fiction.LiveBench | 25.0 | 44.4 |
| FrontierMath-2025-02-28-Private | 1.0 | 8.3 |
| GPQA diamond | 31.9 | 59.3 |
| HELM · GPQA | 50.7 | 67.9 |
| HELM · IFEval | 84.3 | 93.2 |
| HELM · MMLU-Pro | 55.0 | 77.8 |
| HELM · Omni-MATH | 36.7 | 54.7 |
| HELM · WildBench | 81.1 | 80.6 |
| MATH level 5 | 70.0 | 95.2 |
| OTIS Mock AIME 2024-2025 | 28.8 | 81.1 |
| WeirdML | 19.0 | 38.1 |
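The 12-of-13 headline can be re-derived directly from this table; a quick tally sketch over the scores above:

```python
# Scores from the table above: (benchmark, GPT-4.1 Nano, GPT-5 Nano).
rows = [
    ("ARC-AGI", 0.1, 20.7), ("ARC-AGI-2", 0.1, 2.6),
    ("Fiction.LiveBench", 25.0, 44.4),
    ("FrontierMath-2025-02-28-Private", 1.0, 8.3),
    ("GPQA diamond", 31.9, 59.3), ("HELM · GPQA", 50.7, 67.9),
    ("HELM · IFEval", 84.3, 93.2), ("HELM · MMLU-Pro", 55.0, 77.8),
    ("HELM · Omni-MATH", 36.7, 54.7), ("HELM · WildBench", 81.1, 80.6),
    ("MATH level 5", 70.0, 95.2), ("OTIS Mock AIME 2024-2025", 28.8, 81.1),
    ("WeirdML", 19.0, 38.1),
]
wins = sum(gpt5 > gpt41 for _, gpt41, gpt5 in rows)
print(f"GPT-5 Nano wins {wins} of {len(rows)}")  # 12 of 13 (WildBench is the exception)
```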
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4.1 Nano | $0.10 | $0.40 | 1.0M tokens (~524 books) | $1.75 |
| GPT-5 Nano | $0.05 | $0.40 | 400K tokens (~200 books) | $1.38 |
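The projected monthly figures are consistent with a 75/25 input:output token split at 10M total tokens per month; a sketch under that assumption (the split itself is inferred from the numbers, not stated on the page):

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_m_tokens: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Monthly cost in $, assuming a 75/25 input:output token split."""
    in_m = total_m_tokens * input_share
    out_m = total_m_tokens * (1 - input_share)
    return in_m * input_per_m + out_m * output_per_m

print(projected_monthly_cost(0.10, 0.40))  # 1.75  -> GPT-4.1 Nano's $1.75/mo
print(projected_monthly_cost(0.05, 0.40))  # 1.375 -> GPT-5 Nano, shown as $1.38/mo
```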