
GPT-4.1 vs GPT-5 Nano

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-5 Nano wins 10 of 15 shared benchmarks. Leads in reasoning · math · language.

Category leads
reasoning · GPT-5 Nano
knowledge · GPT-4.1
math · GPT-5 Nano
language · GPT-5 Nano
coding · GPT-4.1
Hype vs Reality
GPT-4.1 · #121 by perf · no signal · QUIET
GPT-5 Nano · #112 by perf · no signal · QUIET
Best value
GPT-5 Nano · 23.2x better value than GPT-4.1
GPT-4.1 · 8.7 pts/$ · $5.00/M
GPT-5 Nano · 201.3 pts/$ · $0.23/M
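
A quick sanity check on these value figures, as a minimal Python sketch; the assumption that the $/M figure is an even blend of the input and output prices from the pricing table below is ours, not something the page states:

```python
# Back-of-the-envelope check of the "Best value" figures above.
# Assumption (ours, not stated on the page): the $/M figure is a simple
# 50/50 blend of the input and output prices listed in the pricing table.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens, assuming an even input/output token split."""
    return (input_per_m + output_per_m) / 2

print(blended_price(2.00, 8.00))   # 5.00  -> the $5.00/M shown for GPT-4.1
print(blended_price(0.05, 0.40))   # 0.225 -> rounds to the $0.23/M shown for GPT-5 Nano

# The headline multiple is just the ratio of the two pts/$ figures.
print(round(201.3 / 8.7, 1))       # 23.1 -> shown as 23.2x, presumably computed
                                   #         from unrounded pts/$ values
```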
Vendor risk
Both models are from OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
GPT-4.1 · GPT-5 Nano
ARC-AGI
GPT-5 Nano leads by +15.2
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-4.1: 5.5 · GPT-5 Nano: 20.7
ARC-AGI-2
GPT-5 Nano leads by +2.2
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-4.1: 0.4 · GPT-5 Nano: 2.6
Fiction.LiveBench
GPT-4.1 leads by +19.5
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
GPT-4.1: 63.9 · GPT-5 Nano: 44.4
FrontierMath-2025-02-28-Private
GPT-5 Nano leads by +2.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4.1: 5.5 · GPT-5 Nano: 8.3
FrontierMath-Tier-4-2025-07-01-Private
GPT-5 Nano leads by +2.0
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
GPT-4.1: 0.1 · GPT-5 Nano: 2.1
GPQA diamond
GPT-5 Nano leads by +3.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4.1: 55.9 · GPT-5 Nano: 59.3
HELM · GPQA
GPT-5 Nano leads by +2.0
GPT-4.1: 65.9 · GPT-5 Nano: 67.9
HELM · IFEval
GPT-5 Nano leads by +9.4
GPT-4.1: 83.8 · GPT-5 Nano: 93.2
HELM · MMLU-Pro
GPT-4.1 leads by +3.3
GPT-4.1: 81.1 · GPT-5 Nano: 77.8
HELM · Omni-MATH
GPT-5 Nano leads by +7.6
GPT-4.1: 47.1 · GPT-5 Nano: 54.7
HELM · WildBench
GPT-4.1 leads by +4.8
GPT-4.1: 85.4 · GPT-5 Nano: 80.6
MATH level 5
GPT-5 Nano leads by +12.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4.1: 83.0 · GPT-5 Nano: 95.2
OTIS Mock AIME 2024-2025
GPT-5 Nano leads by +42.8
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4.1: 38.3 · GPT-5 Nano: 81.1
SWE-Bench Verified (Bash Only)
GPT-4.1 leads by +4.8
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
GPT-4.1: 39.6 · GPT-5 Nano: 34.8
WeirdML
GPT-4.1 leads by +0.9
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4.1: 39.0 · GPT-5 Nano: 38.1
Full benchmark table
Benchmark | GPT-4.1 | GPT-5 Nano
ARC-AGI | 5.5 | 20.7
ARC-AGI-2 | 0.4 | 2.6
Fiction.LiveBench | 63.9 | 44.4
FrontierMath-2025-02-28-Private | 5.5 | 8.3
FrontierMath-Tier-4-2025-07-01-Private | 0.1 | 2.1
GPQA diamond | 55.9 | 59.3
HELM · GPQA | 65.9 | 67.9
HELM · IFEval | 83.8 | 93.2
HELM · MMLU-Pro | 81.1 | 77.8
HELM · Omni-MATH | 47.1 | 54.7
HELM · WildBench | 85.4 | 80.6
MATH level 5 | 83.0 | 95.2
OTIS Mock AIME 2024-2025 | 38.3 | 81.1
SWE-Bench Verified (Bash Only) | 39.6 | 34.8
WeirdML | 39.0 | 38.1
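
The headline win count and the per-benchmark leads above follow directly from this table; a minimal Python sketch that reproduces them:

```python
# Reproduces the "wins 10 of 15 shared benchmarks" tally and the
# per-benchmark leads from the full benchmark table.
scores = {  # benchmark: (GPT-4.1, GPT-5 Nano)
    "ARC-AGI": (5.5, 20.7),
    "ARC-AGI-2": (0.4, 2.6),
    "Fiction.LiveBench": (63.9, 44.4),
    "FrontierMath-2025-02-28-Private": (5.5, 8.3),
    "FrontierMath-Tier-4-2025-07-01-Private": (0.1, 2.1),
    "GPQA diamond": (55.9, 59.3),
    "HELM GPQA": (65.9, 67.9),
    "HELM IFEval": (83.8, 93.2),
    "HELM MMLU-Pro": (81.1, 77.8),
    "HELM Omni-MATH": (47.1, 54.7),
    "HELM WildBench": (85.4, 80.6),
    "MATH level 5": (83.0, 95.2),
    "OTIS Mock AIME 2024-2025": (38.3, 81.1),
    "SWE-Bench Verified (Bash Only)": (39.6, 34.8),
    "WeirdML": (39.0, 38.1),
}

nano_wins = sum(nano > gpt41 for gpt41, nano in scores.values())
print(f"GPT-5 Nano wins {nano_wins} of {len(scores)}")  # 10 of 15

for name, (gpt41, nano) in scores.items():
    leader = "GPT-5 Nano" if nano > gpt41 else "GPT-4.1"
    print(f"{name}: {leader} leads by +{abs(nano - gpt41):.1f}")
```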
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
GPT-4.1 | $2.00 | $8.00 | 1.0M tokens (~524 books) | $35.00
GPT-5 Nano | $0.05 | $0.40 | 400K tokens (~200 books) | $1.38
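
The projected monthly figures follow from the per-1M-token prices once a token mix is fixed; a minimal sketch in Python, assuming the 10M monthly tokens split 75% input / 25% output (that split is our inference, chosen because it reproduces both projections, not something stated here):

```python
# Sketch of how the projected $/mo column can be reproduced from the
# per-1M-token prices. Assumption (ours): 10M tokens per month, split
# 75% input / 25% output.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Projected monthly spend in dollars for a given token volume and mix."""
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1 - input_share)
    return input_m * input_per_m + output_m * output_per_m

print(monthly_cost(2.00, 8.00))   # 35.0  -> GPT-4.1 projection
print(monthly_cost(0.05, 0.40))   # 1.375 -> rounds to the $1.38 GPT-5 Nano projection
```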