Compare · Models · head to head

GPT-4 (older v0314) vs GPT-4o-mini vs GPT-4o-mini (2024-07-18)

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-4o-mini wins 12 of 16 shared benchmarks. Leads in knowledge · math · coding.

Category leads
knowledge · GPT-4o-mini
math · GPT-4o-mini
coding · GPT-4o-mini
reasoning · GPT-4o-mini
arena · GPT-4o-mini (2024-07-18)
multimodal · GPT-4o-mini
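The winner summary above is, in effect, a per-benchmark tally of head-to-head wins. Below is a minimal sketch of that kind of tally, run on a subset of the scores from the tables further down; the scores dict and the tie rule are illustrative assumptions, not this page's actual scoring code.

```python
# Tally head-to-head wins between two models across shared benchmarks.
# Illustrative sketch only: the scores below are a subset of the table
# on this page, and the tie rule (ties count for neither model) is an
# assumption about how the tally works, not the site's actual logic.

scores = {
    "GPQA diamond": {"GPT-4 (older v0314)": 14.3, "GPT-4o-mini": 17.0},
    "GSM8K": {"GPT-4 (older v0314)": 92.0, "GPT-4o-mini": 91.3},
    "MMLU": {"GPT-4 (older v0314)": 81.9, "GPT-4o-mini": 75.7},
    "OTIS Mock AIME 2024-2025": {"GPT-4 (older v0314)": 0.5, "GPT-4o-mini": 6.8},
}

def tally(a: str, b: str) -> dict[str, int]:
    wins = {a: 0, b: 0}
    for row in scores.values():
        if a in row and b in row and row[a] != row[b]:  # shared, non-tied
            wins[a if row[a] > row[b] else b] += 1
    return wins

print(tally("GPT-4 (older v0314)", "GPT-4o-mini"))
# {'GPT-4 (older v0314)': 2, 'GPT-4o-mini': 2} on this four-benchmark subset
```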
Hype vs Reality
GPT-4 (older v0314) · #72 by perf · no signal · QUIET
GPT-4o-mini · #146 by perf · no signal · QUIET
GPT-4o-mini (2024-07-18) · #125 by perf · no signal · QUIET
Best value
GPT-4o-mini (2024-07-18) · 1.1x better value than GPT-4o-mini
GPT-4 (older v0314) · 1.2 pts/$ · $45.00/M
GPT-4o-mini · 105.6 pts/$ · $0.38/M
GPT-4o-mini (2024-07-18) · 115.2 pts/$ · $0.38/M
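The card's $/M figures match a plain average of the input and output list prices, and pts/$ then reads as an aggregate benchmark score divided by that blended price. Here is a small sketch under those two assumptions; both are back-solved from the displayed numbers, not documented by the page.

```python
# Reproduce the value card's figures under two inferred assumptions:
#   blended $/M = (input price + output price) / 2
#   pts/$       = aggregate benchmark score / blended $/M
# Neither formula is documented by the page; both are back-solved
# from the numbers shown on the card.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    return (input_per_m + output_per_m) / 2

gpt4_blended = blended_price(30.00, 60.00)  # 45.000 -> shown as $45.00/M
mini_blended = blended_price(0.15, 0.60)    # 0.375  -> shown as $0.38/M

# Working backwards from 105.6 pts/$ gives the implied aggregate score
# (~39.6 pts for GPT-4o-mini); the page never shows this score directly.
print(f"implied aggregate: {105.6 * mini_blended:.1f} pts")
```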
Vendor risk
All three models are from OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
GPT-4 (older v0314) vs GPT-4o-mini vs GPT-4o-mini (2024-07-18)
GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4 (older v0314) · 14.3
GPT-4o-mini · 17.0
GPT-4o-mini (2024-07-18) · 17.0
GSM8K
GPT-4 (older v0314) leads by +0.7
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
GPT-4 (older v0314) · 92.0
GPT-4o-mini · 91.3
GPT-4o-mini (2024-07-18) · 91.3
MMLU
GPT-4 (older v0314) leads by +6.1
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4 (older v0314) · 81.9
GPT-4o-mini · 75.7
GPT-4o-mini (2024-07-18) · 75.7
OTIS Mock AIME 2024-2025
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4 (older v0314) · 0.5
GPT-4o-mini · 6.8
GPT-4o-mini (2024-07-18) · 6.8
Aider · Code Editing
GPT-4 (older v0314) leads by +10.6
GPT-4 (older v0314) · 66.2
GPT-4o-mini · 55.6
Aider polyglot
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
GPT-4o-mini · 3.6
GPT-4o-mini (2024-07-18) · 3.6
ARC-AGI-2
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-4o-mini · 0.1
GPT-4o-mini (2024-07-18) · 0.1
Chatbot Arena Elo · Overall
GPT-4o-mini (2024-07-18) leads by +31.5
GPT-4 (older v0314) · 1285.8
GPT-4o-mini (2024-07-18) · 1317.2
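An Elo gap is easiest to read as an expected win rate: under the standard Elo model that Arena-style ratings are built on, a +31.5 lead works out to roughly a 54.5% chance of winning any single head-to-head vote. The snippet below is the generic Elo formula, not Chatbot Arena's exact pipeline.

```python
def elo_win_probability(delta: float) -> float:
    """Expected win rate of the higher-rated model under the
    standard Elo model: 1 / (1 + 10**(-delta / 400))."""
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

# +31.5 Elo (1317.2 vs 1285.8) -> ~0.545, i.e. about a 54.5% win rate.
print(f"{elo_win_probability(31.5):.3f}")
```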
Balrog
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
GPT-4o-mini · 17.4
GPT-4o-mini (2024-07-18) · 17.4
GeoBench
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
GPT-4o-mini · 64.0
GPT-4o-mini (2024-07-18) · 64.0
Lech Mazur Writing
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
GPT-4o-mini · 67.2
GPT-4o-mini (2024-07-18) · 67.2
MATH level 5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o-mini · 52.6
GPT-4o-mini (2024-07-18) · 52.6
PIQA
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
GPT-4o-mini · 77.4
GPT-4o-mini (2024-07-18) · 77.4
VideoMME
VideoMME · multimodal benchmark testing video understanding across diverse domains, requiring temporal reasoning and cross-frame comprehension.
GPT-4o-mini · 53.1
GPT-4o-mini (2024-07-18) · 53.1
VPCT
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
GPT-4o-mini · 1.0
GPT-4o-mini (2024-07-18) · 1.0
WeirdML
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4o-mini · 11.8
GPT-4o-mini (2024-07-18) · 11.8
Full benchmark table
Benchmark · GPT-4 (older v0314) · GPT-4o-mini · GPT-4o-mini (2024-07-18)
GPQA diamond · 14.3 · 17.0 · 17.0
GSM8K · 92.0 · 91.3 · 91.3
MMLU · 81.9 · 75.7 · 75.7
OTIS Mock AIME 2024-2025 · 0.5 · 6.8 · 6.8
Aider · Code Editing · 66.2 · 55.6 · n/a
Aider polyglot · n/a · 3.6 · 3.6
ARC-AGI-2 · n/a · 0.1 · 0.1
Chatbot Arena Elo · Overall · 1285.8 · n/a · 1317.2
Balrog · n/a · 17.4 · 17.4
GeoBench · n/a · 64.0 · 64.0
Lech Mazur Writing · n/a · 67.2 · 67.2
MATH level 5 · n/a · 52.6 · 52.6
PIQA · n/a · 77.4 · 77.4
VideoMME · n/a · 53.1 · 53.1
VPCT · n/a · 1.0 · 1.0
WeirdML · n/a · 11.8 · 11.8
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
GPT-4 (older v0314) · $30.00 · $60.00 · 8K tokens (~6,000 words) · $375.00
GPT-4o-mini · $0.15 · $0.60 · 128K tokens (~96,000 words) · $2.62
GPT-4o-mini (2024-07-18) · $0.15 · $0.60 · 128K tokens (~96,000 words) · $2.62
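The Projected $/mo column is consistent with a 10M-token month split 3:1 between input and output: that split reproduces both $375.00 and $2.62 exactly. Here is a sketch under that assumption; the 3:1 ratio is back-solved from the figures, not stated anywhere on the page.

```python
# Reproduce the "Projected $/mo" column, assuming a 10M-token month
# split 3:1 input:output. The split ratio is an inference: it is the
# simple ratio that reproduces both shown figures exactly.

def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    input_cost = total_tokens_m * input_share * input_per_m
    output_cost = total_tokens_m * (1.0 - input_share) * output_per_m
    return input_cost + output_cost

print(projected_monthly_cost(30.00, 60.00))  # 375.0  -> $375.00
print(projected_monthly_cost(0.15, 0.60))    # 2.625  -> $2.62
```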