
GPT-4o-mini vs GPT-4o-mini (2024-07-18) vs Gemini 1.5 Flash (May 2024)

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-4o-mini wins 10 of 20 shared benchmarks. Leads in knowledge · math · reasoning.

Category leads
knowledge · GPT-4o-mini
math · GPT-4o-mini
multimodal · Gemini 1.5 Flash (May 2024)
coding · Gemini 1.5 Flash (May 2024)
reasoning · GPT-4o-mini
arena · GPT-4o-mini (2024-07-18)
language · Gemini 1.5 Flash (May 2024)
Hype vs Reality
GPT-4o-mini · #146 by perf · no signal · QUIET
GPT-4o-mini (2024-07-18) · #125 by perf · no signal · QUIET
Gemini 1.5 Flash (May 2024) · #105 by perf · no signal · QUIET
Best value
GPT-4o-mini (2024-07-18) · 1.1x better value than GPT-4o-mini
GPT-4o-mini · 105.6 pts/$ · $0.38/M
GPT-4o-mini (2024-07-18) · 115.2 pts/$ · $0.38/M
Gemini 1.5 Flash (May 2024) · no price
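The pts/$ figures above divide benchmark performance by a blended token price. A rough sketch of how such a value score can be computed, assuming the blended $/M price is the simple average of the input and output per-million prices and that pts/$ is an aggregate benchmark score divided by that price (both are assumptions, not the page's published formula):

```python
# Sketch of a price-adjusted value score. The 50/50 price blend and the
# "points divided by blended price" formula are assumptions for illustration.
def blended_price_per_million(input_usd: float, output_usd: float) -> float:
    """Simple average of input and output prices per 1M tokens."""
    return (input_usd + output_usd) / 2


def value_score(aggregate_pts: float, input_usd: float, output_usd: float) -> float:
    """Benchmark points per dollar of blended price."""
    return aggregate_pts / blended_price_per_million(input_usd, output_usd)


# GPT-4o-mini: $0.15 input / $0.60 output -> $0.375/M blended (shown as $0.38/M).
# A hypothetical aggregate score around 40 pts would land near the ~106 pts/$ shown above.
print(blended_price_per_million(0.15, 0.60))    # 0.375
print(round(value_score(40.0, 0.15, 0.60), 1))  # 106.7
```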
Vendor risk
GPT-4o-mini · OpenAI · $840.0B · Tier 1 · Medium risk
GPT-4o-mini (2024-07-18) · OpenAI · $840.0B · Tier 1 · Medium risk
Gemini 1.5 Flash (May 2024) · Google DeepMind · $4.00T · Tier 1 · Low risk
Head to head
GPT-4o-mini · GPT-4o-mini (2024-07-18) · Gemini 1.5 Flash (May 2024)
Balrog
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
GPT-4o-mini · 17.4
GPT-4o-mini (2024-07-18) · 17.4
Gemini 1.5 Flash (May 2024) · 14.6
GeoBench
Gemini 1.5 Flash (May 2024) leads by +12.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
GPT-4o-mini · 64.0
GPT-4o-mini (2024-07-18) · 64.0
Gemini 1.5 Flash (May 2024) · 76.0
GPQA diamond
Gemini 1.5 Flash (May 2024) leads by +3.5
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o-mini · 17.0
GPT-4o-mini (2024-07-18) · 17.0
Gemini 1.5 Flash (May 2024) · 20.5
GSM8K
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
GPT-4o-mini · 91.3
GPT-4o-mini (2024-07-18) · 91.3
Gemini 1.5 Flash (May 2024) · 82.4
MATH level 5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o-mini · 52.6
GPT-4o-mini (2024-07-18) · 52.6
Gemini 1.5 Flash (May 2024) · 25.1
MMLU
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4o-mini · 75.7
GPT-4o-mini (2024-07-18) · 75.7
Gemini 1.5 Flash (May 2024) · 70.5
OTIS Mock AIME 2024-2025
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o-mini · 6.8
GPT-4o-mini (2024-07-18) · 6.8
Gemini 1.5 Flash (May 2024) · 3.8
PIQA
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
GPT-4o-mini · 77.4
GPT-4o-mini (2024-07-18) · 77.4
Gemini 1.5 Flash (May 2024) · 75.0
VideoMME
Gemini 1.5 Flash (May 2024) leads by +7.3
VideoMME · multimodal benchmark testing video understanding across diverse domains, requiring temporal reasoning and cross-frame comprehension.
GPT-4o-mini · 53.1
GPT-4o-mini (2024-07-18) · 53.1
Gemini 1.5 Flash (May 2024) · 60.4
WeirdML
Gemini 1.5 Flash (May 2024) leads by +13.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4o-mini · 11.8
GPT-4o-mini (2024-07-18) · 11.8
Gemini 1.5 Flash (May 2024) · 24.9
Aider polyglot
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
GPT-4o-mini · 3.6
GPT-4o-mini (2024-07-18) · 3.6
ARC-AGI-2
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-4o-mini · 0.1
GPT-4o-mini (2024-07-18) · 0.1
Chatbot Arena Elo · Overall
GPT-4o-mini (2024-07-18) leads by +32.1
GPT-4o-mini (2024-07-18) · 1317.2
Gemini 1.5 Flash (May 2024) · 1285.1
HELM · GPQA
Gemini 1.5 Flash (May 2024) leads by +6.9
GPT-4o-mini (2024-07-18) · 36.8
Gemini 1.5 Flash (May 2024) · 43.7
HELM · IFEval
Gemini 1.5 Flash (May 2024) leads by +4.9
GPT-4o-mini (2024-07-18) · 78.2
Gemini 1.5 Flash (May 2024) · 83.1
HELM · MMLU-Pro
Gemini 1.5 Flash (May 2024) leads by +7.5
GPT-4o-mini (2024-07-18) · 60.3
Gemini 1.5 Flash (May 2024) · 67.8
HELM · Omni-MATH
Gemini 1.5 Flash (May 2024) leads by +2.5
GPT-4o-mini (2024-07-18) · 28.0
Gemini 1.5 Flash (May 2024) · 30.5
HELM · WildBench
Gemini 1.5 Flash (May 2024) leads by +0.1
GPT-4o-mini (2024-07-18) · 79.1
Gemini 1.5 Flash (May 2024) · 79.2
Lech Mazur Writing
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
GPT-4o-mini · 67.2
GPT-4o-mini (2024-07-18) · 67.2
VPCT
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
GPT-4o-mini · 1.0
GPT-4o-mini (2024-07-18) · 1.0
Full benchmark table
Benchmark | GPT-4o-mini | GPT-4o-mini (2024-07-18) | Gemini 1.5 Flash (May 2024)
Balrog | 17.4 | 17.4 | 14.6
GeoBench | 64.0 | 64.0 | 76.0
GPQA diamond | 17.0 | 17.0 | 20.5
GSM8K | 91.3 | 91.3 | 82.4
MATH level 5 | 52.6 | 52.6 | 25.1
MMLU | 75.7 | 75.7 | 70.5
OTIS Mock AIME 2024-2025 | 6.8 | 6.8 | 3.8
PIQA | 77.4 | 77.4 | 75.0
VideoMME | 53.1 | 53.1 | 60.4
WeirdML | 11.8 | 11.8 | 24.9
Aider polyglot | 3.6 | 3.6 | n/a
ARC-AGI-2 | 0.1 | 0.1 | n/a
Chatbot Arena Elo · Overall | n/a | 1317.2 | 1285.1
HELM · GPQA | n/a | 36.8 | 43.7
HELM · IFEval | n/a | 78.2 | 83.1
HELM · MMLU-Pro | n/a | 60.3 | 67.8
HELM · Omni-MATH | n/a | 28.0 | 30.5
HELM · WildBench | n/a | 79.1 | 79.2
Lech Mazur Writing | 67.2 | 67.2 | n/a
VPCT | 1.0 | 1.0 | n/a
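The winner summary at the top is a tally of per-benchmark leads over rows like the ones in this table. A hedged sketch of such a tally, using a few sample rows (how the page scores ties between the two GPT-4o-mini snapshots, or benchmarks with missing values, is not stated; this version simply skips them):

```python
# Tally benchmark wins from a benchmark -> {model: score} mapping.
# Sample rows copied from the table above; None would mark a missing score.
scores = {
    "GSM8K":        {"GPT-4o-mini": 91.3, "GPT-4o-mini (2024-07-18)": 91.3, "Gemini 1.5 Flash (May 2024)": 82.4},
    "GeoBench":     {"GPT-4o-mini": 64.0, "GPT-4o-mini (2024-07-18)": 64.0, "Gemini 1.5 Flash (May 2024)": 76.0},
    "MATH level 5": {"GPT-4o-mini": 52.6, "GPT-4o-mini (2024-07-18)": 52.6, "Gemini 1.5 Flash (May 2024)": 25.1},
    "VideoMME":     {"GPT-4o-mini": 53.1, "GPT-4o-mini (2024-07-18)": 53.1, "Gemini 1.5 Flash (May 2024)": 60.4},
}

wins: dict[str, int] = {}
for bench, by_model in scores.items():
    present = {m: s for m, s in by_model.items() if s is not None}
    best = max(present.values())
    leaders = [m for m, s in present.items() if s == best]
    if len(leaders) == 1:  # skip ties, as a simplification
        wins[leaders[0]] = wins.get(leaders[0], 0) + 1

print(wins)  # {'Gemini 1.5 Flash (May 2024)': 2} with these sample rows
```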
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
GPT-4o-mini | $0.15 | $0.60 | 128K tokens (~64 books) | $2.62
GPT-4o-mini (2024-07-18) | $0.15 | $0.60 | 128K tokens (~64 books) | $2.62
Gemini 1.5 Flash (May 2024) | no price | no price | n/a | n/a
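The projected $/mo column appears to apply the per-1M-token prices to a fixed monthly volume with an assumed input/output split. A minimal sketch, assuming 10M tokens per month split 75% input / 25% output (this split reproduces the $2.62 figure for GPT-4o-mini, but the split the page actually uses is not stated):

```python
# Projected monthly cost from per-1M-token prices. The 75/25 input/output
# split is an assumption chosen to match the $2.62 figure shown above.
def projected_monthly_cost(input_usd_per_m: float, output_usd_per_m: float,
                           monthly_tokens: int = 10_000_000,
                           input_share: float = 0.75) -> float:
    input_tokens = monthly_tokens * input_share
    output_tokens = monthly_tokens * (1 - input_share)
    return (input_tokens * input_usd_per_m + output_tokens * output_usd_per_m) / 1_000_000


# GPT-4o-mini and GPT-4o-mini (2024-07-18): $0.15 input / $0.60 output per 1M tokens.
print(f"${projected_monthly_cost(0.15, 0.60):.2f}/mo")  # $2.62/mo (2.625 before rounding)
```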