GPT-4o (2024-11-20) vs Gemini 1.5 Flash (May 2024)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4o (2024-11-20) wins 11 of 14 shared benchmarks and leads in the knowledge, math, and reasoning categories.
Category leads
| Category | Leader |
|---|---|
| knowledge | GPT-4o (2024-11-20) |
| math | GPT-4o (2024-11-20) |
| language | Gemini 1.5 Flash (May 2024) |
| reasoning | GPT-4o (2024-11-20) |
| multimodal | GPT-4o (2024-11-20) |
| coding | GPT-4o (2024-11-20) |
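The 11-of-14 tally can be reproduced directly from the shared-benchmark scores in the full table further down. A minimal sketch, assuming higher is better on every benchmark; the dictionary copies a subset of the listed scores, and the site's own aggregation pipeline is not published here:

```python
# Tally head-to-head wins from shared benchmark scores (assumes higher = better).
# Scores are copied from the full benchmark table in this comparison;
# only a subset is listed for brevity.
scores = {
    "Balrog": (32.3, 14.6),
    "GeoBench": (71.0, 76.0),
    "GPQA diamond": (32.3, 20.5),
    "HELM · IFEval": (81.7, 83.1),
    "MATH level 5": (53.3, 25.1),
    "MMLU": (79.1, 70.5),
    # ... remaining shared benchmarks from the table
}

def tally_wins(scores: dict[str, tuple[float, float]]) -> tuple[int, int]:
    """Count benchmarks won by each model; ties count for neither."""
    a_wins = sum(1 for a, b in scores.values() if a > b)
    b_wins = sum(1 for a, b in scores.values() if b > a)
    return a_wins, b_wins

gpt4o_wins, flash_wins = tally_wins(scores)
print(f"GPT-4o: {gpt4o_wins} wins · Gemini 1.5 Flash: {flash_wins} wins")
# With all 14 rows included, this gives 11 vs 3, matching the summary above.
```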
Hype vs Reality
Attention vs performance
| Model | Rank by performance | Attention signal |
|---|---|---|
| GPT-4o (2024-11-20) | #156 | no signal |
| Gemini 1.5 Flash (May 2024) | #105 | no signal |
Best value
Winner: GPT-4o (2024-11-20)
| Model | Value (pts/$) | Price ($/M) |
|---|---|---|
| GPT-4o (2024-11-20) | 6.0 | $6.25 |
| Gemini 1.5 Flash (May 2024) | — | no price |
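The $6.25/M figure matches a simple average of GPT-4o's listed input and output prices, and 6.0 pts/$ implies an aggregate score of roughly 37.5 divided by that price. The exact scoring formula is not published on this page, so the sketch below is an assumption, not the site's method:

```python
# Sketch of the value metric under two stated assumptions:
#   1. $/M is the simple average of the listed input and output prices.
#   2. pts/$ divides an aggregate performance score by that price; the 37.5
#      used here is back-calculated from 6.0 pts/$ x $6.25/M and is illustrative.
input_price = 2.50     # $ per 1M input tokens (from the pricing table below)
output_price = 10.00   # $ per 1M output tokens (from the pricing table below)

blended_price = (input_price + output_price) / 2      # -> 6.25 $/M
aggregate_score = 37.5                                 # assumed, see note above
points_per_dollar = aggregate_score / blended_price    # -> 6.0 pts/$

print(f"${blended_price:.2f}/M · {points_per_dollar:.1f} pts/$")
```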
Vendor risk
Who is behind the model
| Vendor | Valuation | Tier |
|---|---|---|
| OpenAI | $840.0B | Tier 1 |
| Google DeepMind | $4.00T | Tier 1 |
Head to head
14 benchmarks · 2 models
Balrog
GPT-4o (2024-11-20) leads by +17.7
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
GPT-4o (2024-11-20): 32.3 · Gemini 1.5 Flash (May 2024): 14.6
FrontierMath-2025-02-28-Private
GPT-4o (2024-11-20) leads by +0.2
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4o (2024-11-20): 0.3 · Gemini 1.5 Flash (May 2024): 0.1
GeoBench
Gemini 1.5 Flash (May 2024) leads by +5.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
GPT-4o (2024-11-20): 71.0 · Gemini 1.5 Flash (May 2024): 76.0
GPQA diamond
GPT-4o (2024-11-20) leads by +11.8
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o (2024-11-20): 32.3 · Gemini 1.5 Flash (May 2024): 20.5
HELM · GPQA
GPT-4o (2024-11-20) leads by +8.3
GPT-4o (2024-11-20): 52.0 · Gemini 1.5 Flash (May 2024): 43.7
HELM · IFEval
Gemini 1.5 Flash (May 2024) leads by +1.4
GPT-4o (2024-11-20): 81.7 · Gemini 1.5 Flash (May 2024): 83.1
HELM · MMLU-Pro
GPT-4o (2024-11-20) leads by +3.5
GPT-4o (2024-11-20): 71.3 · Gemini 1.5 Flash (May 2024): 67.8
HELM · Omni-MATH
Gemini 1.5 Flash (May 2024) leads by +1.2
GPT-4o (2024-11-20): 29.3 · Gemini 1.5 Flash (May 2024): 30.5
HELM · WildBench
GPT-4o (2024-11-20) leads by +3.6
GPT-4o (2024-11-20): 82.8 · Gemini 1.5 Flash (May 2024): 79.2
MATH level 5
GPT-4o (2024-11-20) leads by +28.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o (2024-11-20): 53.3 · Gemini 1.5 Flash (May 2024): 25.1
MMLU
GPT-4o (2024-11-20) leads by +8.5
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4o (2024-11-20): 79.1 · Gemini 1.5 Flash (May 2024): 70.5
OTIS Mock AIME 2024-2025
GPT-4o (2024-11-20) leads by +2.5
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o (2024-11-20): 6.3 · Gemini 1.5 Flash (May 2024): 3.8
VideoMME
GPT-4o (2024-11-20) leads by +2.1
VideoMME · multimodal benchmark testing video understanding across diverse domains, requiring temporal reasoning and cross-frame comprehension.
GPT-4o (2024-11-20): 62.5 · Gemini 1.5 Flash (May 2024): 60.4
WeirdML
GPT-4o (2024-11-20) leads by +0.3
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4o (2024-11-20): 25.1 · Gemini 1.5 Flash (May 2024): 24.9
Full benchmark table
| Benchmark | GPT-4o (2024-11-20) | Gemini 1.5 Flash (May 2024) |
|---|---|---|
| Balrog | 32.3 | 14.6 |
| FrontierMath-2025-02-28-Private | 0.3 | 0.1 |
| GeoBench | 71.0 | 76.0 |
| GPQA diamond | 32.3 | 20.5 |
| HELM · GPQA | 52.0 | 43.7 |
| HELM · IFEval | 81.7 | 83.1 |
| HELM · MMLU-Pro | 71.3 | 67.8 |
| HELM · Omni-MATH | 29.3 | 30.5 |
| HELM · WildBench | 82.8 | 79.2 |
| MATH level 5 | 53.3 | 25.1 |
| MMLU | 79.1 | 70.5 |
| OTIS Mock AIME 2024-2025 | 6.3 | 3.8 |
| VideoMME | 62.5 | 60.4 |
| WeirdML | 25.1 | 24.9 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4o (2024-11-20) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75 |
| Gemini 1.5 Flash (May 2024) | — | — | — | — |
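The $43.75 projection is consistent with the 10M monthly tokens being split 3:1 between input and output. The page does not state the split, so the sketch below treats it as an assumption:

```python
# Reproduce the projected monthly cost for GPT-4o (2024-11-20), assuming the
# 10M tokens/month are split 3:1 between input and output. The split is not
# stated on this page; it is the assumption that reproduces $43.75 exactly.
input_price = 2.50        # $ per 1M input tokens
output_price = 10.00      # $ per 1M output tokens
monthly_tokens_m = 10.0   # monthly volume, in millions of tokens

input_share = 0.75
input_m = monthly_tokens_m * input_share         # 7.5M input tokens
output_m = monthly_tokens_m * (1 - input_share)  # 2.5M output tokens

projected_monthly = input_m * input_price + output_m * output_price
print(f"${projected_monthly:.2f}/mo")  # -> $43.75/mo
```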