
Gemini 2.0 Flash vs GPT-4o-mini

Side-by-side benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 2.0 Flash wins 8 of 9 shared benchmarks, leading in coding, reasoning, knowledge, and math.

Category leads
coding · Gemini 2.0 Flash
reasoning · Gemini 2.0 Flash
knowledge · Gemini 2.0 Flash
math · Gemini 2.0 Flash
Hype vs Reality
Gemini 2.0 Flash · #99 by perf · no signal · QUIET
GPT-4o-mini · #144 by perf · no signal · QUIET
Best value
1.8x better value than GPT-4o-mini
Gemini 2.0 Flash · 192.0 pts/$ · $0.25/M
GPT-4o-mini · 105.6 pts/$ · $0.38/M
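
The page doesn't spell out how pts/$ is derived. A minimal sketch in Python, assuming the blended $/M price is the 50/50 mean of the input and output rates (which reproduces the $0.25 and $0.38 shown) and that the "1.8x" headline is the ratio of the two pts/$ figures:

# Assumption: blended $/M is the 50/50 mean of the input and output prices
# from the pricing table below; the score aggregate behind "pts" is not
# specified on the page.
gemini_blended = (0.10 + 0.40) / 2  # 0.25 $/M, matches the $0.25 shown
gpt_blended = (0.15 + 0.60) / 2     # 0.375 $/M, displayed rounded as $0.38

# The "1.8x better value" headline is the quotient of the pts/$ figures.
print(192.0 / 105.6)                # ~1.82 -> "1.8x"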
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Aider Polyglot · Gemini 2.0 Flash leads by +34.6
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Gemini 2.0 Flash 38.2 · GPT-4o-mini 3.6

ARC-AGI-2 · Gemini 2.0 Flash leads by +1.2
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 2.0 Flash 1.3 · GPT-4o-mini 0.1

GeoBench · Gemini 2.0 Flash leads by +13.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Gemini 2.0 Flash 77.0 · GPT-4o-mini 64.0

GPQA Diamond · Gemini 2.0 Flash leads by +35.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 2.0 Flash 52.2 · GPT-4o-mini 17.0

Lech Mazur Writing · Gemini 2.0 Flash leads by +4.3
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Gemini 2.0 Flash 71.5 · GPT-4o-mini 67.2

MATH Level 5 · Gemini 2.0 Flash leads by +29.6
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Gemini 2.0 Flash 82.2 · GPT-4o-mini 52.6

MMLU · GPT-4o-mini leads by +2.8
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Gemini 2.0 Flash 72.9 · GPT-4o-mini 75.7

OTIS Mock AIME 2024–2025 · Gemini 2.0 Flash leads by +24.2
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 2.0 Flash 31.0 · GPT-4o-mini 6.8

WeirdML · Gemini 2.0 Flash leads by +14.0
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 2.0 Flash 25.8 · GPT-4o-mini 11.8
Full benchmark table
Benchmark · Gemini 2.0 Flash · GPT-4o-mini
Aider Polyglot · 38.2 · 3.6
ARC-AGI-2 · 1.3 · 0.1
GeoBench · 77.0 · 64.0
GPQA Diamond · 52.2 · 17.0
Lech Mazur Writing · 71.5 · 67.2
MATH Level 5 · 82.2 · 52.6
MMLU · 72.9 · 75.7
OTIS Mock AIME 2024–2025 · 31.0 · 6.8
WeirdML · 25.8 · 11.8
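
As a sanity check on the winner summary, this short Python snippet recomputes each margin and the win tally from the table above (scores copied verbatim; nothing else assumed):

scores = {
    # benchmark: (Gemini 2.0 Flash, GPT-4o-mini)
    "Aider Polyglot": (38.2, 3.6),
    "ARC-AGI-2": (1.3, 0.1),
    "GeoBench": (77.0, 64.0),
    "GPQA Diamond": (52.2, 17.0),
    "Lech Mazur Writing": (71.5, 67.2),
    "MATH Level 5": (82.2, 52.6),
    "MMLU": (72.9, 75.7),
    "OTIS Mock AIME 2024-2025": (31.0, 6.8),
    "WeirdML": (25.8, 11.8),
}

wins = 0
for name, (gemini, gpt) in scores.items():
    delta = gemini - gpt
    wins += delta > 0
    leader = "Gemini 2.0 Flash" if delta > 0 else "GPT-4o-mini"
    print(f"{name}: {leader} leads by +{abs(delta):.1f}")
print(f"Gemini 2.0 Flash wins {wins} of {len(scores)} shared benchmarks")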
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Gemini 2.0 Flash · $0.10 · $0.40 · 1.0M tokens (~524 books) · $1.75
GPT-4o-mini · $0.15 · $0.60 · 128K tokens (~64 books) · $2.62
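
The projected monthly figures imply a specific input/output mix that the page doesn't state. A minimal sketch, assuming 10M tokens per month split 75% input / 25% output, which reproduces $1.75 and $2.62 (2.625 before rounding) exactly:

# Assumption: 10M tokens/month at a 75% input / 25% output split. This mix
# is not stated on the page, but it reproduces both projected figures.
def monthly_cost(input_price, output_price, total_m=10.0, input_frac=0.75):
    """Projected monthly cost in dollars; prices are $ per 1M tokens."""
    return (total_m * input_frac * input_price
            + total_m * (1 - input_frac) * output_price)

print(monthly_cost(0.10, 0.40))  # Gemini 2.0 Flash -> 1.75
print(monthly_cost(0.15, 0.60))  # GPT-4o-mini -> 2.625, shown as $2.62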