
Gemini 2.5 Flash vs GPT-5 Mini

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-5 Mini wins all 15 shared benchmarks and leads every tracked category, including reasoning, knowledge, and math.

Category leads
reasoning · GPT-5 Mini
knowledge · GPT-5 Mini
math · GPT-5 Mini
language · GPT-5 Mini
coding · GPT-5 Mini
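
A minimal sketch of how the win tally and the per-benchmark leads shown in the head-to-head section could be reproduced from the published scores. The score dictionary is abridged and copied from the full benchmark table below; the structure and names are illustrative, not this site's actual pipeline.

    # Scores copied from the full benchmark table on this page (abridged).
    SCORES = {
        # benchmark:        (Gemini 2.5 Flash, GPT-5 Mini)
        "ARC-AGI":          (32.3, 54.3),
        "HELM · GPQA":      (39.0, 75.6),
        "HELM · MMLU-Pro":  (63.9, 83.5),
        "HELM · Omni-MATH": (38.4, 72.2),
        "Terminal Bench":   (17.1, 34.8),
    }

    # Win tally: count benchmarks where GPT-5 Mini scores higher.
    wins = sum(1 for gemini, gpt5_mini in SCORES.values() if gpt5_mini > gemini)
    print(f"GPT-5 Mini wins {wins} of {len(SCORES)} shared benchmarks")

    # Per-benchmark lead, matching the "leads by +X" lines below.
    for name, (gemini, gpt5_mini) in SCORES.items():
        leader = "GPT-5 Mini" if gpt5_mini > gemini else "Gemini 2.5 Flash"
        print(f"{name}: {leader} leads by {abs(gpt5_mini - gemini):+.1f}")
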
Hype vs Reality
Gemini 2.5 Flash · #142 by perf · #14 by attention · OVERHYPED
GPT-5 Mini · #63 by perf · no attention signal · QUIET
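
The page does not say how the OVERHYPED and QUIET badges are assigned. One plausible rule, stated purely as an assumption, compares a model's attention rank against its performance rank; the 20-rank margin and the extra labels below are illustrative guesses, not the site's logic.

    # Assumption only: far more attention than performance -> OVERHYPED;
    # no attention signal at all -> QUIET. Margin and other labels are invented.
    def hype_label(perf_rank: int, attention_rank: int | None, margin: int = 20) -> str:
        if attention_rank is None:
            return "QUIET"
        if perf_rank - attention_rank >= margin:
            return "OVERHYPED"
        if attention_rank - perf_rank >= margin:
            return "UNDERRATED"
        return "FAIR"

    print(hype_label(142, 14))   # Gemini 2.5 Flash -> OVERHYPED
    print(hype_label(63, None))  # GPT-5 Mini -> QUIET
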
Best value
GPT-5 Mini · 1.7x better value than Gemini 2.5 Flash
Gemini 2.5 Flash · 28.6 pts/$ · $1.40/M
GPT-5 Mini · 49.8 pts/$ · $1.13/M
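
A sketch of one way the value figures above could be derived. Two assumptions: the $/M figure is a simple average of the input and output list prices (this reproduces $1.40 and $1.13), and the performance score behind "pts/$" is not published on this page, so the ~40 and ~56 point totals below are implied by the displayed figures rather than taken from any benchmark.

    def blended_price(input_per_m: float, output_per_m: float) -> float:
        # Assumed: even input/output blend; matches the $/M figures shown above.
        return (input_per_m + output_per_m) / 2

    def value_score(perf_points: float, input_per_m: float, output_per_m: float) -> float:
        # pts/$ = overall performance points per blended dollar per million tokens.
        return perf_points / blended_price(input_per_m, output_per_m)

    print(round(blended_price(0.30, 2.50), 2))       # 1.4   -> shown as "$1.40/M" (Gemini 2.5 Flash)
    print(round(blended_price(0.25, 2.00), 3))       # 1.125 -> shown as "$1.13/M" (GPT-5 Mini)
    print(round(value_score(40.0, 0.30, 2.50), 1))   # 28.6 pts/$ with an implied ~40-pt score
    print(round(value_score(56.0, 0.25, 2.00), 1))   # 49.8 pts/$ with an implied ~56-pt score
    print(round(value_score(56.0, 0.25, 2.00) / value_score(40.0, 0.30, 2.50), 1))  # ~1.7x
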
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Gemini 2.5 Flash · GPT-5 Mini
ARC-AGI
GPT-5 Mini leads by +22.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 2.5 Flash
32.3
GPT-5 Mini
54.3
ARC-AGI-2
GPT-5 Mini leads by +1.9
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 2.5 Flash
2.5
GPT-5 Mini
4.4
Fiction.LiveBench
GPT-5 Mini leads by +22.2
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Gemini 2.5 Flash
47.2
GPT-5 Mini
69.4
FrontierMath-2025-02-28-Private
GPT-5 Mini leads by +22.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 2.5 Flash
4.8
GPT-5 Mini
27.2
FrontierMath-Tier-4-2025-07-01-Private
GPT-5 Mini leads by +2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 2.5 Flash
4.2
GPT-5 Mini
6.3
HELM · GPQA
GPT-5 Mini leads by +36.6
Gemini 2.5 Flash
39.0
GPT-5 Mini
75.6
HELM · IFEval
GPT-5 Mini leads by +2.9
Gemini 2.5 Flash
89.8
GPT-5 Mini
92.7
HELM · MMLU-Pro
GPT-5 Mini leads by +19.6
Gemini 2.5 Flash
63.9
GPT-5 Mini
83.5
HELM · Omni-MATH
GPT-5 Mini leads by +33.8
Gemini 2.5 Flash
38.4
GPT-5 Mini
72.2
HELM · WildBench
GPT-5 Mini leads by +3.8
Gemini 2.5 Flash
81.7
GPT-5 Mini
85.5
HLE
GPT-5 Mini leads by +7.7
HLE (Humanity's Last Exam) · crowdsourced expert-level questions designed to be among the hardest possible challenges for AI systems across all domains.
Gemini 2.5 Flash
7.7
GPT-5 Mini
15.4
OTIS Mock AIME 2024-2025
GPT-5 Mini leads by +13.7
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 2.5 Flash
73.0
GPT-5 Mini
86.7
Terminal Bench
GPT-5 Mini leads by +17.7
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Gemini 2.5 Flash
17.1
GPT-5 Mini
34.8
VPCT
GPT-5 Mini leads by +3.3
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Gemini 2.5 Flash
7.0
GPT-5 Mini
10.3
WeirdML
GPT-5 Mini leads by +11.7
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 2.5 Flash
41.0
GPT-5 Mini
52.7
Full benchmark table
Benchmark                                 Gemini 2.5 Flash   GPT-5 Mini
ARC-AGI                                   32.3               54.3
ARC-AGI-2                                 2.5                4.4
Fiction.LiveBench                         47.2               69.4
FrontierMath-2025-02-28-Private           4.8                27.2
FrontierMath-Tier-4-2025-07-01-Private    4.2                6.3
HELM · GPQA                               39.0               75.6
HELM · IFEval                             89.8               92.7
HELM · MMLU-Pro                           63.9               83.5
HELM · Omni-MATH                          38.4               72.2
HELM · WildBench                          81.7               85.5
HLE                                       7.7                15.4
OTIS Mock AIME 2024-2025                  73.0               86.7
Terminal Bench                            17.1               34.8
VPCT                                      7.0                10.3
WeirdML                                   41.0               52.7
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model              Input   Output   Context                      Projected $/mo
Gemini 2.5 Flash   $0.30   $2.50    1.0M tokens (~524 books)     $8.50
GPT-5 Mini         $0.25   $2.00    400K tokens (~200 books)     $6.88
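
A sketch of the projected monthly cost column. The page does not state the input/output mix it assumes; a 75% input / 25% output split over 10M tokens per month reproduces the $8.50 and $6.88 shown above, so that split is used here purely as an assumption.

    def monthly_cost(input_per_m: float, output_per_m: float,
                     total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
        # Dollars per month for a given total token volume, split between
        # input and output tokens and priced per million tokens.
        input_tokens_m = total_tokens_m * input_share
        output_tokens_m = total_tokens_m * (1 - input_share)
        return input_tokens_m * input_per_m + output_tokens_m * output_per_m

    print(f"${monthly_cost(0.30, 2.50):.2f}")  # $8.50  Gemini 2.5 Flash
    print(f"${monthly_cost(0.25, 2.00):.2f}")  # $6.88  GPT-5 Mini
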