Gemini 2.5 Flash vs GPT-5 Mini
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5 Mini wins 15 of 15 shared benchmarks, leading in reasoning, knowledge, and math.
Category leads
Reasoning: GPT-5 Mini · Knowledge: GPT-5 Mini · Math: GPT-5 Mini · Language: GPT-5 Mini · Coding: GPT-5 Mini
Hype vs Reality
Attention vs performance
Gemini 2.5 Flash · #142 by performance · #14 by attention
GPT-5 Mini · #63 by performance · no attention signal
Best value
GPT-5 Mini · 1.7x better value than Gemini 2.5 Flash
Gemini 2.5 Flash · 28.6 pts/$ · $1.40/M
GPT-5 Mini · 49.8 pts/$ · $1.13/M
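The page does not spell out how these value figures are derived. A minimal sketch, assuming (not confirmed by the page) that the $/M figure is a plain 50/50 blend of input and output prices and that pts/$ divides an aggregate performance score by that blended price:

```python
# Sketch only: reconstructs the "Best value" arithmetic under stated assumptions.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens, assuming an even input/output split."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(perf_score: float, price_per_m: float) -> float:
    """Value metric: performance points per blended dollar."""
    return perf_score / price_per_m

gemini_price = blended_price(0.30, 2.50)  # 1.40, matches the $1.40/M shown
mini_price   = blended_price(0.25, 2.00)  # 1.125, rounds to the $1.13/M shown

# The listed 28.6 and 49.8 pts/$ imply aggregate scores of roughly 40 and 56
# points under this pricing assumption; the site's exact scoring index is not stated.
gemini_value = points_per_dollar(40, gemini_price)
mini_value   = points_per_dollar(56, mini_price)
print(f"Gemini 2.5 Flash: ${gemini_price:.2f}/M, {gemini_value:.1f} pts/$")
print(f"GPT-5 Mini:       ${mini_price:.2f}/M, {mini_value:.1f} pts/$")
print(f"Value ratio: {mini_value / gemini_value:.1f}x")  # ~1.7x
```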
Vendor risk
Who is behind each model
Google DeepMind · $4.00T · Tier 1
OpenAI · $840.0B · Tier 1
Head to head
15 benchmarks · 2 models
ARC-AGI · GPT-5 Mini leads by +22.0
The original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 2.5 Flash 32.3 · GPT-5 Mini 54.3

ARC-AGI-2 · GPT-5 Mini leads by +1.9
The second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 2.5 Flash 2.5 · GPT-5 Mini 4.4

Fiction.LiveBench · GPT-5 Mini leads by +22.2
A continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Gemini 2.5 Flash 47.2 · GPT-5 Mini 69.4

FrontierMath-2025-02-28-Private · GPT-5 Mini leads by +22.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 2.5 Flash 4.8 · GPT-5 Mini 27.2

FrontierMath-Tier-4-2025-07-01-Private · GPT-5 Mini leads by +2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 2.5 Flash 4.2 · GPT-5 Mini 6.3

HELM · GPQA · GPT-5 Mini leads by +36.6
Gemini 2.5 Flash 39.0 · GPT-5 Mini 75.6

HELM · IFEval · GPT-5 Mini leads by +2.9
Gemini 2.5 Flash 89.8 · GPT-5 Mini 92.7

HELM · MMLU-Pro · GPT-5 Mini leads by +19.6
Gemini 2.5 Flash 63.9 · GPT-5 Mini 83.5

HELM · Omni-MATH · GPT-5 Mini leads by +33.8
Gemini 2.5 Flash 38.4 · GPT-5 Mini 72.2

HELM · WildBench · GPT-5 Mini leads by +3.8
Gemini 2.5 Flash 81.7 · GPT-5 Mini 85.5

HLE · GPT-5 Mini leads by +7.7
HLE (Humanity's Last Exam) · crowdsourced expert-level questions designed to be among the hardest possible challenges for AI systems across all domains.
Gemini 2.5 Flash 7.7 · GPT-5 Mini 15.4

OTIS Mock AIME 2024–2025 · GPT-5 Mini leads by +13.6
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 2.5 Flash 73.0 · GPT-5 Mini 86.7

Terminal Bench · GPT-5 Mini leads by +17.7
Tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Gemini 2.5 Flash 17.1 · GPT-5 Mini 34.8

VPCT · GPT-5 Mini leads by +3.3
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Gemini 2.5 Flash 7.0 · GPT-5 Mini 10.3

WeirdML · GPT-5 Mini leads by +11.7
Tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 2.5 Flash 41.0 · GPT-5 Mini 52.7
Full benchmark table
| Benchmark | Gemini 2.5 Flash | GPT-5 Mini |
|---|---|---|
| ARC-AGI | 32.3 | 54.3 |
| ARC-AGI-2 | 2.5 | 4.4 |
| Fiction.LiveBench | 47.2 | 69.4 |
| FrontierMath-2025-02-28-Private | 4.8 | 27.2 |
| FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 6.3 |
| HELM · GPQA | 39.0 | 75.6 |
| HELM · IFEval | 89.8 | 92.7 |
| HELM · MMLU-Pro | 63.9 | 83.5 |
| HELM · Omni-MATH | 38.4 | 72.2 |
| HELM · WildBench | 81.7 | 85.5 |
| HLE | 7.7 | 15.4 |
| OTIS Mock AIME 2024–2025 | 73.0 | 86.7 |
| Terminal Bench | 17.1 | 34.8 |
| VPCT | 7.0 | 10.3 |
| WeirdML | 41.0 | 52.7 |
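The summary figures at the top can be rederived from this table. A minimal sketch (not the site's own code) that recomputes the per-benchmark deltas and the 15-of-15 win count:

```python
# Scores copied from the table above: (Gemini 2.5 Flash, GPT-5 Mini).
scores = {
    "ARC-AGI": (32.3, 54.3),
    "ARC-AGI-2": (2.5, 4.4),
    "Fiction.LiveBench": (47.2, 69.4),
    "FrontierMath-2025-02-28-Private": (4.8, 27.2),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 6.3),
    "HELM GPQA": (39.0, 75.6),
    "HELM IFEval": (89.8, 92.7),
    "HELM MMLU-Pro": (63.9, 83.5),
    "HELM Omni-MATH": (38.4, 72.2),
    "HELM WildBench": (81.7, 85.5),
    "HLE": (7.7, 15.4),
    "OTIS Mock AIME 2024-2025": (73.0, 86.7),
    "Terminal Bench": (17.1, 34.8),
    "VPCT": (7.0, 10.3),
    "WeirdML": (41.0, 52.7),
}

# Win count behind the "15 of 15 shared benchmarks" claim.
wins = sum(1 for gemini, mini in scores.values() if mini > gemini)
print(f"GPT-5 Mini wins {wins} of {len(scores)} shared benchmarks")

# Per-benchmark leads; these match the head-to-head cards up to rounding
# (e.g. OTIS shows +13.6 on the page vs 13.7 from the rounded table values).
for name, (gemini, mini) in scores.items():
    print(f"{name}: GPT-5 Mini leads by +{mini - gemini:.1f}")
```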
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 2.5 Flash | $0.30 | $2.50 | 1.0M tokens (~524 books) | $8.50 |
| GPT-5 Mini | $0.25 | $2.00 | 400K tokens (~200 books) | $6.88 |
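The projected monthly figures are consistent with a 10M-token month split 75% input / 25% output; that split is an assumption, since the page does not state its mix. A minimal sketch:

```python
# Sketch only: projected monthly spend at 10M tokens/month,
# assuming a 75/25 input/output split (not stated on the page).

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10, input_share: float = 0.75) -> float:
    """Cost in dollars for total_m_tokens million tokens per month."""
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1 - input_share)
    return input_m * input_per_m + output_m * output_per_m

print(monthly_cost(0.30, 2.50))  # 8.50  -> matches Gemini 2.5 Flash's $8.50/mo
print(monthly_cost(0.25, 2.00))  # 6.875 -> rounds to GPT-5 Mini's $6.88/mo
```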