GPT-4.1 Mini vs Gemini 1.5 Flash (May 2024)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4.1 Mini wins 10 of 10 shared benchmarks. Leads in math, knowledge, and language.
Category leads
math · GPT-4.1 Mini
knowledge · GPT-4.1 Mini
language · GPT-4.1 Mini
reasoning · GPT-4.1 Mini
coding · GPT-4.1 Mini
Hype vs Reality
Attention vs performance
GPT-4.1 Mini · #118 by perf · no signal
Gemini 1.5 Flash (May 2024) · #105 by perf · no signal
Best value
GPT-4.1 Mini · 44.5 pts/$ · $1.00/M
Gemini 1.5 Flash (May 2024) · no price
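How the pts/$ figure is derived isn't stated on the page. One reading that is consistent with the listed numbers: a plain average of GPT-4.1 Mini's $0.40 input and $1.60 output prices gives $1.00/M, and dividing an aggregate score by that price yields pts/$. A minimal sketch under exactly those assumptions; `blended_price` and `points_per_dollar` are hypothetical names, and the implied aggregate score of 44.5 is back-solved from the page, not published.

```python
# Hedged sketch of how the "Best value" figures could be derived.
# The page does not define its price blend or aggregate score, so both
# are assumptions: a plain input/output average happens to reproduce
# the listed $1.00/M, and the implied aggregate score is then 44.5.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Assumed blend: plain average of input and output $ per 1M tokens."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(aggregate_score: float, price_per_m: float) -> float:
    """Value metric: aggregate benchmark points per blended dollar."""
    return aggregate_score / price_per_m

price = blended_price(0.40, 1.60)      # -> 1.00, matching "$1.00/M"
print(points_per_dollar(44.5, price))  # -> 44.5 pts/$
```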
Vendor risk
Who is behind the model
OpenAI · $840.0B · Tier 1
Google DeepMind · $4.00T · Tier 1
Head to head
10 benchmarks · 2 models
FrontierMath-2025-02-28-Private
GPT-4.1 Mini leads by +4.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4.1 Mini 4.5 · Gemini 1.5 Flash (May 2024) 0.1
GPQA diamond
GPT-4.1 Mini leads by +34.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4.1 Mini 54.5 · Gemini 1.5 Flash (May 2024) 20.5
HELM · GPQA
GPT-4.1 Mini leads by +17.7
GPT-4.1 Mini 61.4 · Gemini 1.5 Flash (May 2024) 43.7
HELM · IFEval
GPT-4.1 Mini leads by +7.3
GPT-4.1 Mini 90.4 · Gemini 1.5 Flash (May 2024) 83.1
HELM · MMLU-Pro
GPT-4.1 Mini leads by +10.5
GPT-4.1 Mini 78.3 · Gemini 1.5 Flash (May 2024) 67.8
HELM · Omni-MATH
GPT-4.1 Mini leads by +18.6
GPT-4.1 Mini 49.1 · Gemini 1.5 Flash (May 2024) 30.5
HELM · WildBench
GPT-4.1 Mini leads by +4.6
GPT-4.1 Mini 83.8 · Gemini 1.5 Flash (May 2024) 79.2
MATH level 5
GPT-4.1 Mini leads by +62.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4.1 Mini 87.3 · Gemini 1.5 Flash (May 2024) 25.1
OTIS Mock AIME 2024-2025
GPT-4.1 Mini leads by +40.9
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4.1 Mini 44.7 · Gemini 1.5 Flash (May 2024) 3.8
WeirdML
GPT-4.1 Mini leads by +12.7
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4.1 Mini 37.6 · Gemini 1.5 Flash (May 2024) 24.9
Full benchmark table
| Benchmark | GPT-4.1 Mini | Gemini 1.5 Flash (May 2024) |
|---|---|---|
| FrontierMath-2025-02-28-Private | 4.5 | 0.1 |
| GPQA diamond | 54.5 | 20.5 |
| HELM · GPQA | 61.4 | 43.7 |
| HELM · IFEval | 90.4 | 83.1 |
| HELM · MMLU-Pro | 78.3 | 67.8 |
| HELM · Omni-MATH | 49.1 | 30.5 |
| HELM · WildBench | 83.8 | 79.2 |
| MATH level 5 | 87.3 | 25.1 |
| OTIS Mock AIME 2024-2025 | 44.7 | 3.8 |
| WeirdML | 37.6 | 24.9 |
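The winner summary and per-benchmark leads follow directly from this table. A minimal sketch that reproduces both, with scores copied verbatim from above; the assumption that a win is simply the higher score is ours, since the site's tally rule isn't documented.

```python
# Hedged sketch: reproduce the per-benchmark margins and the "10 of 10"
# winner summary from the table above. Scores are copied verbatim from
# the page; "higher score wins" is an assumed tally rule.

scores = {
    "FrontierMath-2025-02-28-Private": (4.5, 0.1),
    "GPQA diamond": (54.5, 20.5),
    "HELM · GPQA": (61.4, 43.7),
    "HELM · IFEval": (90.4, 83.1),
    "HELM · MMLU-Pro": (78.3, 67.8),
    "HELM · Omni-MATH": (49.1, 30.5),
    "HELM · WildBench": (83.8, 79.2),
    "MATH level 5": (87.3, 25.1),
    "OTIS Mock AIME 2024-2025": (44.7, 3.8),
    "WeirdML": (37.6, 24.9),
}

wins = 0
for name, (gpt41_mini, gemini_flash) in scores.items():
    margin = gpt41_mini - gemini_flash
    wins += margin > 0
    print(f"{name}: GPT-4.1 Mini leads by {margin:+.1f}")

print(f"GPT-4.1 Mini wins {wins} of {len(scores)} shared benchmarks")
```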
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4.1 Mini | $0.40 | $1.60 | 1.0M tokens (~524 books) | $7.00 |
| Gemini 1.5 Flash (May 2024) | — | — | — | — |
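The page doesn't state the input/output token mix behind the projected $/mo column. A 75/25 input/output split is one assumption that reproduces the listed $7.00 for GPT-4.1 Mini at 10M tokens; a sketch, with `monthly_cost` and `input_share` as hypothetical names:

```python
# Hedged sketch of the "projected $/mo at 10M tokens" column. The page
# does not state its input/output token mix; the 75/25 split below is
# an assumption, chosen because it reproduces the listed $7.00.

def monthly_cost(tokens_m: float, input_per_m: float, output_per_m: float,
                 input_share: float = 0.75) -> float:
    """Cost of `tokens_m` million tokens at the given per-1M prices."""
    input_tokens = tokens_m * input_share
    output_tokens = tokens_m * (1 - input_share)
    return input_tokens * input_per_m + output_tokens * output_per_m

# GPT-4.1 Mini: 7.5M input * $0.40 + 2.5M output * $1.60 = $3 + $4
print(monthly_cost(10, 0.40, 1.60))  # -> 7.0, matching the $7.00 projection
```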