GPT-4.1 vs Gemini 1.5 Pro (Feb 2024)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4.1 wins 10 of 12 shared benchmarks. Leads in coding · knowledge · language · math.
Category leads
reasoning · Gemini 1.5 Pro (Feb 2024)
coding · GPT-4.1
knowledge · GPT-4.1
language · GPT-4.1
math · GPT-4.1
Hype vs Reality
Attention vs performance
GPT-4.1 · #123 by performance · no signal
Gemini 1.5 Pro (Feb 2024) · #138 by performance · no signal
Vendor risk
Who is behind the model
OpenAI · $840.0B · Tier 1
Google DeepMind · $4.00T · Tier 1
Head to head
12 benchmarks · 2 models
ARC-AGI-2
Gemini 1.5 Pro (Feb 2024) leads by +0.4
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-4.1: 0.4 · Gemini 1.5 Pro (Feb 2024): 0.8
CadEval
GPT-4.1 leads by +8.0
CadEval · evaluates the ability to generate and reason about Computer-Aided Design code, testing spatial reasoning and engineering knowledge.
GPT-4.1: 42.0 · Gemini 1.5 Pro (Feb 2024): 34.0
GPQA diamond
GPT-4.1 leads by +28.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4.1: 55.9 · Gemini 1.5 Pro (Feb 2024): 27.8
HELM · GPQA
GPT-4.1 leads by +12.5
GPT-4.1: 65.9 · Gemini 1.5 Pro (Feb 2024): 53.4
HELM · IFEval
GPT-4.1 leads by +0.1
GPT-4.1: 83.8 · Gemini 1.5 Pro (Feb 2024): 83.7
HELM · MMLU-Pro
GPT-4.1 leads by +7.4
GPT-4.1: 81.1 · Gemini 1.5 Pro (Feb 2024): 73.7
HELM · Omni-MATH
GPT-4.1 leads by +10.7
GPT-4.1: 47.1 · Gemini 1.5 Pro (Feb 2024): 36.4
HELM · WildBench
GPT-4.1 leads by +4.1
GPT-4.1: 85.4 · Gemini 1.5 Pro (Feb 2024): 81.3
MATH level 5
GPT-4.1 leads by +42.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4.1: 83.0 · Gemini 1.5 Pro (Feb 2024): 40.8
OTIS Mock AIME 2024-2025
GPT-4.1 leads by +31.6
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4.1: 38.3 · Gemini 1.5 Pro (Feb 2024): 6.7
SimpleBench
Gemini 1.5 Pro (Feb 2024) leads by +0.1
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-4.1: 12.4 · Gemini 1.5 Pro (Feb 2024): 12.5
WeirdML
GPT-4.1 leads by +16.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4.1: 39.0 · Gemini 1.5 Pro (Feb 2024): 22.2
Full benchmark table
| Benchmark | GPT-4.1 | Gemini 1.5 Pro (Feb 2024) |
|---|---|---|
| ARC-AGI-2 | 0.4 | 0.8 |
| CadEval | 42.0 | 34.0 |
| GPQA diamond | 55.9 | 27.8 |
| HELM · GPQA | 65.9 | 53.4 |
| HELM · IFEval | 83.8 | 83.7 |
| HELM · MMLU-Pro | 81.1 | 73.7 |
| HELM · Omni-MATH | 47.1 | 36.4 |
| HELM · WildBench | 85.4 | 81.3 |
| MATH level 5 | 83.0 | 40.8 |
| OTIS Mock AIME 2024-2025 | 38.3 | 6.7 |
| SimpleBench | 12.4 | 12.5 |
| WeirdML | 39.0 | 22.2 |
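The per-benchmark margins and the 10-of-12 win count in the summary follow directly from the scores in the table above. The sketch below is a minimal reproduction of that arithmetic (transcribing the table by hand), not how the page itself computes it; higher is treated as better on every benchmark.

```python
# Sketch: reproduce the head-to-head margins and win count from the table above.
# Scores are transcribed as (GPT-4.1, Gemini 1.5 Pro Feb 2024); higher is better.

scores = {
    "ARC-AGI-2":                (0.4, 0.8),
    "CadEval":                  (42.0, 34.0),
    "GPQA diamond":             (55.9, 27.8),
    "HELM · GPQA":              (65.9, 53.4),
    "HELM · IFEval":            (83.8, 83.7),
    "HELM · MMLU-Pro":          (81.1, 73.7),
    "HELM · Omni-MATH":         (47.1, 36.4),
    "HELM · WildBench":         (85.4, 81.3),
    "MATH level 5":             (83.0, 40.8),
    "OTIS Mock AIME 2024-2025": (38.3, 6.7),
    "SimpleBench":              (12.4, 12.5),
    "WeirdML":                  (39.0, 22.2),
}

gpt_wins = 0
for name, (gpt41, gemini15) in scores.items():
    leader = "GPT-4.1" if gpt41 > gemini15 else "Gemini 1.5 Pro (Feb 2024)"
    margin = abs(gpt41 - gemini15)
    gpt_wins += gpt41 > gemini15
    print(f"{name}: {leader} leads by +{margin:.1f}")

print(f"GPT-4.1 wins {gpt_wins} of {len(scores)} shared benchmarks")  # -> 10 of 12
```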
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4.1 | $2.00 | $8.00 | 1.0M tokens (~524 books) | $35.00 |
| Gemini 1.5 Pro (Feb 2024) | — | — | — | — |
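The projected monthly figure follows from the per-1M-token rates once an input/output split over the 10M tokens is assumed; the page does not state which split it uses. The sketch below assumes a hypothetical 75% input / 25% output mix, which happens to reproduce the $35.00 shown for GPT-4.1.

```python
# Sketch: projected monthly cost at 10M tokens from per-1M-token rates.
# The 75/25 input/output split is an assumption (not stated on the page).

def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    input_cost = input_per_m * total_tokens_m * input_share
    output_cost = output_per_m * total_tokens_m * (1.0 - input_share)
    return input_cost + output_cost

# GPT-4.1 at $2.00 in / $8.00 out: 7.5M * $2 + 2.5M * $8 = $15 + $20
print(projected_monthly_cost(2.00, 8.00))  # -> 35.0
```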