GPT-4.1 vs Phi 4
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4.1 wins all 3 shared benchmarks, leading in both the knowledge and math categories.
Category leads
knowledge: GPT-4.1 · math: GPT-4.1
Hype vs Reality
Attention vs performance
GPT-4.1 · #121 by performance · no attention signal
Phi 4 · #124 by performance · no attention signal
Best value
Phi 4 · 48.7x better value than GPT-4.1
GPT-4.1 · 8.7 pts/$ · $5.00/M
Phi 4 · 421.5 pts/$ · $0.10/M
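The value figures above are a points-per-dollar ratio. A minimal sketch of that arithmetic, assuming value = aggregate benchmark score ÷ listed $/M rate; the aggregate scores below are back-solved from the displayed figures, since the page does not show which score aggregate it uses:

```python
def points_per_dollar(score: float, price_per_m: float) -> float:
    """Benchmark points bought per dollar of tokens (score / price per 1M tokens)."""
    return score / price_per_m

# Aggregate scores back-solved from the page's displayed figures (assumption):
#   GPT-4.1: 8.7 pts/$ * $5.00/M  -> ~43.5 points
#   Phi 4: 421.5 pts/$ * $0.10/M  -> ~42.2 points
gpt41 = points_per_dollar(43.5, 5.00)    # ~8.7 pts/$
phi4 = points_per_dollar(42.15, 0.10)    # ~421.5 pts/$

# The page's "48.7x better value" is this ratio; the rounded inputs here
# give ~48.4x, so the page presumably divides unrounded intermediates.
print(f"Phi 4 value multiple: {phi4 / gpt41:.1f}x")
```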
Vendor risk
Who is behind the model
GPT-4.1 · OpenAI · $840.0B · Tier 1
Phi 4 · Microsoft · $3.00T · Big Tech
Head to head
3 benchmarks · 2 models
GPQA diamond · GPT-4.1 leads by +14.5
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4.1: 55.9 · Phi 4: 41.4
MATH level 5 · GPT-4.1 leads by +18.1
The hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4.1: 83.0 · Phi 4: 64.9
OTIS Mock AIME 2024–2025 · GPT-4.1 leads by +24.6
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4.1: 38.3 · Phi 4: 13.7
Full benchmark table
| Benchmark | GPT-4.1 | Phi 4 |
|---|---|---|
| GPQA diamond | 55.9 | 41.4 |
| MATH level 5 | 83.0 | 64.9 |
| OTIS Mock AIME 2024–2025 | 38.3 | 13.7 |
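The per-benchmark margins and the 3-of-3 winner summary fall out of this table directly. A minimal sketch of that tally; the score dictionary simply mirrors the table above:

```python
# Shared-benchmark scores, mirroring the table above.
scores = {
    "GPQA diamond": {"GPT-4.1": 55.9, "Phi 4": 41.4},
    "MATH level 5": {"GPT-4.1": 83.0, "Phi 4": 64.9},
    "OTIS Mock AIME 2024-2025": {"GPT-4.1": 38.3, "Phi 4": 13.7},
}

wins = 0
for bench, s in scores.items():
    lead = s["GPT-4.1"] - s["Phi 4"]  # signed margin in points
    wins += lead > 0
    print(f"{bench}: GPT-4.1 leads by {lead:+.1f}")

print(f"GPT-4.1 wins {wins} of {len(scores)} shared benchmarks")
```

Running this reproduces the leads shown above (+14.5, +18.1, +24.6) and the "wins 3 of 3" headline.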
Pricing · per 1M tokens · projected $/mo at 10M tokens
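The projected monthly figure is a straight multiplication of the listed rate by volume. A minimal sketch, assuming the listed $/M price is a single blended rate (the page does not break out input vs output pricing):

```python
def monthly_cost(price_per_m: float, tokens_per_month: int = 10_000_000) -> float:
    """Projected monthly spend: (tokens / 1M) * listed price per 1M tokens."""
    return tokens_per_month / 1_000_000 * price_per_m

print(f"GPT-4.1: ${monthly_cost(5.00):,.2f}/mo")  # $50.00/mo at 10M tokens
print(f"Phi 4:   ${monthly_cost(0.10):,.2f}/mo")  # $1.00/mo at 10M tokens
```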