GPT-4 (older v0314) vs GPT-4o-mini
Side-by-side benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4 (older v0314) wins 3 of 5 shared benchmarks, leading in coding and math.
Category leads
- coding: GPT-4 (older v0314)
- knowledge: GPT-4o-mini
- math: GPT-4 (older v0314)
Hype vs Reality
Attention vs performance
- GPT-4 (older v0314): #72 by performance · no signal
- GPT-4o-mini: #146 by performance · no signal
Best value
GPT-4o-mini · 86.4x better value than GPT-4 (older v0314)

| Model | Value | Blended price |
|---|---|---|
| GPT-4 (older v0314) | 1.2 pts/$ | $45.00/M |
| GPT-4o-mini | 105.6 pts/$ | $0.38/M |
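A pts/$ value score is presumably some aggregate benchmark score divided by the blended price, but the page does not publish its formula. The sketch below uses a plain mean of the five shared benchmark scores as the aggregate; that is an assumption, and it yields figures close to, but not identical to, the ones shown above.

```python
# Hypothetical "pts/$" value score: aggregation is assumed to be a
# plain mean of the five shared benchmark scores, since the page
# does not state its actual formula.

def value_pts_per_dollar(scores, blended_price_per_m):
    """Mean benchmark score per dollar of blended $/1M-token price."""
    return sum(scores) / len(scores) / blended_price_per_m

gpt4_scores = [66.2, 14.3, 92.0, 81.9, 0.5]  # Aider, GPQA, GSM8K, MMLU, OTIS
mini_scores = [55.6, 17.0, 91.3, 75.7, 6.8]

print(value_pts_per_dollar(gpt4_scores, 45.00))  # ~1.13 pts/$
print(value_pts_per_dollar(mini_scores, 0.38))   # ~129.7 pts/$
```

The page's own figures (1.2 and 105.6 pts/$) imply a different aggregation or unrounded underlying data; the sketch only illustrates the shape of the metric.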
Vendor risk
Who is behind the model
Both models come from the same vendor: OpenAI · $840.0B · Tier 1.
Head to head
5 benchmarks · 2 models
Aider · Code Editing
GPT-4 (older v0314) leads by +10.6
Aider Code Editing · practical code-editing tasks run through the Aider pair-programming tool, measuring how reliably a model edits existing code.
GPT-4 (older v0314): 66.2
GPT-4o-mini: 55.6
GPQA diamond
GPT-4o-mini leads by +2.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4 (older v0314): 14.3
GPT-4o-mini: 17.0
GSM8K
GPT-4 (older v0314) leads by +0.7
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
GPT-4 (older v0314): 92.0
GPT-4o-mini: 91.3
MMLU
GPT-4 (older v0314) leads by +6.2
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4 (older v0314): 81.9
GPT-4o-mini: 75.7
OTIS Mock AIME 2024-2025
GPT-4o-mini leads by +6.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4 (older v0314): 0.5
GPT-4o-mini: 6.8
Full benchmark table
| Benchmark | GPT-4 (older v0314) | GPT-4o-mini |
|---|---|---|
| Aider · Code Editing | 66.2 | 55.6 |
| GPQA diamond | 14.3 | 17.0 |
| GSM8K | 92.0 | 91.3 |
| MMLU | 81.9 | 75.7 |
| OTIS Mock AIME 2024-2025 | 0.5 | 6.8 |
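The 3-of-5 tally in the winner summary can be reproduced directly from the shared benchmarks:

```python
# Recomputes the winner tally from the shared-benchmark scores above.
results = {  # benchmark: (GPT-4 (older v0314), GPT-4o-mini)
    "Aider · Code Editing":     (66.2, 55.6),
    "GPQA diamond":             (14.3, 17.0),
    "GSM8K":                    (92.0, 91.3),
    "MMLU":                     (81.9, 75.7),
    "OTIS Mock AIME 2024-2025": (0.5, 6.8),
}
gpt4_wins = sum(gpt4 > mini for gpt4, mini in results.values())
print(f"GPT-4 (older v0314) wins {gpt4_wins} of {len(results)} benchmarks")
# → GPT-4 (older v0314) wins 3 of 5 benchmarks
```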
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4 (older v0314) | $30.00 | $60.00 | 8K tokens (~4 books) | $375.00 |
| GPT-4o-mini | $0.15 | $0.60 | 128K tokens (~64 books) | $2.62 |
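The projected $/mo column appears to assume 10M tokens per month split 75/25 between input and output (7.5M × $30 + 2.5M × $60 = $375). That split is inferred from the figures, not stated on the page. A minimal sketch under that assumption:

```python
def projected_monthly_cost(input_per_m, output_per_m,
                           total_tokens_m=10.0, input_share=0.75):
    # The 75/25 input/output split is inferred from the page's
    # figures, not documented; adjust input_share for your workload.
    input_cost = total_tokens_m * input_share * input_per_m
    output_cost = total_tokens_m * (1 - input_share) * output_per_m
    return input_cost + output_cost

print(projected_monthly_cost(30.00, 60.00))             # 375.0
print(round(projected_monthly_cost(0.15, 0.60), 2))     # 2.62
```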