
GPT-4 (older v0314) vs GPT-4o-mini

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-4 (older v0314) wins 3 of 5 shared benchmarks. Leads in coding · math.

Category leads
coding · GPT-4 (older v0314)
knowledge · GPT-4o-mini
math · GPT-4 (older v0314)
Hype vs Reality
GPT-4 (older v0314) · #72 by perf · no signal · QUIET
GPT-4o-mini · #146 by perf · no signal · QUIET
Best value
GPT-4o-mini · 86.4x better value than GPT-4 (older v0314)
GPT-4 (older v0314) · 1.2 pts/$ · $45.00/M
GPT-4o-mini · 105.6 pts/$ · $0.38/M
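The pts/$ figures divide a benchmark score by a blended token price. The site's exact aggregation is not stated; a minimal sketch, assuming "points" is the mean of the five shared benchmark scores and the blended price is a simple 1:1 average of input and output $/M (an assumption that does reproduce the $45.00/M and $0.38/M shown):

```python
# Illustrative value calculation. Assumptions (not stated on the page):
# - "points" = mean of the five shared benchmark scores
# - blended price = simple 1:1 average of input and output $/M tokens
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """1:1 blend of input and output price per 1M tokens."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(scores: list[float],
                      input_per_m: float, output_per_m: float) -> float:
    """Mean benchmark score per blended dollar per 1M tokens."""
    return (sum(scores) / len(scores)) / blended_price(input_per_m, output_per_m)

gpt4_scores = [66.2, 14.3, 92.0, 81.9, 0.5]   # Aider, GPQA, GSM8K, MMLU, OTIS
mini_scores = [55.6, 17.0, 91.3, 75.7, 6.8]

print(blended_price(30.00, 60.00))  # 45.0  -> the $45.00/M shown
print(blended_price(0.15, 0.60))    # 0.375 -> rounds to the $0.38/M shown
```

Under these assumptions the blended prices match the page, but the resulting pts/$ values do not exactly reproduce the 1.2 and 105.6 shown, so the site likely weights or selects benchmark scores differently.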
Vendor risk
Both models · OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
GPT-4 (older v0314) vs GPT-4o-mini
Aider · Code Editing
GPT-4 (older v0314) leads by +10.6
GPT-4 (older v0314): 66.2 · GPT-4o-mini: 55.6
GPQA diamond
GPT-4o-mini leads by +2.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4 (older v0314): 14.3 · GPT-4o-mini: 17.0
GSM8K
GPT-4 (older v0314) leads by +0.7
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
GPT-4 (older v0314): 92.0 · GPT-4o-mini: 91.3
MMLU
GPT-4 (older v0314) leads by +6.2
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4 (older v0314): 81.9 · GPT-4o-mini: 75.7
OTIS Mock AIME 2024-2025
GPT-4o-mini leads by +6.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4 (older v0314): 0.5 · GPT-4o-mini: 6.8
Full benchmark table
Benchmark | GPT-4 (older v0314) | GPT-4o-mini
Aider · Code Editing | 66.2 | 55.6
GPQA diamond | 14.3 | 17.0
GSM8K | 92.0 | 91.3
MMLU | 81.9 | 75.7
OTIS Mock AIME 2024-2025 | 0.5 | 6.8
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
GPT-4 (older v0314) | $30.00 | $60.00 | 8K tokens (~4 books) | $375.00
GPT-4o-mini | $0.15 | $0.60 | 128K tokens (~64 books) | $2.62
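The projected monthly figures are consistent with a 3:1 input:output token split (7.5M input + 2.5M output of the 10M total); that split is an inference from the numbers, not something the page states. A sketch:

```python
# Projected monthly cost at a given token volume. The 3:1 input:output
# split is an assumption that happens to reproduce both pricing rows.
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10.0, input_share: float = 0.75) -> float:
    """Cost in dollars for total_m_tokens million tokens at the given split."""
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1.0 - input_share)
    return input_m * input_per_m + output_m * output_per_m

print(monthly_cost(30.00, 60.00))  # 375.0 -> the $375.00 shown
print(monthly_cost(0.15, 0.60))    # 2.625 -> rounds to the $2.62 shown
```

Adjusting `input_share` lets you re-project costs for workloads that are heavier on output tokens, where GPT-4 (older v0314)'s $60.00/M output rate dominates.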