GPT-4o (2024-05-13) vs GPT-4o (2024-11-20) vs Claude 3 Haiku
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4o (2024-11-20) wins 6 of 9 shared benchmarks, with leads in knowledge, math, and coding.
Category leads
Knowledge: GPT-4o (2024-11-20) · Math: GPT-4o (2024-11-20) · Coding: GPT-4o (2024-11-20)
Hype vs Reality
Attention vs performance
GPT-4o (2024-05-13) · #89 by performance · no signal
GPT-4o (2024-11-20) · #156 by performance · no signal
Claude 3 Haiku · #191 by performance · no signal
Best value
Claude 3 Haiku · 6.3x better value than GPT-4o (2024-11-20)

| Model | Value (pts/$) | Price ($/1M tokens) |
|---|---|---|
| GPT-4o (2024-05-13) | 5.1 | $10.00 |
| GPT-4o (2024-11-20) | 6.0 | $6.25 |
| Claude 3 Haiku | 38.3 | $0.75 |
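The value figures can be sanity-checked from the numbers above. A minimal sketch, assuming the $/1M price is a simple average of input and output prices (it matches the pricing table below) and that "better value" is the ratio of the two pts/$ figures; the exact aggregation behind pts/$ is not stated on the page:

```python
# Blended price: simple average of input and output $/1M tokens (assumption;
# it reproduces the $/1M column shown above).
prices = {                      # $ per 1M tokens: (input, output)
    "GPT-4o (2024-05-13)": (5.00, 15.00),
    "GPT-4o (2024-11-20)": (2.50, 10.00),
    "Claude 3 Haiku":      (0.25, 1.25),
}
blended = {m: (inp + out) / 2 for m, (inp, out) in prices.items()}
print(blended)  # {'GPT-4o (2024-05-13)': 10.0, 'GPT-4o (2024-11-20)': 6.25, 'Claude 3 Haiku': 0.75}

# The "x better value" figure is simply the quotient of the two pts/$ numbers.
pts_per_dollar = {"GPT-4o (2024-11-20)": 6.0, "Claude 3 Haiku": 38.3}
ratio = pts_per_dollar["Claude 3 Haiku"] / pts_per_dollar["GPT-4o (2024-11-20)"]
print(f"{ratio:.1f}x")  # ~6.4x from the rounded figures; the page reports 6.3x
```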
Vendor risk
Who is behind the model
OpenAI (GPT-4o 2024-05-13 and 2024-11-20) · $840.0B · Tier 1
Anthropic (Claude 3 Haiku) · $380.0B · Tier 1
Head to head
9 benchmarks · 3 models
GPQA diamond
GPT-4o (2024-11-20) leads by +0.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o (2024-05-13): 31.9 · GPT-4o (2024-11-20): 32.3 · Claude 3 Haiku: 15.1
MATH level 5
GPT-4o (2024-11-20) leads by +2.3
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o (2024-05-13): 51.0 · GPT-4o (2024-11-20): 53.3 · Claude 3 Haiku: 14.9
MMLU
GPT-4o (2024-11-20) leads by +0.2
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4o (2024-05-13): 78.9 · GPT-4o (2024-11-20): 79.1 · Claude 3 Haiku: 65.1
OTIS Mock AIME 2024-2025
GPT-4o (2024-11-20) leads by +0.1
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o (2024-05-13): 6.2 · GPT-4o (2024-11-20): 6.3 · Claude 3 Haiku: 1.7
ScienceQA
ScienceQA · multimodal science questions spanning natural science, social science, and language science with diverse question formats and image context.
GPT-4o (2024-05-13): 84.7 · GPT-4o (2024-11-20): 84.7 · Claude 3 Haiku: 62.7
Aider · Code Editing
GPT-4o (2024-05-13) leads by +1.5
GPT-4o (2024-05-13): 72.9 · GPT-4o (2024-11-20): 71.4 · Claude 3 Haiku: —
Balrog
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
GPT-4o (2024-05-13): 32.3 · GPT-4o (2024-11-20): 32.3 · Claude 3 Haiku: —
CadEval
GPT-4o (2024-11-20) leads by +14.0
CadEval · evaluates the ability to generate and reason about Computer-Aided Design code, testing spatial reasoning and engineering knowledge.
GPT-4o (2024-05-13): — · GPT-4o (2024-11-20): 26.0 · Claude 3 Haiku: 12.0
WeirdML
GPT-4o (2024-11-20) leads by +15.3
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4o (2024-05-13): — · GPT-4o (2024-11-20): 25.1 · Claude 3 Haiku: 9.8
Full benchmark table
| Benchmark | GPT-4o (2024-05-13) | GPT-4o (2024-11-20) | Claude 3 Haiku |
|---|---|---|---|
| GPQA diamond | 31.9 | 32.3 | 15.1 |
| MATH level 5 | 51.0 | 53.3 | 14.9 |
| MMLU | 78.9 | 79.1 | 65.1 |
| OTIS Mock AIME 2024-2025 | 6.2 | 6.3 | 1.7 |
| ScienceQA | 84.7 | 84.7 | 62.7 |
| Aider · Code Editing | 72.9 | 71.4 | — |
| Balrog | 32.3 | 32.3 | — |
| CadEval | — | 26.0 | 12.0 |
| WeirdML | — | 25.1 | 9.8 |
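The "6 of 9" win count in the summary can be reproduced from this table. A minimal sketch, assuming a win means the strictly highest score among the models evaluated on a given benchmark (ties and missing scores do not count):

```python
# Count benchmark wins per model from the table above. A model "wins" a
# benchmark when it has the strictly highest score among the models that
# were evaluated on it; ties and missing scores (None) are excluded.
scores = {
    #  benchmark:               (GPT-4o 05-13, GPT-4o 11-20, Claude 3 Haiku)
    "GPQA diamond":             (31.9, 32.3, 15.1),
    "MATH level 5":             (51.0, 53.3, 14.9),
    "MMLU":                     (78.9, 79.1, 65.1),
    "OTIS Mock AIME 2024-2025": (6.2, 6.3, 1.7),
    "ScienceQA":                (84.7, 84.7, 62.7),
    "Aider Code Editing":       (72.9, 71.4, None),
    "Balrog":                   (32.3, 32.3, None),
    "CadEval":                  (None, 26.0, 12.0),
    "WeirdML":                  (None, 25.1, 9.8),
}
models = ["GPT-4o (2024-05-13)", "GPT-4o (2024-11-20)", "Claude 3 Haiku"]
wins = dict.fromkeys(models, 0)
for bench, row in scores.items():
    present = [(s, m) for s, m in zip(row, models) if s is not None]
    best = max(s for s, _ in present)
    leaders = [m for s, m in present if s == best]
    if len(leaders) == 1:        # ties are not counted as wins
        wins[leaders[0]] += 1
print(wins)  # {'GPT-4o (2024-05-13)': 1, 'GPT-4o (2024-11-20)': 6, 'Claude 3 Haiku': 0}
```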
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4o (2024-05-13) | $5.00 | $15.00 | 128K tokens (~64 books) | $75.00 |
| GPT-4o (2024-11-20) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75 |
| Claude 3 Haiku | $0.25 | $1.25 | 200K tokens (~100 books) | $5.00 |
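A minimal sketch of how the projected monthly figures could be derived. The page does not state its input/output token mix; a 75% input / 25% output split over 10M tokens per month reproduces the listed amounts exactly, so that split is assumed here:

```python
# Projected monthly cost at a fixed token budget, under an assumed
# 75% input / 25% output split (not stated on the page).
MONTHLY_TOKENS = 10_000_000
INPUT_SHARE = 0.75             # assumption

prices = {                      # $ per 1M tokens: (input, output)
    "GPT-4o (2024-05-13)": (5.00, 15.00),
    "GPT-4o (2024-11-20)": (2.50, 10.00),
    "Claude 3 Haiku":      (0.25, 1.25),
}
for model, (inp, outp) in prices.items():
    in_millions = MONTHLY_TOKENS * INPUT_SHARE / 1e6
    out_millions = MONTHLY_TOKENS * (1 - INPUT_SHARE) / 1e6
    cost = in_millions * inp + out_millions * outp
    print(f"{model}: ${cost:.2f}/mo")
# GPT-4o (2024-05-13): $75.00/mo · GPT-4o (2024-11-20): $43.75/mo · Claude 3 Haiku: $5.00/mo
```

Shifting the assumed split toward output tokens raises all three projections, since output tokens cost 3x to 5x more than input tokens for these models.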