
GPT-4o (2024-05-13) vs GPT-4o (2024-11-20) vs Claude 3 Haiku

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-4o (2024-11-20) wins 6 of the 9 benchmarks compared here (two end in ties; GPT-4o (2024-05-13) takes Aider code editing). It leads in knowledge · math · coding.

Category leads
knowledge · GPT-4o (2024-11-20)
math · GPT-4o (2024-11-20)
coding · GPT-4o (2024-11-20)
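The winner tally can be reproduced from the scores in the head-to-head section below. A minimal sketch, with two assumptions of ours: a tie counts as no win, and the shorthand model keys are for readability only:

```python
# Reproduce the "wins 6 of 9" tally from the benchmark scores on this page.
# Assumption: a model "wins" a benchmark only if it strictly beats every
# other model scored on it; ties (ScienceQA, Balrog) produce no winner.
scores = {
    "GPQA diamond":             {"gpt-4o-05-13": 31.9, "gpt-4o-11-20": 32.3, "haiku": 15.1},
    "MATH level 5":             {"gpt-4o-05-13": 51.0, "gpt-4o-11-20": 53.3, "haiku": 14.9},
    "MMLU":                     {"gpt-4o-05-13": 78.9, "gpt-4o-11-20": 79.1, "haiku": 65.1},
    "OTIS Mock AIME 2024-2025": {"gpt-4o-05-13": 6.2,  "gpt-4o-11-20": 6.3,  "haiku": 1.7},
    "ScienceQA":                {"gpt-4o-05-13": 84.7, "gpt-4o-11-20": 84.7, "haiku": 62.7},
    "Aider (code editing)":     {"gpt-4o-05-13": 72.9, "gpt-4o-11-20": 71.4},
    "Balrog":                   {"gpt-4o-05-13": 32.3, "gpt-4o-11-20": 32.3},
    "CadEval":                  {"gpt-4o-11-20": 26.0, "haiku": 12.0},
    "WeirdML":                  {"gpt-4o-11-20": 25.1, "haiku": 9.8},
}

wins: dict[str, int] = {}
for bench, by_model in scores.items():
    best = max(by_model.values())
    leaders = [m for m, s in by_model.items() if s == best]
    if len(leaders) == 1:  # a tie yields no winner
        wins[leaders[0]] = wins.get(leaders[0], 0) + 1

print(wins)  # {'gpt-4o-11-20': 6, 'gpt-4o-05-13': 1}
```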
Hype vs Reality
GPT-4o (2024-05-13) · #89 by perf · no signal · QUIET
GPT-4o (2024-11-20) · #156 by perf · no signal · QUIET
Claude 3 Haiku · #191 by perf · no signal · QUIET
Best value
Claude 3 Haiku · 6.3x better value than GPT-4o (2024-11-20)
GPT-4o (2024-05-13) · 5.1 pts/$ · $10.00/M
GPT-4o (2024-11-20) · 6.0 pts/$ · $6.25/M
Claude 3 Haiku · 38.3 pts/$ · $0.75/M
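The pts/$ figures divide an aggregate benchmark score by a blended token price. A sketch of the apparent arithmetic: the (input + output) / 2 blend is our inference, which reproduces the $10.00, $6.25, and $0.75 per-million figures, while the aggregate behind the "pts" numerator is not specified on this page:

```python
# Blended price and value ratio behind the "Best value" card.
# Assumption: blended $/M = (input + output) / 2, which matches the
# per-million figures shown above. The pts/$ values are taken from the
# page as-is because the underlying score aggregate is not documented here.
pricing = {  # model: (input $/M, output $/M)
    "GPT-4o (2024-05-13)": (5.00, 15.00),
    "GPT-4o (2024-11-20)": (2.50, 10.00),
    "Claude 3 Haiku":      (0.25, 1.25),
}
pts_per_dollar = {"GPT-4o (2024-05-13)": 5.1,
                  "GPT-4o (2024-11-20)": 6.0,
                  "Claude 3 Haiku":      38.3}

for model, (inp, out) in pricing.items():
    print(f"{model}: ${(inp + out) / 2:.2f}/M blended")

# The headline multiple is the ratio of the two pts/$ figures.
ratio = pts_per_dollar["Claude 3 Haiku"] / pts_per_dollar["GPT-4o (2024-11-20)"]
print(f"{ratio:.2f}x")  # 6.38x -> shown as "6.3x" above, presumably truncated
```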
Vendor risk
OpenAI (vendor for both GPT-4o releases) · $840.0B · Tier 1 · Medium risk
Anthropic · $380.0B · Tier 1 · Medium risk
Head to head
GPT-4o (2024-05-13) · GPT-4o (2024-11-20) · Claude 3 Haiku
GPQA diamond
GPT-4o (2024-11-20) leads by +0.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o (2024-05-13) · 31.9
GPT-4o (2024-11-20) · 32.3
Claude 3 Haiku · 15.1
MATH level 5
GPT-4o (2024-11-20) leads by +2.3
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o (2024-05-13) · 51.0
GPT-4o (2024-11-20) · 53.3
Claude 3 Haiku · 14.9
MMLU
GPT-4o (2024-11-20) leads by +0.2
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4o (2024-05-13) · 78.9
GPT-4o (2024-11-20) · 79.1
Claude 3 Haiku · 65.1
OTIS Mock AIME 2024-2025
GPT-4o (2024-11-20) leads by +0.1
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o (2024-05-13) · 6.2
GPT-4o (2024-11-20) · 6.3
Claude 3 Haiku · 1.7
ScienceQA
Tied at 84.7
ScienceQA · multimodal science questions spanning natural science, social science, and language science with diverse question formats and image context.
GPT-4o (2024-05-13) · 84.7
GPT-4o (2024-11-20) · 84.7
Claude 3 Haiku · 62.7
Aider · Code Editing
GPT-4o (2024-05-13) leads by +1.5
GPT-4o (2024-05-13) · 72.9
GPT-4o (2024-11-20) · 71.4
Balrog
Tied at 32.3
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
GPT-4o (2024-05-13) · 32.3
GPT-4o (2024-11-20) · 32.3
CadEval
GPT-4o (2024-11-20) leads by +14.0
CadEval · evaluates the ability to generate and reason about Computer-Aided Design code, testing spatial reasoning and engineering knowledge.
GPT-4o (2024-11-20) · 26.0
Claude 3 Haiku · 12.0
WeirdML
GPT-4o (2024-11-20) leads by +15.3
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4o (2024-11-20) · 25.1
Claude 3 Haiku · 9.8
Full benchmark table (— = no reported score; benchmark descriptions appear in the head-to-head section above)
Benchmark · GPT-4o (2024-05-13) · GPT-4o (2024-11-20) · Claude 3 Haiku
GPQA diamond · 31.9 · 32.3 · 15.1
MATH level 5 · 51.0 · 53.3 · 14.9
MMLU · 78.9 · 79.1 · 65.1
OTIS Mock AIME 2024-2025 · 6.2 · 6.3 · 1.7
ScienceQA · 84.7 · 84.7 · 62.7
Aider (code editing) · 72.9 · 71.4 · —
Balrog · 32.3 · 32.3 · —
CadEval · — · 26.0 · 12.0
WeirdML · — · 25.1 · 9.8
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
GPT-4o (2024-05-13) · $5.00 · $15.00 · 128K tokens (~64 books) · $75.00
GPT-4o (2024-11-20) · $2.50 · $10.00 · 128K tokens (~64 books) · $43.75
Claude 3 Haiku · $0.25 · $1.25 · 200K tokens (~100 books) · $5.00
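The projected monthly cost is consistent with a 10M-token month split 75% input / 25% output. That split is our inference, not something the page states; it happens to reproduce all three projections exactly. A sketch:

```python
# Projected $/mo at 10M tokens/month.
# Assumption: 75% input / 25% output, inferred because it reproduces the
# $75.00, $43.75, and $5.00 projections exactly.
PRICES = {  # model: (input $/M tokens, output $/M tokens)
    "GPT-4o (2024-05-13)": (5.00, 15.00),
    "GPT-4o (2024-11-20)": (2.50, 10.00),
    "Claude 3 Haiku":      (0.25, 1.25),
}

def monthly_cost(input_price: float, output_price: float,
                 tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Dollar cost for tokens_m million tokens at the given input share."""
    return tokens_m * (input_share * input_price + (1 - input_share) * output_price)

for model, (inp, out) in PRICES.items():
    print(f"{model}: ${monthly_cost(inp, out):.2f}/mo")
# GPT-4o (2024-05-13): $75.00/mo
# GPT-4o (2024-11-20): $43.75/mo
# Claude 3 Haiku: $5.00/mo
```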