GPT-4o (2024-08-06) vs Claude Haiku 4.5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Haiku 4.5 wins 4 of 4 shared benchmarks, with category leads in math and knowledge.
Category leads
math · Claude Haiku 4.5
knowledge · Claude Haiku 4.5
Hype vs Reality
Attention vs performance
GPT-4o (2024-08-06) · #167 by performance · no signal
Claude Haiku 4.5 · #161 by performance · no signal
Best value
Claude Haiku 4.5 · 2.2x better value than GPT-4o (2024-08-06)

| Model | Value (pts/$) | Price ($/M) |
|---|---|---|
| GPT-4o (2024-08-06) | 5.7 | $6.25 |
| Claude Haiku 4.5 | 12.4 | $3.00 |
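The 2.2x figure is simply the ratio of the two pts/$ values, and the $/M prices match the unweighted mean of each model's input and output rates from the pricing table below. A minimal sketch in Python, assuming those definitions hold (helper names are illustrative):

```python
# Reproduces the value figures above, assuming "$/M" is the plain
# average of input and output price and the value multiple is the
# ratio of the two pts/$ numbers.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Unweighted mean of input and output $/M (assumed definition)."""
    return (input_per_m + output_per_m) / 2

gpt4o_blended = blended_price(2.50, 10.00)  # -> 6.25, matches $6.25/M
haiku_blended = blended_price(1.00, 5.00)   # -> 3.00, matches $3.00/M

value_ratio = 12.4 / 5.7                    # pts/$ values from the card above
print(f"{value_ratio:.1f}x")                # -> 2.2x, matches "2.2x better value"
```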
Vendor risk
Who is behind the model
OpenAI · $840.0B · Tier 1
Anthropic · $380.0B · Tier 1
Head to head
4 benchmarks · 2 models
FrontierMath-2025-02-28-Private · Claude Haiku 4.5 leads by +5.6
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4o (2024-08-06): 0.3 · Claude Haiku 4.5: 5.9

GPQA diamond · Claude Haiku 4.5 leads by +29.3
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o (2024-08-06): 32.3 · Claude Haiku 4.5: 61.6

MATH level 5 · Claude Haiku 4.5 leads by +43.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o (2024-08-06): 53.3 · Claude Haiku 4.5: 96.4

OTIS Mock AIME 2024-2025 · Claude Haiku 4.5 leads by +60.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o (2024-08-06): 6.3 · Claude Haiku 4.5: 66.6
Full benchmark table
| Benchmark | GPT-4o (2024-08-06) | Claude Haiku 4.5 |
|---|---|---|
| FrontierMath-2025-02-28-Private | 0.3 | 5.9 |
| GPQA diamond | 32.3 | 61.6 |
| MATH level 5 | 53.3 | 96.4 |
| OTIS Mock AIME 2024-2025 | 6.3 | 66.6 |
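The per-benchmark leads and the 4-of-4 win count in the summary can be recomputed directly from this table; a short sketch (scores copied from the table above):

```python
# Recompute each "leads by" delta and the shared-benchmark win count
# from the scores in the table above.

scores = {
    "FrontierMath-2025-02-28-Private": (0.3, 5.9),
    "GPQA diamond":                    (32.3, 61.6),
    "MATH level 5":                    (53.3, 96.4),
    "OTIS Mock AIME 2024-2025":        (6.3, 66.6),
}

wins = 0
for name, (gpt4o, haiku) in scores.items():
    delta = haiku - gpt4o
    wins += haiku > gpt4o
    print(f"{name}: Claude Haiku 4.5 leads by +{delta:.1f}")

print(f"Claude Haiku 4.5 wins {wins} of {len(scores)} shared benchmarks")
```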
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4o (2024-08-06) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75 |
| Claude Haiku 4.5 | $1.00 | $5.00 | 200K tokens (~100 books) | $20.00 |
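The Projected $/mo column is consistent with a 10M-token month split 3:1 between input and output tokens; that split is inferred from the figures, not stated on the page. A sketch under that assumption (helper name is illustrative):

```python
# Reproduces the "Projected $/mo" column assuming a 10M-token month
# with a 3:1 input:output split (inferred from the figures, not stated).

def projected_monthly(input_per_m: float, output_per_m: float,
                      total_m: float = 10.0, input_share: float = 0.75) -> float:
    input_m = total_m * input_share          # 7.5M input tokens
    output_m = total_m * (1 - input_share)   # 2.5M output tokens
    return input_m * input_per_m + output_m * output_per_m

print(projected_monthly(2.50, 10.00))  # -> 43.75, matches GPT-4o (2024-08-06)
print(projected_monthly(1.00, 5.00))   # -> 20.0, matches Claude Haiku 4.5
```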