Claude 3.5 Haiku vs GPT-4o-mini
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude 3.5 Haiku wins 5 of 9 shared benchmarks
Strongest in coding and knowledge; GPT-4o-mini takes the math category.
Category leads
coding · Claude 3.5 Haiku
knowledge · Claude 3.5 Haiku
math · GPT-4o-mini
Hype vs Reality
Attention vs performance
Claude 3.5 Haiku
#157 by performance · no attention signal
GPT-4o-mini
#144 by performance · no attention signal
Best value
GPT-4o-mini
6.8x better value than Claude 3.5 Haiku
Claude 3.5 Haiku
15.5 pts/$
$2.40/M
GPT-4o-mini
105.6 pts/$
$0.38/M
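A minimal sketch of the value arithmetic, assuming the listed $/M is a simple 50/50 blend of input and output prices (that assumption reproduces the $2.40 and $0.38 figures above; the page's exact blending and the scoring behind "pts" are not stated here, so the pts/$ numbers are taken as given).

```python
# Sketch of the "best value" arithmetic (assumed 50/50 input/output blend).

prices = {  # $ per 1M tokens, from the pricing table below
    "Claude 3.5 Haiku": {"input": 0.80, "output": 4.00},
    "GPT-4o-mini":      {"input": 0.15, "output": 0.60},
}

pts_per_dollar = {"Claude 3.5 Haiku": 15.5, "GPT-4o-mini": 105.6}  # as listed

for model, p in prices.items():
    blended = (p["input"] + p["output"]) / 2   # assumed 50/50 blend
    print(f"{model}: blended ${blended:.2f}/M")  # -> $2.40/M and $0.38/M

ratio = pts_per_dollar["GPT-4o-mini"] / pts_per_dollar["Claude 3.5 Haiku"]
print(f"value ratio: {ratio:.1f}x")              # -> ~6.8x better value
```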
Vendor risk
Who is behind the model
Anthropic
$380.0B · Tier 1
OpenAI
$840.0B · Tier 1
Head to head
9 benchmarks · 2 models
Claude 3.5 Haiku · GPT-4o-mini
Aider polyglot
Claude 3.5 Haiku leads by +24.4
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Claude 3.5 Haiku
28.0
GPT-4o-mini
3.6
Balrog
Claude 3.5 Haiku leads by +1.9
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
Claude 3.5 Haiku
19.3
GPT-4o-mini
17.4
GeoBench
GPT-4o-mini leads by +30.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Claude 3.5 Haiku
34.0
GPT-4o-mini
64.0
GPQA diamond
Claude 3.5 Haiku leads by +0.6
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude 3.5 Haiku
17.5
GPT-4o-mini
17.0
Lech Mazur Writing
Claude 3.5 Haiku leads by +6.3
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Claude 3.5 Haiku
73.5
GPT-4o-mini
67.2
MATH level 5
GPT-4o-mini leads by +6.3
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude 3.5 Haiku
46.4
GPT-4o-mini
52.6
MMLU
GPT-4o-mini leads by +10.0
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Claude 3.5 Haiku
65.7
GPT-4o-mini
75.7
OTIS Mock AIME 2024-2025
GPT-4o-mini leads by +2.6
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude 3.5 Haiku
4.2
GPT-4o-mini
6.8
WeirdML
Claude 3.5 Haiku leads by +19.0
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude 3.5 Haiku
30.7
GPT-4o-mini
11.8
Full benchmark table
| Benchmark | Claude 3.5 Haiku | GPT-4o-mini |
|---|---|---|
| Aider Polyglot | 28.0 | 3.6 |
| Balrog | 19.3 | 17.4 |
| GeoBench | 34.0 | 64.0 |
| GPQA Diamond | 17.5 | 17.0 |
| Lech Mazur Writing | 73.5 | 67.2 |
| MATH Level 5 | 46.4 | 52.6 |
| MMLU | 65.7 | 75.7 |
| OTIS Mock AIME 2024–2025 | 4.2 | 6.8 |
| WeirdML | 30.7 | 11.8 |
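The win tally above can be recomputed from the rounded table values; a small sketch is below (margins may differ from the card figures by ~0.1 where the underlying scores carry more precision than shown).

```python
# Recompute per-benchmark leads and the overall win count from the table.

scores = {  # benchmark: (Claude 3.5 Haiku, GPT-4o-mini)
    "Aider Polyglot": (28.0, 3.6),
    "Balrog": (19.3, 17.4),
    "GeoBench": (34.0, 64.0),
    "GPQA Diamond": (17.5, 17.0),
    "Lech Mazur Writing": (73.5, 67.2),
    "MATH Level 5": (46.4, 52.6),
    "MMLU": (65.7, 75.7),
    "OTIS Mock AIME 2024-2025": (4.2, 6.8),
    "WeirdML": (30.7, 11.8),
}

haiku_wins = 0
for name, (haiku, mini) in scores.items():
    leader = "Claude 3.5 Haiku" if haiku > mini else "GPT-4o-mini"
    haiku_wins += haiku > mini
    print(f"{name}: {leader} leads by +{abs(haiku - mini):.1f}")

print(f"Claude 3.5 Haiku wins {haiku_wins} of {len(scores)} shared benchmarks")  # 5 of 9
```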
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude 3.5 Haiku | $0.80 | $4.00 | 200K tokens (~100 books) | $16.00 |
| GPT-4o-mini | $0.15 | $0.60 | 128K tokens (~64 books) | $2.62 |
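A minimal sketch of the monthly projection, assuming a 75% input / 25% output token mix at 10M tokens per month; that assumed split reproduces the $16.00 and $2.62 figures above, but the page's actual traffic mix is not stated.

```python
# Projected monthly cost sketch (assumed 75/25 input/output split).

MONTHLY_TOKENS_M = 10   # 10M tokens per month
INPUT_SHARE = 0.75      # assumption: 75% input, 25% output

pricing = {  # $ per 1M tokens
    "Claude 3.5 Haiku": {"input": 0.80, "output": 4.00},
    "GPT-4o-mini":      {"input": 0.15, "output": 0.60},
}

for model, p in pricing.items():
    cost = MONTHLY_TOKENS_M * (INPUT_SHARE * p["input"] +
                               (1 - INPUT_SHARE) * p["output"])
    print(f"{model}: ${cost:.2f}/mo")   # -> $16.00 and $2.62
```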