Claude Haiku 4.5 vs GPT-4.1 Mini
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Haiku 4.5 wins 7 of 7 shared benchmarks, with leads in reasoning, math, and knowledge.
Category leads
Claude Haiku 4.5 leads in all four categories: reasoning, math, knowledge, and coding.
Hype vs Reality
Attention vs performance
| Model | Performance rank | Attention signal |
|---|---|---|
| Claude Haiku 4.5 | #159 | no signal |
| GPT-4.1 Mini | #116 | no signal |
Best value
GPT-4.1 Mini · 3.6x better value than Claude Haiku 4.5

| Model | Value (pts/$) | Price ($/M) |
|---|---|---|
| Claude Haiku 4.5 | 12.4 | $3.00 |
| GPT-4.1 Mini | 44.5 | $1.00 |
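A short check of the arithmetic behind this card: the 3.6x figure is the ratio of the two pts/$ values, and each $/M price matches a 50/50 input/output blend of the per-token prices in the pricing table below (an inference from the numbers; the page does not state the blend). The underlying "pts" performance score is not shown here, so it is taken as given. A minimal sketch in Python:

```python
# Verify the "Best value" card: blended $/M and the 3.6x value ratio.
# The "pts" score behind pts/$ is not shown on this page, so it is taken as given.
models = {
    "Claude Haiku 4.5": {"input": 1.00, "output": 5.00, "pts_per_dollar": 12.4},
    "GPT-4.1 Mini": {"input": 0.40, "output": 1.60, "pts_per_dollar": 44.5},
}

for name, m in models.items():
    # The card's $/M matches a simple 50/50 input/output blend (an assumption).
    blended = (m["input"] + m["output"]) / 2
    print(f"{name}: ${blended:.2f}/M blended, {m['pts_per_dollar']} pts/$")

ratio = models["GPT-4.1 Mini"]["pts_per_dollar"] / models["Claude Haiku 4.5"]["pts_per_dollar"]
print(f"value ratio: {ratio:.1f}x")  # 44.5 / 12.4 ≈ 3.6x
```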
Vendor risk
Who is behind the model
| Model | Vendor | Valuation | Vendor tier |
|---|---|---|---|
| Claude Haiku 4.5 | Anthropic | $380.0B | Tier 1 |
| GPT-4.1 Mini | OpenAI | $840.0B | Tier 1 |
Head to head
7 benchmarks · 2 models
ARC-AGI
Claude Haiku 4.5 leads by +44.2
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Haiku 4.5: 47.7 · GPT-4.1 Mini: 3.5
ARC-AGI-2
Claude Haiku 4.5 leads by +3.9
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Haiku 4.5: 4.0 · GPT-4.1 Mini: 0.1
FrontierMath-2025-02-28-Private
Claude Haiku 4.5 leads by +1.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Haiku 4.5: 5.9 · GPT-4.1 Mini: 4.5
GPQA diamond
Claude Haiku 4.5 leads by +7.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Haiku 4.5: 61.6 · GPT-4.1 Mini: 54.5
MATH level 5
Claude Haiku 4.5 leads by +9.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude Haiku 4.5: 96.4 · GPT-4.1 Mini: 87.3
OTIS Mock AIME 2024-2025
Claude Haiku 4.5 leads by +21.9
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Haiku 4.5: 66.6 · GPT-4.1 Mini: 44.7
WeirdML
Claude Haiku 4.5 leads by +7.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Haiku 4.5: 45.4 · GPT-4.1 Mini: 37.6
Full benchmark table
| Benchmark | Claude Haiku 4.5 | GPT-4.1 Mini |
|---|---|---|
| ARC-AGI | 47.7 | 3.5 |
| ARC-AGI-2 | 4.0 | 0.1 |
| FrontierMath-2025-02-28-Private | 5.9 | 4.5 |
| GPQA diamond | 61.6 | 54.5 |
| MATH level 5 | 96.4 | 87.3 |
| OTIS Mock AIME 2024-2025 | 66.6 | 44.7 |
| WeirdML | 45.4 | 37.6 |
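The "leads by" margins in the head-to-head section and the 7-of-7 win count follow directly from the scores in this table; a quick derivation:

```python
# Recompute each margin and the win count from the benchmark table above.
scores = {  # benchmark: (Claude Haiku 4.5, GPT-4.1 Mini)
    "ARC-AGI": (47.7, 3.5),
    "ARC-AGI-2": (4.0, 0.1),
    "FrontierMath-2025-02-28-Private": (5.9, 4.5),
    "GPQA diamond": (61.6, 54.5),
    "MATH level 5": (96.4, 87.3),
    "OTIS Mock AIME 2024-2025": (66.6, 44.7),
    "WeirdML": (45.4, 37.6),
}

wins = 0
for bench, (haiku, mini) in scores.items():
    margin = haiku - mini
    if margin > 0:
        wins += 1
    print(f"{bench}: +{margin:.1f} for Claude Haiku 4.5")

print(f"Claude Haiku 4.5 wins {wins} of {len(scores)} shared benchmarks")  # 7 of 7
```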
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Haiku 4.5 | $1.00 | $5.00 | 200K tokens (~100 books) | $20.00 |
| GPT-4.1 Mini | $0.40 | $1.60 | 1.0M tokens (~524 books) | $7.00 |
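The projected monthly figures are consistent with a 10M-token month split 75% input / 25% output; that split is inferred from the numbers, not stated on the page. A sketch under that assumption:

```python
# Projected monthly spend at 10M tokens, assuming a 75/25 input/output split
# (the split is an inference; the page only states "10M tokens").
PRICES = {  # $ per 1M tokens
    "Claude Haiku 4.5": {"input": 1.00, "output": 5.00},
    "GPT-4.1 Mini": {"input": 0.40, "output": 1.60},
}

def monthly_cost(prices, total_millions=10.0, input_share=0.75):
    input_m = total_millions * input_share
    output_m = total_millions - input_m
    return input_m * prices["input"] + output_m * prices["output"]

for name, p in PRICES.items():
    print(f"{name}: ${monthly_cost(p):.2f}/mo")
# Claude Haiku 4.5: $20.00/mo
# GPT-4.1 Mini: $7.00/mo
```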