
Claude Haiku 4.5

by Anthropic · Released Oct 2025

Multimodal
Average score: 39.2 · Rank #148 · Better than 36% of all models
Context: 200K tokens (~100 books)
Input: $1.00 / 1M tokens
Output: $5.00 / 1M tokens
Type: multimodal
License: Proprietary
Benchmarks: 10 tested
About

Claude Haiku 4.5 is Anthropic’s fastest and most efficient model, delivering near-frontier intelligence at a fraction of the cost and latency of larger Claude models. Matching Claude Sonnet 4’s performance...

Tested on 10 benchmarks with 37.1% average. Top scores: MATH level 5 (96.4%), OTIS Mock AIME 2024-2025 (66.6%), GPQA diamond (61.6%).

Looking for similar performance at lower cost?
ERNIE 4.5 21B A3B Thinking scores 39.8 (102% of Claude Haiku 4.5's average) at $0.07/1M input · 93% cheaper
Capabilities
  • coding: 40.5 (#94 globally)
  • reasoning: 25.9 (#96 globally)
  • math: 42.7 (#100 globally)
  • knowledge: 33.8 (#168 globally)
Benchmark Scores
Tested on 10 benchmarks · Ranked across 4 categories
Score Distribution (all 233 models)
WeirdML: 45.4

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

Terminal Bench: 35.5

Complex terminal-based engineering tasks. Models must use command-line tools, navigate filesystems, and debug systems through shell interaction.

ARC-AGI: 47.7

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern-recognition puzzles. Core measure of general intelligence.

ARC-AGI-2: 4.0

Harder sequel to ARC-AGI. More complex abstract reasoning patterns that test generalization beyond training data.

MATH level 5: 96.4

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

OTIS Mock AIME 2024-2025: 66.6

Mock AIME (American Invitational Mathematics Examination) problems from OTIS. Tests mathematical competition performance.

FrontierMath-2025-02-28-Private: 5.9

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
claude-haiku-4-5
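The API identifier above can be used directly as the `model` field in a request. A minimal sketch of building such a request payload, assuming the Anthropic Messages API shape; the `max_tokens` value and prompt text are illustrative assumptions, and the SDK call is shown commented out since it requires an API key:

```python
# Request payload for the Anthropic Messages API, using the model ID
# listed on this page. max_tokens and the prompt are illustrative.
payload = {
    "model": "claude-haiku-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Summarize this changelog."}],
}

# With the official Python SDK (assumption: `anthropic` package installed):
# import anthropic
# client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
# response = client.messages.create(**payload)

print(payload["model"])  # → claude-haiku-4-5
```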
Specifications
  • Type: multimodal
  • Context: 200K tokens (~100 books)
  • Released: Oct 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.007
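The per-message figure above follows directly from the listed token prices. A minimal sketch of the arithmetic, assuming an illustrative message of 2,000 input and 1,000 output tokens (the token mix is an assumption, not stated on this page):

```python
# Per-1M-token prices listed on this page.
INPUT_PRICE_PER_M = 1.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 5.00  # USD per 1M output tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single message from its token counts."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Illustrative assumption: 2,000 input + 1,000 output tokens per message.
print(f"${message_cost(2_000, 1_000):.3f}")  # → $0.007
```

At this mix, input contributes $0.002 and output $0.005, reproducing the ~$0.007 estimate; a longer reply shifts the cost quickly, since output tokens are five times more expensive.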
Available On
Anthropic ($1.00 / 1M input tokens)
Share & Export
Claude Haiku 4.5 is a proprietary multimodal AI model by Anthropic, released in October 2025. It has an average benchmark score of 39.2. Context window: 200K tokens.