
Claude 3.5 Haiku

by Anthropic · Released Nov 2024

Multimodal
24.8
avg score
Rank #195
Better than 16% of all models
Context
200K tokens (~150K words)
Input $/1M
$0.80
Output $/1M
$4.00
Type
multimodal
License
Proprietary
Benchmarks
17 tested
About

Claude 3.5 Haiku offers enhanced capabilities in speed, coding accuracy, and tool use. Engineered to excel in real-time applications, it delivers quick response times that are essential for dynamic...

Tested on 17 benchmarks with 37.2% average. Top scores: HELM — IFEval (79.2%), HELM — WildBench (76.0%), Lech Mazur Writing (73.5%).

Looking for similar performance at lower cost?
Gemma 3 27B scores 25.1 (101% as good) at $0.08/1M input · 90% cheaper
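The comparison above is simple arithmetic on the figures listed on this page; a minimal sketch of how the "101% as good" and "90% cheaper" values fall out:

```python
# Scores and input prices as listed on this page
haiku_score, gemma_score = 24.8, 25.1
haiku_input, gemma_input = 0.80, 0.08  # $ per 1M input tokens

relative_quality = gemma_score / haiku_score        # ~1.01 -> "101% as good"
saving = (haiku_input - gemma_input) / haiku_input  # 0.90  -> "90% cheaper"

print(f"{relative_quality:.0%} as good, {saving:.0%} cheaper")  # -> 101% as good, 90% cheaper
```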
Capabilities
coding
30.2
#117 globally
reasoning
76.0
#15 globally
math
18.3
#170 globally
knowledge
39.2
#153 globally
language
79.2
#52 globally
Benchmark Scores
Tested on 17 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models)
CadEval

Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.

32.0
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

30.7
Aider polyglot

Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

28.0
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

76.0
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

46.4
HELM — Omni-MATH

Stanford HELM evaluation of mathematical reasoning across diverse problem types.

22.4
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

4.2
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
claude-3-5-haiku
Specifications
  • Type: multimodal
  • Context: 200K tokens (~150K words)
  • Released: Nov 2024
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.006
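The per-message estimate can be reproduced from the per-token prices above. A minimal sketch, assuming a typical exchange of roughly 1,000 input and 1,000 output tokens (the token counts are an assumption; only the prices come from this page):

```python
INPUT_PER_M = 0.80   # $ per 1M input tokens (from this page)
OUTPUT_PER_M = 4.00  # $ per 1M output tokens (from this page)

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one message at the listed Claude 3.5 Haiku prices."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# ~1K tokens each way lands near the ~$0.006 shown above
print(f"${message_cost(1_000, 1_000):.4f}")  # -> $0.0048
```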
Available On
Anthropic · $0.80/1M input
Claude 3.5 Haiku is a proprietary multimodal AI model by Anthropic, released in November 2024. It has an average benchmark score of 24.8. Context window: 200K tokens.