Claude 3 Haiku is Anthropic's fastest and most compact model, built for near-instant responsiveness and quick, accurate performance on targeted tasks. See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-haiku).
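For reference, a minimal sketch of querying the model through the Anthropic Python SDK; the model ID shown is the March 2024 release, and the prompt is purely illustrative:

```python
# Minimal sketch: calling Claude 3 Haiku via the Anthropic Python SDK
# (pip install anthropic). Check Anthropic's docs for the current model ID.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize this in one sentence: ..."}],
)
print(message.content[0].text)
```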
Tested on 8 benchmarks with a 28.7% average. Top scores: MMLU (65.1%), ScienceQA (62.7%), Winogrande (48.4%).
For comparison, Llama 3.3 70B Instruct (free) scores 29.6 (99% as good) at $0.00/1M input tokens · 100% cheaper
Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Massive Multitask Language Understanding. 57 subjects from STEM, humanities, and social sciences. The most widely cited knowledge benchmark.
Science questions from the K-12 curriculum with multimodal context, including diagrams and charts.
Commonsense coreference resolution. Tests understanding of pronoun references in ambiguous sentences, e.g. resolving "it" in "The trophy doesn't fit in the suitcase because it is too big."
- Type: multimodal
- Context: 200K tokens (~150,000 words)
- Released: Mar 2024
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.002
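The ~$0.002/message figure can be reproduced from Anthropic's launch pricing for Claude 3 Haiku ($0.25 per 1M input tokens, $1.25 per 1M output tokens); the per-message token counts below are illustrative assumptions, not measurements:

```python
# Rough cost-per-message estimate for Claude 3 Haiku, assuming Anthropic's
# launch pricing: $0.25 per 1M input tokens, $1.25 per 1M output tokens.
INPUT_PER_TOKEN = 0.25 / 1_000_000
OUTPUT_PER_TOKEN = 1.25 / 1_000_000

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request/response pair."""
    return input_tokens * INPUT_PER_TOKEN + output_tokens * OUTPUT_PER_TOKEN

# Assumed workload: ~2K tokens of prompt/context and ~1K tokens of output
# per message, which lands near the ~$0.002 figure quoted above.
print(f"${message_cost(2_000, 1_000):.4f}")  # -> $0.0018
```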