Mixtral 8x22B Instruct vs Claude 3.5 Haiku
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude 3.5 Haiku wins 3 of 4 shared benchmarks · leads in knowledge, math, and coding.
Category leads
knowledge · Claude 3.5 Haiku
math · Claude 3.5 Haiku
coding · Claude 3.5 Haiku
Hype vs Reality
Attention vs performance
Mixtral 8x22B Instruct · #208 by perf · no signal
Claude 3.5 Haiku · #159 by perf · no signal
Best value
Claude 3.5 Haiku · 2.6x better value than Mixtral 8x22B Instruct
Mixtral 8x22B Instruct · 5.9 pts/$ · $4.00/M blended
Claude 3.5 Haiku · 15.5 pts/$ · $2.40/M blended
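How these figures fit together: the blended $/M shown above matches a simple average of each model's input and output prices from the pricing table below, and the 2.6x claim is the ratio of the two pts/$ figures. A minimal sketch under that averaging assumption (the composite "pts" score itself isn't published on this page):

```python
# Sketch of the value metric shown above. The blended $/M figures match a
# simple average of input and output prices (an assumption; the page does
# not state its formula). The composite "pts" score is not shown here.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Simple average of input and output price per 1M tokens."""
    return (input_per_m + output_per_m) / 2

mixtral = blended_price(2.00, 6.00)   # -> 4.00 $/M, as displayed
haiku   = blended_price(0.80, 4.00)   # -> 2.40 $/M, as displayed

# Value ratio: 15.5 pts/$ vs 5.9 pts/$ -> ~2.6x, matching the card.
print(round(15.5 / 5.9, 1))  # 2.6
```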
Vendor risk
Who is behind the model
Mistral AI · $14.0B · Tier 1
Anthropic · $380.0B · Tier 1
Head to head
4 benchmarks · 2 models
GPQA diamond · Claude 3.5 Haiku leads by +5.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Mixtral 8x22B Instruct · 12.1
Claude 3.5 Haiku · 17.5
MATH level 5 · Claude 3.5 Haiku leads by +22.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Mixtral 8x22B Instruct · 24.2
Claude 3.5 Haiku · 46.4
MMLU · Mixtral 8x22B Instruct leads by +4.7
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Mixtral 8x22B Instruct · 70.4
Claude 3.5 Haiku · 65.7
WeirdML · Claude 3.5 Haiku leads by +27.6
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Mixtral 8x22B Instruct · 3.2
Claude 3.5 Haiku · 30.7
Full benchmark table
| Benchmark | Mixtral 8x22B Instruct | Claude 3.5 Haiku |
|---|---|---|
| GPQA diamond | 12.1 | 17.5 |
| MATH level 5 | 24.2 | 46.4 |
| MMLU | 70.4 | 65.7 |
| WeirdML | 3.2 | 30.7 |
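The per-benchmark margins and the 3-of-4 tally can be rechecked from this table. A short sketch (the cards above show +22.1 and +27.6 where these rounded scores give +22.2 and +27.5, suggesting the site derives margins from unrounded values):

```python
# Recompute the head-to-head leads and win tally from the table above.
scores = {
    "GPQA diamond": (12.1, 17.5),
    "MATH level 5": (24.2, 46.4),
    "MMLU":         (70.4, 65.7),
    "WeirdML":      (3.2, 30.7),
}

haiku_wins = 0
for bench, (mixtral, haiku) in scores.items():
    leader = "Claude 3.5 Haiku" if haiku > mixtral else "Mixtral 8x22B Instruct"
    haiku_wins += haiku > mixtral
    print(f"{bench}: {leader} leads by {abs(haiku - mixtral):+.1f}")

print(f"Claude 3.5 Haiku wins {haiku_wins} of {len(scores)} shared benchmarks")
```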
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Mixtral 8x22B Instruct | $2.00 | $6.00 | 66K tokens (~33 books) | $30.00 |
| Claude 3.5 Haiku | $0.80 | $4.00 | 200K tokens (~100 books) | $16.00 |
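Both projected figures are consistent with the 10M monthly tokens splitting 3:1 between input and output (7.5M in, 2.5M out). The page doesn't state the split, so treat it as an assumption in this sketch:

```python
# Projected monthly cost at 10M tokens. Both rows above are consistent with
# a 3:1 input:output split, which is an assumption; the page doesn't state it.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    """Cost of total_m million tokens at the given input share."""
    in_m = total_m * input_share
    out_m = total_m - in_m
    return in_m * input_per_m + out_m * output_per_m

print(monthly_cost(2.00, 6.00))  # 30.0 -> $30.00/mo, Mixtral 8x22B Instruct
print(monthly_cost(0.80, 4.00))  # 16.0 -> $16.00/mo, Claude 3.5 Haiku
```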