Mistral Large 2407 vs Llama 3.1 8B Instruct
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Mistral Large 2407 wins all 5 shared benchmarks, leading in arena, knowledge, and math.
Category leads
- Arena: Mistral Large 2407
- Knowledge: Mistral Large 2407
- Math: Mistral Large 2407
Hype vs Reality
Attention vs. performance:
- Mistral Large 2407 · #147 by performance · no attention signal
- Llama 3.1 8B Instruct · #199 by performance · no attention signal
Best value
Llama 3.1 8B Instruct · 80.1x better value than Mistral Large 2407

| Model | Value (pts/$) | Blended price |
|---|---|---|
| Mistral Large 2407 | 9.8 | $4.00/M tokens |
| Llama 3.1 8B Instruct | 782.9 | $0.04/M tokens |
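A minimal sketch of how these value figures fit together. The blended price matches a simple 50/50 average of the input and output prices listed further down (an inference that reproduces both models exactly); the composite-score formula behind pts/$ is not published, so the displayed figures are used as-is:

```python
# Hedged sketch of the "best value" math. The blended $/M figures match a
# simple average of input and output prices; the pts/$ composite score is
# not published, so the displayed values are used directly.

def blended_price(input_price: float, output_price: float) -> float:
    """Simple 50/50 average of input and output $ per 1M tokens."""
    return (input_price + output_price) / 2

print(blended_price(2.00, 6.00))   # 4.0   -> the page's $4.00/M for Mistral Large 2407
print(blended_price(0.02, 0.05))   # 0.035 -> rounds to the page's $0.04/M for Llama 3.1 8B

# Value ratio from the displayed pts/$ figures:
print(782.9 / 9.8)                 # ~79.9x; the page shows 80.1x, presumably from unrounded inputs
```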
Vendor risk
Who is behind each model:

| Model | Vendor | Valuation | Risk tier |
|---|---|---|---|
| Mistral Large 2407 | Mistral AI | $14.0B | Tier 1 |
| Llama 3.1 8B Instruct | Meta AI | $1.50T | Tier 1 |
Head to head
5 benchmarks · 2 models
Chatbot Arena Elo · Overall
Mistral Large 2407 leads by +102.3
Mistral Large 2407: 1313.3 · Llama 3.1 8B Instruct: 1211.0
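A +102.3 Elo gap has a concrete reading under the standard logistic Elo expectation formula, which Arena-style ratings approximate (the leaderboard's exact Bradley-Terry fit may differ slightly):

```python
# What a +102.3 Elo gap implies under the standard Elo expectation formula.
# Chatbot Arena ratings are Bradley-Terry/Elo-style, so this is a reasonable
# reading, not the leaderboard's exact fitting procedure.

def win_probability(elo_a: float, elo_b: float) -> float:
    """Expected score of A against B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400))

p = win_probability(1313.3, 1211.0)
print(f"{p:.1%}")  # ~64.3%: Mistral Large 2407 preferred in roughly 64% of pairings
```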
GPQA diamond
Mistral Large 2407 leads by +30.8
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Mistral Large 2407: 32.0 · Llama 3.1 8B Instruct: 1.3
MATH level 5
Mistral Large 2407 leads by +21.9
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Mistral Large 2407: 44.8 · Llama 3.1 8B Instruct: 22.9
MMLU
Mistral Large 2407 leads by +31.9
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Mistral Large 2407: 73.3 · Llama 3.1 8B Instruct: 41.5
OTIS Mock AIME 2024-2025
Mistral Large 2407 leads by +6.0
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Mistral Large 2407: 8.4 · Llama 3.1 8B Instruct: 2.4
Full benchmark table
| Benchmark | Mistral Large 2407 | Llama 3.1 8B Instruct |
|---|---|---|
| Chatbot Arena Elo · Overall | 1313.3 | 1211.0 |
| GPQA diamond | 32.0 | 1.3 |
| MATH level 5 | 44.8 | 22.9 |
| MMLU | 73.3 | 41.5 |
| OTIS Mock AIME 2024-2025 | 8.4 | 2.4 |
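The winner-summary tally follows mechanically from this table; a quick sketch using the scores above:

```python
# How "wins 5 of 5 shared benchmarks" follows from the full benchmark table.
scores = {
    "Chatbot Arena Elo · Overall": (1313.3, 1211.0),
    "GPQA diamond":                (32.0, 1.3),
    "MATH level 5":                (44.8, 22.9),
    "MMLU":                        (73.3, 41.5),
    "OTIS Mock AIME 2024-2025":    (8.4, 2.4),
}
wins = sum(mistral > llama for mistral, llama in scores.values())
print(f"Mistral Large 2407 wins {wins} of {len(scores)} shared benchmarks")
```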
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Mistral Large 2407 | $2.00 | $6.00 | 131K tokens (~66 books) | $30.00 |
| Llama 3.1 8B Instruct | $0.02 | $0.05 | 16K tokens (~8 books) | $0.28 |
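The projected monthly figures are consistent with a 3:1 input-to-output token split over 10M monthly tokens; that split is reverse-engineered from the numbers, not stated on the page:

```python
# Reverse-engineering the "projected $/mo" column. A 3:1 input:output split
# over 10M monthly tokens reproduces both figures exactly; the split ratio
# is an inference from the numbers, not documented by the page.

def monthly_cost(input_price: float, output_price: float,
                 total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Projected monthly cost in dollars, with prices in $ per 1M tokens."""
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1 - input_share)
    return input_m * input_price + output_m * output_price

print(monthly_cost(2.00, 6.00))   # 30.0  -> matches Mistral Large 2407's $30.00
print(monthly_cost(0.02, 0.05))   # 0.275 -> matches Llama 3.1 8B Instruct's $0.28
```

Note that this 3:1 weighting differs from the 50/50 average behind the blended $/M figures in the "Best value" section; the two parts of the page appear to use different assumptions.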