
Mistral Large 2407 vs Llama 3.1 8B Instruct

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Mistral Large 2407 wins 5 of 5 shared benchmarks and leads in arena, knowledge, and math (a tallying sketch follows the category leads below).

Category leads
arena · Mistral Large 2407
knowledge · Mistral Large 2407
math · Mistral Large 2407
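The tally above can be reproduced from the scores in the full benchmark table further down. Below is a minimal sketch of that count; the grouping of benchmarks into arena, knowledge, and math buckets is an assumption, since the page does not spell out which benchmark feeds which category.

```python
# Sketch: reproduce the head-to-head tally and category leads from the shared
# benchmark scores listed in the full benchmark table on this page.
# The category bucketing (arena / knowledge / math) is an assumption.
from collections import defaultdict

# (Mistral Large 2407, Llama 3.1 8B Instruct)
scores = {
    "Chatbot Arena Elo · Overall": (1313.3, 1211.0),
    "GPQA diamond": (32.0, 1.3),
    "MATH level 5": (44.8, 22.9),
    "MMLU": (73.3, 41.5),
    "OTIS Mock AIME 2024-2025": (8.4, 2.4),
}
category = {  # assumed bucketing, not stated by the page
    "Chatbot Arena Elo · Overall": "arena",
    "GPQA diamond": "knowledge",
    "MMLU": "knowledge",
    "MATH level 5": "math",
    "OTIS Mock AIME 2024-2025": "math",
}

wins = sum(mistral > llama for mistral, llama in scores.values())
print(f"Mistral Large 2407 wins {wins} of {len(scores)} shared benchmarks")

leads = defaultdict(int)
for bench, (mistral, llama) in scores.items():
    leads[category[bench]] += 1 if mistral > llama else -1
for cat, margin in leads.items():
    print(cat, "·", "Mistral Large 2407" if margin > 0 else "Llama 3.1 8B Instruct")
```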
Hype vs Reality
Mistral Large 2407
#147 by perf · no signal
QUIET
Llama 3.1 8B Instruct
#199 by perf · no signal
QUIET
Best value
Llama 3.1 8B Instruct offers 80.1x better value than Mistral Large 2407 (benchmark points per dollar; the math is sketched after the cards below).
Mistral Large 2407
9.8 pts/$
$4.00/M
Llama 3.1 8B Instruct
782.9 pts/$
$0.04/M
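How the value ratio is computed is not spelled out on the page; a plausible reading is points per dollar = aggregate benchmark score divided by the blended price per 1M tokens. The sketch below uses hypothetical aggregate scores chosen only to reproduce the pts/$ figures on the cards, so treat it as an illustration of the arithmetic rather than the page's actual scoring.

```python
# Sketch of the "Best value" arithmetic, assuming
#   pts/$ = aggregate benchmark score / blended price per 1M tokens.
# The aggregate scores below are hypothetical, picked only to reproduce the
# pts/$ figures shown on the cards above.
def points_per_dollar(aggregate_score: float, blended_price_per_m: float) -> float:
    """Benchmark points bought per dollar of blended token spend."""
    return aggregate_score / blended_price_per_m

mistral_value = points_per_dollar(39.2, 4.00)  # ~9.8 pts/$
llama_value = points_per_dollar(31.3, 0.04)    # ~782.5 pts/$ (card shows 782.9)
print(f"{llama_value / mistral_value:.1f}x better value")
# ~79.8x; the card shows 80.1x, presumably computed from unrounded inputs.
```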
Vendor risk
Mistral AI
$14.0B · Tier 1
Medium risk
Meta AI
$1.50T · Tier 1
Low risk
Head to head
Mistral Large 2407 · Llama 3.1 8B Instruct
Chatbot Arena Elo · Overall
Mistral Large 2407 leads by +102.3
Mistral Large 2407
1313.3
Llama 3.1 8B Instruct
1211.0
GPQA diamond
Mistral Large 2407 leads by +30.8
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Mistral Large 2407
32.0
Llama 3.1 8B Instruct
1.3
MATH level 5
Mistral Large 2407 leads by +21.9
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Mistral Large 2407
44.8
Llama 3.1 8B Instruct
22.9
MMLU
Mistral Large 2407 leads by +31.9
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Mistral Large 2407
73.3
Llama 3.1 8B Instruct
41.5
OTIS Mock AIME 2024-2025
Mistral Large 2407 leads by +6.0
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Mistral Large 2407
8.4
Llama 3.1 8B Instruct
2.4
Full benchmark table
Benchmark: Mistral Large 2407 / Llama 3.1 8B Instruct
Chatbot Arena Elo · Overall: 1313.3 / 1211.0
GPQA diamond: 32.0 / 1.3
MATH level 5: 44.8 / 22.9
MMLU: 73.3 / 41.5
OTIS Mock AIME 2024-2025: 8.4 / 2.4
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model: Input / Output / Context / Projected $/mo
Mistral Large 2407 (Mistral AI): $2.00 / $6.00 / 131K tokens (~66 books) / $30.00
Llama 3.1 8B Instruct (Meta AI): $0.02 / $0.05 / 16K tokens (~8 books) / $0.28
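The projected $/mo column implies some split between input and output tokens across the 10M monthly tokens, which the table does not state. The sketch below assumes a 75% input / 25% output split because that split reproduces both figures; the split itself is an assumption, not something the page confirms.

```python
# Sketch: projected monthly cost from per-1M-token prices.
# ASSUMPTION: 75% input / 25% output split of the 10M monthly tokens; the page
# does not state the split, this one just reproduces the $30.00 and $0.28 figures.
def projected_monthly_cost(input_price_per_m: float, output_price_per_m: float,
                           monthly_tokens: int = 10_000_000,
                           input_share: float = 0.75) -> float:
    input_m = monthly_tokens * input_share / 1_000_000         # millions of input tokens
    output_m = monthly_tokens * (1 - input_share) / 1_000_000  # millions of output tokens
    return input_m * input_price_per_m + output_m * output_price_per_m

print(projected_monthly_cost(2.00, 6.00))  # 30.0   -> Mistral Large 2407
print(projected_monthly_cost(0.02, 0.05))  # 0.275  -> shown as $0.28 for Llama 3.1 8B Instruct
```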