
Grok-2 (Dec 2024) vs Mistral Large

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Grok-2 (Dec 2024) wins 5 of 7 shared benchmarks. Leads in math · knowledge · reasoning.

Category leads
Coding · Mistral Large
Math · Grok-2 (Dec 2024)
Knowledge · Grok-2 (Dec 2024)
Reasoning · Grok-2 (Dec 2024)
Hype vs Reality
Grok-2 (Dec 2024) · #176 by performance · no hype signal (quiet)
Mistral Large · #186 by performance · no hype signal (quiet)
Best value
Grok-2 (Dec 2024) · no price listed
Mistral Large · 7.5 pts/$ · $4.00/M
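The pts/$ figure appears to divide an aggregate benchmark score by a blended per-million-token price. A minimal sketch, assuming the $4.00/M shown is the simple average of Mistral Large's $2.00 input and $6.00 output rates, and assuming a hypothetical aggregate score of 30 points (the page defines neither input):

```python
def points_per_dollar(aggregate_score, input_price, output_price):
    """Value metric: aggregate benchmark points per blended dollar.

    Both the blending rule (simple average of input/output price per 1M
    tokens) and the aggregate score are assumptions; the page states
    only the final 7.5 pts/$ and $4.00/M figures.
    """
    blended = (input_price + output_price) / 2
    return aggregate_score / blended

# An assumed aggregate score of 30 at $2.00/$6.00 reproduces
# the 7.5 pts/$ shown for Mistral Large.
print(points_per_dollar(30.0, 2.00, 6.00))  # → 7.5
```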
Vendor risk
xAI · $250.0B valuation · Tier 1 · Medium risk
Mistral AI · $14.0B valuation · Tier 1 · Medium risk
Head to head
Aider · Code Editing — Mistral Large leads by +1.6
Grok-2 (Dec 2024): 58.6 · Mistral Large: 60.2

FrontierMath-2025-02-28-Private — Grok-2 (Dec 2024) leads by +0.3
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Grok-2 (Dec 2024): 0.7 · Mistral Large: 0.3

GPQA diamond — Grok-2 (Dec 2024) leads by +20.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Grok-2 (Dec 2024): 38.4 · Mistral Large: 18.4

Lech Mazur Writing — Mistral Large leads by +5.4
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Grok-2 (Dec 2024): 63.6 · Mistral Large: 69.0

MATH level 5 — Grok-2 (Dec 2024) leads by +39.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Grok-2 (Dec 2024): 63.5 · Mistral Large: 24.5

OTIS Mock AIME 2024-2025 — Grok-2 (Dec 2024) leads by +9.6
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Grok-2 (Dec 2024): 11.4 · Mistral Large: 1.9

SimpleBench — Grok-2 (Dec 2024) leads by +0.2
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Grok-2 (Dec 2024): 7.2 · Mistral Large: 7.0
Full benchmark table
Benchmark | Grok-2 (Dec 2024) | Mistral Large
Aider · Code Editing | 58.6 | 60.2
FrontierMath-2025-02-28-Private | 0.7 | 0.3
GPQA diamond | 38.4 | 18.4
Lech Mazur Writing | 63.6 | 69.0
MATH level 5 | 63.5 | 24.5
OTIS Mock AIME 2024-2025 | 11.4 | 1.9
SimpleBench | 7.2 | 7.0
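The "wins 5 of 7 shared benchmarks" tally in the summary can be reproduced from the scores above. A minimal sketch (scores copied from the table; higher is better on every benchmark listed):

```python
# Shared benchmark scores as (Grok-2, Mistral Large); higher is better.
scores = {
    "Aider · Code Editing":            (58.6, 60.2),
    "FrontierMath-2025-02-28-Private": (0.7, 0.3),
    "GPQA diamond":                    (38.4, 18.4),
    "Lech Mazur Writing":              (63.6, 69.0),
    "MATH level 5":                    (63.5, 24.5),
    "OTIS Mock AIME 2024-2025":        (11.4, 1.9),
    "SimpleBench":                     (7.2, 7.0),
}

# Count the benchmarks where each model posts the higher score.
grok_wins = sum(1 for g, m in scores.values() if g > m)
mistral_wins = sum(1 for g, m in scores.values() if m > g)

print(grok_wins, mistral_wins)  # → 5 2
```

Grok-2's five wins are the two math benchmarks plus GPQA, OTIS, and SimpleBench; Mistral Large takes coding (Aider) and writing.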
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Grok-2 (Dec 2024) | — | — | — | —
Mistral Large | $2.00 | $6.00 | 128K tokens (~64 books) | $30.00
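The projected monthly figure implies a fixed input/output token split. A minimal sketch, assuming 10M total tokens per month at a 3:1 input-to-output ratio (an assumption; the page does not state the split), which reproduces the $30.00 projection from Mistral Large's $2.00 input / $6.00 output rates:

```python
def projected_monthly_cost(input_price, output_price,
                           total_tokens_m=10.0, input_ratio=0.75):
    """Monthly cost in dollars for prices quoted per 1M tokens.

    input_ratio (share of tokens that are input) is an assumption;
    the page states only the 10M-token total and the final $30.00.
    """
    input_m = total_tokens_m * input_ratio          # 7.5M input tokens
    output_m = total_tokens_m * (1 - input_ratio)   # 2.5M output tokens
    return input_m * input_price + output_m * output_price

# Mistral Large: $2.00 input / $6.00 output per 1M tokens.
print(projected_monthly_cost(2.00, 6.00))  # → 30.0
```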