Mistral Large 2407 vs Gemma 3 27B (free)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Gemma 3 27B (free) wins 3 of 4 shared benchmarks, with its strongest leads in math; Mistral Large 2407 edges ahead on GPQA diamond.
Category leads
Knowledge: Mistral Large 2407 · Math: Gemma 3 27B (free)
Hype vs Reality
Attention vs performance
Mistral Large 2407 · #147 by performance · no attention signal
Gemma 3 27B (free) · #131 by performance · no attention signal
Best value
Mistral Large 2407

| Model | Value | Price |
|---|---|---|
| Mistral Large 2407 | 9.8 pts/$ | $4.00/M |
| Gemma 3 27B (free) | — | $0.00/M |
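The pts/$ figure can be approximated as aggregate benchmark score divided by blended price, with free models returning no value signal. A minimal sketch: the exact aggregation the page uses is unknown, and a simple mean of the four shared benchmarks gives ~9.6 rather than the displayed 9.8.

```python
def value_pts_per_dollar(scores, blended_price_per_m):
    """Benchmark points per blended dollar; undefined for free models.

    Averaging the scores is an assumed aggregation -- the page does not
    state its formula, and this mean yields ~9.6 vs the displayed 9.8.
    """
    if blended_price_per_m == 0:
        return None  # free model: shown as "—" on the page
    return round(sum(scores) / len(scores) / blended_price_per_m, 1)

# Mistral Large 2407: four shared-benchmark scores, $4.00/M blended price
print(value_pts_per_dollar([32.0, 69.0, 44.8, 8.4], 4.00))  # -> 9.6
# Gemma 3 27B (free): $0.00/M, so no pts/$ signal
print(value_pts_per_dollar([31.8, 79.9, 74.0, 19.6], 0.00))  # -> None
```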
Vendor risk
Who is behind the model
Mistral AI · $14.0B · Tier 1
Google DeepMind · $4.00T · Tier 1
Head to head
4 benchmarks · 2 models
GPQA diamond
Mistral Large 2407 leads by +0.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Mistral Large 2407: 32.0 · Gemma 3 27B (free): 31.8
Lech Mazur Writing
Gemma 3 27B (free) leads by +10.9
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Mistral Large 2407: 69.0 · Gemma 3 27B (free): 79.9
MATH level 5
Gemma 3 27B (free) leads by +29.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Mistral Large 2407: 44.8 · Gemma 3 27B (free): 74.0
OTIS Mock AIME 2024-2025
Gemma 3 27B (free) leads by +11.2
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Mistral Large 2407: 8.4 · Gemma 3 27B (free): 19.6
Full benchmark table
| Benchmark | Mistral Large 2407 | Gemma 3 27B (free) |
|---|---|---|
| GPQA diamond | 32.0 | 31.8 |
| Lech Mazur Writing | 69.0 | 79.9 |
| MATH level 5 | 44.8 | 74.0 |
| OTIS Mock AIME 2024-2025 | 8.4 | 19.6 |
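The winner summary and per-benchmark leads can be re-derived from this table. A minimal sketch, with each tuple holding the (Mistral, Gemma) scores copied from the rows above:

```python
# (Mistral Large 2407, Gemma 3 27B) scores from the table above
scores = {
    "GPQA diamond": (32.0, 31.8),
    "Lech Mazur Writing": (69.0, 79.9),
    "MATH level 5": (44.8, 74.0),
    "OTIS Mock AIME 2024-2025": (8.4, 19.6),
}

# Count benchmarks where Gemma scores higher, and size each lead
gemma_wins = sum(g > m for m, g in scores.values())
leads = {name: round(abs(g - m), 1) for name, (m, g) in scores.items()}

print(gemma_wins)                         # -> 3
print(leads["MATH level 5"])              # -> 29.2
print(leads["OTIS Mock AIME 2024-2025"])  # -> 11.2
```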
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Mistral Large 2407 | $2.00 | $6.00 | 131K tokens (~66 books) | $30.00 |
| Gemma 3 27B (free) | $0.00 | $0.00 | 131K tokens (~66 books) | $0.00 |
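The projected $/mo column is consistent with a usage mix of roughly 75% input / 25% output tokens at 10M tokens per month; that split is an assumption, since the page does not state the mix behind its projection.

```python
def projected_monthly_cost(input_per_m, output_per_m,
                           total_tokens_m=10.0, input_share=0.75):
    """Projected monthly cost in dollars at a given token volume.

    input_share=0.75 is an assumed input/output mix that reproduces the
    table's $30.00 figure; the page does not publish the split it uses.
    """
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1 - input_share)
    return input_m * input_per_m + output_m * output_per_m

# Mistral Large 2407 at $2.00/M in, $6.00/M out
print(projected_monthly_cost(2.00, 6.00))  # -> 30.0
# Gemma 3 27B (free)
print(projected_monthly_cost(0.00, 0.00))  # -> 0.0
```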