Llama 4 Maverick vs Gemma 3 27B (free)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Llama 4 Maverick wins 4 of 7 shared benchmarks (1 tie), leading in coding and knowledge.
Category leads
coding · Llama 4 Maverick
knowledge · Llama 4 Maverick
math · Gemma 3 27B (free)
Hype vs Reality
Attention vs performance
Llama 4 Maverick · #193 by perf · no signal
Gemma 3 27B (free) · #129 by perf · no signal
Best value
Llama 4 Maverick · 74.7 pts/$ · $0.38/M
Gemma 3 27B (free) · — · $0.00/M
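A rough sketch of how a pts/$ figure like this could be derived, assuming it is a composite benchmark score divided by a blended per-million-token price. The $0.38/M shown matches the simple average of the $0.15 input and $0.60 output rates, but the composite score and the exact formula are not published on this page, so the snippet below only back-solves the implied value for illustration:

```python
# Hypothetical reconstruction of the "pts/$" value metric (assumption, not the site's published formula).
input_price = 0.15    # $ per 1M input tokens, Llama 4 Maverick
output_price = 0.60   # $ per 1M output tokens

# Blended price: simple average of input and output rates (assumption).
blended = (input_price + output_price) / 2    # 0.375, displayed as $0.38/M

pts_per_dollar = 74.7                          # figure shown on the page
implied_composite = pts_per_dollar * blended   # ~28 pts; the composite itself is not shown here

print(f"blended ${blended:.2f}/M, implied composite score {implied_composite:.1f} pts")

# A model priced at $0.00/M has no finite pts/$ value, which is why Gemma 3 27B (free)
# shows no best-value figure.
```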
Vendor risk
Who is behind the model
Meta AI · $1.50T · Tier 1
Google DeepMind · $4.00T · Tier 1
Head to head
7 benchmarks · 2 models
Llama 4 Maverick · Gemma 3 27B (free)
Aider polyglot
Llama 4 Maverick leads by +10.7
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Llama 4 Maverick 15.6 · Gemma 3 27B (free) 4.9
Fiction.LiveBench
Llama 4 Maverick leads by +12.9
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Llama 4 Maverick 46.2 · Gemma 3 27B (free) 33.3
GeoBench
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Tied at 52.0
Llama 4 Maverick 52.0 · Gemma 3 27B (free) 52.0
GPQA diamond
Llama 4 Maverick leads by +24.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Llama 4 Maverick 56.0 · Gemma 3 27B (free) 31.8
Lech Mazur Writing
Gemma 3 27B (free) leads by +16.2
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Llama 4 Maverick 63.7 · Gemma 3 27B (free) 79.9
MATH level 5
Gemma 3 27B (free) leads by +1.0
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Llama 4 Maverick 73.0 · Gemma 3 27B (free) 74.0
OTIS Mock AIME 2024–2025
Llama 4 Maverick leads by +0.8
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Llama 4 Maverick 20.5 · Gemma 3 27B (free) 19.6
Full benchmark table
| Benchmark | Llama 4 Maverick | Gemma 3 27B (free) |
|---|---|---|
| Aider polyglot | 15.6 | 4.9 |
| Fiction.LiveBench | 46.2 | 33.3 |
| GeoBench | 52.0 | 52.0 |
| GPQA diamond | 56.0 | 31.8 |
| Lech Mazur Writing | 63.7 | 79.9 |
| MATH level 5 | 73.0 | 74.0 |
| OTIS Mock AIME 2024–2025 | 20.5 | 19.6 |
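The tally behind the winner summary can be checked directly from the rows above (a quick sketch; scores copied verbatim from the table):

```python
# Tally the shared-benchmark results listed in the table above.
scores = {
    "Aider polyglot":           (15.6,  4.9),
    "Fiction.LiveBench":        (46.2, 33.3),
    "GeoBench":                 (52.0, 52.0),
    "GPQA diamond":             (56.0, 31.8),
    "Lech Mazur Writing":       (63.7, 79.9),
    "MATH level 5":             (73.0, 74.0),
    "OTIS Mock AIME 2024-2025": (20.5, 19.6),
}

llama_wins = sum(1 for a, b in scores.values() if a > b)
gemma_wins = sum(1 for a, b in scores.values() if b > a)
ties       = sum(1 for a, b in scores.values() if a == b)

print(f"Llama 4 Maverick: {llama_wins} wins · Gemma 3 27B (free): {gemma_wins} wins · ties: {ties}")
# -> Llama 4 Maverick: 4 wins · Gemma 3 27B (free): 2 wins · ties: 1 (GeoBench)
```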
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Llama 4 Maverick | $0.15 | $0.60 | 1.0M tokens (~524 books) | $2.62 |
| Gemma 3 27B (free) | $0.00 | $0.00 | 131K tokens (~66 books) | — |
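The projected $/mo column is consistent with a 10M-token month split roughly 75% input / 25% output. That split is an assumption made here to reproduce the displayed figure; the page does not state which mix it uses:

```python
# Reconstructing the $2.62/mo projection for Llama 4 Maverick under an assumed 75/25 input/output split.
monthly_tokens = 10_000_000
input_share = 0.75                      # assumption; chosen because it reproduces the displayed figure
input_price, output_price = 0.15, 0.60  # $ per 1M tokens

cost = (monthly_tokens * input_share / 1e6) * input_price \
     + (monthly_tokens * (1 - input_share) / 1e6) * output_price
print(f"${cost:.2f}/mo")  # -> $2.62/mo (Gemma 3 27B is free, so no projection is shown)
```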