Grok 3 Mini vs Grok 4
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Grok 4 wins 10/10 benchmarks
Grok 4 wins all 10 shared benchmarks and leads in every category: coding, reasoning, knowledge, and math.
Category leads
coding · Grok 4
reasoning · Grok 4
knowledge · Grok 4
math · Grok 4
Hype vs Reality
Attention vs performance
Grok 3 Mini · #108 by performance · no attention signal
Grok 4 · #71 by performance · no attention signal
Best value
Grok 3 Mini delivers 19.1x better value than Grok 4.

| Model | Value | Price |
|---|---|---|
| Grok 3 Mini | 116.5 pts/$ | $0.40/M |
| Grok 4 | 6.1 pts/$ | $9.00/M |
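The value figures follow from a simple points-per-dollar ratio. A minimal sketch, assuming "pts" is the site's aggregate performance score (back-solved here from the displayed ratios) and the $/M price is the simple average of input and output prices, which reproduces the $0.40/M and $9.00/M figures shown in the pricing table below:

```python
# Sketch of how the value figures appear to be derived. Assumptions:
# "pts" is the site's aggregate performance score (back-solved from the
# displayed pts/$ ratios), and $/M is the average of input and output
# prices, which matches the $0.40/M and $9.00/M shown above.

models = {
    # name: (input $/M, output $/M, aggregate performance points)
    "Grok 3 Mini": (0.30, 0.50, 46.6),  # 46.6 pts = 116.5 pts/$ * $0.40/M
    "Grok 4": (3.00, 15.00, 54.9),      # 54.9 pts = 6.1 pts/$ * $9.00/M
}

for name, (inp, out, pts) in models.items():
    price = (inp + out) / 2             # blended $/M price
    print(f"{name}: {pts / price:.1f} pts/$ at ${price:.2f}/M")
```

Dividing the two ratios, 116.5 / 6.1 ≈ 19.1, gives the 19.1x value claim.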
Vendor risk
Who is behind the models
Both models: xAI · $250.0B · Tier 1. Vendor risk is identical.
Head to head
10 benchmarks · 2 models
Grok 3 Mini vs Grok 4
Aider polyglot
Grok 4 leads by +30.3
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Grok 3 Mini 49.3 · Grok 4 79.6
ARC-AGI
Grok 4 leads by +50.2
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Grok 3 Mini 16.5 · Grok 4 66.7
ARC-AGI-2
Grok 4 leads by +15.6
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Grok 3 Mini 0.4 · Grok 4 16.0
Fiction.LiveBench
Grok 4 leads by +27.7
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Grok 3 Mini 66.7 · Grok 4 94.4
FrontierMath-2025-02-28-Private
Grok 4 leads by +13.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Grok 3 Mini 5.9 · Grok 4 19.7
GPQA diamond
Grok 4 leads by +14.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Grok 3 Mini 68.3 · Grok 4 82.7
Lech Mazur Writing
Grok 4 leads by +7.2
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Grok 3 Mini 73.5 · Grok 4 80.7
OTIS Mock AIME 2024-2025
Grok 4 leads by +6.2
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Grok 3 Mini 77.8 · Grok 4 84.0
SimpleQA Verified
Grok 4 leads by +26.8
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Grok 3 Mini 21.1 · Grok 4 47.9
WeirdML
Grok 4 leads by +3.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Grok 3 Mini 42.6 · Grok 4 45.7
Full benchmark table
| Benchmark | Grok 3 Mini | Grok 4 |
|---|---|---|
| Aider polyglot | 49.3 | 79.6 |
| ARC-AGI | 16.5 | 66.7 |
| ARC-AGI-2 | 0.4 | 16.0 |
| Fiction.LiveBench | 66.7 | 94.4 |
| FrontierMath-2025-02-28-Private | 5.9 | 19.7 |
| GPQA diamond | 68.3 | 82.7 |
| Lech Mazur Writing | 73.5 | 80.7 |
| OTIS Mock AIME 2024-2025 | 77.8 | 84.0 |
| SimpleQA Verified | 21.1 | 47.9 |
| WeirdML | 42.6 | 45.7 |
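As a cross-check, the head-to-head margins and the 10/10 win count can be recomputed directly from this table. A minimal sketch; every score is copied from the table above, no new data:

```python
# Recompute the head-to-head margins and the win count from the table.
scores = {
    # benchmark: (Grok 3 Mini, Grok 4)
    "Aider polyglot": (49.3, 79.6),
    "ARC-AGI": (16.5, 66.7),
    "ARC-AGI-2": (0.4, 16.0),
    "Fiction.LiveBench": (66.7, 94.4),
    "FrontierMath-2025-02-28-Private": (5.9, 19.7),
    "GPQA diamond": (68.3, 82.7),
    "Lech Mazur Writing": (73.5, 80.7),
    "OTIS Mock AIME 2024-2025": (77.8, 84.0),
    "SimpleQA Verified": (21.1, 47.9),
    "WeirdML": (42.6, 45.7),
}

wins = 0
for name, (mini, full) in scores.items():
    lead = full - mini
    wins += lead > 0
    print(f"{name}: Grok 4 leads by {lead:+.1f}")

print(f"Grok 4 wins {wins}/{len(scores)} shared benchmarks")
```

Running it prints the same per-benchmark leads shown above (e.g. +30.3 on Aider polyglot, +14.4 on GPQA diamond) and a 10/10 win count.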
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Grok 3 Mini | $0.30 | $0.50 | 131K tokens (~66 books) | $3.50 |
| Grok 4 | $3.00 | $15.00 | 256K tokens (~128 books) | $60.00 |
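The projected monthly figures are consistent with a 75% input / 25% output token split at 10M tokens. That split is an inference from the numbers, not something stated on this page; a minimal sketch:

```python
# Projected monthly cost at 10M tokens. Assumption: a 75% input / 25% output
# token split. The split is inferred because it reproduces both projections
# above ($3.50 and $60.00); the source does not document it.

def projected_monthly(input_per_m: float, output_per_m: float,
                      tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Dollar cost for tokens_m million tokens at the given input share."""
    return tokens_m * (input_share * input_per_m
                       + (1 - input_share) * output_per_m)

print(round(projected_monthly(0.30, 0.50), 2))   # Grok 3 Mini -> 3.5
print(round(projected_monthly(3.00, 15.00), 2))  # Grok 4 -> 60.0
```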