o3 vs Grok 4
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Dead heat: 9 wins apiece across 19 benchmarks
o3 and Grok 4 each win 9 of the 19 shared benchmarks, with one tie. o3 leads in coding · knowledge.
Category leads
coding · o3
reasoning · Grok 4
knowledge · o3
math · Grok 4
language · Grok 4
Hype vs Reality
Attention vs performance
o3 · #67 by perf · no signal
Grok 4 · #71 by perf · no signal
Vendor risk
Who is behind the model
OpenAI · $840.0B · Tier 1
xAI · $250.0B · Tier 1
Head to head
19 benchmarks · 2 models
Aider polyglot
o3 leads by +1.7
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
o3 81.3 · Grok 4 79.6
ARC-AGI
Grok 4 leads by +5.9
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
o3 60.8 · Grok 4 66.7
ARC-AGI-2
Grok 4 leads by +9.4
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
o3 6.5 · Grok 4 16.0
DeepResearch Bench
Grok 4 leads by +1.3
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
o3 46.6 · Grok 4 47.9
Fiction.LiveBench
Grok 4 leads by +5.5
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
o3 88.9 · Grok 4 94.4
FrontierMath-2025-02-28-Private
Grok 4 leads by +1.0
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
o3 18.7 · Grok 4 19.7
FrontierMath-Tier-4-2025-07-01-Private
Tied at 2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
o3 2.1 · Grok 4 2.1
GeoBench
o3 leads by +29.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
o3 74.0 · Grok 4 45.0
GPQA diamond
Grok 4 leads by +6.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
o3 75.8 · Grok 4 82.7
HELM · GPQA
o3 leads by +2.7
HELM · GPQA · the GPQA graduate-level science questions as reported on Stanford's HELM leaderboard.
o3 75.3 · Grok 4 72.6
HELM · IFEval
Grok 4 leads by +8.0
HELM · IFEval · instruction-following evaluation with programmatically verifiable instructions, as reported on Stanford's HELM leaderboard.
o3 86.9 · Grok 4 94.9
HELM · MMLU-Pro
o3 leads by +0.8
HELM · MMLU-Pro · a harder, more reasoning-focused successor to MMLU with ten answer choices per question, as reported on Stanford's HELM leaderboard.
o3 85.9 · Grok 4 85.1
HELM · Omni-MATH
o3 leads by +11.1
HELM · Omni-MATH · olympiad-level competition mathematics problems, as reported on Stanford's HELM leaderboard.
o3 71.4 · Grok 4 60.3
HELM · WildBench
o3 leads by +6.4
HELM · WildBench · challenging real-world user queries drawn from live chat logs, as reported on Stanford's HELM leaderboard.
o3 86.1 · Grok 4 79.7
Lech Mazur Writing
o3 leads by +3.2
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
o3 83.9 · Grok 4 80.7
OTIS Mock AIME 2024-2025
Grok 4 leads by +0.1
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
o3 83.9 · Grok 4 84.0
SimpleBench
Grok 4 leads by +8.9
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
o3 43.7 · Grok 4 52.6
SimpleQA Verified
o3 leads by +5.1
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
o3 53.0 · Grok 4 47.9
WeirdML
o3 leads by +6.7
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
o3 52.4 · Grok 4 45.7
Full benchmark table
| Benchmark | o3 | Grok 4 |
|---|---|---|
| Aider polyglot | 81.3 | 79.6 |
| ARC-AGI | 60.8 | 66.7 |
| ARC-AGI-2 | 6.5 | 16.0 |
| DeepResearch Bench | 46.6 | 47.9 |
| Fiction.LiveBench | 88.9 | 94.4 |
| FrontierMath-2025-02-28-Private | 18.7 | 19.7 |
| FrontierMath-Tier-4-2025-07-01-Private | 2.1 | 2.1 |
| GeoBench | 74.0 | 45.0 |
| GPQA diamond | 75.8 | 82.7 |
| HELM · GPQA | 75.3 | 72.6 |
| HELM · IFEval | 86.9 | 94.9 |
| HELM · MMLU-Pro | 85.9 | 85.1 |
| HELM · Omni-MATH | 71.4 | 60.3 |
| HELM · WildBench | 86.1 | 79.7 |
| Lech Mazur Writing | 83.9 | 80.7 |
| OTIS Mock AIME 2024-2025 | 83.9 | 84.0 |
| SimpleBench | 43.7 | 52.6 |
| SimpleQA Verified | 53.0 | 47.9 |
| WeirdML | 52.4 | 45.7 |
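The winner summary can be rechecked directly from this table. A minimal sketch in Python, with the score pairs transcribed from the rows above; it assumes the higher score leads on every benchmark, which matches the per-benchmark delta lines:

```python
# Score pairs (o3, Grok 4) transcribed from the table above.
scores = {
    "Aider polyglot": (81.3, 79.6),
    "ARC-AGI": (60.8, 66.7),
    "ARC-AGI-2": (6.5, 16.0),
    "DeepResearch Bench": (46.6, 47.9),
    "Fiction.LiveBench": (88.9, 94.4),
    "FrontierMath-2025-02-28-Private": (18.7, 19.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (2.1, 2.1),
    "GeoBench": (74.0, 45.0),
    "GPQA diamond": (75.8, 82.7),
    "HELM · GPQA": (75.3, 72.6),
    "HELM · IFEval": (86.9, 94.9),
    "HELM · MMLU-Pro": (85.9, 85.1),
    "HELM · Omni-MATH": (71.4, 60.3),
    "HELM · WildBench": (86.1, 79.7),
    "Lech Mazur Writing": (83.9, 80.7),
    "OTIS Mock AIME 2024-2025": (83.9, 84.0),
    "SimpleBench": (43.7, 52.6),
    "SimpleQA Verified": (53.0, 47.9),
    "WeirdML": (52.4, 45.7),
}

# The higher score wins each benchmark; equal scores count as a tie.
o3_wins = sum(o3 > grok for o3, grok in scores.values())
grok_wins = sum(grok > o3 for o3, grok in scores.values())
ties = len(scores) - o3_wins - grok_wins
print(f"o3 {o3_wins} · Grok 4 {grok_wins} · ties {ties}")  # o3 9 · Grok 4 9 · ties 1
```

Run as-is it prints the 9 · 9 · 1 split reported in the winner summary.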
Pricing · per 1M tokens · projected $/mo at 10M tokens
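The projected monthly figure in this header is straight arithmetic on the per-1M-token rates. A minimal sketch of that projection, assuming a 10M-token monthly budget with an 80/20 input/output split; the split and the example rates are placeholder assumptions, not the models' actual prices:

```python
def projected_monthly_cost(
    input_per_1m: float,
    output_per_1m: float,
    tokens_in: float = 8e6,   # assumed 80% of a 10M-token monthly budget
    tokens_out: float = 2e6,  # assumed 20% of a 10M-token monthly budget
) -> float:
    """Project monthly spend in dollars from per-1M-token rates."""
    return input_per_1m * tokens_in / 1e6 + output_per_1m * tokens_out / 1e6

# Hypothetical rates in $/1M tokens, for illustration only.
print(projected_monthly_cost(2.00, 8.00))  # 2.00*8 + 8.00*2 = 32.0
```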