GPT-5 vs o3 Pro vs Grok 4
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5 wins 14 of 20 benchmarks
GPT-5 leads or ties on 14 of 20 shared benchmarks (13 outright wins plus a tie with o3 Pro on Fiction.LiveBench). Leads in coding · knowledge · agentic · math.
Category leads
coding · GPT-5
reasoning · Grok 4
knowledge · GPT-5
agentic · GPT-5
math · GPT-5
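To make the headline count reproducible, here is a minimal Python sketch that tallies the benchmark scores listed below. The counting rule is an assumption inferred from the data: a tie at the top score counts toward each tied model, which is what turns GPT-5's 13 outright wins into 14.

```python
# Tally of shared-benchmark wins; scores transcribed from the table below.
# None = model was not evaluated on that benchmark.
scores = {  # benchmark: (GPT-5, o3 Pro, Grok 4)
    "Aider polyglot":          (88.0, 84.9, 79.6),
    "ARC-AGI":                 (65.7, 59.3, 66.7),
    "ARC-AGI-2":               (9.9,  4.9,  16.0),
    "Fiction.LiveBench":       (97.2, 97.2, 94.4),
    "Lech Mazur Writing":      (87.2, 86.3, 80.7),
    "WeirdML":                 (60.7, 58.2, 45.7),
    "APEX-Agents":             (18.3, None, 15.2),
    "Balrog":                  (32.8, None, 43.6),
    "Chess Puzzles":           (37.0, None, 28.0),
    "DeepResearch Bench":      (55.1, None, 47.9),
    "FrontierMath (Feb 2025)": (32.4, None, 19.7),
    "FrontierMath Tier 4":     (12.5, None, 2.1),
    "GeoBench":                (81.0, None, 45.0),
    "GPQA Diamond":            (81.6, None, 82.7),
    "OTIS Mock AIME 2024-25":  (91.4, None, 84.0),
    "Prof. Reasoning Finance": (51.3, 49.1, None),
    "Prof. Reasoning Legal":   (49.0, 49.7, None),
    "SimpleBench":             (48.0, None, 52.6),
    "SimpleQA Verified":       (50.6, None, 47.9),
    "Terminal-Bench 2.0":      (49.6, None, 27.2),
}

models = ("GPT-5", "o3 Pro", "Grok 4")
wins = dict.fromkeys(models, 0)
for row in scores.values():
    best = max(s for s in row if s is not None)
    for model, s in zip(models, row):
        if s == best:
            wins[model] += 1       # a tie at the top counts for each tied model
print(wins)                        # {'GPT-5': 14, 'o3 Pro': 2, 'Grok 4': 5}

# Each "leads by" margin below is the gap between the top two scores:
top_two = sorted((s for s in scores["Aider polyglot"] if s is not None), reverse=True)[:2]
print(f"Aider polyglot margin: +{top_two[0] - top_two[1]:.1f}")  # -> +3.1
```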
Hype vs Reality
Attention vs performance
GPT-5 · #74 by perf · no signal
o3 Pro · #35 by perf · no signal
Grok 4 · #73 by perf · no signal
Best value
GPT-5 · 1.6x better value than Grok 4
GPT-5 · 9.7 pts/$ · $5.63/M
o3 Pro · 1.2 pts/$ · $50.00/M
Grok 4 · 6.1 pts/$ · $9.00/M
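A minimal sketch of how these value figures appear to fit together, under an assumption the page does not state: "pts/$" looks like an average benchmark score divided by a blended $-per-1M-token rate.

```python
# Sanity-check the value figures above (assumed formula: pts/$ = avg score / $ per 1M tokens).
pts_per_dollar = {"GPT-5": 9.7,  "o3 Pro": 1.2,   "Grok 4": 6.1}
price_per_m    = {"GPT-5": 5.63, "o3 Pro": 50.00, "Grok 4": 9.00}  # blended $ per 1M tokens

# Headline ratio: best value vs runner-up value.
ratio = pts_per_dollar["GPT-5"] / pts_per_dollar["Grok 4"]
print(f"{ratio:.1f}x")  # -> 1.6x, matching the headline

# Implied average score under the assumed formula (a consistency check, not published data):
for m in pts_per_dollar:
    print(m, round(pts_per_dollar[m] * price_per_m[m], 1))  # GPT-5 54.6 · o3 Pro 60.0 · Grok 4 54.9
```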
Vendor risk
Who is behind each model
GPT-5 · OpenAI · $840.0B · Tier 1
o3 Pro · OpenAI · $840.0B · Tier 1
Grok 4 · xAI · $250.0B · Tier 1
Head to head
20 benchmarks · 3 models
GPT-5 · o3 Pro · Grok 4
Aider polyglot
GPT-5 leads by +3.1
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
GPT-5
88.0
o3 Pro
84.9
Grok 4
79.6
ARC-AGI
Grok 4 leads by +1.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-5
65.7
o3 Pro
59.3
Grok 4
66.7
ARC-AGI-2
Grok 4 leads by +6.1
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-5
9.9
o3 Pro
4.9
Grok 4
16.0
Fiction.LiveBench
GPT-5 and o3 Pro tied at 97.2
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
GPT-5
97.2
o3 Pro
97.2
Grok 4
94.4
Lech Mazur Writing
GPT-5 leads by +0.9
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
GPT-5
87.2
o3 Pro
86.3
Grok 4
80.7
WeirdML
GPT-5 leads by +2.5
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-5
60.7
o3 Pro
58.2
Grok 4
45.7
APEX-Agents
GPT-5 leads by +3.1
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
GPT-5
18.3
Grok 4
15.2
Balrog
Grok 4 leads by +10.8
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
GPT-5
32.8
Grok 4
43.6
Chess Puzzles
GPT-5 leads by +9.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
GPT-5
37.0
Grok 4
28.0
DeepResearch Bench
GPT-5 leads by +7.2
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
GPT-5
55.1
Grok 4
47.9
FrontierMath-2025-02-28-Private
GPT-5 leads by +12.7
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-5
32.4
Grok 4
19.7
FrontierMath-Tier-4-2025-07-01-Private
GPT-5 leads by +10.4
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
GPT-5
12.5
Grok 4
2.1
GeoBench
GPT-5 leads by +36.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
GPT-5
81.0
Grok 4
45.0
GPQA diamond
Grok 4 leads by +1.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-5
81.6
Grok 4
82.7
OTIS Mock AIME 2024-2025
GPT-5 leads by +7.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-5
91.4
Grok 4
84.0
Professional Reasoning · Finance
GPT-5 leads by +2.2
GPT-5
51.3
o3 Pro
49.1
Professional Reasoning · Legal
o3 Pro leads by +0.7
GPT-5
49.0
o3 Pro
49.7
SimpleBench
Grok 4 leads by +4.6
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-5
48.0
Grok 4
52.6
SimpleQA Verified
GPT-5 leads by +2.7
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
GPT-5
50.6
Grok 4
47.9
Terminal-Bench 2.0
GPT-5 leads by +22.4
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency.
GPT-5
49.6
Grok 4
27.2
Full benchmark table
| Benchmark | GPT-5 | o3 Pro | Grok 4 |
|---|---|---|---|
| Aider polyglot | 88.0 | 84.9 | 79.6 |
| ARC-AGI | 65.7 | 59.3 | 66.7 |
| ARC-AGI-2 | 9.9 | 4.9 | 16.0 |
| Fiction.LiveBench | 97.2 | 97.2 | 94.4 |
| Lech Mazur Writing | 87.2 | 86.3 | 80.7 |
| WeirdML | 60.7 | 58.2 | 45.7 |
| APEX-Agents | 18.3 | — | 15.2 |
| Balrog | 32.8 | — | 43.6 |
| Chess Puzzles | 37.0 | — | 28.0 |
| DeepResearch Bench | 55.1 | — | 47.9 |
| FrontierMath (Feb 2025, private) | 32.4 | — | 19.7 |
| FrontierMath Tier 4 (Jul 2025, private) | 12.5 | — | 2.1 |
| GeoBench | 81.0 | — | 45.0 |
| GPQA Diamond | 81.6 | — | 82.7 |
| OTIS Mock AIME 2024-2025 | 91.4 | — | 84.0 |
| Professional Reasoning · Finance | 51.3 | 49.1 | — |
| Professional Reasoning · Legal | 49.0 | 49.7 | — |
| SimpleBench | 48.0 | — | 52.6 |
| SimpleQA Verified | 50.6 | — | 47.9 |
| Terminal-Bench 2.0 | 49.6 | — | 27.2 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
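A minimal sketch of the monthly projection, assuming the blended $/1M-token rates quoted in the Best value section and a flat 10M-token monthly volume:

```python
# Projected monthly spend at 10M tokens/month (assumption: blended $/1M rates
# from the Best value section; actual bills depend on the input/output token split).
blended_per_m = {"GPT-5": 5.63, "o3 Pro": 50.00, "Grok 4": 9.00}
monthly_tokens_millions = 10
for model, rate in blended_per_m.items():
    print(f"{model}: ${rate * monthly_tokens_millions:,.2f}/mo")
# GPT-5: $56.30/mo · o3 Pro: $500.00/mo · Grok 4: $90.00/mo
```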