GPT-5.4 Pro vs Gemini 3.1 Pro Preview
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5.4 Pro wins 5 of 8 shared benchmarks, leading in the knowledge and math categories.
Category leads
Reasoning · Gemini 3.1 Pro Preview
Knowledge · GPT-5.4 Pro
Math · GPT-5.4 Pro
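A minimal sketch of how the 5-of-8 tally follows from the shared-benchmark scores (copied from the full benchmark table further down; the category grouping is the page's own and is not reproduced here):

```python
# Shared-benchmark scores from the full benchmark table below:
# (GPT-5.4 Pro, Gemini 3.1 Pro Preview)
scores = {
    "ARC-AGI": (94.5, 98.0),
    "ARC-AGI-2": (83.3, 77.1),
    "Chess Puzzles": (58.6, 55.0),
    "FrontierMath-2025-02-28-Private": (50.0, 36.9),
    "FrontierMath-Tier-4-2025-07-01-Private": (37.5, 16.7),
    "GPQA diamond": (92.8, 92.1),
    "SimpleBench": (68.9, 75.5),
    "SimpleQA Verified": (47.8, 77.3),
}

# Count benchmarks where the GPT-5.4 Pro score is strictly higher.
gpt_wins = sum(gpt > gem for gpt, gem in scores.values())
print(f"GPT-5.4 Pro wins {gpt_wins} of {len(scores)}")                            # 5 of 8
print(f"Gemini 3.1 Pro Preview wins {len(scores) - gpt_wins} of {len(scores)}")   # 3 of 8
```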
Hype vs Reality
Attention vs performance
GPT-5.4 Pro · #24 by performance · no signal
Gemini 3.1 Pro Preview · #36 by performance · no signal
Best value
Gemini 3.1 Pro Preview
13.6x better value than GPT-5.4 Pro
GPT-5.4 Pro · 0.6 pts/$ · $105.00/M (blended)
Gemini 3.1 Pro Preview · 8.7 pts/$ · $7.00/M (blended)
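The page does not spell out how the blended price or "pts/$" is computed. A rough sketch of one plausible reading, assuming a 50/50 input/output price blend (which reproduces the $105.00 and $7.00 figures above) and a plain average of the eight shared-benchmark scores; the site's own weighting clearly differs somewhat, since this yields about 9.4 pts/$ for Gemini rather than the 8.7 shown:

```python
# List prices in $ per 1M tokens, from the pricing table further down.
pricing = {
    "GPT-5.4 Pro": {"input": 30.00, "output": 180.00},
    "Gemini 3.1 Pro Preview": {"input": 2.00, "output": 12.00},
}

# Plain average of the eight shared-benchmark scores (assumed weighting).
avg_score = {"GPT-5.4 Pro": 66.7, "Gemini 3.1 Pro Preview": 66.1}

for model, p in pricing.items():
    blended = (p["input"] + p["output"]) / 2   # assumed 50/50 input/output blend
    value = avg_score[model] / blended         # assumed definition of "pts/$"
    print(f"{model}: ${blended:.2f}/M blended, {value:.1f} pts/$")
# GPT-5.4 Pro: $105.00/M blended, 0.6 pts/$
# Gemini 3.1 Pro Preview: $7.00/M blended, 9.4 pts/$  (page shows 8.7)
```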
Vendor risk
Who is behind each model
OpenAI · $840.0B · Tier 1
Google DeepMind · $4.00T · Tier 1
Head to head
8 benchmarks · 2 models
GPT-5.4 Pro · Gemini 3.1 Pro Preview
ARC-AGI
Gemini 3.1 Pro Preview leads by +3.5
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-5.4 Pro 94.5 · Gemini 3.1 Pro Preview 98.0
ARC-AGI-2
GPT-5.4 Pro leads by +6.2
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-5.4 Pro 83.3 · Gemini 3.1 Pro Preview 77.1
Chess Puzzles
GPT-5.4 Pro leads by +3.6
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
GPT-5.4 Pro 58.6 · Gemini 3.1 Pro Preview 55.0
FrontierMath-2025-02-28-Private
GPT-5.4 Pro leads by +13.1
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-5.4 Pro 50.0 · Gemini 3.1 Pro Preview 36.9
FrontierMath-Tier-4-2025-07-01-Private
GPT-5.4 Pro leads by +20.8
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
GPT-5.4 Pro 37.5 · Gemini 3.1 Pro Preview 16.7
GPQA diamond
GPT-5.4 Pro leads by +0.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-5.4 Pro 92.8 · Gemini 3.1 Pro Preview 92.1
SimpleBench
Gemini 3.1 Pro Preview leads by +6.6
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-5.4 Pro 68.9 · Gemini 3.1 Pro Preview 75.5
SimpleQA Verified
Gemini 3.1 Pro Preview leads by +29.5
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
GPT-5.4 Pro 47.8 · Gemini 3.1 Pro Preview 77.3
Full benchmark table
| Benchmark | GPT-5.4 Pro | Gemini 3.1 Pro Preview |
|---|---|---|
| ARC-AGI | 94.5 | 98.0 |
| ARC-AGI-2 | 83.3 | 77.1 |
| Chess Puzzles | 58.6 | 55.0 |
| FrontierMath-2025-02-28-Private | 50.0 | 36.9 |
| FrontierMath-Tier-4-2025-07-01-Private | 37.5 | 16.7 |
| GPQA diamond | 92.8 | 92.1 |
| SimpleBench | 68.9 | 75.5 |
| SimpleQA Verified | 47.8 | 77.3 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-5.4 Pro | $30.00 | $180.00 | 1.1M tokens (~525 books) | $675.00 |
| Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00 |
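The page does not state the usage mix behind the projected monthly figures, but a 10M-token month split 75% input / 25% output reproduces both projections exactly; a minimal sketch under that assumption:

```python
# Hypothetical usage mix: 10M tokens/month at 75% input / 25% output.
# This split is an assumption inferred from the $675 and $45 projections above.
MONTHLY_TOKENS = 10_000_000
INPUT_SHARE = 0.75

pricing = {  # $ per 1M tokens, from the table above
    "GPT-5.4 Pro": {"input": 30.00, "output": 180.00},
    "Gemini 3.1 Pro Preview": {"input": 2.00, "output": 12.00},
}

for model, p in pricing.items():
    input_tokens = MONTHLY_TOKENS * INPUT_SHARE
    output_tokens = MONTHLY_TOKENS * (1 - INPUT_SHARE)
    cost = (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]
    print(f"{model}: ${cost:,.2f}/mo")
# GPT-5.4 Pro: $675.00/mo
# Gemini 3.1 Pro Preview: $45.00/mo
```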