Gemini 3 Pro vs GPT-5.2 Pro
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5.2 Pro wins 3 of 4 shared benchmarks, with category leads in reasoning and math; Gemini 3 Pro takes the remaining one, SimpleBench.
Category leads
reasoning · GPT-5.2 Pro
math · GPT-5.2 Pro
Hype vs Reality
Attention vs performance
Gemini 3 Pro · #40 by perf · no signal
GPT-5.2 Pro · #62 by perf · no signal
Vendor risk
Who is behind the model
Google DeepMind · $4.00T · Tier 1
OpenAI · $840.0B · Tier 1
Head to head
4 benchmarks · 2 models
Gemini 3 Pro · GPT-5.2 Pro
ARC-AGI
GPT-5.2 Pro leads by +15.5
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 3 Pro · 75.0
GPT-5.2 Pro · 90.5
ARC-AGI-2
GPT-5.2 Pro leads by +23.1
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 3 Pro · 31.1
GPT-5.2 Pro · 54.2
FrontierMath Tier 4 (Jul 2025, private)
GPT-5.2 Pro leads by +12.5
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 3 Pro · 18.8
GPT-5.2 Pro · 31.3
SimpleBench
Gemini 3 Pro leads by +22.8
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Gemini 3 Pro · 71.7
GPT-5.2 Pro · 48.9
Full benchmark table
| Benchmark | Gemini 3 Pro | GPT-5.2 Pro |
|---|---|---|
| ARC-AGI | 75.0 | 90.5 |
| ARC-AGI-2 | 31.1 | 54.2 |
| FrontierMath Tier 4 (Jul 2025, private) | 18.8 | 31.3 |
| SimpleBench | 71.7 | 48.9 |
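The winner summary at the top is just a per-benchmark tally of these rows. A minimal sketch of that count (scores hard-coded from the table above; higher is better on all four benchmarks):

```python
# Per-benchmark win tally behind the "wins 3 of 4" verdict.
# Tuples are (Gemini 3 Pro, GPT-5.2 Pro), copied from the table above.
scores = {
    "ARC-AGI": (75.0, 90.5),
    "ARC-AGI-2": (31.1, 54.2),
    "FrontierMath Tier 4 (Jul 2025)": (18.8, 31.3),
    "SimpleBench": (71.7, 48.9),
}

gemini_wins = sum(gemini > gpt for gemini, gpt in scores.values())
gpt_wins = sum(gpt > gemini for gemini, gpt in scores.values())
print(f"Gemini 3 Pro {gemini_wins} · GPT-5.2 Pro {gpt_wins}")
# -> Gemini 3 Pro 1 · GPT-5.2 Pro 3
```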
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 3 Pro | — | — | — | — |
| GPT-5.2 Pro | $21.00 | $168.00 | 400K tokens (~200 books) | $577.50 |
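The projected figure follows from the per-token prices once you fix how the 10M monthly tokens split between input and output. The page does not state the split, but a 75% input / 25% output mix reproduces the listed $577.50 exactly, so the sketch below assumes it:

```python
# Projected monthly cost for GPT-5.2 Pro at 10M tokens/month.
# ASSUMPTION: 75% input / 25% output token mix (not stated on the page;
# chosen because it reproduces the listed $577.50 exactly).
INPUT_PRICE_PER_M = 21.00    # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 168.00  # $ per 1M output tokens
MONTHLY_TOKENS_M = 10.0      # millions of tokens per month
INPUT_SHARE = 0.75           # assumed fraction of tokens that are input

input_cost = MONTHLY_TOKENS_M * INPUT_SHARE * INPUT_PRICE_PER_M
output_cost = MONTHLY_TOKENS_M * (1 - INPUT_SHARE) * OUTPUT_PRICE_PER_M
print(f"${input_cost + output_cost:,.2f}/mo")  # -> $577.50/mo
```

At a 50/50 split the same prices give $945.00/mo, so the assumed mix moves the projection more than the input price does; treat the monthly figure as illustrative, not a quote.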