GPT-5.2 vs GPT-5.2 Pro
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5.2 Pro wins on 4/4 benchmarks
GPT-5.2 Pro wins all 4 shared benchmarks, leading in both reasoning and math.
Category leads
reasoning · GPT-5.2 Pro
math · GPT-5.2 Pro
Hype vs Reality
Attention vs performance
GPT-5.2
#76 by perf · no signal
GPT-5.2 Pro
#62 by perf · no signal
Best value
GPT-5.2
11.5x better value than GPT-5.2 Pro
GPT-5.2
6.9 pts/$
$7.88/M
GPT-5.2 Pro
0.6 pts/$
$94.50/M
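The "Best value" numbers are internally consistent: the blended $/M prices shown match a simple mean of each model's input and output prices, and the 11.5x multiple is the ratio of the two displayed pts/$ figures. A minimal sketch, assuming a 1:1 input/output blend (the aggregate score behind pts/$ is not shown, so the displayed pts/$ values are used directly):

```python
# Value comparison reconstructed from the displayed figures.
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/M as the mean of input and output prices.
    This 1:1 blend is an assumption, but it reproduces the
    displayed $7.88/M and $94.50/M figures."""
    return (input_per_m + output_per_m) / 2

gpt52_price = blended_price(1.75, 14.00)   # 7.875 -> shown as $7.88/M
pro_price = blended_price(21.00, 168.00)   # 94.5  -> shown as $94.50/M

# Value multiple from the displayed pts/$ figures.
value_multiple = 6.9 / 0.6                 # 11.5x, as shown
```

The pts/$ aggregate itself (which benchmark scores feed it, and with what weights) is not disclosed on the page; only the ratio is verifiable from what is displayed.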
Vendor risk
Who is behind the model
GPT-5.2 · OpenAI · $840.0B · Tier 1
GPT-5.2 Pro · OpenAI · $840.0B · Tier 1
Head to head
4 benchmarks · 2 models
GPT-5.2 · GPT-5.2 Pro
ARC-AGI
GPT-5.2 Pro leads by +4.3
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-5.2
86.2
GPT-5.2 Pro
90.5
ARC-AGI-2
GPT-5.2 Pro leads by +1.3
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-5.2
52.9
GPT-5.2 Pro
54.2
FrontierMath-Tier-4-2025-07-01-Private
GPT-5.2 Pro leads by +12.5
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
GPT-5.2
18.8
GPT-5.2 Pro
31.3
SimpleBench
GPT-5.2 Pro leads by +13.9
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-5.2
35.0
GPT-5.2 Pro
48.9
Full benchmark table
| Benchmark | GPT-5.2 | GPT-5.2 Pro |
|---|---|---|
| ARC-AGI | 86.2 | 90.5 |
| ARC-AGI-2 | 52.9 | 54.2 |
| FrontierMath Tier 4 (Jul 2025) | 18.8 | 31.3 |
| SimpleBench | 35.0 | 48.9 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-5.2 | $1.75 | $14.00 | 400K tokens (~200 books) | $48.13 |
| GPT-5.2 Pro | $21.00 | $168.00 | 400K tokens (~200 books) | $577.50 |
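The projected monthly figures are consistent with a 3:1 input-to-output token split at 10M total tokens per month. The page does not state its assumed mix, so the 75% input ratio below is a hypothetical reconstruction that happens to reproduce both table values:

```python
def projected_monthly(input_per_m: float, output_per_m: float,
                      total_tokens_m: float = 10.0,
                      input_ratio: float = 0.75) -> float:
    """Monthly cost at `total_tokens_m` million tokens.
    `input_ratio` (assumed 0.75, i.e. a 3:1 input/output split)
    is inferred by matching the table, not stated by the source."""
    input_cost = total_tokens_m * input_ratio * input_per_m
    output_cost = total_tokens_m * (1 - input_ratio) * output_per_m
    return input_cost + output_cost

print(projected_monthly(1.75, 14.00))    # 48.125 -> shown as $48.13
print(projected_monthly(21.00, 168.00))  # 577.5  -> shown as $577.50
```

A heavier output share would raise both projections, and GPT-5.2 Pro's faster: its output price ($168.00/M) is 12x GPT-5.2's ($14.00/M), the same multiple as on input.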