
GPT-5.2 Pro

by OpenAI · Released Dec 2025

Multimodal
76.2
avg score
Rank #29
Better than 88% of all models
Context
400K tokens (~300,000 words)
Input $/1M
$21.00
Output $/1M
$168.00
Type
multimodal
License
Proprietary
Benchmarks
4 tested
Data updated today
About

GPT-5.2 Pro is OpenAI’s most advanced model, offering major improvements in agentic coding and long-context performance over GPT-5 Pro. It is optimized for complex tasks that require step-by-step reasoning,...

Tested on 4 benchmarks with 56.2% average. Top scores: ARC-AGI (90.5%), ARC-AGI-2 (54.2%), SimpleBench (48.9%).

Looking for similar performance at lower cost?
Llama 3.3 70B Instruct scores 75.9 (99.6% as good) at $0.10/1M input · 99.5% cheaper
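The comparison figures above follow from the listed numbers. A minimal sketch of how they could be derived (an assumption about the formulas, not BenchGecko's documented method):

```python
# Hedged sketch: deriving the "as good" and "cheaper" percentages from
# the scores and prices listed on this page.
gpt_score, llama_score = 76.2, 75.9
gpt_input, llama_input = 21.00, 0.10  # $ per 1M input tokens

as_good = llama_score / gpt_score * 100        # relative average score
cheaper = (1 - llama_input / gpt_input) * 100  # input-price reduction

print(f"{as_good:.1f}% as good, {cheaper:.1f}% cheaper")
```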
Capabilities
reasoning
64.5
#29 globally
math
31.3
#128 globally
Benchmark Scores
Tested on 4 benchmarks · Ranked across 2 categories
Score Distribution (all 233 models)
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

90.5
ARC-AGI-2

ARC-AGI 2, harder sequel to ARC. More complex abstract reasoning patterns that test generalization ability beyond training data.

54.2
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

48.9
FrontierMath-Tier-4-2025-07-01-Private

Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability, many of which most professional mathematicians cannot solve.

31.3
Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Links
Documentation
Community
BenchGecko API
gpt-5-2-pro
Specifications
  • Type: multimodal
  • Context: 400K tokens (~300,000 words)
  • Released: Dec 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.210
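The Cost / Message figure can be reproduced from the listed per-million-token prices. A hedged sketch, where the token counts in the example are illustrative assumptions (BenchGecko's exact definition of a "message" is not stated on this page):

```python
# Estimating a request's cost from the prices listed on this page.
INPUT_PER_M = 21.00    # $ per 1M input tokens
OUTPUT_PER_M = 168.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a 2,000-token prompt with a 1,000-token reply happens to
# reproduce the page's ~$0.210 figure, though the site's assumed
# token counts are unknown.
print(f"${request_cost(2_000, 1_000):.3f}")  # → $0.210
```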
Available On
OpenAI · $21.00/1M input
Categories
Share & Export
GPT-5.2 Pro is a proprietary multimodal AI model by OpenAI, released in December 2025. It has an average benchmark score of 76.2. Context window: 400K tokens.