
GPT-5.2

by OpenAI · Released Dec 2025

Multimodal
76.4
avg score
Rank #28
Better than 88% of all models
Context
400K tokens (roughly 300,000 words, a few full-length books)
Input $/1M
$1.75
Output $/1M
$14.00
Type
multimodal
License
Proprietary
Benchmarks
20 tested
Data updated today
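The headline percentile follows directly from the rank: with 233 models in the index (the count shown in the score-distribution section below), rank #28 puts GPT-5.2 ahead of roughly 88% of them. A quick sanity check, using only the rank and model count shown on this page:

```python
rank, total_models = 28, 233  # overall rank and number of models in the index

better_than = (total_models - rank) / total_models
print(f"Better than {better_than:.0%} of all models")  # Better than 88% of all models
```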
About

GPT-5.2 is the latest frontier-grade model in the GPT-5 series, offering stronger agentic and long-context performance than GPT-5.1. It uses adaptive reasoning to allocate computation dynamically, responding quickly...

Tested on 20 benchmarks with a 54.0% average score. Top scores: Chatbot Arena Elo (Overall): 1439.5; Chatbot Arena Elo (Coding): 1403.1; OTIS Mock AIME 2024-2025: 96.1%.

Looking for similar performance at lower cost?
Llama 3.3 70B Instruct scores 75.9 (99% as good) at $0.10/1M input · 94% cheaper
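The "99% as good" and "94% cheaper" figures in the callout above are simple arithmetic over the two numbers quoted on this page (average scores and input prices); a short sketch of the calculation:

```python
gpt52_score, llama_score = 76.4, 75.9   # average benchmark scores
gpt52_input, llama_input = 1.75, 0.10   # $ per 1M input tokens

print(f"Relative performance: {llama_score / gpt52_score:.0%}")   # 99%
print(f"Input-cost savings: {1 - llama_input / gpt52_input:.0%}")  # 94%
```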
Capabilities
coding
62.0
#34 globally
reasoning
58.0
#34 globally
math
51.9
#71 globally
knowledge
49.7
#102 globally
agentic
34.3
#12 globally
Benchmark Scores
Tested on 20 benchmarks · Ranked across 6 categories
Score Distribution (all 233 models): GPT-5.2's position is marked on the 0-100 distribution chart.
SWE-Bench Verified

Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

73.8
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

72.2
SWE-Bench Verified (Bash Only)

SWE-bench Verified solved using only bash commands, no specialized frameworks. Tests raw terminal-based problem solving.

71.8
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

86.2
ARC-AGI-2

ARC-AGI 2, harder sequel to ARC. More complex abstract reasoning patterns that test generalization ability beyond training data.

52.9
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

35.0
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

96.1
FrontierMath-2025-02-28-Private

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

40.7
FrontierMath-Tier-4-2025-07-01-Private

Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability, many unsolved by most mathematicians.

18.8
Score legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
gpt-5-2
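The Links section lists a BenchGecko API entry alongside the model slug gpt-5-2. The endpoint and response fields below are purely hypothetical (this page does not document BenchGecko's actual API); the sketch only illustrates how a per-model lookup keyed on that slug might look:

```python
import requests  # assumes the requests package is installed

# Hypothetical base URL and response shape; only the slug "gpt-5-2" comes
# from this page. Consult the BenchGecko API docs for the real interface.
BASE_URL = "https://api.benchgecko.example/v1"

resp = requests.get(f"{BASE_URL}/models/gpt-5-2", timeout=10)
resp.raise_for_status()
model = resp.json()

print(model.get("avg_score"))  # e.g. 76.4
print(model.get("rank"))       # e.g. 28
```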
Specifications
  • Type: multimodal
  • Context: 400K tokens (roughly 300,000 words, a few full-length books)
  • Released: Dec 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.018 (see the cost sketch below)
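The ~$0.018 per-message figure is consistent with the listed token prices under an assumed message profile of roughly 1,000 input and 1,150 output tokens; the profile BenchGecko actually uses is not stated here, so the token counts below are an illustration only:

```python
INPUT_PRICE = 1.75 / 1_000_000    # $ per input token  ($1.75 per 1M)
OUTPUT_PRICE = 14.00 / 1_000_000  # $ per output token ($14.00 per 1M)

# Assumed message profile -- not stated on this page, illustration only.
input_tokens, output_tokens = 1_000, 1_150

cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"${cost:.4f} per message")  # about $0.018, matching the figure above
```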
Available On
OpenAI · $1.75 / 1M input tokens
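Since OpenAI is listed as the provider and gpt-5-2 as the identifier, a call through the official OpenAI Python SDK would look roughly like the sketch below; the exact model string accepted by the API may differ from the slug shown on this page, so verify it against OpenAI's model list first:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "gpt-5-2" is the slug shown on this page; confirm the real model name
# in OpenAI's documentation before relying on it.
response = client.chat.completions.create(
    model="gpt-5-2",
    messages=[{"role": "user", "content": "Summarize SWE-bench Verified in one sentence."}],
)
print(response.choices[0].message.content)
```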
GPT-5.2 is a proprietary multimodal AI model by OpenAI, released in December 2025. It has an average benchmark score of 76.4. Context window: 400K tokens.