
GPT-5.5

by OpenAI · Released Apr 2026

65.8
avg score
Rank #51
Better than 78% of all models
Context
400K tokens (~200 books)
Input $/1M
$5.00
Output $/1M
$30.00
Type
text
License
Proprietary
Benchmarks
6 tested
Data updated today
About

GPT-5.5 is OpenAI's most capable model, matching GPT-5.4's speed. It plans, uses tools, and checks its own work. It tops Terminal-Bench 2.0 (82.7%), GDPval (84.9%), ARC-AGI-2 (85.0%), and CyberGym (81.8%), and is state of the art on the AA Coding Index at half the cost.

Tested on 6 benchmarks with an 85.0% average. Top scores: ARC-AGI (95.0%), GPQA diamond (93.6%), BrowseComp (84.4%).

Looking for similar performance at lower cost?
Qwen2.5 72B Instruct scores 65.8 (100% as good) at $0.12/1M input · 98% cheaper
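The "98% cheaper" figure follows directly from the two listed input prices; a minimal sketch of the arithmetic, using only numbers from this page:

```python
# Input prices quoted on this page, in $ per 1M input tokens.
gpt_55_input = 5.00   # GPT-5.5
qwen_input = 0.12     # Qwen2.5 72B Instruct

# Fractional savings relative to GPT-5.5's price.
savings = (gpt_55_input - qwen_input) / gpt_55_input
print(f"{savings:.0%}")  # → 98%
```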
Capabilities
coding
82.7
#5 globally
reasoning
95.0
#1 globally
knowledge
93.6
#2 globally
agentic
78.7
#2 globally
other
79.8
#2 globally
Benchmark Scores
Tested on 6 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models): histogram of scores from 0 to 100, with GPT-5.5's position marked.
Terminal Bench

Complex terminal-based engineering tasks. Models must use command-line tools, navigate filesystems, and debug systems through shell interaction.

82.7
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

95.0
GPQA diamond

Graduate-level science questions written by PhD experts. Diamond subset contains questions where experts disagree, testing deep understanding.

93.6
Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
API model ID: gpt-5-5
Specifications
  • Type: text
  • Context: 400K tokens (~200 books)
  • Released: Apr 2026
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.040
Available On
OpenAI · $5.00 / 1M input tokens
GPT-5.5 is a proprietary text AI model by OpenAI, released in April 2026. It has an average benchmark score of 65.8. Context window: 400K tokens.