GPT-5.5 vs GPT-5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5.5 wins 3 of 3 shared benchmarks, leading in reasoning, knowledge, and coding.
Category leads
reasoning: GPT-5.5 · knowledge: GPT-5.5 · coding: GPT-5.5
Hype vs Reality
Attention vs performance
GPT-5.5 · #2 by performance · no attention signal
GPT-5 · #74 by performance · no attention signal
Best value
GPT-5 · 2.0x better value than GPT-5.5

| Model | Value (pts/$) | Price ($/1M tokens) |
|---|---|---|
| GPT-5.5 | 4.9 | $17.50 |
| GPT-5 | 9.7 | $5.63 |
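The value metric is not formally defined on the page, but it behaves like an aggregate performance score divided by the price per 1M tokens. A minimal sketch under that assumption; the aggregate scores below are placeholders reverse-fit to reproduce the published pts/$ figures, not numbers taken from the page:

```python
def points_per_dollar(aggregate_score: float, price_per_m: float) -> float:
    """Value metric: performance points per dollar spent on 1M tokens."""
    return aggregate_score / price_per_m

# Placeholder aggregates (hypothetical): chosen only so the published
# 4.9 and 9.7 pts/$ figures reproduce; the site's real weighting is unknown.
models = {
    "GPT-5.5": {"score": 85.8, "price_per_m": 17.50},
    "GPT-5": {"score": 54.6, "price_per_m": 5.63},
}

for name, m in models.items():
    value = points_per_dollar(m["score"], m["price_per_m"])
    print(f"{name}: {value:.1f} pts/$")  # GPT-5.5: 4.9 · GPT-5: 9.7

# Relative value: 9.7 / 4.9 ≈ 2.0x in GPT-5's favor.
```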
Vendor risk
Who is behind the model
Both models: OpenAI · $840.0B · Tier 1
Head to head
3 benchmarks · 2 models
ARC-AGI · GPT-5.5 leads by +29.3
The original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-5.5: 95.0 · GPT-5: 65.7
GPQA Diamond · GPT-5.5 leads by +12.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-5.5: 93.6 · GPT-5: 81.6
Terminal Bench · GPT-5.5 leads by +33.1
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. For reference, Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
GPT-5.5: 82.7 · GPT-5: 49.6
Full benchmark table
| Benchmark | GPT-5.5 | GPT-5 |
|---|---|---|
| ARC-AGI | 95.0 | 65.7 |
| GPQA Diamond | 93.6 | 81.6 |
| Terminal Bench | 82.7 | 49.6 |
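For anyone scripting their own comparison, a minimal sketch that recomputes the per-benchmark leads and the winner tally directly from the scores in the table above:

```python
# Shared-benchmark scores, copied from the table above.
scores = {
    "ARC-AGI": {"GPT-5.5": 95.0, "GPT-5": 65.7},
    "GPQA Diamond": {"GPT-5.5": 93.6, "GPT-5": 81.6},
    "Terminal Bench": {"GPT-5.5": 82.7, "GPT-5": 49.6},
}

wins = {"GPT-5.5": 0, "GPT-5": 0}
for bench, s in scores.items():
    leader = max(s, key=s.get)
    wins[leader] += 1
    delta = abs(s["GPT-5.5"] - s["GPT-5"])
    print(f"{bench}: {leader} leads by +{delta:.1f}")

winner = max(wins, key=wins.get)
print(f"{winner} wins {wins[winner]} of {len(scores)} shared benchmarks")
```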
Pricing · per 1M tokens · projected $/mo at 10M tokens
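The monthly projection follows directly from the per-1M-token prices: price times 10. A minimal sketch, assuming the $17.50 and $5.63 figures quoted in the value section are the rates the projection uses:

```python
PROJECTION_TOKENS_M = 10  # projection basis from the heading: 10M tokens/month

# $/1M tokens, taken from the Best value section above (assumed blended rates).
prices_per_m = {"GPT-5.5": 17.50, "GPT-5": 5.63}

for model, price in prices_per_m.items():
    print(f"{model}: ${price * PROJECTION_TOKENS_M:,.2f}/mo")
# GPT-5.5: $175.00/mo · GPT-5: $56.30/mo
```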