Gemini 3.1 Pro Preview vs GPT-5.5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-5.5 wins 2 of 3 benchmarks
GPT-5.5 wins 2 of the 3 shared benchmarks, leading in knowledge and coding.
Category leads
Reasoning: Gemini 3.1 Pro Preview · Knowledge: GPT-5.5 · Coding: GPT-5.5
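The tally and category leads above can be reproduced from the head-to-head scores below. A minimal sketch, assuming a benchmark-to-category mapping (ARC-AGI → reasoning, GPQA Diamond → knowledge, Terminal-Bench → coding) that the page implies but does not state:

```python
# Hedged sketch: reproduce the "2 of 3" win tally and category leads
# from the head-to-head scores. The category mapping is an assumption.
scores = {
    "ARC-AGI":        {"Gemini 3.1 Pro Preview": 98.0, "GPT-5.5": 95.0},
    "GPQA Diamond":   {"Gemini 3.1 Pro Preview": 92.1, "GPT-5.5": 93.6},
    "Terminal-Bench": {"Gemini 3.1 Pro Preview": 78.4, "GPT-5.5": 82.7},
}
category = {"ARC-AGI": "reasoning", "GPQA Diamond": "knowledge",
            "Terminal-Bench": "coding"}  # assumed mapping

wins = {}
for bench, by_model in scores.items():
    leader = max(by_model, key=by_model.get)          # higher score wins
    margin = max(by_model.values()) - min(by_model.values())
    wins[leader] = wins.get(leader, 0) + 1
    print(f"{category[bench]}: {leader} leads by +{margin:.1f}")

print(wins)  # {'Gemini 3.1 Pro Preview': 1, 'GPT-5.5': 2}
```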
Hype vs Reality
Attention vs performance
Gemini 3.1 Pro Preview
#38 by performance · no attention signal
GPT-5.5
#2 by performance · no attention signal
Best value
Gemini 3.1 Pro Preview · 1.8x better value than GPT-5.5
Gemini 3.1 Pro Preview: 8.7 pts/$ at $7.00/M
GPT-5.5: 4.9 pts/$ at $17.50/M
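The $/M figures above match an even input/output blend of the per-token prices in the pricing table below, and the 1.8x claim is the ratio of the two pts/$ scores. A sketch under those assumptions (the page does not state how pts/$ itself is computed):

```python
# Hedged sketch: the $/M figures are consistent with a 50/50 input/output
# blend of the per-1M-token prices, and "1.8x better value" is the ratio
# of the two listed pts/$ scores. Both are assumptions, not documented formulas.
prices = {  # per 1M tokens: (input, output)
    "Gemini 3.1 Pro Preview": (2.00, 12.00),
    "GPT-5.5":                (5.00, 30.00),
}
for model, (inp, out) in prices.items():
    blended = (inp + out) / 2             # assumed 50/50 blend
    print(f"{model}: ${blended:.2f}/M")   # $7.00/M and $17.50/M

pts_per_dollar = {"Gemini 3.1 Pro Preview": 8.7, "GPT-5.5": 4.9}
ratio = pts_per_dollar["Gemini 3.1 Pro Preview"] / pts_per_dollar["GPT-5.5"]
print(f"value ratio: {ratio:.1f}x")       # 1.8x
```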
Vendor risk
Who is behind the model
Google DeepMind · $4.00T · Tier 1
OpenAI · $840.0B · Tier 1
Head to head
3 benchmarks · 2 models
Gemini 3.1 Pro Preview · GPT-5.5
ARC-AGI
Gemini 3.1 Pro Preview leads by +3.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 3.1 Pro Preview: 98.0
GPT-5.5: 95.0
GPQA Diamond
GPT-5.5 leads by +1.5
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 3.1 Pro Preview: 92.1
GPT-5.5: 93.6
Terminal Bench
GPT-5.5 leads by +4.3
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Gemini 3.1 Pro Preview: 78.4
GPT-5.5: 82.7
Full benchmark table
| Benchmark | Gemini 3.1 Pro Preview | GPT-5.5 |
|---|---|---|
| ARC-AGI | 98.0 | 95.0 |
| GPQA Diamond | 92.1 | 93.6 |
| Terminal-Bench 2.0 | 78.4 | 82.7 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00 |
| GPT-5.5 | $5.00 | $30.00 | 400K tokens (~200 books) | $112.50 |
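The projected $/mo column is consistent with a 10M-token month split 75% input / 25% output; that split is an assumption that reproduces the listed figures, not a formula the page documents. A minimal sketch:

```python
# Hedged sketch: reproduce the "Projected $/mo" column assuming a
# 10M-token month split 75% input / 25% output. The split is an assumption
# that matches the listed figures, not something the page states.
MONTHLY_TOKENS_M = 10.0   # 10M tokens per month, expressed in millions
INPUT_SHARE = 0.75        # assumed input/output mix

prices = {  # per 1M tokens: (input, output)
    "Gemini 3.1 Pro Preview": (2.00, 12.00),
    "GPT-5.5":                (5.00, 30.00),
}
for model, (inp, out) in prices.items():
    monthly = (MONTHLY_TOKENS_M * INPUT_SHARE * inp
               + MONTHLY_TOKENS_M * (1 - INPUT_SHARE) * out)
    print(f"{model}: ${monthly:.2f}/mo")  # $45.00 and $112.50
```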