o3 vs Gemini 3.1 Pro Preview
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Gemini 3.1 Pro Preview wins all 14 shared benchmarks, with its largest leads in the speed, reasoning, and math categories.
Category leads
Gemini 3.1 Pro Preview leads in all five categories: speed, reasoning, math, knowledge, and coding.
Hype vs Reality
Attention vs performance
o3 · #67 by performance · no attention signal
Gemini 3.1 Pro Preview · #36 by performance · no attention signal
Best value
o3 · 1.3x better value than Gemini 3.1 Pro Preview

| Model | Value | Blended price |
|---|---|---|
| o3 | 11.0 pts/$ | $5.00/M |
| Gemini 3.1 Pro Preview | 8.7 pts/$ | $7.00/M |
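These figures can be reproduced from the numbers on this page: $5.00/M and $7.00/M are the simple averages of the input and output rates in the pricing table below, and 11.0 / 8.7 ≈ 1.26 rounds to the "1.3x" headline. A minimal sketch of that arithmetic (the aggregate score feeding pts/$ is not disclosed here, so the pts/$ values are taken as given):

```python
# Blended $/M price: simple average of the input and output rates,
# which reproduces the $5.00/M and $7.00/M figures shown above.
def blended_price(input_per_m: float, output_per_m: float) -> float:
    return (input_per_m + output_per_m) / 2

o3_price = blended_price(2.00, 8.00)       # $5.00/M
gemini_price = blended_price(2.00, 12.00)  # $7.00/M

# Value headline: ratio of the two pts/$ figures, 11.0 / 8.7 ~= 1.26,
# which the page rounds to "1.3x better value".
o3_value, gemini_value = 11.0, 8.7
print(f"o3: ${o3_price:.2f}/M at {o3_value} pts/$")
print(f"Gemini 3.1 Pro Preview: ${gemini_price:.2f}/M at {gemini_value} pts/$")
print(f"value ratio: {o3_value / gemini_value:.2f}x")
```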
Vendor risk
Who is behind the model
OpenAI · $840.0B · Tier 1
Google DeepMind · $4.00T · Tier 1
Head to head
14 benchmarks · 2 models
Artificial Analysis · Agentic Index
Gemini 3.1 Pro Preview leads by +23.0
o3 36.1 · Gemini 3.1 Pro Preview 59.1
Artificial Analysis · Coding Index
Gemini 3.1 Pro Preview leads by +17.1
o3 38.4 · Gemini 3.1 Pro Preview 55.5
Artificial Analysis · Quality Index
Gemini 3.1 Pro Preview leads by +18.8
o3 38.4 · Gemini 3.1 Pro Preview 57.2
ARC-AGI
Gemini 3.1 Pro Preview leads by +37.2
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
o3 60.8 · Gemini 3.1 Pro Preview 98.0
ARC-AGI-2
Gemini 3.1 Pro Preview leads by +70.6
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
o3 6.5 · Gemini 3.1 Pro Preview 77.1
FrontierMath-2025-02-28-Private
Gemini 3.1 Pro Preview leads by +18.2
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
o3 18.7 · Gemini 3.1 Pro Preview 36.9
FrontierMath-Tier-4-2025-07-01-Private
Gemini 3.1 Pro Preview leads by +14.6
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
o3 2.1 · Gemini 3.1 Pro Preview 16.7
GPQA Diamond
Gemini 3.1 Pro Preview leads by +16.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
o3 75.8 · Gemini 3.1 Pro Preview 92.1
OTIS Mock AIME 2024-2025
Gemini 3.1 Pro Preview leads by +11.7
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
o3 83.9 · Gemini 3.1 Pro Preview 95.6
EnigmaEval
Gemini 3.1 Pro Preview leads by +6.7
o3 13.1 · Gemini 3.1 Pro Preview 19.8
SimpleBench
Gemini 3.1 Pro Preview leads by +31.8
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
o3 43.7 · Gemini 3.1 Pro Preview 75.5
SimpleQA Verified
Gemini 3.1 Pro Preview leads by +24.3
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
o3 53.0 · Gemini 3.1 Pro Preview 77.3
SWE-bench Verified
Gemini 3.1 Pro Preview leads by +13.3
o3 62.3 · Gemini 3.1 Pro Preview 75.6
WeirdML
Gemini 3.1 Pro Preview leads by +19.7
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
o3 52.4 · Gemini 3.1 Pro Preview 72.1
Full benchmark table
| Benchmark | o3 | Gemini 3.1 Pro Preview |
|---|---|---|
| Artificial Analysis · Agentic Index | 36.1 | 59.1 |
| Artificial Analysis · Coding Index | 38.4 | 55.5 |
| Artificial Analysis · Quality Index | 38.4 | 57.2 |
| ARC-AGI | 60.8 | 98.0 |
| ARC-AGI-2 | 6.5 | 77.1 |
| FrontierMath-2025-02-28-Private | 18.7 | 36.9 |
| FrontierMath-Tier-4-2025-07-01-Private | 2.1 | 16.7 |
| GPQA Diamond | 75.8 | 92.1 |
| OTIS Mock AIME 2024-2025 | 83.9 | 95.6 |
| EnigmaEval | 13.1 | 19.8 |
| SimpleBench | 43.7 | 75.5 |
| SimpleQA Verified | 53.0 | 77.3 |
| SWE-bench Verified | 62.3 | 75.6 |
| WeirdML | 52.4 | 72.1 |
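The per-benchmark "leads by" margins and the 14-of-14 tally follow directly from this table; a short sketch that recomputes them (scores transcribed from the rows above):

```python
# (benchmark, o3 score, Gemini 3.1 Pro Preview score), from the table above
SCORES = [
    ("AA Agentic Index", 36.1, 59.1),
    ("AA Coding Index", 38.4, 55.5),
    ("AA Quality Index", 38.4, 57.2),
    ("ARC-AGI", 60.8, 98.0),
    ("ARC-AGI-2", 6.5, 77.1),
    ("FrontierMath (Feb 2025)", 18.7, 36.9),
    ("FrontierMath Tier 4", 2.1, 16.7),
    ("GPQA Diamond", 75.8, 92.1),
    ("OTIS Mock AIME 2024-2025", 83.9, 95.6),
    ("EnigmaEval", 13.1, 19.8),
    ("SimpleBench", 43.7, 75.5),
    ("SimpleQA Verified", 53.0, 77.3),
    ("SWE-bench Verified", 62.3, 75.6),
    ("WeirdML", 52.4, 72.1),
]

gemini_wins = 0
for name, o3, gemini in SCORES:
    lead = gemini - o3           # e.g. 59.1 - 36.1 = +23.0 for the Agentic Index
    gemini_wins += lead > 0
    print(f"{name}: Gemini 3.1 Pro Preview leads by {lead:+.1f}")

print(f"Gemini 3.1 Pro Preview wins {gemini_wins} of {len(SCORES)} benchmarks")
```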
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| o3 | $2.00 | $8.00 | 200K tokens (~100 books) | $35.00 |
| Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00 |
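The page does not state what token mix the monthly projection assumes, but both figures are consistent with a 75/25 input/output split at 10M tokens; a sketch under that (assumed) split:

```python
# Projected monthly cost at 10M tokens, assuming a 75/25 input/output
# split. The split is an inference, not stated on the page: it is the
# ratio that reproduces the $35.00 and $45.00 figures above.
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    input_m = total_m * input_share
    output_m = total_m - input_m
    return input_m * input_per_m + output_m * output_per_m

print(monthly_cost(2.00, 8.00))   # o3:     7.5*2 + 2.5*8  = 35.0
print(monthly_cost(2.00, 12.00))  # Gemini: 7.5*2 + 2.5*12 = 45.0
```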