Gemini 3.1 Pro Preview vs Claude Opus 4.6
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Gemini 3.1 Pro Preview wins 9 of 16 shared benchmarks and leads the agentic, reasoning, and knowledge categories (see the tally sketch below the category list).
Category leads
agentic · Gemini 3.1 Pro Preview
reasoning · Gemini 3.1 Pro Preview
arena · Claude Opus 4.6
knowledge · Gemini 3.1 Pro Preview
math · Claude Opus 4.6
coding · Claude Opus 4.6
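As a quick sanity check on the headline figure, the sketch below tallies per-benchmark wins straight from the scores reported further down this page. The category groupings above are not reproduced, since the page does not spell out which benchmarks feed each category.

```python
# Shared benchmark scores as reported on this page: (Gemini 3.1 Pro Preview, Claude Opus 4.6).
scores = {
    "APEX-Agents": (33.5, 31.7),
    "ARC-AGI": (98.0, 94.0),
    "ARC-AGI-2": (77.1, 69.2),
    "Chatbot Arena Elo · Coding": (1455.7, 1542.9),
    "Chatbot Arena Elo · Overall": (1492.6, 1496.6),
    "Chess Puzzles": (55.0, 17.0),
    "FrontierMath-2025-02-28-Private": (36.9, 40.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (16.7, 22.9),
    "GPQA diamond": (92.1, 87.4),
    "OTIS Mock AIME 2024-2025": (95.6, 94.4),
    "PostTrainBench": (21.6, 23.2),
    "SimpleBench": (75.5, 61.1),
    "SimpleQA Verified": (77.3, 46.5),
    "SWE-Bench Verified": (75.6, 78.7),
    "Terminal Bench": (78.4, 74.7),
    "WeirdML": (72.1, 77.9),
}

# Higher is better on every benchmark listed here, so a straight comparison suffices.
gemini_wins = sum(1 for g, c in scores.values() if g > c)
claude_wins = sum(1 for g, c in scores.values() if c > g)
print(f"Gemini 3.1 Pro Preview wins {gemini_wins} of {len(scores)}")  # 9 of 16
print(f"Claude Opus 4.6 wins {claude_wins} of {len(scores)}")          # 7 of 16
```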
Hype vs Reality
Attention vs performance
Gemini 3.1 Pro Preview
#36 by performance · no attention signal
Claude Opus 4.6
#54 by performance · #4 by attention
Best value
Gemini 3.1 Pro Preview
2.3x better value than Claude Opus 4.6
Gemini 3.1 Pro Preview
8.7 pts/$
$7.00/M
Claude Opus 4.6
3.8 pts/$
$15.00/M
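The pts/$ figures above presumably divide an aggregate benchmark score by the blended per-million-token price; the page does not state the exact aggregation, so the sketch below only reproduces the "2.3x better value" headline from the published numbers.

```python
# Published value scores (pts/$) and blended prices ($ per 1M tokens) from this page.
gemini_value, claude_value = 8.7, 3.8          # pts/$
gemini_blended, claude_blended = 7.00, 15.00   # $/M, midpoint of input and output price

# The "2.3x better value" headline is simply the ratio of the two published value scores.
ratio = gemini_value / claude_value
print(f"{ratio:.1f}x better value")  # -> 2.3x better value
```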
Vendor risk
Who is behind the model
Google DeepMind
$4.00T·Tier 1
Anthropic
$380.0B·Tier 1
Head to head
16 benchmarks · 2 models
Gemini 3.1 Pro Preview · Claude Opus 4.6
APEX-Agents
Gemini 3.1 Pro Preview leads by +1.8
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Gemini 3.1 Pro Preview
33.5
Claude Opus 4.6
31.7
ARC-AGI
Gemini 3.1 Pro Preview leads by +4.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 3.1 Pro Preview
98.0
Claude Opus 4.6
94.0
ARC-AGI-2
Gemini 3.1 Pro Preview leads by +7.9
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 3.1 Pro Preview
77.1
Claude Opus 4.6
69.2
Chatbot Arena Elo · Coding
Claude Opus 4.6 leads by +87.2
Gemini 3.1 Pro Preview
1455.7
Claude Opus 4.6
1542.9
Chatbot Arena Elo · Overall
Claude Opus 4.6 leads by +4.0
Gemini 3.1 Pro Preview
1492.6
Claude Opus 4.6
1496.6
Chess Puzzles
Gemini 3.1 Pro Preview leads by +38.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Gemini 3.1 Pro Preview
55.0
Claude Opus 4.6
17.0
FrontierMath-2025-02-28-Private
Claude Opus 4.6 leads by +3.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 3.1 Pro Preview
36.9
Claude Opus 4.6
40.7
FrontierMath-Tier-4-2025-07-01-Private
Claude Opus 4.6 leads by +6.2
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 3.1 Pro Preview
16.7
Claude Opus 4.6
22.9
GPQA diamond
Gemini 3.1 Pro Preview leads by +4.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 3.1 Pro Preview
92.1
Claude Opus 4.6
87.4
OTIS Mock AIME 2024-2025
Gemini 3.1 Pro Preview leads by +1.2
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 3.1 Pro Preview
95.6
Claude Opus 4.6
94.4
PostTrainBench
Claude Opus 4.6 leads by +1.6
Gemini 3.1 Pro Preview
21.6
Claude Opus 4.6
23.2
SimpleBench
Gemini 3.1 Pro Preview leads by +14.4
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Gemini 3.1 Pro Preview
75.5
Claude Opus 4.6
61.1
SimpleQA Verified
Gemini 3.1 Pro Preview leads by +30.8
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Gemini 3.1 Pro Preview
77.3
Claude Opus 4.6
46.5
SWE-Bench Verified
Claude Opus 4.6 leads by +3.1
Gemini 3.1 Pro Preview
75.6
Claude Opus 4.6
78.7
Terminal Bench
Gemini 3.1 Pro Preview leads by +3.7
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Gemini 3.1 Pro Preview
78.4
Claude Opus 4.6
74.7
WeirdML
Claude Opus 4.6 leads by +5.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 3.1 Pro Preview
72.1
Claude Opus 4.6
77.9
Full benchmark table
| Benchmark | Gemini 3.1 Pro Preview | Claude Opus 4.6 |
|---|---|---|
| APEX-Agents | 33.5 | 31.7 |
| ARC-AGI | 98.0 | 94.0 |
| ARC-AGI-2 | 77.1 | 69.2 |
| Chatbot Arena Elo · Coding | 1455.7 | 1542.9 |
| Chatbot Arena Elo · Overall | 1492.6 | 1496.6 |
| Chess Puzzles | 55.0 | 17.0 |
| FrontierMath-2025-02-28-Private | 36.9 | 40.7 |
| FrontierMath-Tier-4-2025-07-01-Private | 16.7 | 22.9 |
| GPQA diamond | 92.1 | 87.4 |
| OTIS Mock AIME 2024–2025 | 95.6 | 94.4 |
| PostTrainBench | 21.6 | 23.2 |
| SimpleBench | 75.5 | 61.1 |
| SimpleQA Verified | 77.3 | 46.5 |
| SWE-Bench Verified | 75.6 | 78.7 |
| Terminal Bench | 78.4 | 74.7 |
| WeirdML | 72.1 | 77.9 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00 |
| Claude Opus 4.6 | $5.00 | $25.00 | 1.0M tokens (~500 books) | $100.00 |
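The projected monthly figures are consistent with a 10M-token month split roughly 75% input / 25% output; the page does not publish the mix it uses, so treat the split in the sketch below as an assumption.

```python
# Hypothetical usage mix: 10M tokens per month, assumed 75% input / 25% output.
MONTHLY_TOKENS_M = 10.0
INPUT_SHARE = 0.75  # assumption; not stated on this page

def projected_monthly_cost(input_per_m: float, output_per_m: float) -> float:
    """Blend per-1M-token prices over the assumed input/output mix."""
    input_m = MONTHLY_TOKENS_M * INPUT_SHARE
    output_m = MONTHLY_TOKENS_M * (1 - INPUT_SHARE)
    return input_m * input_per_m + output_m * output_per_m

print(projected_monthly_cost(2.00, 12.00))  # Gemini 3.1 Pro Preview -> 45.0
print(projected_monthly_cost(5.00, 25.00))  # Claude Opus 4.6        -> 100.0
```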