
Claude Opus 4.6 (Fast) vs Claude Opus 4.6 vs Gemini 3.1 Pro Preview

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 3.1 Pro Preview wins 12 of 20 shared benchmarks. Leads in speed · agentic · reasoning.
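The tally can be reproduced from the head-to-head scores further down: for every benchmark where both a Claude score and a Gemini score appear, the higher score wins. A minimal sketch in Python, using a subset of the rows from this page and comparing Gemini against whichever Claude score is listed for that row (the full 20 shared rows give Gemini 12 wins):

```python
# Reproduce the "wins X of N shared benchmarks" tally from the scores on this
# page (higher is better for every benchmark listed here). Only a subset of
# rows is shown; running it over all 20 shared rows gives Gemini 12 wins.

gemini = {"ARC-AGI-2": 77.1, "GPQA diamond": 92.1, "SWE-bench Verified": 75.6,
          "FrontierMath (Feb 2025)": 36.9, "Terminal Bench": 78.4}
claude = {"ARC-AGI-2": 69.2, "GPQA diamond": 87.4, "SWE-bench Verified": 78.7,
          "FrontierMath (Feb 2025)": 40.7, "Terminal Bench": 74.7}

shared = gemini.keys() & claude.keys()
wins = sum(gemini[name] > claude[name] for name in shared)
print(f"Gemini wins {wins} of {len(shared)} shared benchmarks")  # 3 of 5 in this subset
```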

Category leads
arena · Claude Opus 4.6 (Fast)
speed · Gemini 3.1 Pro Preview
agentic · Gemini 3.1 Pro Preview
reasoning · Gemini 3.1 Pro Preview
knowledge · Gemini 3.1 Pro Preview
math · Claude Opus 4.6
coding · Claude Opus 4.6
Hype vs Reality
Claude Opus 4.6 (Fast) · #122 by perf · no attention signal · QUIET
Claude Opus 4.6 · #56 by perf · #4 by attention · DESERVED
Gemini 3.1 Pro Preview · #38 by perf · no attention signal · QUIET
Best value
Gemini 3.1 Pro Preview · 2.3x better value than Claude Opus 4.6
Claude Opus 4.6 (Fast) · 0.5 pts/$ · $90.00/M blended
Claude Opus 4.6 · 3.8 pts/$ · $15.00/M blended
Gemini 3.1 Pro Preview · 8.7 pts/$ · $7.00/M blended
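The blended $/M figures are consistent with a simple mean of each model's input and output prices from the pricing table at the bottom of this page. The page does not state which composite score feeds the pts numerator, so the sketch below uses the AA Quality Index only as an example score; it lands close to, but not exactly on, the 8.7 shown.

```python
# Sketch of the "Best value" math. The blended $/M shown above matches a simple
# mean of input and output price: (30+150)/2 = 90, (5+25)/2 = 15, (2+12)/2 = 7.
# The score feeding "pts" is not stated on this page, so the Quality Index is
# used here purely as a placeholder.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    return (input_per_m + output_per_m) / 2

def points_per_dollar(score: float, input_per_m: float, output_per_m: float) -> float:
    return score / blended_price(input_per_m, output_per_m)

print(blended_price(2.00, 12.00))            # 7.0  -> matches Gemini's $7.00/M blended
print(points_per_dollar(57.2, 2.00, 12.00))  # ~8.2 with the AA Quality Index as the score,
                                             # close to but not exactly the 8.7 shown
```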
Vendor risk
Claude Opus 4.6 (Fast) · Anthropic · $380.0B · Tier 1 · Medium risk
Claude Opus 4.6 · Anthropic · $380.0B · Tier 1 · Medium risk
Gemini 3.1 Pro Preview · Google DeepMind · $4.00T · Tier 1 · Low risk
Head to head
Claude Opus 4.6 (Fast) · Claude Opus 4.6 · Gemini 3.1 Pro Preview
Chatbot Arena Elo · Coding
Claude Opus 4.6 (Fast) leads by +3.3
Claude Opus 4.6 (Fast)
1546.2
Claude Opus 4.6
1542.9
Gemini 3.1 Pro Preview
1455.7
Chatbot Arena Elo · Overall
Claude Opus 4.6 (Fast) leads by +6.2
Claude Opus 4.6 (Fast)
1502.8
Claude Opus 4.6
1496.6
Gemini 3.1 Pro Preview
1492.6
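Arena ratings are Elo-style, so a rating gap maps to an expected head-to-head win rate via the standard Elo expected-score formula; the small overall gaps above are close to a coin flip, while the coding gap over Gemini is more meaningful. A quick check:

```python
# Standard Elo expected-score formula: probability that model A is preferred
# over model B given their rating gap. Useful context for the Arena gaps above.

def elo_win_prob(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

print(elo_win_prob(1502.8, 1496.6))  # ~0.509 -> the +6.2 overall lead is nearly a coin flip
print(elo_win_prob(1546.2, 1455.7))  # ~0.627 -> the +90.5 coding gap over Gemini matters more
```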
Artificial Analysis · Agentic Index
Claude Opus 4.6 (Fast) leads by +8.5
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Claude Opus 4.6 (Fast)
67.6
Gemini 3.1 Pro Preview
59.1
Artificial Analysis · Coding Index
Gemini 3.1 Pro Preview leads by +7.4
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Claude Opus 4.6 (Fast)
48.1
Gemini 3.1 Pro Preview
55.5
Artificial Analysis · Quality Index
Gemini 3.1 Pro Preview leads by +4.2
Claude Opus 4.6 (Fast)
53.0
Gemini 3.1 Pro Preview
57.2
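The three Artificial Analysis indices are composites; their exact weighting is not given here. The sketch below shows only the general shape of such an index (benchmark scores folded into a weighted average) with hypothetical weights, not Artificial Analysis's published methodology.

```python
# General shape of a composite index: fold several benchmark scores into one
# weighted average. This is NOT Artificial Analysis's published formula; the
# weights below are hypothetical, chosen only to illustrate the idea.

def composite_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Gemini's SWE-bench Verified and Terminal Bench scores from this page,
# combined with made-up weights:
gemini_coding = {"swe_bench_verified": 75.6, "terminal_bench": 78.4}
hypothetical_weights = {"swe_bench_verified": 0.6, "terminal_bench": 0.4}
print(round(composite_index(gemini_coding, hypothetical_weights), 1))  # 76.7
```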
APEX-Agents
Gemini 3.1 Pro Preview leads by +1.8
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Claude Opus 4.6
31.7
Gemini 3.1 Pro Preview
33.5
ARC-AGI
Gemini 3.1 Pro Preview leads by +4.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Opus 4.6
94.0
Gemini 3.1 Pro Preview
98.0
ARC-AGI-2
Gemini 3.1 Pro Preview leads by +7.9
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Opus 4.6
69.2
Gemini 3.1 Pro Preview
77.1
Chess Puzzles
Gemini 3.1 Pro Preview leads by +38.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Claude Opus 4.6
17.0
Gemini 3.1 Pro Preview
55.0
FrontierMath-2025-02-28-Private
Claude Opus 4.6 leads by +3.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4.6
40.7
Gemini 3.1 Pro Preview
36.9
FrontierMath-Tier-4-2025-07-01-Private
Claude Opus 4.6 leads by +6.2
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Opus 4.6
22.9
Gemini 3.1 Pro Preview
16.7
GPQA diamond
Gemini 3.1 Pro Preview leads by +4.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4.6
87.4
Gemini 3.1 Pro Preview
92.1
OTIS Mock AIME 2024-2025
Gemini 3.1 Pro Preview leads by +1.2
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4.6
94.4
Gemini 3.1 Pro Preview
95.6
PostTrainBench
Claude Opus 4.6 leads by +1.6
Claude Opus 4.6
23.2
Gemini 3.1 Pro Preview
21.6
VisualToolBench (VTB)
Gemini 3.1 Pro Preview leads by +1.5
Claude Opus 4.6 (Fast)
27.5
Gemini 3.1 Pro Preview
29.0
SimpleBench
Gemini 3.1 Pro Preview leads by +14.4
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Opus 4.6
61.1
Gemini 3.1 Pro Preview
75.5
SimpleQA Verified
Gemini 3.1 Pro Preview leads by +30.8
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Opus 4.6
46.5
Gemini 3.1 Pro Preview
77.3
SWE-Bench verified
Claude Opus 4.6 leads by +3.1
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench and remains the most-cited evaluation for code-generation capability.
Claude Opus 4.6
78.7
Gemini 3.1 Pro Preview
75.6
Terminal Bench
Gemini 3.1 Pro Preview leads by +3.7
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency.
Claude Opus 4.6
74.7
Gemini 3.1 Pro Preview
78.4
WeirdML
Claude Opus 4.6 leads by +5.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4.6
77.9
Gemini 3.1 Pro Preview
72.1
Full benchmark table
(— means no score reported for that model; benchmark descriptions appear in the head-to-head section above.)

Benchmark | Claude Opus 4.6 (Fast) | Claude Opus 4.6 | Gemini 3.1 Pro Preview
Chatbot Arena Elo · Coding | 1546.2 | 1542.9 | 1455.7
Chatbot Arena Elo · Overall | 1502.8 | 1496.6 | 1492.6
Artificial Analysis · Agentic Index | 67.6 | — | 59.1
Artificial Analysis · Coding Index | 48.1 | — | 55.5
Artificial Analysis · Quality Index | 53.0 | — | 57.2
APEX-Agents | — | 31.7 | 33.5
ARC-AGI | — | 94.0 | 98.0
ARC-AGI-2 | — | 69.2 | 77.1
Chess Puzzles | — | 17.0 | 55.0
FrontierMath-2025-02-28-Private | — | 40.7 | 36.9
FrontierMath-Tier-4-2025-07-01-Private | — | 22.9 | 16.7
GPQA diamond | — | 87.4 | 92.1
OTIS Mock AIME 2024-2025 | — | 94.4 | 95.6
PostTrainBench | — | 23.2 | 21.6
VisualToolBench (VTB) | 27.5 | — | 29.0
SimpleBench | — | 61.1 | 75.5
SimpleQA Verified | — | 46.5 | 77.3
SWE-bench Verified | — | 78.7 | 75.6
Terminal Bench | — | 74.7 | 78.4
WeirdML | — | 77.9 | 72.1
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Claude Opus 4.6 (Fast) | $30.00 | $150.00 | 1.0M tokens (~500 books) | $600.00
Claude Opus 4.6 | $5.00 | $25.00 | 1.0M tokens (~500 books) | $100.00
Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00
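The projected monthly figures are consistent with a 75% input / 25% output token split at 10M total tokens per month; that split is inferred from the numbers, not stated on the page. A quick check:

```python
# Projected monthly cost at 10M tokens, assuming a 75% input / 25% output split
# (an inference that reproduces the column above, not a stated assumption of the page).

def projected_monthly(input_per_m: float, output_per_m: float,
                      total_m: float = 10.0, input_share: float = 0.75) -> float:
    return input_per_m * total_m * input_share + output_per_m * total_m * (1 - input_share)

print(projected_monthly(30.00, 150.00))  # 600.0 -> Claude Opus 4.6 (Fast)
print(projected_monthly(5.00, 25.00))    # 100.0 -> Claude Opus 4.6
print(projected_monthly(2.00, 12.00))    # 45.0  -> Gemini 3.1 Pro Preview
```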