ModelsLive · Compare

Claude Opus 4.5 vs Gemini 3.1 Pro Preview

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 3.1 Pro Preview wins 14 of 16 shared benchmarks, leading in the agentic, reasoning, knowledge, math, and coding categories; Claude Opus 4.5 leads in arena.

Category leads
agentic · Gemini 3.1 Pro Preview
reasoning · Gemini 3.1 Pro Preview
arena · Claude Opus 4.5
knowledge · Gemini 3.1 Pro Preview
math · Gemini 3.1 Pro Preview
coding · Gemini 3.1 Pro Preview
Hype vs Reality
Claude Opus 4.5 · #111 by perf · no signal · QUIET
Gemini 3.1 Pro Preview · #36 by perf · no signal · QUIET
Best value
Gemini 3.1 Pro Preview · 2.9x better value than Claude Opus 4.5
Claude Opus 4.5
3.0 pts/$
$15.00/M
Gemini 3.1 Pro Preview
8.7 pts/$
$7.00/M
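The value ratio follows directly from the listed pts/$ figures. A quick sketch, assuming (as the numbers suggest, though the page does not say so) that the $/M column is the simple average of the input and output prices listed further down:

```python
# Reproducing the "Best value" ratio from the listed numbers.
# The pts/$ figures are taken as given from the page; only the
# ratio and the blended-price assumption are checked here.

claude = {"pts_per_dollar": 3.0, "blended_price_per_m": 15.00}
gemini = {"pts_per_dollar": 8.7, "blended_price_per_m": 7.00}

# Assumption: blended $/M = average of input and output prices.
# (5.00 + 25.00) / 2 = 15.00 and (2.00 + 12.00) / 2 = 7.00 match the page.
assert (5.00 + 25.00) / 2 == claude["blended_price_per_m"]
assert (2.00 + 12.00) / 2 == gemini["blended_price_per_m"]

ratio = gemini["pts_per_dollar"] / claude["pts_per_dollar"]
print(f"{ratio:.1f}x")  # 2.9x
```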
Vendor risk
Anthropic · $380.0B · Tier 1 · Medium risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Head to head
Claude Opus 4.5 vs Gemini 3.1 Pro Preview
APEX-Agents
Gemini 3.1 Pro Preview leads by +15.1
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Claude Opus 4.5
18.4
Gemini 3.1 Pro Preview
33.5
ARC-AGI
Gemini 3.1 Pro Preview leads by +18.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Opus 4.5
80.0
Gemini 3.1 Pro Preview
98.0
ARC-AGI-2
Gemini 3.1 Pro Preview leads by +39.5
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Opus 4.5
37.6
Gemini 3.1 Pro Preview
77.1
Chatbot Arena Elo · Coding
Claude Opus 4.5 leads by +9.5
Claude Opus 4.5
1465.2
Gemini 3.1 Pro Preview
1455.7
Chatbot Arena Elo · Overall
Gemini 3.1 Pro Preview leads by +24.9
Claude Opus 4.5
1467.7
Gemini 3.1 Pro Preview
1492.6
Chess Puzzles
Gemini 3.1 Pro Preview leads by +43.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Claude Opus 4.5
12.0
Gemini 3.1 Pro Preview
55.0
FrontierMath-2025-02-28-Private
Gemini 3.1 Pro Preview leads by +16.2
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4.5
20.7
Gemini 3.1 Pro Preview
36.9
FrontierMath-Tier-4-2025-07-01-Private
Gemini 3.1 Pro Preview leads by +12.5
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Opus 4.5
4.2
Gemini 3.1 Pro Preview
16.7
GPQA diamond
Gemini 3.1 Pro Preview leads by +10.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4.5
81.4
Gemini 3.1 Pro Preview
92.1
OTIS Mock AIME 2024-2025
Gemini 3.1 Pro Preview leads by +9.5
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4.5
86.1
Gemini 3.1 Pro Preview
95.6
PostTrainBench
Gemini 3.1 Pro Preview leads by +4.3
Claude Opus 4.5
17.3
Gemini 3.1 Pro Preview
21.6
SimpleBench
Gemini 3.1 Pro Preview leads by +21.1
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Opus 4.5
54.4
Gemini 3.1 Pro Preview
75.5
SimpleQA Verified
Gemini 3.1 Pro Preview leads by +35.5
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Opus 4.5
41.8
Gemini 3.1 Pro Preview
77.3
SWE-Bench verified
Claude Opus 4.5 leads by +1.1
Claude Opus 4.5
76.7
Gemini 3.1 Pro Preview
75.6
Terminal Bench
Gemini 3.1 Pro Preview leads by +15.3
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Claude Opus 4.5
63.1
Gemini 3.1 Pro Preview
78.4
WeirdML
Gemini 3.1 Pro Preview leads by +8.4
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4.5
63.7
Gemini 3.1 Pro Preview
72.1
Full benchmark table
Benchmark | Claude Opus 4.5 | Gemini 3.1 Pro Preview
APEX-Agents | 18.4 | 33.5
ARC-AGI | 80.0 | 98.0
ARC-AGI-2 | 37.6 | 77.1
Chatbot Arena Elo · Coding | 1465.2 | 1455.7
Chatbot Arena Elo · Overall | 1467.7 | 1492.6
Chess Puzzles | 12.0 | 55.0
FrontierMath-2025-02-28-Private | 20.7 | 36.9
FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 16.7
GPQA diamond | 81.4 | 92.1
OTIS Mock AIME 2024-2025 | 86.1 | 95.6
PostTrainBench | 17.3 | 21.6
SimpleBench | 54.4 | 75.5
SimpleQA Verified | 41.8 | 77.3
SWE-Bench verified | 76.7 | 75.6
Terminal Bench | 63.1 | 78.4
WeirdML | 63.7 | 72.1
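The "14 of 16" winner summary can be checked directly against the benchmark scores above. A small tally, with each pair ordered (Claude Opus 4.5, Gemini 3.1 Pro Preview):

```python
# Tallying head-to-head wins from the benchmark table to verify
# the "wins 14 of 16 shared benchmarks" summary.
scores = {
    "APEX-Agents": (18.4, 33.5),
    "ARC-AGI": (80.0, 98.0),
    "ARC-AGI-2": (37.6, 77.1),
    "Chatbot Arena Elo · Coding": (1465.2, 1455.7),
    "Chatbot Arena Elo · Overall": (1467.7, 1492.6),
    "Chess Puzzles": (12.0, 55.0),
    "FrontierMath-2025-02-28-Private": (20.7, 36.9),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 16.7),
    "GPQA diamond": (81.4, 92.1),
    "OTIS Mock AIME 2024-2025": (86.1, 95.6),
    "PostTrainBench": (17.3, 21.6),
    "SimpleBench": (54.4, 75.5),
    "SimpleQA Verified": (41.8, 77.3),
    "SWE-Bench verified": (76.7, 75.6),
    "Terminal Bench": (63.1, 78.4),
    "WeirdML": (63.7, 72.1),
}
gemini_wins = sum(g > c for c, g in scores.values())
print(f"Gemini wins {gemini_wins} of {len(scores)}")  # Gemini wins 14 of 16
```

The two exceptions are Chatbot Arena Elo · Coding and SWE-Bench verified, where Claude Opus 4.5 scores higher.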
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Claude Opus 4.5 (Anthropic) | $5.00 | $25.00 | 200K tokens (~100 books) | $100.00
Gemini 3.1 Pro Preview (Google DeepMind) | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00
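The projected monthly figures are not simply the blended $/M times 10M tokens (that would give $150.00 and $70.00). They are consistent with a 3:1 input:output token split. A sketch under that assumption — the split is inferred from the listed figures, not stated on the page:

```python
# Reproducing "Projected $/mo at 10M tokens", assuming 7.5M input
# + 2.5M output tokens (a 3:1 split). The split is an inference
# that matches the listed figures, not something the page states.
def projected_monthly(input_per_m: float, output_per_m: float,
                      total_m: float = 10.0, input_share: float = 0.75) -> float:
    """Monthly cost in dollars for total_m million tokens."""
    return total_m * (input_share * input_per_m + (1 - input_share) * output_per_m)

print(projected_monthly(5.00, 25.00))  # 100.0  (Claude Opus 4.5)
print(projected_monthly(2.00, 12.00))  # 45.0   (Gemini 3.1 Pro Preview)
```

Usage-heavy workloads with longer outputs would shift both figures toward the higher output rates, so treat the projections as a like-for-like comparison rather than a bill estimate.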