
Gemini 3.1 Pro Preview vs Gemini 3.1 Pro Preview

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 3.1 Pro Preview takes all 23 shared benchmarks by default · both picks are the same model, so every result is an exact tie rather than a lead in speed, agentic, or reasoning.

Category leads
speed · Gemini 3.1 Pro Preview
agentic · Gemini 3.1 Pro Preview
reasoning · Gemini 3.1 Pro Preview
arena · Gemini 3.1 Pro Preview
knowledge · Gemini 3.1 Pro Preview
math · Gemini 3.1 Pro Preview
coding · Gemini 3.1 Pro Preview
Hype vs Reality
Gemini 3.1 Pro Preview · #38 by perf · no signal · QUIET
Gemini 3.1 Pro Preview · #38 by perf · no signal · QUIET
Best value
Gemini 3.1 Pro Preview · 8.7 pts/$ · $7.00/M
Gemini 3.1 Pro Preview · 8.7 pts/$ · $7.00/M
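
The $7.00/M figure matches a simple 50/50 blend of the input and output prices listed in the pricing table at the bottom of the page ($2.00 and $12.00 per 1M tokens); pts/$ then divides some composite score by that blended price. A minimal sketch under those assumptions · the site does not publish its exact formula:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_share: float = 0.5) -> float:
    """Blend per-1M-token prices; a 50/50 split reproduces the $7.00/M shown."""
    return input_share * input_per_m + (1 - input_share) * output_per_m

def points_per_dollar(composite_score: float, blended: float) -> float:
    """Value metric: benchmark points per dollar of blended 1M-token price."""
    return composite_score / blended

print(blended_price(2.00, 12.00))  # 7.0
# 8.7 pts/$ at $7.00/M implies a composite of about 60.9 points, which matches
# none of the single indexes on this page; the numerator is unpublished.
```
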
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Head to head
Gemini 3.1 Pro Preview vs Gemini 3.1 Pro Preview
Artificial Analysis · Agentic Index
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Gemini 3.1 Pro Preview: 59.1 · Gemini 3.1 Pro Preview: 59.1
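
The description reads as a standard composite: normalize each sub-benchmark to a common scale, then take a weighted average. A minimal sketch of that shape · sub-benchmark names, scores, and weights here are placeholders, not Artificial Analysis's published methodology:

```python
# Illustrative only: names, scores, and weights are placeholders, not the
# official Artificial Analysis weighting.
SUB_BENCHMARKS = {
    # name: (score on a 0-100 scale, weight)
    "swe_bench_verified": (75.6, 0.4),
    "tool_use_suite": (50.0, 0.3),
    "planning_evals": (45.0, 0.3),
}

def agentic_index(subs: dict[str, tuple[float, float]]) -> float:
    """Weighted mean of normalized (0-100) sub-benchmark scores."""
    total_weight = sum(w for _, w in subs.values())
    return sum(score * w for score, w in subs.values()) / total_weight

print(round(agentic_index(SUB_BENCHMARKS), 1))  # 58.7 with these placeholders
```
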
Artificial Analysis · Coding Index
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Gemini 3.1 Pro Preview: 55.5 · Gemini 3.1 Pro Preview: 55.5
Artificial Analysis · Quality Index
Gemini 3.1 Pro Preview: 57.2 · Gemini 3.1 Pro Preview: 57.2
APEX-Agents
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Gemini 3.1 Pro Preview: 33.5 · Gemini 3.1 Pro Preview: 33.5
ARC-AGI
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 3.1 Pro Preview: 98.0 · Gemini 3.1 Pro Preview: 98.0
ARC-AGI-2
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 3.1 Pro Preview: 77.1 · Gemini 3.1 Pro Preview: 77.1
Chatbot Arena Elo · Coding
Gemini 3.1 Pro Preview: 1455.7 · Gemini 3.1 Pro Preview: 1455.7
Chatbot Arena Elo · Overall
Gemini 3.1 Pro Preview: 1492.6 · Gemini 3.1 Pro Preview: 1492.6
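
Arena Elo ratings are only meaningful as differences: a rating gap maps to an expected head-to-head win rate via the standard Elo formula on a 400-point logistic scale, which is what arena-style leaderboards use. A quick sketch:

```python
def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the standard Elo model
    (400-point logistic scale)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Identical ratings, as in this matchup, give exactly 0.5:
print(expected_win_rate(1492.6, 1492.6))            # 0.5
# For scale: a ~37-point gap corresponds to roughly a 55% expected win rate.
print(round(expected_win_rate(1492.6, 1455.7), 3))  # 0.553
```
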
Chess Puzzles
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Gemini 3.1 Pro Preview: 55.0 · Gemini 3.1 Pro Preview: 55.0
FrontierMath-2025-02-28-Private
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 3.1 Pro Preview: 36.9 · Gemini 3.1 Pro Preview: 36.9
FrontierMath-Tier-4-2025-07-01-Private
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 3.1 Pro Preview: 16.7 · Gemini 3.1 Pro Preview: 16.7
GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 3.1 Pro Preview: 92.1 · Gemini 3.1 Pro Preview: 92.1
OTIS Mock AIME 2024-2025
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 3.1 Pro Preview: 95.6 · Gemini 3.1 Pro Preview: 95.6
PostTrainBench
Gemini 3.1 Pro Preview: 21.6 · Gemini 3.1 Pro Preview: 21.6
EnigmaEval
Gemini 3.1 Pro Preview: 19.8 · Gemini 3.1 Pro Preview: 19.8
MultiChallenge
Gemini 3.1 Pro Preview: 71.4 · Gemini 3.1 Pro Preview: 71.4
MultiNRC
Gemini 3.1 Pro Preview: 64.7 · Gemini 3.1 Pro Preview: 64.7
VisualToolBench (VTB)
Gemini 3.1 Pro Preview: 29.0 · Gemini 3.1 Pro Preview: 29.0
SimpleBench
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Gemini 3.1 Pro Preview: 75.5 · Gemini 3.1 Pro Preview: 75.5
SimpleQA Verified
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Gemini 3.1 Pro Preview: 77.3 · Gemini 3.1 Pro Preview: 77.3
SWE-Bench verified
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
Gemini 3.1 Pro Preview: 75.6 · Gemini 3.1 Pro Preview: 75.6
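
The description above amounts to a simple evaluation loop: check out the repository at the issue's base commit, apply the model's patch, and run the repo's test suite. A minimal sketch of that loop's shape · not the official harness, which also distinguishes fail-to-pass from pass-to-pass tests and isolates each task in a container:

```python
import subprocess

def evaluate_patch(repo_dir: str, base_commit: str, patch: str,
                   test_cmd: list[str]) -> bool:
    """Apply a model-generated git patch at the issue's base commit,
    then run the repository's tests. Illustrative sketch only."""
    subprocess.run(["git", "checkout", base_commit], cwd=repo_dir, check=True)
    applied = subprocess.run(["git", "apply", "-"], cwd=repo_dir,
                             input=patch, text=True)
    if applied.returncode != 0:
        return False  # patch does not apply cleanly -> task fails
    tests = subprocess.run(test_cmd, cwd=repo_dir)
    return tests.returncode == 0

# The reported score is the fraction of the 500 verified tasks resolved.
```
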
Terminal Bench
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Gemini 3.1 Pro Preview: 78.4 · Gemini 3.1 Pro Preview: 78.4
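
Terminal-Bench's setup reduces to an agent loop that proposes shell commands and reads their output back until the task is done. A bare-bones sketch of that loop · `propose_command` stands in for the model call and is not the benchmark's actual interface:

```python
import subprocess

def propose_command(history: list[tuple[str, str]]) -> str | None:
    """Placeholder for the model call: return the next shell command given
    the (command, output) history so far, or None when the task is done."""
    raise NotImplementedError

def run_agent(max_steps: int = 20) -> list[tuple[str, str]]:
    """Drive a propose -> execute -> observe loop, Terminal-Bench style."""
    history: list[tuple[str, str]] = []
    for _ in range(max_steps):
        cmd = propose_command(history)
        if cmd is None:
            break
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        # Feed stdout and stderr back as the model's next observation.
        history.append((cmd, result.stdout + result.stderr))
    return history
```
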
WeirdML
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 3.1 Pro Preview: 72.1 · Gemini 3.1 Pro Preview: 72.1
Full benchmark table
Benchmark | Gemini 3.1 Pro Preview | Gemini 3.1 Pro Preview
Artificial Analysis · Agentic Index | 59.1 | 59.1
Artificial Analysis · Coding Index | 55.5 | 55.5
Artificial Analysis · Quality Index | 57.2 | 57.2
APEX-Agents | 33.5 | 33.5
ARC-AGI | 98.0 | 98.0
ARC-AGI-2 | 77.1 | 77.1
Chatbot Arena Elo · Coding | 1455.7 | 1455.7
Chatbot Arena Elo · Overall | 1492.6 | 1492.6
Chess Puzzles | 55.0 | 55.0
FrontierMath-2025-02-28-Private | 36.9 | 36.9
FrontierMath-Tier-4-2025-07-01-Private | 16.7 | 16.7
GPQA diamond | 92.1 | 92.1
OTIS Mock AIME 2024-2025 | 95.6 | 95.6
PostTrainBench | 21.6 | 21.6
EnigmaEval | 19.8 | 19.8
MultiChallenge | 71.4 | 71.4
MultiNRC | 64.7 | 64.7
VisualToolBench (VTB) | 29.0 | 29.0
SimpleBench | 75.5 | 75.5
SimpleQA Verified | 77.3 | 77.3
SWE-Bench verified | 75.6 | 75.6
Terminal Bench | 78.4 | 78.4
WeirdML | 72.1 | 72.1
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00
Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00
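
The $45.00/mo projection is consistent with the 10M monthly tokens splitting 75% input / 25% output at the listed prices (7.5M · $2.00 + 2.5M · $12.00 = $15 + $30 = $45). The page does not state its assumed split, and note it differs from the 50/50 blend behind the $7.00/M best-value figure above. A quick check of that inference:

```python
def projected_monthly_cost(total_m_tokens: float, input_per_m: float,
                           output_per_m: float, input_share: float) -> float:
    """Monthly spend for a token volume (in millions) and input/output split."""
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1 - input_share)
    return input_m * input_per_m + output_m * output_per_m

# A 75/25 input/output split reproduces the table's projection at 10M tokens:
print(projected_monthly_cost(10, 2.00, 12.00, 0.75))  # 45.0
```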