
Gemini 3 Flash Preview vs Grok 4 vs Gemini 2.5 Pro

Side-by-side benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 3 Flash Preview wins 19 of 31 shared benchmarks, with leads in knowledge, math, and coding.

Category leads
Reasoning · Gemini 2.5 Pro
Knowledge · Gemini 3 Flash Preview
Math · Gemini 3 Flash Preview
Coding · Gemini 3 Flash Preview
Speed · Gemini 3 Flash Preview
Agentic · Gemini 3 Flash Preview
Arena · Gemini 3 Flash Preview
Language · Grok 4
Hype vs Reality
Gemini 3 Flash Preview · #98 by perf · no signal · QUIET
Grok 4 · #73 by perf · no signal · QUIET
Gemini 2.5 Pro · #61 by perf · no signal · QUIET
Best value
Gemini 3 Flash Preview offers 2.8x better value than Gemini 2.5 Pro.
Gemini 3 Flash Preview · 28.1 pts/$ · $1.75/M
Grok 4 · 6.1 pts/$ · $9.00/M
Gemini 2.5 Pro · 10.0 pts/$ · $5.63/M
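A minimal sketch of how the value figures above can be reproduced, assuming the blended $/M is the simple mean of the input and output prices (which matches the listed $1.75, $9.00, and $5.63) and treating "pts" as whatever composite quality score the page uses; neither assumption is confirmed here.

```python
# Sketch: reproduce the blended $/M figures and the pts/$ idea above.
# Assumptions (not stated on the page): blended price is the simple mean of
# input and output $/M; "pts" is some composite quality score, left abstract.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens, assuming a 50/50 input/output mix."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(quality_pts: float, input_per_m: float, output_per_m: float) -> float:
    """Value metric: composite score divided by blended $/M."""
    return quality_pts / blended_price(input_per_m, output_per_m)

prices = {
    "Gemini 3 Flash Preview": (0.50, 3.00),
    "Grok 4": (3.00, 15.00),
    "Gemini 2.5 Pro": (1.25, 10.00),
}

for name, (inp, out) in prices.items():
    print(name, blended_price(inp, out))
# 1.75, 9.0, 5.625 — matching the $1.75/M, $9.00/M, and $5.63/M shown above.
```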
Vendor risk
Google DeepMind (Gemini 3 Flash Preview) · $4.00T · Tier 1 · Low risk
xAI (Grok 4) · $250.0B · Tier 1 · Medium risk
Google DeepMind (Gemini 2.5 Pro) · $4.00T · Tier 1 · Low risk
Head to head
Gemini 3 Flash Preview · Grok 4 · Gemini 2.5 Pro
ARC-AGI
Grok 4 leads by +25.7
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 3 Flash Preview
21.5
Grok 4
66.7
Gemini 2.5 Pro
41.0
ARC-AGI-2
Gemini 3 Flash Preview leads by +17.6
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 3 Flash Preview
33.6
Grok 4
16.0
Gemini 2.5 Pro
4.9
Balrog
Gemini 3 Flash Preview leads by +4.5
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
Gemini 3 Flash Preview
48.1
Grok 4
43.6
Gemini 2.5 Pro
43.3
Chess Puzzles
Gemini 3 Flash Preview leads by +10.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Gemini 3 Flash Preview
38.0
Grok 4
28.0
Gemini 2.5 Pro
20.0
FrontierMath-2025-02-28-Private
Gemini 3 Flash Preview leads by +16.0
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 3 Flash Preview
35.6
Grok 4
19.7
Gemini 2.5 Pro
14.1
FrontierMath-Tier-4-2025-07-01-Private
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 3 Flash Preview
4.2
Grok 4
2.1
Gemini 2.5 Pro
4.2
GeoBench
Gemini 3 Flash Preview leads by +7.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Gemini 3 Flash Preview
88.0
Grok 4
45.0
Gemini 2.5 Pro
81.0
GPQA diamond
Grok 4 leads by +2.3
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 3 Flash Preview
77.6
Grok 4
82.7
Gemini 2.5 Pro
80.4
OTIS Mock AIME 2024-2025
Gemini 3 Flash Preview leads by +8.1
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 3 Flash Preview
92.8
Grok 4
84.0
Gemini 2.5 Pro
84.7
SimpleBench
Gemini 2.5 Pro leads by +1.6
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Gemini 3 Flash Preview
53.3
Grok 4
52.6
Gemini 2.5 Pro
54.9
SimpleQA Verified
Gemini 3 Flash Preview leads by +11.4
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Gemini 3 Flash Preview
67.4
Grok 4
47.9
Gemini 2.5 Pro
56.0
Terminal Bench
Gemini 3 Flash Preview leads by +31.7
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Gemini 3 Flash Preview
64.3
Grok 4
27.2
Gemini 2.5 Pro
32.6
WeirdML
Gemini 3 Flash Preview leads by +7.6
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 3 Flash Preview
61.6
Grok 4
45.7
Gemini 2.5 Pro
54.0
Artificial Analysis · Agentic Index
Gemini 3 Flash Preview leads by +17.0
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Gemini 3 Flash Preview
49.7
Gemini 2.5 Pro
32.7
Artificial Analysis · Coding Index
Gemini 3 Flash Preview leads by +10.7
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Gemini 3 Flash Preview
42.6
Gemini 2.5 Pro
31.9
Artificial Analysis · Quality Index
Gemini 3 Flash Preview leads by +11.8
Gemini 3 Flash Preview
46.4
Gemini 2.5 Pro
34.6
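The three Artificial Analysis indexes above are composites over many benchmarks. Their exact benchmark set and weighting are not given on this page, so the sketch below only illustrates the general idea (normalize each benchmark to a common scale, then average); the benchmark names and bounds in it are hypothetical.

```python
# Illustration only: one common way to build a composite index like the
# Artificial Analysis indexes above. The real AA methodology (benchmark set,
# normalization, weights) is not specified here; names and bounds are made up.

def normalize(score: float, low: float, high: float) -> float:
    """Rescale a raw benchmark score to 0-100 within [low, high]."""
    return 100 * (score - low) / (high - low)

def composite_index(scores: dict[str, float],
                    bounds: dict[str, tuple[float, float]]) -> float:
    """Unweighted mean of normalized benchmark scores."""
    normed = [normalize(s, *bounds[name]) for name, s in scores.items()]
    return sum(normed) / len(normed)

bounds = {"swe_bench": (0, 100), "terminal_bench": (0, 100), "tool_use": (0, 100)}
scores = {"swe_bench": 75.4, "terminal_bench": 64.3, "tool_use": 50.0}
print(round(composite_index(scores, bounds), 1))  # 63.2
```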
Aider polyglot
Gemini 2.5 Pro leads by +3.5
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Grok 4
79.6
Gemini 2.5 Pro
83.1
APEX-Agents
Gemini 3 Flash Preview leads by +8.8
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Gemini 3 Flash Preview
24.0
Grok 4
15.2
Chatbot Arena Elo · Coding
Gemini 3 Flash Preview leads by +234.5
Gemini 3 Flash Preview
1436.4
Gemini 2.5 Pro
1202.0
Chatbot Arena Elo · Overall
Gemini 3 Flash Preview leads by +25.7
Gemini 3 Flash Preview
1473.9
Gemini 2.5 Pro
1448.2
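The Chatbot Arena figures are Elo-style ratings, so a rating gap translates into an expected head-to-head preference rate. Assuming the standard Elo win-probability formula (Arena's actual fitting uses the closely related Bradley-Terry model), the gaps above work out roughly as follows.

```python
# Convert the Elo gaps above into expected head-to-head win rates, assuming
# the standard Elo formula; Chatbot Arena itself fits a Bradley-Terry model,
# which behaves the same way for pairwise gaps.

def expected_win_rate(elo_gap: float) -> float:
    """Probability the higher-rated model is preferred in one comparison."""
    return 1 / (1 + 10 ** (-elo_gap / 400))

print(round(expected_win_rate(234.5), 2))  # ~0.79 for the coding gap
print(round(expected_win_rate(25.7), 2))   # ~0.54 for the overall gap
```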
DeepResearch Bench
Gemini 2.5 Pro leads by +1.8
DeepResearch Bench · evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
Grok 4
47.9
Gemini 2.5 Pro
49.7
Fiction.LiveBench
Grok 4 leads by +2.7
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Grok 4
94.4
Gemini 2.5 Pro
91.7
GSO-Bench
Gemini 3 Flash Preview leads by +5.9
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Gemini 3 Flash Preview
9.8
Gemini 2.5 Pro
3.9
HELM · GPQA
Gemini 2.5 Pro leads by +2.3
Grok 4
72.6
Gemini 2.5 Pro
74.9
HELM · IFEval
Grok 4 leads by +10.9
Grok 4
94.9
Gemini 2.5 Pro
84.0
HELM · MMLU-Pro
Gemini 2.5 Pro leads by +1.2
Grok 4
85.1
Gemini 2.5 Pro
86.3
HELM · Omni-MATH
Grok 4 leads by +18.7
Grok 4
60.3
Gemini 2.5 Pro
41.6
HELM · WildBench
Gemini 2.5 Pro leads by +6.0
Grok 4
79.7
Gemini 2.5 Pro
85.7
Lech Mazur Writing
Gemini 2.5 Pro leads by +5.3
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Grok 4
80.7
Gemini 2.5 Pro
86.0
SWE-Bench verified
Gemini 3 Flash Preview leads by +17.8
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
Gemini 3 Flash Preview
75.4
Gemini 2.5 Pro
57.6
VPCT
Gemini 3 Flash Preview leads by +39.3
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Gemini 3 Flash Preview
58.9
Gemini 2.5 Pro
19.6
Full benchmark table
Benchmark: Gemini 3 Flash Preview · Grok 4 · Gemini 2.5 Pro (— = score not reported; benchmark descriptions appear in the head-to-head section above)
ARC-AGI: 21.5 · 66.7 · 41.0
ARC-AGI-2: 33.6 · 16.0 · 4.9
Balrog: 48.1 · 43.6 · 43.3
Chess Puzzles: 38.0 · 28.0 · 20.0
FrontierMath-2025-02-28-Private: 35.6 · 19.7 · 14.1
FrontierMath-Tier-4-2025-07-01-Private: 4.2 · 2.1 · 4.2
GeoBench: 88.0 · 45.0 · 81.0
GPQA diamond: 77.6 · 82.7 · 80.4
OTIS Mock AIME 2024-2025: 92.8 · 84.0 · 84.7
SimpleBench: 53.3 · 52.6 · 54.9
SimpleQA Verified: 67.4 · 47.9 · 56.0
Terminal Bench: 64.3 · 27.2 · 32.6
WeirdML: 61.6 · 45.7 · 54.0
Artificial Analysis · Agentic Index: 49.7 · — · 32.7
Artificial Analysis · Coding Index: 42.6 · — · 31.9
Artificial Analysis · Quality Index: 46.4 · — · 34.6
Aider polyglot: — · 79.6 · 83.1
APEX-Agents: 24.0 · 15.2 · —
Chatbot Arena Elo · Coding: 1436.4 · — · 1202.0
Chatbot Arena Elo · Overall: 1473.9 · — · 1448.2
DeepResearch Bench: — · 47.9 · 49.7
Fiction.LiveBench: — · 94.4 · 91.7
GSO-Bench: 9.8 · — · 3.9
HELM · GPQA: — · 72.6 · 74.9
HELM · IFEval: — · 94.9 · 84.0
HELM · MMLU-Pro: — · 85.1 · 86.3
HELM · Omni-MATH: — · 60.3 · 41.6
HELM · WildBench: — · 79.7 · 85.7
Lech Mazur Writing: — · 80.7 · 86.0
SWE-Bench verified: 75.4 · — · 57.6
VPCT: 58.9 · — · 19.6
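A small sketch of how a "wins N of M benchmarks" tally like the winner summary could be derived from the table above. The counting rule here (top reported score wins, ties count for every model at the top) is an assumption, and only a few rows are included for brevity.

```python
# Sketch: tally per-benchmark wins from the table above. Assumption: a model
# "wins" a benchmark when it posts the top reported score; ties count for
# every model at the top. Only a few rows are shown; fill in the rest from
# the table to reproduce the full tally.

MODELS = ("Gemini 3 Flash Preview", "Grok 4", "Gemini 2.5 Pro")

SCORES = {  # benchmark: scores in MODELS order; None = not reported
    "ARC-AGI": (21.5, 66.7, 41.0),
    "ARC-AGI-2": (33.6, 16.0, 4.9),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 2.1, 4.2),
    "SWE-Bench verified": (75.4, None, 57.6),
}

def win_counts(scores):
    wins = {m: 0 for m in MODELS}
    for row in scores.values():
        best = max(s for s in row if s is not None)
        for model, score in zip(MODELS, row):
            if score is not None and score == best:
                wins[model] += 1
    return wins

print(win_counts(SCORES))
# With all 31 rows filled in, a tally like this is consistent with the
# "wins 19 of 31 shared benchmarks" line in the winner summary.
```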
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Gemini 3 Flash Preview · $0.50 · $3.00 · 1.0M tokens (~524 books) · $11.25
Grok 4 · $3.00 · $15.00 · 256K tokens (~128 books) · $60.00
Gemini 2.5 Pro · $1.25 · $10.00 · 1.0M tokens (~524 books) · $34.38
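The Projected $/mo column matches 10M tokens per month split 75% input / 25% output; that split is inferred from the listed figures rather than stated, so treat it as an assumption in the sketch below.

```python
# Reproduce the "Projected $/mo" column, assuming 10M tokens per month split
# 75% input / 25% output. The split is inferred from the listed figures,
# not stated on the page.

def monthly_cost(input_per_m: float, output_per_m: float,
                 tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Projected monthly cost in dollars for tokens_m million tokens."""
    return tokens_m * (input_share * input_per_m + (1 - input_share) * output_per_m)

print(monthly_cost(0.50, 3.00))   # 11.25  -> $11.25 (Gemini 3 Flash Preview)
print(monthly_cost(3.00, 15.00))  # 60.0   -> $60.00 (Grok 4)
print(monthly_cost(1.25, 10.00))  # 34.375 -> $34.38 (Gemini 2.5 Pro)
```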