
Gemini 3 Flash Preview vs GPT-5

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 3 Flash Preview wins 14 of 17 shared benchmarks. Leads in agentic · reasoning · knowledge.

Category leads
agentic, reasoning, knowledge, math, coding · all Gemini 3 Flash Preview
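To make the winner summary reproducible, here is a minimal Python sketch that recomputes the win count from the score pairs listed in the full benchmark table further down this page. The scores are copied from that table; treating a strictly higher score as a win is an assumption, though no benchmark here is tied.

# Recompute the head-to-head win count from the scores on this page.
# Tuples are (benchmark, Gemini 3 Flash Preview score, GPT-5 score).
SCORES = [
    ("APEX-Agents", 24.0, 18.3),
    ("ARC-AGI", 21.5, 65.7),
    ("ARC-AGI-2", 33.6, 9.9),
    ("Balrog", 48.1, 32.8),
    ("Chess Puzzles", 38.0, 37.0),
    ("FrontierMath-2025-02-28-Private", 35.6, 32.4),
    ("FrontierMath-Tier-4-2025-07-01-Private", 4.2, 12.5),
    ("GeoBench", 88.0, 81.0),
    ("GPQA diamond", 77.6, 81.6),
    ("GSO-Bench", 9.8, 6.9),
    ("OTIS Mock AIME 2024-2025", 92.8, 91.4),
    ("SimpleBench", 53.3, 48.0),
    ("SimpleQA Verified", 67.4, 50.6),
    ("SWE-Bench verified", 75.4, 73.5),
    ("Terminal Bench", 64.3, 49.6),
    ("VPCT", 58.9, 49.0),
    ("WeirdML", 61.6, 60.7),
]

gemini_wins = sum(1 for _, gemini, gpt5 in SCORES if gemini > gpt5)
print(f"Gemini 3 Flash Preview wins {gemini_wins} of {len(SCORES)} shared benchmarks")
# -> Gemini 3 Flash Preview wins 14 of 17 shared benchmarks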
Hype vs Reality
Gemini 3 Flash Preview · #98 by perf · no signal · QUIET
GPT-5 · #74 by perf · no signal · QUIET
Best value
Gemini 3 Flash Preview offers 2.9x better value than GPT-5.
Gemini 3 Flash Preview · 28.1 pts/$ · $1.75/M
GPT-5 · 9.7 pts/$ · $5.63/M
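The $/M figures above match a plain average of the input and output prices from the pricing table at the bottom of this page, and the 2.9x claim follows directly from the two pts/$ numbers. How the points themselves are aggregated is not stated here, so this rough sketch only reproduces the blended price and the ratio.

# Assumption: the blended $/M shown above is the simple mean of input and output prices.
gemini_blend = (0.50 + 3.00) / 2     # 1.75 $/M
gpt5_blend = (1.25 + 10.00) / 2      # 5.625 $/M, displayed as $5.63/M

# Value ratio taken straight from the listed points-per-dollar figures.
value_ratio = 28.1 / 9.7             # ~2.9x in favor of Gemini 3 Flash Preview
print(f"{gemini_blend:.2f} $/M vs {gpt5_blend:.2f} $/M, value ratio {value_ratio:.1f}x")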
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Gemini 3 Flash Preview · GPT-5
APEX-Agents
Gemini 3 Flash Preview leads by +5.7
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Gemini 3 Flash Preview 24.0 · GPT-5 18.3
ARC-AGI
GPT-5 leads by +44.2
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Gemini 3 Flash Preview 21.5 · GPT-5 65.7
ARC-AGI-2
Gemini 3 Flash Preview leads by +23.7
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Gemini 3 Flash Preview 33.6 · GPT-5 9.9
Balrog
Gemini 3 Flash Preview leads by +15.3
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
Gemini 3 Flash Preview 48.1 · GPT-5 32.8
Chess Puzzles
Gemini 3 Flash Preview leads by +1.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Gemini 3 Flash Preview 38.0 · GPT-5 37.0
FrontierMath-2025-02-28-Private
Gemini 3 Flash Preview leads by +3.2
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 3 Flash Preview 35.6 · GPT-5 32.4
FrontierMath-Tier-4-2025-07-01-Private
GPT-5 leads by +8.3
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 3 Flash Preview 4.2 · GPT-5 12.5
GeoBench
Gemini 3 Flash Preview leads by +7.0
GeoBench · tests geographic knowledge and spatial reasoning across countries, landmarks, coordinates, and geopolitical understanding.
Gemini 3 Flash Preview 88.0 · GPT-5 81.0
GPQA diamond
GPT-5 leads by +4.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 3 Flash Preview 77.6 · GPT-5 81.6
GSO-Bench
Gemini 3 Flash Preview leads by +2.9
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Gemini 3 Flash Preview 9.8 · GPT-5 6.9
OTIS Mock AIME 2024-2025
Gemini 3 Flash Preview leads by +1.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 3 Flash Preview 92.8 · GPT-5 91.4
SimpleBench
Gemini 3 Flash Preview leads by +5.3
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Gemini 3 Flash Preview 53.3 · GPT-5 48.0
SimpleQA Verified
Gemini 3 Flash Preview leads by +16.8
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Gemini 3 Flash Preview 67.4 · GPT-5 50.6
SWE-Bench verified
Gemini 3 Flash Preview leads by +1.9
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
Gemini 3 Flash Preview 75.4 · GPT-5 73.5
Terminal Bench
Gemini 3 Flash Preview leads by +14.7
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Gemini 3 Flash Preview 64.3 · GPT-5 49.6
VPCT
Gemini 3 Flash Preview leads by +9.9
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
Gemini 3 Flash Preview 58.9 · GPT-5 49.0
WeirdML
Gemini 3 Flash Preview leads by +0.9
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 3 Flash Preview 61.6 · GPT-5 60.7
Full benchmark table
Benchmark · Gemini 3 Flash Preview · GPT-5
APEX-Agents · 24.0 · 18.3
ARC-AGI · 21.5 · 65.7
ARC-AGI-2 · 33.6 · 9.9
Balrog · 48.1 · 32.8
Chess Puzzles · 38.0 · 37.0
FrontierMath-2025-02-28-Private · 35.6 · 32.4
FrontierMath-Tier-4-2025-07-01-Private · 4.2 · 12.5
GeoBench · 88.0 · 81.0
GPQA diamond · 77.6 · 81.6
GSO-Bench · 9.8 · 6.9
OTIS Mock AIME 2024-2025 · 92.8 · 91.4
SimpleBench · 53.3 · 48.0
SimpleQA Verified · 67.4 · 50.6
SWE-Bench verified · 75.4 · 73.5
Terminal Bench · 64.3 · 49.6
VPCT · 58.9 · 49.0
WeirdML · 61.6 · 60.7
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Gemini 3 Flash Preview · $0.50 · $3.00 · 1.0M tokens (~524 books) · $11.25
GPT-5 · $1.25 · $10.00 · 400K tokens (~200 books) · $34.38
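The projected $/mo column is consistent with 10M tokens per month billed at a 75% input / 25% output mix. That split is an assumption rather than something stated on this page, but it reproduces both listed figures, as in this sketch.

# Hypothetical projection: 10M tokens per month, assumed 75/25 input/output split.
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    # Blend the per-million prices by the assumed token mix, then scale to volume.
    return total_m * (input_share * input_per_m + (1 - input_share) * output_per_m)

print(monthly_cost(0.50, 3.00))    # Gemini 3 Flash Preview -> 11.25
print(monthly_cost(1.25, 10.00))   # GPT-5 -> 34.375, displayed as $34.38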