
Gemini 3 Pro

by Google DeepMind · Released Jan 2024

76.4
avg score
Rank #28
Better than 88% of all models
Context
N/A
Input $/1M
TBD
Output $/1M
TBD
Type
text
License
Proprietary
Benchmarks
28 tested
Data updated today
About

Tested on 28 benchmarks with a 60.5% raw average. Top scores: Chatbot Arena Elo — Overall (1486.2), Chatbot Arena Elo — Coding (1437.6), OTIS Mock AIME 2024-2025 (91.4%).

Capabilities
coding
57.7
#47 globally
reasoning
65.9
#26 globally
math
50.8
#74 globally
knowledge
65.3
#31 globally
agentic
18.4
#22 globally
speed
69.8
#22 globally
language
87.6
#21 globally
Benchmark Scores
Tested on 28 benchmarks · Ranked across 8 categories
Score Distribution (all 233 models)
SWE-bench Verified

Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

72.9
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

69.9
Terminal Bench

Complex terminal-based engineering tasks. Models must use command-line tools, navigate filesystems, and debug systems through shell interaction.

69.4
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

85.9
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

75.0
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

71.7
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

91.4
HELM — Omni-MATH

Stanford HELM evaluation of mathematical reasoning across diverse problem types.

55.6
FrontierMath-2025-02-28-Private

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

37.6
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
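As a sketch, the score bands from the legend and a mean over the nine benchmark scores listed on this page can be reproduced in Python. The band thresholds are read off the legend; note the result is a subset mean over the nine displayed scores, not the site's 28-benchmark average:

```python
# Band labels and thresholds taken from the page legend:
# Excellent (85+), Good (70-85), Average (50-70), Below (<50).
def band(score: float) -> str:
    if score >= 85:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Average"
    return "Below"

# The nine benchmark scores shown on this page (a subset of the 28 tested).
scores = {
    "SWE-bench Verified": 72.9,
    "WeirdML": 69.9,
    "Terminal Bench": 69.4,
    "HELM — WildBench": 85.9,
    "ARC-AGI": 75.0,
    "SimpleBench": 71.7,
    "OTIS Mock AIME 2024-2025": 91.4,
    "HELM — Omni-MATH": 55.6,
    "FrontierMath-2025-02-28-Private": 37.6,
}

subset_mean = sum(scores.values()) / len(scores)
print(f"mean of listed scores: {subset_mean:.1f}")
for name, s in scores.items():
    print(f"{name}: {s} ({band(s)})")
```

By this banding, only HELM — WildBench and the OTIS Mock AIME score land in the "Excellent" band, and FrontierMath falls "Below".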
Specifications
  • Type: text
  • Context: N/A
  • Released: Jan 2024
  • License: Proprietary
  • Status: benchmark-only
Available On
Google DeepMind · TBD
Gemini 3 Pro is a proprietary text AI model by Google DeepMind, released in January 2024. It has an average benchmark score of 76.4.