
Gemini 2.5 Pro

by Google DeepMind · Released Jun 2025

Multimodal · 1M Context
67.2
avg score
Rank #47
Better than 80% of all models
Context: 1.0M tokens (~524 books)
Input $/1M: $1.25
Output $/1M: $10.00
Type: multimodal
License: Proprietary
Benchmarks: 42 tested
Data updated today
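
The "Better than 80% of all models" line follows from the rank; a minimal sketch, assuming the 233 ranked models used in the score distribution below:

```python
# Percentile from rank: share of ranked models that score below this one.
# Assumes 233 models in total and rank #47, as listed on this page.
total_models = 233
rank = 47

better_than = (total_models - rank) / total_models
print(f"Better than {better_than:.0%} of all models")  # Better than 80% of all models
```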
About

Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy...

Tested on 42 benchmarks with a 67.2 average score. Top scores: Chatbot Arena Elo — Overall (1448.2), Chatbot Arena Elo — Coding (1202.0), MATH level 5 (95.6%).

Looking for similar performance at lower cost?
Gemma 4 31B scores 68.2 (101% as good) at $0.13/1M input · 90% cheaper
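
How the comparison figures fall out of the two data sheets, as a rough sketch (the site may round or weight differently):

```python
# "101% as good" and "90% cheaper" from the two models' avg scores and input prices.
gemini_score, gemini_input = 67.2, 1.25   # avg score, $ per 1M input tokens
gemma_score, gemma_input = 68.2, 0.13

relative_quality = gemma_score / gemini_score      # ~1.01 -> "101% as good"
price_reduction = 1 - gemma_input / gemini_input   # ~0.90 -> "90% cheaper"
print(f"{relative_quality:.0%} as good · {price_reduction:.0%} cheaper")
```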
Capabilities
coding: 52.4 (#60 globally)
reasoning: 46.6 (#52 globally)
math: 54.8 (#63 globally)
knowledge: 58.4 (#56 globally)
agentic: 30.3 (#18 globally)
speed: 55.1 (#33 globally)
language: 87.0 (#24 globally)
Benchmark Scores
Tested on 42 benchmarks · Ranked across 8 categories
Score Distribution chart (all 233 models), with this model's position marked.
Aider polyglot

Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

83.1
OpenCompass — LiveCodeBenchV6

OpenCompass Live Code Bench v6. Fresh competitive programming problems to evaluate code generation without memorization.

71.3
CadEval

Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.

64.0
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

85.7
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

54.9
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

41.0
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

95.6
OpenCompass — AIME2025

OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.

88.7
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

84.7
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
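
The legend above buckets each benchmark score into one of four tiers; a small sketch of that mapping, where the exact boundary handling is an assumption:

```python
def score_tier(score: float) -> str:
    """Map a benchmark score to the legend's four tiers (inclusive lower bounds assumed)."""
    if score >= 85:
        return "Excellent (85+)"
    if score >= 70:
        return "Good (70-85)"
    if score >= 50:
        return "Average (50-70)"
    return "Below (<50)"

print(score_tier(95.6))  # MATH level 5 -> Excellent (85+)
print(score_tier(41.0))  # ARC-AGI -> Below (<50)
```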
Recently Happened
Gemini 2.5 Pro scores 94.1% on MMLU
Mar 29, 2026
Links
Documentation
BenchGecko API: gemini-2-5-pro
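
To pull the same numbers programmatically, the model is exposed under the slug gemini-2-5-pro; a minimal sketch, where the base URL and response fields are assumptions rather than the documented BenchGecko API:

```python
import requests

# Hypothetical endpoint and field names; only the slug "gemini-2-5-pro" comes from this page.
BASE_URL = "https://benchgecko.example/api/models"  # assumed

resp = requests.get(f"{BASE_URL}/gemini-2-5-pro", timeout=10)
resp.raise_for_status()
model = resp.json()

print(model.get("avg_score"), model.get("context_tokens"))  # assumed field names
```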
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~524 books)
  • Released: Jun 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.013
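
The ~$0.013 per-message figure is consistent with the listed token prices; a rough sketch, where the per-message token counts are illustrative assumptions:

```python
# Cost per message from the listed prices: $1.25 per 1M input tokens, $10.00 per 1M output tokens.
input_tokens, output_tokens = 4_000, 800   # assumed message size, for illustration only

cost = input_tokens * (1.25 / 1_000_000) + output_tokens * (10.00 / 1_000_000)
print(f"~${cost:.3f} per message")  # ~$0.013 per message
```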
Available On
Google DeepMind · $1.25/1M input
Gemini 2.5 Pro is a proprietary multimodal AI model by Google DeepMind, released in June 2025. It has an average benchmark score of 67.2. Context window: 1M tokens.