
Gemini 2.5 Flash

by Google DeepMind · Released Jun 2025

Multimodal · 1M Context
38.1 avg score · Rank #153 · Better than 34% of all models
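The "Better than 34%" figure follows from the rank: 80 of the 233 tracked models sit below #153, and 80/233 ≈ 34%. A minimal sketch of that arithmetic, assuming the site rounds to the nearest whole percent:

```python
# Derive "better than X% of all models" from a rank out of 233 tracked models.
# Rounding to the nearest whole percent is an assumption.

def better_than_pct(rank: int, total: int) -> float:
    """Share of models ranked strictly below the given rank."""
    return (total - rank) / total * 100

print(f"Better than {better_than_pct(153, 233):.0f}% of all models")  # -> 34%
```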
Context: 1.0M tokens (~524 books)
Input $/1M: $0.30
Output $/1M: $2.50
Type: multimodal
License: Proprietary
Benchmarks: 25 tested
Data updated today
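These rates also explain the ~$0.003 Cost / Message figure in the Specifications below. A rough sketch, assuming a typical exchange of about 1,000 input and 1,000 output tokens (the message size is an assumption, not stated on this page):

```python
# Rough per-message cost at $0.30/1M input and $2.50/1M output tokens.
# The 1,000-token input and output sizes are illustrative assumptions.

INPUT_PER_M = 0.30   # USD per 1M input tokens
OUTPUT_PER_M = 2.50  # USD per 1M output tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

print(f"${message_cost(1_000, 1_000):.4f}")  # -> $0.0028, i.e. ~$0.003
```

Output tokens dominate at these rates, so longer completions move the per-message figure quickly.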
About

Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater...

Tested on 25 benchmarks with a 40.0% average. Top scores: Chatbot Arena Elo — Overall (1411.0), HELM — IFEval (89.8%), HELM — WildBench (81.7%).

Looking for similar performance at lower cost?
GLM 4 32B scores 37.8 (99% as good) at $0.10/1M input · 67% cheaper
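Both comparison figures fall out of the headline numbers, as a quick check shows (percentage rounding is assumed):

```python
# Check the GLM 4 32B comparison: 37.8 vs 38.1 in score, $0.10 vs $0.30 input price.

flash_score, glm_score = 38.1, 37.8
flash_input, glm_input = 0.30, 0.10

print(f"{glm_score / flash_score:.0%} as good")                  # -> 99% as good
print(f"{(flash_input - glm_input) / flash_input:.0%} cheaper")  # -> 67% cheaper
```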
Capabilities
coding: 35.1 (#109 globally)
reasoning: 36.5 (#74 globally)
math: 30.1 (#132 globally)
knowledge: 41.5 (#143 globally)
agentic: 41.1 (#6 globally)
language: 89.8 (#14 globally)
Benchmark Scores
Tested on 25 benchmarks · Ranked across 7 categories
Score Distribution (all 233 models): chart showing where this model falls on the 0–100 score scale.
Aider polyglot

Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

47.1
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

41.0
Terminal Bench

Complex terminal-based engineering tasks. Models must use command-line tools, navigate filesystems, and debug systems through shell interaction.

17.1
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

81.7
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

32.3
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

29.4
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

73.0
HELM — Omni-MATH

Stanford HELM evaluation of mathematical reasoning across diverse problem types.

38.4
FrontierMath-2025-02-28-Private

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

4.8
Score bands: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
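The bands are plain thresholds on the 0–100 score; a minimal sketch, assuming each lower bound is inclusive:

```python
# Map a 0-100 benchmark score to the page's four bands.
# Treating 85, 70, and 50 as inclusive lower bounds is an assumption.

def score_band(score: float) -> str:
    if score >= 85:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Average"
    return "Below"

for s in (81.7, 47.1, 4.8):
    print(s, score_band(s))  # 81.7 Good, 47.1 Below, 4.8 Below
```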
Recent Updates
Gemini 2.5 Flash output price reduced to $2.50 per 1M tokens
Mar 5, 2026
Links
Documentation
BenchGecko API (model slug: gemini-2-5-flash)
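To pull this page's data programmatically, the slug above is the model identifier. A hypothetical sketch of such a request; the endpoint URL and response fields are illustrative assumptions, not documented API details:

```python
# Hypothetical BenchGecko API call using the model slug above.
# The base URL and JSON field names are illustrative assumptions.
import json
import urllib.request

SLUG = "gemini-2-5-flash"
url = f"https://api.benchgecko.example/v1/models/{SLUG}"  # assumed endpoint

with urllib.request.urlopen(url) as resp:
    model = json.load(resp)

print(model.get("avg_score"))  # e.g. 38.1, if such a field exists
```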
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~524 books)
  • Released: Jun 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.003
Available On
Google DeepMind ($0.30/1M input)
Gemini 2.5 Flash is a proprietary multimodal AI model by Google DeepMind, released in June 2025. It has an average benchmark score of 38.1. Context window: 1M tokens.