
Gemini 2.0 Flash

by Google DeepMind · Released Feb 2025

Multimodal · 1M Context
44.7 avg score · Rank #124
Better than 47% of all models
  • Context: 1.0M tokens (~500 books)
  • Input $/1M: $0.10
  • Output $/1M: $0.40
  • Type: multimodal
  • License: Proprietary
  • Benchmarks: 20 tested
About

Gemini 2.0 Flash offers a significantly faster time to first token (TTFT) than [Gemini 1.5 Flash](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini 1.5 Pro](/google/gemini-pro-1.5). It...

Tested on 20 benchmarks with a 48.0% average. Top scores: Chatbot Arena Elo — Overall (1360 Elo), HELM — IFEval (84.1%), MATH level 5 (82.2%).

Looking for similar performance at lower cost?
gpt-oss-20b scores 44.4 (99% as good) at $0.03/1M input · 70% cheaper
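If you want to check the comparison arithmetic yourself, here is a minimal sketch that uses only the scores and input prices quoted on this page (nothing else is assumed):

```python
# Relative quality and input-price comparison between Gemini 2.0 Flash
# and gpt-oss-20b, using the figures listed above.
gemini_score, gemini_input_price = 44.7, 0.10   # avg score, USD per 1M input tokens
oss_score, oss_input_price = 44.4, 0.03

relative_quality = oss_score / gemini_score          # ~0.993 -> "99% as good"
savings = 1 - oss_input_price / gemini_input_price   # 0.70  -> "70% cheaper"
print(f"{relative_quality:.0%} as good, {savings:.0%} cheaper on input")
```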
Capabilities
  • Coding: 31.3 (#116 globally)
  • Reasoning: 32.9 (#79 globally)
  • Math: 40.2 (#105 globally)
  • Knowledge: 66.3 (#30 globally)
  • Agentic: 11.4 (#26 globally)
  • Language: 84.1 (#35 globally)
Benchmark Scores
Tested on 20 benchmarks · Ranked across 7 categories
Score Distribution (all 233 models)
[Distribution chart, 0-100 scale, with this model's position marked]
Aider polyglot: 38.2
Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

CadEval: 30.0
Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.

WeirdML: 25.8
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

HELM — WildBench: 80.0
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

SimpleBench: 17.3
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

ARC-AGI-2: 1.3
ARC-AGI 2, harder sequel to ARC. More complex abstract reasoning patterns that test generalization ability beyond training data.

MATH level 5: 82.2
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

HELM — Omni-MATH: 45.9
Stanford HELM evaluation of mathematical reasoning across diverse problem types.

OTIS Mock AIME 2024-2025: 31.0
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Score legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
  • Documentation
  • BenchGecko API
  • API model identifier: gemini-2-0-flash-001
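For reference, a minimal sketch of calling this model through Google's `google-genai` Python SDK. The API key and prompt are placeholders, and it assumes the page's `gemini-2-0-flash-001` identifier corresponds to Google's `gemini-2.0-flash-001` model ID:

```python
# Minimal sketch: querying Gemini 2.0 Flash via the google-genai SDK
# (pip install google-genai). Assumes the page's "gemini-2-0-flash-001"
# identifier maps to Google's "gemini-2.0-flash-001" model ID.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key
response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="Summarize the strengths and weaknesses shown by these benchmark scores.",
)
print(response.text)
```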
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~500 books)
  • Released: Feb 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.001 (see the cost sketch below)
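The ~$0.001 per-message figure follows from the listed token prices once you assume typical message sizes. A minimal sketch; the 5,000 input / 1,250 output token counts are illustrative assumptions, not measurements:

```python
# Rough per-message cost estimate for Gemini 2.0 Flash at the listed
# per-million-token prices.
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.40  # USD per 1M output tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a ~5,000-token prompt with a ~1,250-token reply lands at the
# "~$0.001 per message" figure shown above.
print(f"${message_cost(5_000, 1_250):.4f}")  # -> $0.0010
```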
Available On
  • Google DeepMind: $0.10 / 1M input tokens
Gemini 2.0 Flash is a proprietary multimodal AI model by Google DeepMind, released in February 2025. It has an average benchmark score of 44.7. Context window: 1M tokens.