
Gemini 2.5 Flash Lite

by Google DeepMind · Released Jul 2025

Multimodal · 1M context
Average score: 33.2 · Rank #169 · Better than 27% of all models
Context: 1.0M tokens (~524 books)
Input price: $0.10 / 1M tokens
Output price: $0.40 / 1M tokens
Type: multimodal
License: Proprietary
Benchmarks: 8 tested
About

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance...

Tested on 8 benchmarks with 59.1% average. Top scores: HELM — WildBench (81.8%), HELM — IFEval (81.0%), HELM — MMLU-Pro (53.7%).

Looking for similar performance at lower cost?
Llama 3.1 8B Instruct scores 34.3 (103% as good) at $0.02/1M input · 80% cheaper
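As a sanity check, both figures in that comparison fall out of simple arithmetic on the numbers above. A minimal sketch (the average scores and input prices shown on this page are the only inputs):

```python
# Relative quality: Llama 3.1 8B Instruct's average score vs. Gemini 2.5 Flash Lite's.
gemini_score, llama_score = 33.2, 34.3
relative_quality = llama_score / gemini_score * 100  # ≈ 103% "as good"

# Price gap on input tokens: $0.02 vs. $0.10 per 1M.
gemini_input, llama_input = 0.10, 0.02
savings = (gemini_input - llama_input) / gemini_input * 100  # 80% cheaper

print(f"{relative_quality:.0f}% as good, {savings:.0f}% cheaper on input")
```

Note this compares input pricing only; output pricing and per-benchmark scores can diverge.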
Capabilities
  • reasoning: 81.8 (#7 globally)
  • math: 48.0 (#80 globally)
  • knowledge: 42.3 (#137 globally)
  • speed: 28.6 (#49 globally)
  • language: 81.0 (#49 globally)
Benchmark Scores
Tested on 8 benchmarks · Ranked across 5 categories
[Score distribution chart across all 233 models]
HELM — WildBench: 81.8
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

HELM — Omni-MATH: 48.0
Stanford HELM evaluation of mathematical reasoning across diverse problem types.

HELM — MMLU-Pro: 53.7
Stanford HELM evaluation of MMLU-Pro. Tests broad knowledge with increased difficulty.

HELM — GPQA: 30.9
Stanford HELM evaluation of GPQA. Tests graduate-level scientific reasoning.
Recent Updates
Gemini 2.5 Flash Lite priced at $0.10/$0.40 per 1M tokens
Mar 20, 2026
Links
Documentation
BenchGecko API
gemini-2-5-flash-lite
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~524 books)
  • Released: Jul 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.001
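The "~$0.001 per message" figure is consistent with the per-token prices listed above under a modest assumed message size. A minimal sketch (the 2,000-token input/output split is an illustrative assumption, not a published figure):

```python
def message_cost(input_tokens: int, output_tokens: int,
                 input_price: float = 0.10, output_price: float = 0.40) -> float:
    """Cost in USD for one message, with prices quoted per 1M tokens."""
    return input_tokens / 1e6 * input_price + output_tokens / 1e6 * output_price

# Assumed message size: 2,000 tokens in, 2,000 tokens out (hypothetical).
cost = message_cost(2000, 2000)
print(f"${cost:.4f} per message")  # on the order of $0.001
```

Shorter prompts or responses scale the cost down proportionally, since pricing is purely per-token.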
Available On
Google DeepMind — $0.10 / 1M input tokens
Gemini 2.5 Flash Lite is a proprietary multimodal AI model by Google DeepMind, released in July 2025. It has an average benchmark score of 33.2. Context window: 1M tokens.