Gemini 1.5 Flash (May 2024)
by Google DeepMind · Released May 2024
Avg score: 43.1
Rank: #133 · Better than 43% of all models
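The percentile claim follows from the rank; a quick sketch of the arithmetic, assuming 233 ranked models (the count given elsewhere on this page) and that the site compares against the other 232 models (the exact formula the site uses is an assumption):

```python
# Sketch: deriving "better than 43% of all models" from the rank.
# Assumes 233 models total (per "all 233 models" on this page) and rank #133;
# comparing against the 232 *other* models is an assumption about the formula.
total_models = 233
rank = 133
models_outranked = total_models - rank               # 100 models score lower
pct_better_than = models_outranked / (total_models - 1) * 100
print(round(pct_better_than))  # -> 43
```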
Context: N/A
Input $/1M: TBD
Output $/1M: TBD
Type: text
License: Proprietary
Benchmarks: 17 tested
About
Tested on 17 benchmarks with a 43.1 average score. Top scores: Chatbot Arena Elo — Overall (1285.1), HELM — IFEval (83.1%), GSM8K (82.4%).
Capabilities
- coding: 24.9 (#122 globally)
- reasoning: 79.2 (#10 globally)
- math: 28.4 (#137 globally)
- knowledge: 52.6 (#87 globally)
- multimodal: 60.4 (#5 globally)
- language: 83.1 (#42 globally)
Benchmark Scores
Tested on 17 benchmarks · Ranked across 7 categories
Score Distribution (all 233 models)
coding
- WeirdML: 24.9. Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
reasoning
- HELM — WildBench: 79.2. Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.
math
- GSM8K: 82.4. Grade school math word problems. 8,500 problems testing multi-step arithmetic reasoning. A foundational math benchmark.
- HELM — Omni-MATH: 30.5. Stanford HELM evaluation of mathematical reasoning across diverse problem types.
- MATH level 5: 25.1. Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
Score legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Research
Documentation
Community
BenchGecko API
gemini-1-5-flash-may-2024
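The slug above identifies this model in the BenchGecko API. A minimal sketch of building a request URL for it, assuming a REST-style `/models/{slug}` endpoint; the base URL and path are assumptions, not documented API:

```python
# Hypothetical sketch of addressing this model via the BenchGecko API.
# Only the slug comes from this page; the host and endpoint path are
# placeholders, not the real API.
import urllib.parse

BASE_URL = "https://api.benchgecko.example/v1"  # placeholder host
MODEL_SLUG = "gemini-1-5-flash-may-2024"

def model_endpoint(slug: str) -> str:
    """Build the (assumed) request URL for a model's benchmark record."""
    return f"{BASE_URL}/models/{urllib.parse.quote(slug)}"

print(model_endpoint(MODEL_SLUG))
# prints https://api.benchgecko.example/v1/models/gemini-1-5-flash-may-2024
```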
Specifications
- Type: text
- Context: N/A
- Released: May 2024
- License: Proprietary
- Status: benchmark-only
Frequently Asked Questions
Gemini 1.5 Flash (May 2024) is a proprietary text AI model by Google DeepMind, released in May 2024. It has an average benchmark score of 43.1.