
Gemini 1.5 Pro (Feb 2024)

by Google DeepMind · Released Feb 2024

41.4
avg score
Rank #139
Better than 40% of all models
Context
N/A
Input $/1M
TBD
Output $/1M
TBD
Type
text
License
Proprietary
Benchmarks
20 tested
About

Tested on 20 benchmarks with a 41.4% average score. Top scores: Chatbot Arena Elo — Overall (1322.5), HELM — IFEval (83.7%), HELM — WildBench (81.3%).

Capabilities
coding
30.2
#118 globally
reasoning
43.3
#60 globally
math
28.0
#139 globally
knowledge
50.6
#97 globally
agentic
3.4
#35 globally
language
83.7
#38 globally
multimodal
66.7
#1 globally
Benchmark Scores
Tested on 20 benchmarks · Ranked across 8 categories
Score Distribution (all 233 models)
Aider — Code Editing

Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.

57.1
CadEval

Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.

34.0
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

22.2
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

81.3
BBH

BIG-Bench Hard. 23 challenging tasks from BIG-Bench where prior language models fell below average human performance.

78.7
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

12.5
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

40.8
HELM — Omni-MATH

Stanford HELM evaluation of mathematical reasoning across diverse problem types.

36.4
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

6.7
Legend: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
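The overall average can be approximately reproduced from the individual benchmark results. A minimal sketch using only the nine scores shown on this page (the reported 41.4 average covers all 20 tested benchmarks, so the numbers differ slightly):

```python
# Benchmark scores listed on this page (9 of the 20 tested).
scores = {
    "Aider — Code Editing": 57.1,
    "CadEval": 34.0,
    "WeirdML": 22.2,
    "HELM — WildBench": 81.3,
    "BBH": 78.7,
    "SimpleBench": 12.5,
    "MATH level 5": 40.8,
    "HELM — Omni-MATH": 36.4,
    "OTIS Mock AIME 2024-2025": 6.7,
}

# Unweighted mean of the visible scores.
avg = sum(scores.values()) / len(scores)
print(round(avg, 1))  # 41.1 — close to the reported 41.4 across all 20 benchmarks
```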
Links
Documentation
BenchGecko API
gemini-1-5-pro-feb-2024
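The slug above identifies this model in the BenchGecko API. A hedged sketch of composing a request URL from it — only the slug comes from this page; the base URL and endpoint path are illustrative assumptions, not documented API surface:

```python
# Hypothetical: build a model-detail URL from the BenchGecko API slug.
# The base URL and "/models/" path are assumptions for illustration only.
BASE_URL = "https://api.benchgecko.example/v1"

def model_url(slug: str) -> str:
    """Return the (assumed) model-detail endpoint for a given slug."""
    return f"{BASE_URL}/models/{slug}"

url = model_url("gemini-1-5-pro-feb-2024")
print(url)
```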
Specifications
  • Type: text
  • Context: N/A
  • Released: Feb 2024
  • License: Proprietary
  • Status: benchmark-only
Available On
Google DeepMind · TBD
Share & Export
Gemini 1.5 Pro (Feb 2024) is a proprietary text AI model by Google DeepMind, released in February 2024. It has an average benchmark score of 41.4.