Gemini 1.0 Pro
by Google DeepMind · Released Jan 2024
21.1
avg score
Rank #200
Better than 14% of all models
Context
N/A
Input $/1M
TBD
Output $/1M
TBD
Type
text
License
Proprietary
Benchmarks
4 tested
Data updated today
About
Tested on 4 benchmarks with 21.1% average. Top scores: MMLU (60.0%), GPQA diamond (11.9%), MATH level 5 (11.2%).
Capabilities
math
6.1
#195 globally
knowledge
36.0
#159 globally
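The capability scores above appear to be simple means of each category's benchmark results, with the overall average taken across categories. A minimal sketch of that derivation, assuming this aggregation rule (it is inferred from the numbers on this page, not documented by the site):

```python
# Assumed aggregation: category score = mean of its benchmark scores,
# overall average = mean of the category scores.
def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Per-benchmark scores as listed on this page.
math_benchmarks = {"MATH level 5": 11.2, "OTIS Mock AIME 2024-2025": 1.0}
knowledge_benchmarks = {"MMLU": 60.0, "GPQA diamond": 11.9}

math_score = mean(math_benchmarks.values())            # 6.1, matching the page
knowledge_score = mean(knowledge_benchmarks.values())  # 35.95, shown as 36.0
overall = mean([math_score, knowledge_score])          # 21.025
# The listed 21.1% average likely comes from averaging the rounded category
# scores instead: (6.1 + 36.0) / 2 = 21.05, displayed as 21.1.
```

This reproduces both capability scores exactly and lands within rounding of the listed overall average.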
Benchmark Scores
Tested on 4 benchmarks · Ranked across 2 categories
Score Distribution (all 233 models)
[chart: all models plotted on a 0–100 score axis; this model's position marked]
math
MATH level 5
11.2 · Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
OTIS Mock AIME 2024-2025
1.0 · Mock AIME (American Invitational Mathematics Examination) problems from OTIS. Tests mathematical competition performance.
knowledge
MMLU
60.0 · Massive Multitask Language Understanding: 57 subjects spanning STEM, the humanities, and the social sciences. The most widely cited knowledge benchmark.
GPQA diamond
11.9 · Graduate-level science questions written by PhD experts. The diamond subset keeps only questions that experts answer correctly but skilled non-experts get wrong, testing deep understanding.
Legend: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
BenchGecko API ID: gemini-1-0-pro
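The page identifies the model by the API slug above. As a purely hypothetical sketch (the base URL and endpoint path below are assumptions for illustration; only the slug comes from this page), a client might build a model-detail request URL like this:

```python
# Hypothetical: base host and /models/{slug} path are assumptions, not
# documented on this page. Only the slug "gemini-1-0-pro" is from the page.
BASE_URL = "https://api.benchgecko.example/v1"  # placeholder host (assumption)

def model_url(slug: str) -> str:
    """Build the (assumed) model-detail endpoint URL for a given slug."""
    return f"{BASE_URL}/models/{slug}"

url = model_url("gemini-1-0-pro")
```

Consult the actual BenchGecko API documentation for the real endpoint layout; this only shows how the slug would plug into such a scheme.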
Specifications
- Type: text
- Context: N/A
- Released: Jan 2024
- License: Proprietary
- Status: benchmark-only
Frequently Asked Questions
What is Gemini 1.0 Pro?
Gemini 1.0 Pro is a proprietary text AI model from Google DeepMind, released in January 2024. It has an average benchmark score of 21.1.