Gemini 1.5 Pro (Feb 2024)
by Google DeepMind · Released Feb 2024
Tested on 20 benchmarks with a 41.3% average. Top scores: Chatbot Arena Elo, Overall (1322.5), HELM IFEval (83.7%), HELM WildBench (81.3).
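For illustration only, here is a minimal sketch of how a per-model average might be aggregated from individual benchmark scores. The class, field names, and example values are hypothetical, not this leaderboard's actual data or pipeline; note that non-percentage metrics such as a Chatbot Arena Elo rating would need normalization before being folded into a percentage average.

```python
# Hypothetical sketch: aggregating per-benchmark scores for a model card.
# Benchmark names and values are illustrative, not the leaderboard's real data.
from dataclasses import dataclass


@dataclass
class BenchmarkScore:
    name: str
    score: float          # score on the benchmark's native scale
    is_percentage: bool   # True if the score is already on a 0-100 scale


def average_percentage_scores(scores: list[BenchmarkScore]) -> float:
    """Average only the benchmarks reported on a 0-100 scale."""
    pct = [s.score for s in scores if s.is_percentage]
    return sum(pct) / len(pct) if pct else float("nan")


scores = [
    BenchmarkScore("HELM IFEval", 83.7, True),
    BenchmarkScore("HELM WildBench", 81.3, True),
    BenchmarkScore("Chatbot Arena Elo (Overall)", 1322.5, False),  # excluded from % average
]
print(f"Average over percentage benchmarks: {average_percentage_scores(scores):.1f}%")
```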
- Code editing benchmark from the Aider project. Measures the ability to apply targeted code changes while maintaining correctness and style.
- Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.
- Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
- Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.
- BIG-Bench Hard. 23 challenging tasks from BIG-Bench where prior language models fell below average human performance.
- Deceptively simple questions that humans find easy but AI models often get wrong. Tests common-sense and reasoning gaps.
- Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
- Stanford HELM evaluation of mathematical reasoning across diverse problem types.
- Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
- Type: text
- Context: N/A
- Released: Feb 2024
- License: Proprietary
- Status: benchmark-only