Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, reasoning through its responses before answering for enhanced accuracy.
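For orientation, here is a minimal sketch of invoking the model with an explicit thinking budget via the google-genai Python SDK. The prompt, budget value, and config field names follow the public SDK as of mid-2025 and should be treated as assumptions if your version differs.

```python
# Minimal sketch: querying Gemini 2.5 Pro with a "thinking" budget
# via the google-genai SDK (pip install google-genai).
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=(
        "If 3 machines make 3 widgets in 3 minutes, "
        "how long do 100 machines take to make 100 widgets?"
    ),
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,    # cap on tokens spent on internal reasoning
            include_thoughts=False,  # don't return the reasoning trace itself
        )
    ),
)
print(response.text)
```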
Tested on 42 benchmarks with a 56.2% average. Top scores: Chatbot Arena Elo — Overall (1448.2), Chatbot Arena Elo — Coding (1202.0), MATH level 5 (95.6%).
Alternative: Gemma 4 31B scores 68.2 (101% as good) at $0.13/1M input tokens · ~90% cheaper
Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.
OpenCompass LiveCodeBench v6. Fresh competitive programming problems to evaluate code generation without memorization.
Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
- Type: multimodal
- Context: 1.0M tokens (~524 books)
- Released: Jun 2025
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.013
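To make the pricing figures concrete, here is a back-of-the-envelope sketch reproducing the ~90% cheaper comparison and the approximate cost per message. The Gemini 2.5 Pro rates ($1.25/1M input, $10/1M output, assuming the ≤200k-token prompt tier) and the message sizes are illustrative assumptions, not values stated on this page.

```python
# Back-of-the-envelope pricing sketch. The per-token rates and message
# sizes below are assumptions for illustration; check current pricing.
GEMINI_INPUT = 1.25 / 1_000_000   # $/input token (assumed <=200k-token tier)
GEMINI_OUTPUT = 10.0 / 1_000_000  # $/output token (assumed)
GEMMA_INPUT = 0.13 / 1_000_000    # $/input token (from the comparison above)

# "~90% cheaper" on input pricing: 0.13 / 1.25 ~= 0.10, i.e. ~90% less.
savings = 1 - GEMMA_INPUT / GEMINI_INPUT
print(f"Gemma input savings vs Gemini: {savings:.0%}")  # -> 90%

# Cost per message under an assumed typical exchange
# (~700 input tokens, ~1,200 output tokens).
in_tokens, out_tokens = 700, 1_200
cost = in_tokens * GEMINI_INPUT + out_tokens * GEMINI_OUTPUT
print(f"Approximate cost per message: ${cost:.3f}")  # -> ~$0.013
```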