Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) than [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5).
Tested on 20 benchmarks with a 48.0% average. Top scores: Chatbot Arena Elo — Overall (1360), HELM — IFEval (84.1%), MATH Level 5 (82.2%).
gpt-oss-20b scores 44.4 (~93% as good) at $0.03/1M input · 70% cheaper
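The comparison line above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the relative score is the ratio of the two benchmark averages, and inferring the baseline input price from "$0.03/1M input · 70% cheaper" (that baseline is not stated in the listing):

```python
# Back-of-the-envelope check of the gpt-oss-20b comparison.
# Scores come from the listing; the implied baseline price is an
# assumption derived from the "70% cheaper" figure.

gemini_avg = 48.0    # Gemini Flash 2.0 average over 20 benchmarks (%)
gpt_oss_avg = 44.4   # gpt-oss-20b average (%)

relative = gpt_oss_avg / gemini_avg
print(f"gpt-oss-20b is {relative * 100:.1f}% as good")  # ~92.5%

gpt_oss_price = 0.03   # $/1M input tokens, from the listing
discount = 0.70        # "70% cheaper"
implied_baseline = gpt_oss_price / (1 - discount)
print(f"implied baseline input price: ${implied_baseline:.2f}/1M")  # ~$0.10/1M
```

Note that the ratio of averages works out to roughly 93%, not 99%; quality ratios like this compare aggregate scores only and say nothing about per-benchmark gaps.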
Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.
Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.
ARC-AGI 2, harder sequel to ARC. More complex abstract reasoning patterns that test generalization ability beyond training data.
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
Stanford HELM evaluation of mathematical reasoning across diverse problem types.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
- Type: multimodal
- Context: 1.0M tokens (~500 books)
- Released: Feb 2025
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.001