Gemini 1.5 Pro (Feb 2024)
by Google DeepMind · Released 2024-01-01
Average score: 41.3
Input price: N/A
Output price: N/A
Context window: N/A
Type: text
Tested on 20 benchmarks with a 41.3% average score. Top scores: Chatbot Arena Elo — Overall (1322.5 Elo), HELM — IFEval (83.7%), HELM — WildBench (81.3%).
Benchmark Scores
| Benchmark | Category | Score |
|---|---|---|
| Chatbot Arena Elo — Overall | arena | 1322.5 |
| HELM — IFEval | language | 83.7 |
| HELM — WildBench | reasoning | 81.3 |
| BBH | reasoning | 78.7 |
| MMLU | knowledge | 76.9 |
| HELM — MMLU-Pro | knowledge | 73.7 |
| VideoMME | multimodal | 66.7 |
| Aider — Code Editing | coding | 57.1 |
| HELM — GPQA | knowledge | 53.4 |
| MATH level 5 | math | 40.8 |
| HELM — Omni-MATH | math | 36.4 |
| CadEval | coding | 34.0 |
| GPQA diamond | knowledge | 27.8 |
| WeirdML | coding | 22.2 |
| Balrog | knowledge | 21.0 |
| SimpleBench | reasoning | 12.5 |
| Cybench | coding | 7.5 |
| OTIS Mock AIME 2024-2025 | math | 6.7 |
| The Agent Company | agentic | 3.4 |
| ARC-AGI-2 | reasoning | 0.8 |
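The 41.3 average can be reproduced from the table: it appears to be the plain mean of the 19 percentage-scaled scores, with the Chatbot Arena Elo (which is on a different scale) left out. That exclusion is an assumption, since the page does not state its aggregation method; a minimal sketch:

```python
# Percentage-scaled benchmark scores from the table above.
# The Chatbot Arena Elo (1322.5) is on a different scale and is
# assumed to be excluded from the average -- the page does not
# document its aggregation method.
scores = [
    83.7, 81.3, 78.7, 76.9, 73.7, 66.7, 57.1, 53.4, 40.8, 36.4,
    34.0, 27.8, 22.2, 21.0, 12.5, 7.5, 6.7, 3.4, 0.8,
]

average = round(sum(scores) / len(scores), 1)
print(average)  # 41.3
```

Under this assumption the mean matches the reported 41.3 exactly, which suggests the "20 benchmarks" count includes the Elo entry while the average does not.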