Gemini 3 Pro
Developer: Google DeepMind · Release date: 2024-01-01
| Average score | Input price | Output price | Context window | Type |
|---|---|---|---|---|
| 60.5 | N/A | N/A | N/A | text |
Tested on 28 benchmarks with a 60.5% average score. Top results: Chatbot Arena Elo — Overall (1486.2), Chatbot Arena Elo — Coding (1437.6), OTIS Mock AIME 2024-2025 (91.4%).
Benchmark scores
| Benchmark | Category | Score |
|---|---|---|
| Chatbot Arena Elo — Overall | arena | 1486.2 |
| Chatbot Arena Elo — Coding | arena | 1437.6 |
| OTIS Mock AIME 2024-2025 | math | 91.4 |
| HELM — MMLU-Pro | knowledge | 90.3 |
| GPQA diamond | knowledge | 90.2 |
| HELM — IFEval | language | 87.6 |
| VPCT | knowledge | 86.5 |
| HELM — WildBench | reasoning | 85.9 |
| GeoBench | knowledge | 84.0 |
| HELM — GPQA | knowledge | 80.3 |
| ARC-AGI | reasoning | 75.0 |
| SWE-Bench verified | coding | 72.9 |
| SimpleQA Verified | knowledge | 72.9 |
| SimpleBench | reasoning | 71.7 |
| WeirdML | coding | 69.9 |
| Terminal Bench | coding | 69.4 |
| HELM — Omni-MATH | math | 55.6 |
| Artificial Analysis — Agentic Index | speed | 45.0 |
| Artificial Analysis — Quality Index | speed | 41.3 |
| Artificial Analysis — Coding Index | speed | 39.4 |
| FrontierMath-2025-02-28-Private | math | 37.6 |
| HLE | knowledge | 34.4 |
| ARC-AGI-2 | reasoning | 31.1 |
| Chess Puzzles | knowledge | 31.0 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 18.8 |
| GSO-Bench | coding | 18.6 |
| APEX-Agents | agentic | 18.4 |
| PostTrainBench | knowledge | 18.1 |
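How the 60.5% average is computed is not stated on this card; the two Chatbot Arena Elo rows are on an Elo scale, not 0–100, so they must be normalized somehow before averaging. As a minimal sketch, a plain mean over only the 26 percentage-scale rows comes out near 58.4, which shows the reported average cannot be a raw mean of the table as printed:

```python
# Plain mean over the percentage-scale benchmark scores from the table above.
# The two Chatbot Arena Elo rows (1486.2, 1437.6) are excluded because they
# are Elo ratings, not percentages; the card's own normalization is unknown.
scores = [
    91.4, 90.3, 90.2, 87.6, 86.5, 85.9, 84.0, 80.3, 75.0, 72.9,
    72.9, 71.7, 69.9, 69.4, 55.6, 45.0, 41.3, 39.4, 37.6, 34.4,
    31.1, 31.0, 18.8, 18.6, 18.4, 18.1,
]
mean = sum(scores) / len(scores)
print(f"{len(scores)} percentage-scale benchmarks, mean = {mean:.1f}")
```

This is only an illustration of the scale mismatch, not a reconstruction of the site's aggregation method.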