Gemini 1.0 Pro vs Claude 2.1
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude 2.1 wins 2 of 3 shared benchmarks
Claude 2.1 leads on MMLU and the OTIS Mock AIME, including the math category; Gemini 1.0 Pro leads on GPQA Diamond.
Category leads
knowledge · Gemini 1.0 Pro
math · Claude 2.1
Hype vs Reality
Attention vs performance
Gemini 1.0 Pro
#212 by perf · no signal
Claude 2.1
#213 by perf · no signal
Vendor risk
Who is behind the model
Google DeepMind
$4.00T · Tier 1
Anthropic
$380.0B · Tier 1
Head to head
3 benchmarks · 2 models
Gemini 1.0 Pro · Claude 2.1
GPQA diamond
Gemini 1.0 Pro leads by +1.3
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 1.0 Pro
11.9
Claude 2.1
10.6
MMLU
Claude 2.1 leads by +4.7
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Gemini 1.0 Pro
60.0
Claude 2.1
64.7
OTIS Mock AIME 2024-2025
Claude 2.1 leads by +0.9
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 1.0 Pro
1.0
Claude 2.1
1.9
Full benchmark table
| Benchmark | Gemini 1.0 Pro | Claude 2.1 |
|---|---|---|
| GPQA Diamond | 11.9 | 10.6 |
| MMLU | 60.0 | 64.7 |
| OTIS Mock AIME 2024-2025 | 1.0 | 1.9 |
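The winner summary above is just a per-benchmark leader count over the shared scores. A minimal sketch of that tally, using the scores from the table (the helper itself is illustrative, not the site's actual code):

```python
# Count which model leads on each shared benchmark; scores are the
# values from the full benchmark table above.
scores = {
    "GPQA Diamond": {"Gemini 1.0 Pro": 11.9, "Claude 2.1": 10.6},
    "MMLU": {"Gemini 1.0 Pro": 60.0, "Claude 2.1": 64.7},
    "OTIS Mock AIME 2024-2025": {"Gemini 1.0 Pro": 1.0, "Claude 2.1": 1.9},
}

wins = {"Gemini 1.0 Pro": 0, "Claude 2.1": 0}
for bench, row in scores.items():
    leader = max(row, key=row.get)          # model with the higher score
    margin = max(row.values()) - min(row.values())
    wins[leader] += 1
    print(f"{bench}: {leader} leads by +{margin:.1f}")

print(f"Claude 2.1 wins {wins['Claude 2.1']} of {len(scores)} shared benchmarks")
```

Running this reproduces the head-to-head margins (+1.3, +4.7, +0.9) and the 2-of-3 winner count.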
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 1.0 Pro | — | — | — | — |
| Claude 2.1 | — | — | — | — |
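The "projected $/mo at 10M tokens" column blends the per-1M-token input and output prices over a monthly token budget. The table's prices are not populated here, so the sketch below uses made-up placeholder prices and an assumed 50/50 input:output split; both are assumptions, not vendor figures:

```python
def projected_monthly_cost(input_per_1m: float, output_per_1m: float,
                           monthly_tokens: int = 10_000_000,
                           input_share: float = 0.5) -> float:
    """Blend per-1M-token input/output prices over a monthly token budget.

    input_share is an assumed input:output token split (0.5 = 50/50);
    real workloads vary, so treat the result as a rough projection.
    """
    millions = monthly_tokens / 1_000_000
    return millions * (input_share * input_per_1m
                       + (1 - input_share) * output_per_1m)

# Placeholder prices only: $1.00 in / $2.00 out per 1M tokens.
cost = projected_monthly_cost(1.00, 2.00)  # → 15.0
```

Changing `input_share` shifts the projection toward whichever price dominates your traffic, which is why the column is best read as an estimate rather than a quote.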