
Gemini 1.0 Pro vs Claude 2.1

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Claude 2.1 wins 2 of the 3 shared benchmarks and leads in math.

Category leads
knowledge · Gemini 1.0 Pro
math · Claude 2.1
Hype vs Reality
Gemini 1.0 Pro · #212 by perf · no signal · QUIET
Claude 2.1 · #213 by perf · no signal · QUIET
Best value
Gemini 1.0 Pro · no price
Claude 2.1 · no price
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Anthropic · $380.0B · Tier 1 · Medium risk
Head to head
GPQA diamond
Gemini 1.0 Pro leads by +1.3
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 1.0 Pro: 11.9 · Claude 2.1: 10.6
MMLU
Claude 2.1 leads by +4.7
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Gemini 1.0 Pro: 60.0 · Claude 2.1: 64.7
OTIS Mock AIME 2024-2025
Claude 2.1 leads by +0.9
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 1.0 Pro: 1.0 · Claude 2.1: 1.9
Full benchmark table
Benchmark · Gemini 1.0 Pro · Claude 2.1
GPQA diamond · 11.9 · 10.6
MMLU · 60.0 · 64.7
OTIS Mock AIME 2024-2025 · 1.0 · 1.9
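The per-benchmark leads and the winner summary above are simple arithmetic over the shared scores. Here is a minimal Python sketch of that computation, using the scores from this page; the dict layout and variable names are illustrative, not this site's actual code:

```python
# Shared-benchmark scores from this page (higher is better).
scores = {
    "GPQA diamond": {"Gemini 1.0 Pro": 11.9, "Claude 2.1": 10.6},
    "MMLU": {"Gemini 1.0 Pro": 60.0, "Claude 2.1": 64.7},
    "OTIS Mock AIME 2024-2025": {"Gemini 1.0 Pro": 1.0, "Claude 2.1": 1.9},
}

wins = {"Gemini 1.0 Pro": 0, "Claude 2.1": 0}
for bench, by_model in scores.items():
    leader = max(by_model, key=by_model.get)      # higher score leads
    runner_up = min(by_model, key=by_model.get)
    delta = by_model[leader] - by_model[runner_up]
    wins[leader] += 1
    print(f"{bench}: {leader} leads by +{delta:.1f}")

overall = max(wins, key=wins.get)
print(f"{overall} wins {wins[overall]} of {len(scores)} shared benchmarks")
```

Run on these scores, this reproduces the deltas shown above (+1.3, +4.7, +0.9) and the "2 of 3" winner summary. Ties would need explicit handling, which this sketch omits.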
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Gemini 1.0 Pro · no price · no price · n/a · n/a
Claude 2.1 · no price · no price · n/a · n/a
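Neither model lists prices here, so the projected $/mo column is empty. For reference, a hedged sketch of how such a projection is typically computed from per-1M-token rates; the prices and the 50/50 input/output split below are hypothetical assumptions, not values from this page:

```python
# Hypothetical per-1M-token rates: this page lists no prices for either model.
INPUT_PRICE = 0.50       # $ per 1M input tokens (assumed)
OUTPUT_PRICE = 1.50      # $ per 1M output tokens (assumed)
MONTHLY_TOKENS_M = 10.0  # 10M tokens/month, matching the table header

# Assume half the volume is input and half is output (an assumption;
# real workloads often skew heavily toward input).
input_m = MONTHLY_TOKENS_M * 0.5
output_m = MONTHLY_TOKENS_M * 0.5

projected = input_m * INPUT_PRICE + output_m * OUTPUT_PRICE
print(f"Projected cost: ${projected:.2f}/mo")  # $10.00/mo under these assumptions
```

Plugging real published rates and your own input/output ratio into the same formula gives a workload-specific projection.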