Claude 2 vs Gemini 1.5 Pro (Feb 2024)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Gemini 1.5 Pro (Feb 2024) wins 4 of 4 shared benchmarks, leading in both knowledge and math.
Category leads
Knowledge · Gemini 1.5 Pro (Feb 2024)
Math · Gemini 1.5 Pro (Feb 2024)
Hype vs Reality
Attention vs performance
Claude 2 · #158 by performance · no attention signal
Gemini 1.5 Pro (Feb 2024) · #138 by performance · no attention signal
Vendor risk
Who is behind each model
Anthropic (Claude 2) · $380.0B · Tier 1
Google DeepMind (Gemini 1.5 Pro) · $4.00T · Tier 1
Head to head
4 benchmarks · 2 models
GPQA diamond · Gemini 1.5 Pro (Feb 2024) leads by +14.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude 2: 12.9 · Gemini 1.5 Pro (Feb 2024): 27.8
MATH level 5 · Gemini 1.5 Pro (Feb 2024) leads by +29.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude 2: 11.7 · Gemini 1.5 Pro (Feb 2024): 40.8
MMLU · Gemini 1.5 Pro (Feb 2024) leads by +5.6
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Claude 2: 71.3 · Gemini 1.5 Pro (Feb 2024): 76.9
OTIS Mock AIME 2024-2025 · Gemini 1.5 Pro (Feb 2024) leads by +4.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude 2: 2.4 · Gemini 1.5 Pro (Feb 2024): 6.7
Full benchmark table
| Benchmark | Claude 2 | Gemini 1.5 Pro (Feb 2024) |
|---|---|---|
| GPQA diamond | 12.9 | 27.8 |
| MATH level 5 | 11.7 | 40.8 |
| MMLU | 71.3 | 76.9 |
| OTIS Mock AIME 2024-2025 | 2.4 | 6.7 |
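The per-benchmark leads and the 4-of-4 tally above are simple derivations from the raw scores. Below is a minimal sketch of that derivation, assuming the scores as listed in the table; the data layout and function name are illustrative, not this page's actual code.

```python
# Minimal sketch: derive per-benchmark leads and the overall winner tally
# from the scores in the table above. Layout and names are illustrative only.

SCORES = {
    "GPQA diamond":             {"Claude 2": 12.9, "Gemini 1.5 Pro (Feb 2024)": 27.8},
    "MATH level 5":             {"Claude 2": 11.7, "Gemini 1.5 Pro (Feb 2024)": 40.8},
    "MMLU":                     {"Claude 2": 71.3, "Gemini 1.5 Pro (Feb 2024)": 76.9},
    "OTIS Mock AIME 2024-2025": {"Claude 2": 2.4,  "Gemini 1.5 Pro (Feb 2024)": 6.7},
}

def head_to_head(scores: dict) -> None:
    wins: dict[str, int] = {}
    for benchmark, by_model in scores.items():
        # The leader is the model with the higher score; the lead is the gap.
        leader, runner_up = sorted(by_model, key=by_model.get, reverse=True)
        lead = by_model[leader] - by_model[runner_up]
        wins[leader] = wins.get(leader, 0) + 1
        print(f"{benchmark}: {leader} leads by +{lead:.1f}")
    for model, count in wins.items():
        print(f"{model} wins {count} of {len(scores)} shared benchmarks")

head_to_head(SCORES)
```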
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude 2 | — | — | — | — |
| Gemini 1.5 Pro (Feb 2024) | — | — | — | — |
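The projected $/mo column blends input and output pricing over a 10M-token monthly budget. Since no per-token prices are listed above, the sketch below uses placeholder prices, and the 50/50 input/output split is an assumption rather than anything stated on this page.

```python
# Minimal sketch of the "projected $/mo at 10M tokens" column, assuming prices
# are quoted per 1M tokens. The 50/50 input/output split and the example
# prices are assumptions, not published rates for either model.

def projected_monthly_cost(input_price_per_1m: float,
                           output_price_per_1m: float,
                           tokens_per_month: float = 10_000_000,
                           input_share: float = 0.5) -> float:
    """Blend input and output pricing over a monthly token budget."""
    input_tokens = tokens_per_month * input_share
    output_tokens = tokens_per_month * (1 - input_share)
    return (input_tokens / 1_000_000) * input_price_per_1m \
         + (output_tokens / 1_000_000) * output_price_per_1m

# Example with placeholder prices ($3 in / $15 out per 1M tokens):
print(f"${projected_monthly_cost(3.00, 15.00):,.2f} per month")  # $90.00 per month
```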