Claude 2 vs GPT-3.5 Turbo (older v0613) vs GPT-4 Turbo
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4 Turbo wins 6 of the 9 benchmarks compared here, leading in math, reasoning, and coding (tally reproduced in the sketch below).
Category leads
knowledge · Claude 2 | math · GPT-4 Turbo | reasoning · GPT-4 Turbo | coding · GPT-4 Turbo
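A minimal sketch, in Python, of how the 6-of-9 tally can be reproduced from the scores in the full benchmark table below. The rule assumed here (highest reported score wins; models with no score on a benchmark are skipped) is an inference, since the page does not document its tally method.

```python
# Scores copied from the full benchmark table below ("—" cells omitted).
SCORES = {
    "GPQA diamond":             {"Claude 2": 12.9, "GPT-3.5 Turbo": 2.9,  "GPT-4 Turbo": 7.5},
    "MATH level 5":             {"Claude 2": 11.7, "GPT-3.5 Turbo": 11.6, "GPT-4 Turbo": 23.0},
    "MMLU":                     {"Claude 2": 71.3, "GPT-3.5 Turbo": 56.4, "GPT-4 Turbo": 76.5},
    "TriviaQA":                 {"Claude 2": 87.5, "GPT-3.5 Turbo": 85.8, "GPT-4 Turbo": 84.8},
    "BBH":                      {"GPT-3.5 Turbo": 48.8, "GPT-4 Turbo": 66.8},
    "GSM8K":                    {"GPT-3.5 Turbo": 57.8, "GPT-4 Turbo": 90.0},
    "OTIS Mock AIME 2024-2025": {"Claude 2": 2.4, "GPT-4 Turbo": 1.0},
    "WeirdML":                  {"GPT-3.5 Turbo": 3.5, "GPT-4 Turbo": 12.4},
    "Winogrande":               {"GPT-3.5 Turbo": 63.2, "GPT-4 Turbo": 75.0},
}

wins: dict[str, int] = {}
for results in SCORES.values():
    leader = max(results, key=results.get)  # highest reported score takes the benchmark
    wins[leader] = wins.get(leader, 0) + 1

print(wins)  # {'Claude 2': 3, 'GPT-4 Turbo': 6} -> GPT-4 Turbo wins 6 of 9
```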
Hype vs Reality
Attention vs performance
Claude 2 · #158 by performance · no attention signal
GPT-3.5 Turbo (older v0613) · #111 by performance · no attention signal
GPT-4 Turbo · #90 by performance · no attention signal
Best value
GPT-3.5 Turbo (older v0613) delivers roughly 12x the value of GPT-4 Turbo (30.5 vs 2.5 pts/$).

| Model | Value | $/1M tokens |
|---|---|---|
| Claude 2 | — | no price |
| GPT-3.5 Turbo (older v0613) | 30.5 pts/$ | $1.50/M |
| GPT-4 Turbo | 2.5 pts/$ | $20.00/M |
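How the pts/$ figures fit together, as a hedged sketch: the listed per-model prices match an even input/output blend ($1.50/M from $1.00/$2.00, $20.00/M from $10.00/$30.00), and the aggregate point totals below (45.75 and 50.0) are back-derived from the listed pts/$ values rather than taken from the page, so treat both as assumptions.

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """$/1M tokens assuming an even input/output mix (matches the listed prices)."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(points: float, price_per_m: float) -> float:
    """Aggregate benchmark points per blended dollar."""
    return points / price_per_m

# Aggregate points are back-derived (assumed): 30.5 x 1.50 = 45.75; 2.5 x 20.00 = 50.0.
gpt35 = points_per_dollar(45.75, blended_price(1.00, 2.00))    # 30.5 pts/$
gpt4t = points_per_dollar(50.00, blended_price(10.00, 30.00))  # 2.5 pts/$
print(f"{gpt35 / gpt4t:.1f}x")  # 12.2x -> rendered as "roughly 12x" above
```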
Vendor risk
Who is behind each model
Claude 2 · Anthropic · $380.0B · Tier 1
GPT-3.5 Turbo (older v0613) · OpenAI · $840.0B · Tier 1
GPT-4 Turbo · OpenAI · $840.0B · Tier 1
Head to head
9 benchmarks · 3 models
GPQA diamond · Claude 2 leads by +5.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude 2 12.9 · GPT-3.5 Turbo (older v0613) 2.9 · GPT-4 Turbo 7.5

MATH level 5 · GPT-4 Turbo leads by +11.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude 2 11.7 · GPT-3.5 Turbo (older v0613) 11.6 · GPT-4 Turbo 23.0

MMLU · GPT-4 Turbo leads by +5.2
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Claude 2 71.3 · GPT-3.5 Turbo (older v0613) 56.4 · GPT-4 Turbo 76.5

TriviaQA · Claude 2 leads by +1.7
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Claude 2 87.5 · GPT-3.5 Turbo (older v0613) 85.8 · GPT-4 Turbo 84.8

BBH · GPT-4 Turbo leads by +18.0
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
GPT-3.5 Turbo (older v0613) 48.8 · GPT-4 Turbo 66.8

GSM8K · GPT-4 Turbo leads by +32.2
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
GPT-3.5 Turbo (older v0613) 57.8 · GPT-4 Turbo 90.0

OTIS Mock AIME 2024-2025 · Claude 2 leads by +1.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude 2 2.4 · GPT-4 Turbo 1.0

WeirdML · GPT-4 Turbo leads by +9.0
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-3.5 Turbo (older v0613) 3.5 · GPT-4 Turbo 12.4

Winogrande · GPT-4 Turbo leads by +11.8
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
GPT-3.5 Turbo (older v0613) 63.2 · GPT-4 Turbo 75.0
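The "leads by" margins above are just the gap between the best and second-best reported score on each benchmark, with "—" models excluded; a minimal sketch of that rule, under the assumption it is what the page computes (the +11.2 on MATH level 5 and +9.0 on WeirdML differ from the rounded scores by 0.1, suggesting the page uses unrounded source values):

```python
def lead_margin(results: dict[str, float]) -> tuple[str, float]:
    """Return the leading model and its gap over the runner-up score."""
    ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    leader, best = ranked[0]
    runner_up = ranked[1][1]
    return leader, round(best - runner_up, 1)

print(lead_margin({"Claude 2": 12.9,
                   "GPT-3.5 Turbo (older v0613)": 2.9,
                   "GPT-4 Turbo": 7.5}))
# ('Claude 2', 5.4) -> "Claude 2 leads by +5.4" on GPQA diamond
```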
Full benchmark table
| Benchmark | Claude 2 | GPT-3.5 Turbo (older v0613) | GPT-4 Turbo |
|---|---|---|---|
| GPQA diamond | 12.9 | 2.9 | 7.5 |
| MATH level 5 | 11.7 | 11.6 | 23.0 |
| MMLU | 71.3 | 56.4 | 76.5 |
| TriviaQA | 87.5 | 85.8 | 84.8 |
| BBH | — | 48.8 | 66.8 |
| GSM8K | — | 57.8 | 90.0 |
| OTIS Mock AIME 2024-2025 | 2.4 | — | 1.0 |
| WeirdML | — | 3.5 | 12.4 |
| Winogrande | — | 63.2 | 75.0 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude 2 | — | — | — | — |
| GPT-3.5 Turbo (older v0613) | $1.00 | $2.00 | 4K tokens (~2 books) | $12.50 |
| GPT-4 Turbo | $10.00 | $30.00 | 128K tokens (~64 books) | $150.00 |
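The projected $/mo column is reproduced exactly by a 10M-token month split 3:1 between input and output; that split is inferred from the $12.50 and $150.00 figures, not stated on the page. A sketch under that assumption:

```python
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    """Dollar cost of total_m million tokens at the assumed 3:1 input:output split."""
    return total_m * (input_share * input_per_m + (1 - input_share) * output_per_m)

print(monthly_cost(1.00, 2.00))    # 12.5  -> GPT-3.5 Turbo (older v0613)
print(monthly_cost(10.00, 30.00))  # 150.0 -> GPT-4 Turbo
```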