Claude 3 Opus vs GPT-4 (older v0314)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude 3 Opus wins 3 of 4 shared benchmarks, leading in the knowledge and math categories.
Category leads
- Knowledge: Claude 3 Opus
- Math: Claude 3 Opus
Hype vs Reality
Attention vs performance:
- Claude 3 Opus: #174 by performance · no attention signal
- GPT-4 (older v0314): #72 by performance · no attention signal
Best value
GPT-4 (older v0314)
- Claude 3 Opus: no price listed
- GPT-4 (older v0314): 1.2 pts/$ at a blended $45.00/M tokens
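The $45.00/M blended figure is consistent with a simple average of GPT-4's input ($30/M) and output ($60/M) token prices. A minimal sketch, assuming a 50/50 input/output weighting; the page does not state which benchmark aggregate feeds the pts/$ score, so that part is left out:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended price per 1M tokens, assuming equal input/output weighting."""
    return (input_per_m + output_per_m) / 2

# GPT-4 (older v0314): $30/M input, $60/M output
print(blended_price(30.00, 60.00))  # 45.0
```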
Vendor risk
Who is behind each model:
- Anthropic (Claude 3 Opus): $380.0B · Tier 1
- OpenAI (GPT-4): $840.0B · Tier 1
Head to head
4 benchmarks · 2 models
GPQA Diamond
Claude 3 Opus leads by +15.3
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude 3 Opus: 29.6 · GPT-4 (older v0314): 14.3
MMLU
GPT-4 (older v0314) leads by +2.4
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Claude 3 Opus: 79.5 · GPT-4 (older v0314): 81.9
OTIS Mock AIME 2024-2025
Claude 3 Opus leads by +4.1
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude 3 Opus: 4.6 · GPT-4 (older v0314): 0.5
WinoGrande
Claude 3 Opus leads by +2.0
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Claude 3 Opus: 77.0 · GPT-4 (older v0314): 75.0
Full benchmark table
| Benchmark | Claude 3 Opus | GPT-4 (older v0314) |
|---|---|---|
| GPQA Diamond | 29.6 | 14.3 |
| MMLU | 79.5 | 81.9 |
| OTIS Mock AIME 2024-2025 | 4.6 | 0.5 |
| WinoGrande | 77.0 | 75.0 |
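The win count in the summary can be re-derived from the shared-benchmark scores above; a minimal sketch:

```python
# Scores copied from the full benchmark table.
scores = {
    "GPQA Diamond":             {"Claude 3 Opus": 29.6, "GPT-4 (older v0314)": 14.3},
    "MMLU":                     {"Claude 3 Opus": 79.5, "GPT-4 (older v0314)": 81.9},
    "OTIS Mock AIME 2024-2025": {"Claude 3 Opus": 4.6,  "GPT-4 (older v0314)": 0.5},
    "WinoGrande":               {"Claude 3 Opus": 77.0, "GPT-4 (older v0314)": 75.0},
}

# Count which model has the higher score on each shared benchmark.
wins = {"Claude 3 Opus": 0, "GPT-4 (older v0314)": 0}
for bench, row in scores.items():
    winner = max(row, key=row.get)
    wins[winner] += 1

print(wins)  # {'Claude 3 Opus': 3, 'GPT-4 (older v0314)': 1}
```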
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude 3 Opus | — | — | — | — |
| GPT-4 (older v0314) | $30.00 | $60.00 | 8K tokens | $375.00 |
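The $375.00/mo projection is consistent with 10M tokens per month split 75% input and 25% output. A minimal sketch under that assumed split (the page does not state its input/output mix):

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_m_tokens: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Monthly cost given per-1M-token prices and an assumed input/output split."""
    input_cost = input_per_m * total_m_tokens * input_share
    output_cost = output_per_m * total_m_tokens * (1 - input_share)
    return input_cost + output_cost

# GPT-4 (older v0314): $30/M input, $60/M output, 10M tokens/mo
print(projected_monthly_cost(30.00, 60.00))  # 375.0
```

A 50/50 split would instead give $450/mo, so the listed figure implies an input-heavy usage profile.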