Claude 3.5 Sonnet vs GPT-4 (older v0314)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude 3.5 Sonnet wins 5 of 5 shared benchmarks, leading in coding, arena, knowledge, and math.
Category leads
coding · Claude 3.5 Sonnet
arena · Claude 3.5 Sonnet
knowledge · Claude 3.5 Sonnet
math · Claude 3.5 Sonnet
Hype vs Reality
Attention vs performance
Claude 3.5 Sonnet · #129 by perf · no signal
GPT-4 (older v0314) · #72 by perf · no signal
Best value
GPT-4 (older v0314)
Claude 3.5 Sonnet · — · no price
GPT-4 (older v0314) · 1.2 pts/$ · $45.00/M
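The $45.00/M figure is consistent with a simple average of GPT-4's input and output rates ($30 and $60 per 1M tokens), and "pts/$" divides a benchmark score by that blended rate. A minimal sketch of that arithmetic; the exact score aggregation behind "1.2 pts/$" is not specified on this page, so the `score` value below is a hypothetical input chosen only to illustrate the division:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Simple average of the input and output $/1M-token rates
    (an assumption; the page does not state how it blends prices)."""
    return (input_per_m + output_per_m) / 2.0

def points_per_dollar(score: float, input_per_m: float, output_per_m: float) -> float:
    """Benchmark points per blended dollar; the score aggregation is assumed."""
    return score / blended_price(input_per_m, output_per_m)

print(blended_price(30.00, 60.00))                       # 45.0 -> "$45.00/M"
print(round(points_per_dollar(54.0, 30.00, 60.00), 1))   # 1.2 (hypothetical score of 54)
```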
Vendor risk
Who is behind the model
Anthropic · $380.0B · Tier 1
OpenAI · $840.0B · Tier 1
Head to head
5 benchmarks · 2 models
Aider · Code Editing
Claude 3.5 Sonnet leads by +18.0
Claude 3.5 Sonnet 84.2 · GPT-4 (older v0314) 66.2
Chatbot Arena Elo · Overall
Claude 3.5 Sonnet leads by +85.6
Claude 3.5 Sonnet 1371.4 · GPT-4 (older v0314) 1285.8
GPQA diamond
Claude 3.5 Sonnet leads by +24.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude 3.5 Sonnet 38.7 · GPT-4 (older v0314) 14.3
MMLU
Claude 3.5 Sonnet leads by +0.1
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Claude 3.5 Sonnet 82.0 · GPT-4 (older v0314) 81.9
OTIS Mock AIME 2024-2025
Claude 3.5 Sonnet leads by +5.9
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude 3.5 Sonnet 6.4 · GPT-4 (older v0314) 0.5
Full benchmark table
| Benchmark | Claude 3.5 Sonnet | GPT-4 (older v0314) |
|---|---|---|
| Aider · Code Editing | 84.2 | 66.2 |
| Chatbot Arena Elo · Overall | 1371.4 | 1285.8 |
| GPQA diamond | 38.7 | 14.3 |
| MMLU | 82.0 | 81.9 |
| OTIS Mock AIME 2024-2025 | 6.4 | 0.5 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude 3.5 Sonnet | — | — | — | — |
| GPT-4 (older v0314) | $30.00 | $60.00 | 8K tokens (~4 books) | $375.00 |
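The projected monthly figure can be reproduced from the per-1M-token rates. A minimal sketch, assuming the 10M tokens/month are split 75% input / 25% output; that traffic mix is an assumption inferred from the numbers ($375 = 7.5M × $30 + 2.5M × $60), not something the page states:

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Blend input/output $/1M-token rates over a monthly token volume.

    input_share is an assumed traffic mix; the page does not state one.
    """
    input_tokens_m = total_tokens_m * input_share
    output_tokens_m = total_tokens_m * (1.0 - input_share)
    return input_tokens_m * input_per_m + output_tokens_m * output_per_m

# GPT-4 (older v0314): $30 in / $60 out per 1M tokens
print(projected_monthly_cost(30.00, 60.00))  # 375.0, matching the table
```

Changing `input_share` shows how sensitive the projection is to the mix: an even 50/50 split at the same volume would instead give $450/month.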