
DeepSeek V3 vs Claude 3.5 Sonnet vs GPT-4 (older v0314)

Side-by-side benchmarks, pricing, and signals you can act on.

Winner summary

DeepSeek V3 wins 8 of the 17 shared benchmarks, leading in math and reasoning.

Category leads
arena · Claude 3.5 Sonnet
knowledge · Claude 3.5 Sonnet
math · DeepSeek V3
coding · Claude 3.5 Sonnet
language · Claude 3.5 Sonnet
reasoning · DeepSeek V3
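The win count and category leads come from straightforward score comparisons. Below is a minimal sketch of that tally, using a handful of scores copied from the full benchmark table further down; the page's exact category groupings and any weighting are not published, so none are assumed here.

```python
# Sketch: count per-benchmark wins across shared benchmarks.
# Scores are a subset copied from the full benchmark table below; higher is better.
scores = {
    "GPQA diamond": {"DeepSeek V3": 42.0, "Claude 3.5 Sonnet": 38.7, "GPT-4 (older v0314)": 14.3},
    "MMLU":         {"DeepSeek V3": 82.9, "Claude 3.5 Sonnet": 82.0, "GPT-4 (older v0314)": 81.9},
    "MATH level 5": {"DeepSeek V3": 64.8, "Claude 3.5 Sonnet": 51.7},
    "SimpleBench":  {"DeepSeek V3": 2.7,  "Claude 3.5 Sonnet": 13.0},
}

wins = {}
for bench, results in scores.items():
    if len(results) < 2:  # only benchmarks shared by at least two models count
        continue
    leader = max(results, key=results.get)
    wins[leader] = wins.get(leader, 0) + 1

print(wins)  # {'DeepSeek V3': 3, 'Claude 3.5 Sonnet': 1}
```

Run over all 17 shared benchmarks, the same loop reproduces the 8-win headline for DeepSeek V3.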
Hype vs Reality
DeepSeek V3 · #45 by performance · no signal (quiet)
Claude 3.5 Sonnet · #129 by performance · no signal (quiet)
GPT-4 (older v0314) · #72 by performance · no signal (quiet)
Best value
DeepSeek V3 offers 79.8x better value than GPT-4 (older v0314).
DeepSeek V3 · 97.5 pts/$ · $0.60/M
Claude 3.5 Sonnet · no price listed
GPT-4 (older v0314) · 1.2 pts/$ · $45.00/M
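The pts/$ figure is a quality-per-cost ratio: a blended benchmark score divided by a blended per-million-token price. The page does not state which score it blends, so the sketch below simply backs the implied score out of the published pts/$ and $/M numbers.

```python
# Sketch: points-per-dollar value metric, assuming value = quality_score / blended_price.
# The implied quality scores are back-calculated from the page's own figures, not published data.
models = {
    "DeepSeek V3":         {"blended_price_per_m": 0.60,  "pts_per_dollar": 97.5},
    "GPT-4 (older v0314)": {"blended_price_per_m": 45.00, "pts_per_dollar": 1.2},
}

implied_quality = {name: m["pts_per_dollar"] * m["blended_price_per_m"] for name, m in models.items()}
value_ratio = models["DeepSeek V3"]["pts_per_dollar"] / models["GPT-4 (older v0314)"]["pts_per_dollar"]

print(implied_quality)        # {'DeepSeek V3': 58.5, 'GPT-4 (older v0314)': 54.0}
print(round(value_ratio, 1))  # 81.2 from the rounded figures shown; the page's 79.8x presumably uses unrounded inputs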
Vendor risk
One or more vendors flagged
DeepSeek · $3.4B · Tier 1 · Higher risk
Anthropic · $380.0B · Tier 1 · Medium risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Chatbot Arena Elo · Overall
Claude 3.5 Sonnet leads by +13.2
DeepSeek V3: 1358.2 · Claude 3.5 Sonnet: 1371.4 · GPT-4 (older v0314): 1285.8
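On the Elo scale, a rating gap converts to an expected head-to-head preference rate through the standard logistic formula, so +13.2 is a small edge in practice. A quick check (standard Elo arithmetic, not anything this page publishes):

```python
# Standard Elo expected-score formula: P(A preferred over B) = 1 / (1 + 10 ** ((elo_b - elo_a) / 400)).
def expected_win_rate(elo_a: float, elo_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400))

print(round(expected_win_rate(1371.4, 1358.2), 3))  # ~0.519: the +13.2 edge is roughly a 52/48 split
print(round(expected_win_rate(1371.4, 1285.8), 3))  # ~0.62 versus GPT-4 (older v0314)
```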
GPQA diamond
DeepSeek V3 leads by +3.3
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
DeepSeek V3: 42.0 · Claude 3.5 Sonnet: 38.7 · GPT-4 (older v0314): 14.3
MMLU
DeepSeek V3 leads by +0.9
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
DeepSeek V3: 82.9 · Claude 3.5 Sonnet: 82.0 · GPT-4 (older v0314): 81.9
OTIS Mock AIME 2024-2025
DeepSeek V3 leads by +9.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
DeepSeek V3: 15.8 · Claude 3.5 Sonnet: 6.4 · GPT-4 (older v0314): 0.5
Aider · Code Editing
Claude 3.5 Sonnet leads by +18.0
Claude 3.5 Sonnet: 84.2 · GPT-4 (older v0314): 66.2
Aider polyglot
Claude 3.5 Sonnet leads by +3.2
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
DeepSeek V3: 48.4 · Claude 3.5 Sonnet: 51.6
FrontierMath-2025-02-28-Private
DeepSeek V3 leads by +0.7
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
DeepSeek V3: 1.7 · Claude 3.5 Sonnet: 1.0
HELM · GPQA
Claude 3.5 Sonnet leads by +2.7
DeepSeek V3: 53.8 · Claude 3.5 Sonnet: 56.5
HELM · IFEval
Claude 3.5 Sonnet leads by +2.4
DeepSeek V3: 83.2 · Claude 3.5 Sonnet: 85.6
HELM · MMLU-Pro
Claude 3.5 Sonnet leads by +5.4
DeepSeek V3: 72.3 · Claude 3.5 Sonnet: 77.7
HELM · Omni-MATH
DeepSeek V3 leads by +12.7
DeepSeek V3: 40.3 · Claude 3.5 Sonnet: 27.6
HELM · WildBench
DeepSeek V3 leads by +3.9
DeepSeek V3: 83.1 · Claude 3.5 Sonnet: 79.2
Lech Mazur Writing
Claude 3.5 Sonnet leads by +3.3
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
DeepSeek V3: 77.0 · Claude 3.5 Sonnet: 80.3
MATH level 5
DeepSeek V3 leads by +13.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
DeepSeek V3: 64.8 · Claude 3.5 Sonnet: 51.7
SimpleBench
Claude 3.5 Sonnet leads by +10.3
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
DeepSeek V3: 2.7 · Claude 3.5 Sonnet: 13.0
WeirdML
DeepSeek V3 leads by +5.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
DeepSeek V3: 36.1 · Claude 3.5 Sonnet: 31.0
Winogrande
GPT-4 (older v0314) leads by +4.6
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
DeepSeek V3: 70.4 · GPT-4 (older v0314): 75.0
Full benchmark table
Benchmark · DeepSeek V3 · Claude 3.5 Sonnet · GPT-4 (older v0314)
Chatbot Arena Elo (Overall) · 1358.2 · 1371.4 · 1285.8
GPQA diamond · 42.0 · 38.7 · 14.3
MMLU · 82.9 · 82.0 · 81.9
OTIS Mock AIME 2024-2025 · 15.8 · 6.4 · 0.5
Aider Code Editing · n/a · 84.2 · 66.2
Aider polyglot · 48.4 · 51.6 · n/a
FrontierMath-2025-02-28-Private · 1.7 · 1.0 · n/a
HELM GPQA · 53.8 · 56.5 · n/a
HELM IFEval · 83.2 · 85.6 · n/a
HELM MMLU-Pro · 72.3 · 77.7 · n/a
HELM Omni-MATH · 40.3 · 27.6 · n/a
HELM WildBench · 83.1 · 79.2 · n/a
Lech Mazur Writing · 77.0 · 80.3 · n/a
MATH level 5 · 64.8 · 51.7 · n/a
SimpleBench · 2.7 · 13.0 · n/a
WeirdML · 36.1 · 31.0 · n/a
Winogrande · 70.4 · n/a · 75.0
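The "leads by" margins in the head-to-head section are simply the gap between the top two reported scores on each benchmark. A small sketch over two rows of the table above (a couple of the page's margins differ from the rounded scores by 0.1, presumably because they were computed from unrounded values):

```python
# Sketch: derive leader and margin per benchmark from the table above. None = not reported.
table = {
    "GPQA diamond": {"DeepSeek V3": 42.0, "Claude 3.5 Sonnet": 38.7, "GPT-4 (older v0314)": 14.3},
    "SimpleBench":  {"DeepSeek V3": 2.7,  "Claude 3.5 Sonnet": 13.0, "GPT-4 (older v0314)": None},
}

for bench, row in table.items():
    reported = sorted(((score, model) for model, score in row.items() if score is not None), reverse=True)
    (best, leader), (runner_up, _) = reported[0], reported[1]
    print(f"{bench}: {leader} leads by +{best - runner_up:.1f}")
# GPQA diamond: DeepSeek V3 leads by +3.3
# SimpleBench: Claude 3.5 Sonnet leads by +10.3
```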
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
DeepSeek V3 · $0.32 · $0.89 · 164K tokens (~82 books) · $4.63
Claude 3.5 Sonnet · no price listed
GPT-4 (older v0314) · $30.00 · $60.00 · 8K tokens (~4 books) · $375.00
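The projected $/mo column is consistent with a 10M-token month split roughly 75% input / 25% output; that split is an assumption here, not something the page states.

```python
# Sketch: projected monthly cost at 10M tokens, assuming a 75% input / 25% output split.
# Prices are $ per 1M tokens, taken from the pricing table above.
def monthly_cost(input_price: float, output_price: float,
                 total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1.0 - input_share)
    return input_m * input_price + output_m * output_price

print(monthly_cost(0.32, 0.89))    # ≈ 4.625, shown as $4.63 for DeepSeek V3
print(monthly_cost(30.00, 60.00))  # 375.0, shown as $375.00 for GPT-4 (older v0314)
```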