
DeepSeek V3 vs DeepSeek-V2 (MoE-236B, May 2024)

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

DeepSeek V3 wins 6 of 7 shared benchmarks. Leads in knowledge · reasoning.

Category leads
Knowledge · DeepSeek V3
Reasoning · DeepSeek V3
Hype vs Reality
DeepSeek V3 · #43 by perf · no signal · QUIET
DeepSeek-V2 (MoE-236B, May 2024) · #8 by perf · no signal · QUIET
Best value
DeepSeek V3 · 97.5 pts/$ · $0.60/M
DeepSeek-V2 (MoE-236B, May 2024) · no price listed
Vendor risk
Vendor flagged. Both models come from the same vendor: DeepSeek · $3.4B · Tier 1 · Higher risk.
Head to head
ARC AI2 · DeepSeek V3 leads by +4.1
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
DeepSeek V3 93.7 · DeepSeek-V2 (MoE-236B, May 2024) 89.6

BBH · DeepSeek V3 leads by +11.6
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
DeepSeek V3 83.3 · DeepSeek-V2 (MoE-236B, May 2024) 71.7

HellaSwag · DeepSeek V3 leads by +2.4
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
DeepSeek V3 85.2 · DeepSeek-V2 (MoE-236B, May 2024) 82.8

MMLU · DeepSeek V3 leads by +11.7
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
DeepSeek V3 82.9 · DeepSeek-V2 (MoE-236B, May 2024) 71.2

PIQA · DeepSeek V3 leads by +1.6
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
DeepSeek V3 69.4 · DeepSeek-V2 (MoE-236B, May 2024) 67.8

TriviaQA · DeepSeek V3 leads by +2.9
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
DeepSeek V3 82.9 · DeepSeek-V2 (MoE-236B, May 2024) 80.0

Winogrande · DeepSeek-V2 (MoE-236B, May 2024) leads by +2.2
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
DeepSeek V3 70.4 · DeepSeek-V2 (MoE-236B, May 2024) 72.6
Full benchmark table
Benchmark · DeepSeek V3 · DeepSeek-V2 (MoE-236B, May 2024)
ARC AI2 · 93.7 · 89.6
BBH · 83.3 · 71.7
HellaSwag · 85.2 · 82.8
MMLU · 82.9 · 71.2
PIQA · 69.4 · 67.8
TriviaQA · 82.9 · 80.0
Winogrande · 70.4 · 72.6
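
The margins quoted in the head-to-head cards are plain score differences, so the whole section can be sanity-checked in a few lines. A minimal Python sketch, with the scores copied straight from the table above (no site API involved):

```python
# A minimal sketch: recompute each head-to-head margin and the headline
# win count from the shared-benchmark scores in the table above.
scores = {
    # benchmark: (DeepSeek V3, DeepSeek-V2)
    "ARC AI2":    (93.7, 89.6),
    "BBH":        (83.3, 71.7),
    "HellaSwag":  (85.2, 82.8),
    "MMLU":       (82.9, 71.2),
    "PIQA":       (69.4, 67.8),
    "TriviaQA":   (82.9, 80.0),
    "Winogrande": (70.4, 72.6),
}

for name, (v3, v2) in scores.items():
    leader = "DeepSeek V3" if v3 > v2 else "DeepSeek-V2"
    print(f"{name}: {leader} leads by +{abs(v3 - v2):.1f}")

v3_wins = sum(1 for v3, v2 in scores.values() if v3 > v2)
print(f"DeepSeek V3 wins {v3_wins} of {len(scores)} shared benchmarks")
```

Running it reproduces the +1.6 through +11.7 margins and the 6-of-7 headline, with Winogrande as DeepSeek-V2's only win.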
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
DeepSeek V3 · $0.32 · $0.89 · 164K tokens (~82 books) · $4.63
DeepSeek-V2 (MoE-236B, May 2024) · no price listed
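
The page doesn't document how it blends input and output prices, but the numbers are easy to reconstruct: the $4.63 projection at 10M tokens is consistent with an assumed 75/25 input/output split, and the $0.60/M rate on the Best value card is consistent with a plain 50/50 average of the two prices. A minimal sketch, with both split ratios labeled as assumptions:

```python
# A minimal sketch of the pricing math. The 75/25 input/output split and
# the 50/50 blend are assumptions; the page does not document either.
INPUT_PER_M = 0.32    # DeepSeek V3 input price, $ per 1M tokens
OUTPUT_PER_M = 0.89   # DeepSeek V3 output price, $ per 1M tokens

def projected_monthly_cost(total_tokens_m: float, input_share: float) -> float:
    """Dollar cost for total_tokens_m million tokens at the given input share."""
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m - input_m
    return input_m * INPUT_PER_M + output_m * OUTPUT_PER_M

# 10M tokens/month at the assumed 75/25 split lands on the $4.63 shown above.
print(f"${projected_monthly_cost(10, 0.75):.2f}/mo")

# A plain average of input and output prices is $0.605/M, which the
# Best value card appears to round to $0.60/M.
print(f"${(INPUT_PER_M + OUTPUT_PER_M) / 2:.3f}/M")
```

Swap in your own input share for a better estimate: long-prompt, short-reply workloads skew toward input, which is the cheaper side here, so their real monthly cost comes in below the projection.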