
DeepSeek-V2 (MoE-236B, May 2024) vs phi-3-medium 14B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

DeepSeek-V2 (MoE-236B, May 2024) wins 5 of the 6 shared benchmarks and leads in knowledge; phi-3-medium 14B takes the one reasoning benchmark, BBH.

Category leads
Knowledge · DeepSeek-V2 (MoE-236B, May 2024)
Reasoning · phi-3-medium 14B
Hype vs Reality
DeepSeek-V2 (MoE-236B, May 2024) · #8 by performance · no hype signal (quiet)
phi-3-medium 14B · #46 by performance · no hype signal (quiet)
Best value
No public pricing is listed for either model, so no value comparison is available.
Vendor risk · DeepSeek is flagged as higher risk
DeepSeek · $3.4B · Tier 1 · Higher risk
Microsoft · $3.00T · Big Tech · Low risk
Head to head
ARC AI2 · AI2 Reasoning Challenge · tests grade-school science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
DeepSeek-V2 (MoE-236B, May 2024): 89.6 · phi-3-medium 14B: 88.8 · DeepSeek-V2 leads by +0.8

BBH · BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench on which language models previously failed to outperform average humans.
DeepSeek-V2 (MoE-236B, May 2024): 71.7 · phi-3-medium 14B: 75.2 · phi-3-medium 14B leads by +3.5

HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
DeepSeek-V2 (MoE-236B, May 2024): 82.8 · phi-3-medium 14B: 76.5 · DeepSeek-V2 leads by +6.3

MMLU · Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more; the standard benchmark for broad knowledge.
DeepSeek-V2 (MoE-236B, May 2024): 71.2 · phi-3-medium 14B: 70.7 · DeepSeek-V2 leads by +0.5

TriviaQA · reading-comprehension benchmark built from trivia questions, requiring models to find and reason over evidence in provided documents.
DeepSeek-V2 (MoE-236B, May 2024): 80.0 · phi-3-medium 14B: 73.9 · DeepSeek-V2 leads by +6.1

Winogrande · WinoGrande · large-scale commonsense reasoning benchmark in which models must resolve ambiguous pronouns in carefully constructed sentence pairs.
DeepSeek-V2 (MoE-236B, May 2024): 72.6 · phi-3-medium 14B: 63.0 · DeepSeek-V2 leads by +9.6
Full benchmark table
Benchmark · DeepSeek-V2 (MoE-236B, May 2024) · phi-3-medium 14B
ARC AI2 · 89.6 · 88.8
BBH · 71.7 · 75.2
HellaSwag · 82.8 · 76.5
MMLU · 71.2 · 70.7
TriviaQA · 80.0 · 73.9
Winogrande · 72.6 · 63.0
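For readers who want to reproduce the summary, here is a minimal Python sketch that derives each per-benchmark lead and the 5-of-6 winner count from the table above. The data structure and names are illustrative assumptions, not this page's actual pipeline.

```python
# Derive per-benchmark leads and the winner summary from the shared scores.
scores = {
    "ARC AI2":    (89.6, 88.8),
    "BBH":        (71.7, 75.2),
    "HellaSwag":  (82.8, 76.5),
    "MMLU":       (71.2, 70.7),
    "TriviaQA":   (80.0, 73.9),
    "Winogrande": (72.6, 63.0),
}
models = ("DeepSeek-V2 (MoE-236B, May 2024)", "phi-3-medium 14B")

wins = [0, 0]
for bench, (a, b) in scores.items():
    leader = 0 if a > b else 1  # ties (none here) would count for model B
    wins[leader] += 1
    print(f"{bench}: {models[leader]} leads by +{abs(a - b):.1f}")

print(f"{models[0]} wins {wins[0]} of {len(scores)} shared benchmarks.")
```

Running this prints the same leads shown in the head-to-head section (+0.8, +3.5, +6.3, +0.5, +6.1, +9.6) and the "wins 5 of 6" summary.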
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
DeepSeek-V2 (MoE-236B, May 2024) · not listed · not listed · not listed · n/a
phi-3-medium 14B · not listed · not listed · not listed · n/a
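The projected $/mo column is simple arithmetic: per-1M-token rates scaled to the 10M-token monthly volume. Since neither model lists prices here, the sketch below uses hypothetical rates, and the 50/50 input/output split is an assumption; the page does not state how it blends the two rates.

```python
# Hedged sketch of the "projected $/mo at 10M tokens" calculation.
# The rates below are hypothetical placeholders, not real vendor prices.
def projected_monthly_cost(input_per_1m: float, output_per_1m: float,
                           tokens_per_month: float = 10_000_000,
                           output_share: float = 0.5) -> float:
    """Blend input/output rates (assumed 50/50 split) over the monthly volume."""
    input_tokens = tokens_per_month * (1 - output_share)
    output_tokens = tokens_per_month * output_share
    return (input_tokens * input_per_1m + output_tokens * output_per_1m) / 1_000_000

# Example with made-up rates of $0.14 in / $0.28 out per 1M tokens:
print(f"${projected_monthly_cost(0.14, 0.28):.2f}/mo")  # -> $2.10/mo
```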