
DeepSeek-V2 (MoE-236B, May 2024) vs Llama 2-13B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

DeepSeek-V2 (MoE-236B, May 2024) wins all 7 shared benchmarks and leads in both the knowledge and reasoning categories.

Category leads
knowledge · DeepSeek-V2 (MoE-236B, May 2024)
reasoning · DeepSeek-V2 (MoE-236B, May 2024)
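
The win count and category leads above follow directly from the head-to-head scores further down the page. A minimal sketch of that tally, assuming a hypothetical mapping of benchmarks to the knowledge and reasoning categories (the page does not spell the mapping out):

```python
# Sketch of how the winner summary could be tallied from the shared benchmarks.
# The category mapping below is an assumption for illustration; the page does
# not state which benchmarks feed "knowledge" vs "reasoning".

scores = {  # benchmark: (DeepSeek-V2, Llama 2-13B), from the head-to-head section
    "ARC AI2":    (89.6, 47.1),
    "BBH":        (71.7, 44.3),
    "HellaSwag":  (82.8, 74.3),
    "MMLU":       (71.2, 40.8),
    "PIQA":       (67.8, 61.6),
    "TriviaQA":   (80.0, 79.6),
    "Winogrande": (72.6, 45.6),
}

categories = {  # hypothetical grouping, for illustration only
    "knowledge": ["MMLU", "TriviaQA", "ARC AI2"],
    "reasoning": ["BBH", "HellaSwag", "PIQA", "Winogrande"],
}

# A model "wins" a benchmark if its score is strictly higher.
wins = sum(1 for a, b in scores.values() if a > b)
print(f"DeepSeek-V2 wins {wins} of {len(scores)} shared benchmarks")

# Per-benchmark margin, as shown in the head-to-head cards.
for name, (a, b) in scores.items():
    print(f"{name}: lead {a - b:+.1f}")

# Category lead = whichever model is ahead on net margin within the category.
for cat, benches in categories.items():
    net = sum(scores[b][0] - scores[b][1] for b in benches)
    leader = "DeepSeek-V2" if net > 0 else "Llama 2-13B"
    print(f"{cat} lead: {leader}")
```
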
Hype vs Reality
DeepSeek-V2 (MoE-236B, May 2024) · #8 by performance · no hype signal (quiet)
Llama 2-13B · #126 by performance · no hype signal (quiet)
Best value
DeepSeek-V2 (MoE-236B, May 2024) · no price listed
Llama 2-13B · no price listed
Vendor risk
One or more vendors flagged
DeepSeek · $3.4B · Tier 1 · Higher risk
Meta AI · $1.50T · Tier 1 · Low risk
Head to head
DeepSeek-V2 (MoE-236B, May 2024) · Llama 2-13B
ARC AI2
DeepSeek-V2 (MoE-236B, May 2024) leads by +42.5
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
DeepSeek-V2 (MoE-236B, May 2024): 89.6 · Llama 2-13B: 47.1
BBH
DeepSeek-V2 (MoE-236B, May 2024) leads by +27.4
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
DeepSeek-V2 (MoE-236B, May 2024): 71.7 · Llama 2-13B: 44.3
HellaSwag
DeepSeek-V2 (MoE-236B, May 2024) leads by +8.5
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
DeepSeek-V2 (MoE-236B, May 2024): 82.8 · Llama 2-13B: 74.3
MMLU
DeepSeek-V2 (MoE-236B, May 2024) leads by +30.4
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
DeepSeek-V2 (MoE-236B, May 2024): 71.2 · Llama 2-13B: 40.8
PIQA
DeepSeek-V2 (MoE-236B, May 2024) leads by +6.2
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
DeepSeek-V2 (MoE-236B, May 2024): 67.8 · Llama 2-13B: 61.6
TriviaQA
DeepSeek-V2 (MoE-236B, May 2024) leads by +0.4
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
DeepSeek-V2 (MoE-236B, May 2024): 80.0 · Llama 2-13B: 79.6
Winogrande
DeepSeek-V2 (MoE-236B, May 2024) leads by +27.0
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
DeepSeek-V2 (MoE-236B, May 2024): 72.6 · Llama 2-13B: 45.6
Full benchmark table
Benchmark · DeepSeek-V2 (MoE-236B, May 2024) · Llama 2-13B
ARC AI2 · 89.6 · 47.1
BBH · 71.7 · 44.3
HellaSwag · 82.8 · 74.3
MMLU · 71.2 · 40.8
PIQA · 67.8 · 61.6
TriviaQA · 80.0 · 79.6
Winogrande · 72.6 · 45.6
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
DeepSeek-V2 (MoE-236B, May 2024) · no price listed
Llama 2-13B · no price listed
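
Neither model has list pricing here, so the projected column stays empty, but the projection itself is simple arithmetic on per-1M-token rates. A minimal sketch, assuming a hypothetical 50/50 split between input and output tokens and made-up example rates (the page states neither the split it uses nor any prices):

```python
# Sketch of a projected monthly cost at 10M tokens from per-1M-token rates.
# The 50/50 input/output split and the example rates are assumptions for
# illustration; the page lists no prices for either model.

def projected_monthly_cost(input_per_1m: float, output_per_1m: float,
                           monthly_tokens: float = 10_000_000,
                           input_share: float = 0.5) -> float:
    """Blend input and output rates by the assumed token split, then scale."""
    blended_per_1m = input_share * input_per_1m + (1 - input_share) * output_per_1m
    return blended_per_1m * monthly_tokens / 1_000_000

# Example with made-up rates: $0.14 in / $0.28 out per 1M tokens.
print(f"${projected_monthly_cost(0.14, 0.28):.2f} per month at 10M tokens")
# -> $2.10 per month at 10M tokens
```
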