
Llama 2-13B vs LLaMA-13B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Llama 2-13B wins 10 of 11 shared benchmarks. Leads in knowledge · reasoning · math.

Category leads
knowledge · Llama 2-13B
reasoning · Llama 2-13B
math · Llama 2-13B
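How these category leads are derived isn't stated on the page, so here is a minimal sketch of one plausible derivation: group some of the shared benchmarks into categories and compare mean scores. The grouping and the CATEGORY_SCORES name are illustrative assumptions, not ModelsLive's methodology; the scores come from the table further down.

```python
# Sketch of one way the category leads could be tallied. The grouping below
# is an assumed, illustrative mapping of a few shared benchmarks to categories,
# not ModelsLive's actual methodology. Score pairs: (Llama 2-13B, LLaMA-13B).

CATEGORY_SCORES = {
    "knowledge": {"MMLU": (40.8, 30.3), "TriviaQA": (79.6, 77.9)},
    "reasoning": {"ARC AI2": (47.1, 36.9), "BBH": (44.3, 17.2)},
    "math":      {"GSM8K": (36.9, 20.6)},
}

for category, scores in CATEGORY_SCORES.items():
    a = sum(s[0] for s in scores.values()) / len(scores)  # Llama 2-13B mean
    b = sum(s[1] for s in scores.values()) / len(scores)  # LLaMA-13B mean
    print(category, "·", "Llama 2-13B" if a > b else "LLaMA-13B")
```

Run as-is, this prints the same three leads listed above; any reasonable grouping of the benchmarks below gives the same result.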
Hype vs Reality
Llama 2-13B · #128 by perf · no signal · QUIET
LLaMA-13B · #170 by perf · no signal · QUIET
Best value
Llama 2-13B · no price
LLaMA-13B · no price
Vendor risk
Both models: Meta AI · $1.50T · Tier 1 · Low risk
Head to head
ARC AI2
Llama 2-13B leads by +10.2
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Llama 2-13B 47.1 · LLaMA-13B 36.9
BBH
Llama 2-13B leads by +27.1
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Llama 2-13B 44.3 · LLaMA-13B 17.2
GSM8K
Llama 2-13B leads by +16.3
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
Llama 2-13B 36.9 · LLaMA-13B 20.6
HellaSwag
Llama 2-13B leads by +2.0
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
Llama 2-13B 74.3 · LLaMA-13B 72.3
LAMBADA
Llama 2-13B leads by +1.3
LAMBADA · measures the ability to predict the final word of a passage, requiring broad contextual understanding across long text spans.
Llama 2-13B 76.5 · LLaMA-13B 75.2
MMLU
Llama 2-13B leads by +10.5
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Llama 2-13B 40.8 · LLaMA-13B 30.3
OpenBookQA
Llama 2-13B leads by +0.8
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Llama 2-13B 42.7 · LLaMA-13B 41.9
PIQA
Llama 2-13B leads by +1.4
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
Llama 2-13B 61.6 · LLaMA-13B 60.2
ScienceQA
Llama 2-13B leads by +16.6
ScienceQA · multimodal science questions spanning natural science, social science, and language science with diverse question formats and image context.
Llama 2-13B 41.0 · LLaMA-13B 24.4
TriviaQA
Llama 2-13B leads by +1.7
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Llama 2-13B 79.6 · LLaMA-13B 77.9
Winogrande
LLaMA-13B leads by +0.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Llama 2-13B 45.6 · LLaMA-13B 46.0
Full benchmark table
Benchmark · Llama 2-13B · LLaMA-13B
ARC AI2 · 47.1 · 36.9
BBH · 44.3 · 17.2
GSM8K · 36.9 · 20.6
HellaSwag · 74.3 · 72.3
LAMBADA · 76.5 · 75.2
MMLU · 40.8 · 30.3
OpenBookQA · 42.7 · 41.9
PIQA · 61.6 · 60.2
ScienceQA · 41.0 · 24.4
TriviaQA · 79.6 · 77.9
Winogrande · 45.6 · 46.0
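The per-benchmark deltas and the 10-of-11 winner tally are plain subtraction and counting over this table. A minimal sketch, with the scores copied from the table above (the SCORES dict is scaffolding for illustration, not a ModelsLive API):

```python
# Derive the "leads by" deltas and the winner tally from the score table.
# Score pairs: (Llama 2-13B, LLaMA-13B), copied from the table above.

SCORES = {
    "ARC AI2":    (47.1, 36.9),
    "BBH":        (44.3, 17.2),
    "GSM8K":      (36.9, 20.6),
    "HellaSwag":  (74.3, 72.3),
    "LAMBADA":    (76.5, 75.2),
    "MMLU":       (40.8, 30.3),
    "OpenBookQA": (42.7, 41.9),
    "PIQA":       (61.6, 60.2),
    "ScienceQA":  (41.0, 24.4),
    "TriviaQA":   (79.6, 77.9),
    "Winogrande": (45.6, 46.0),
}

wins = {"Llama 2-13B": 0, "LLaMA-13B": 0}
for bench, (a, b) in SCORES.items():
    leader = "Llama 2-13B" if a > b else "LLaMA-13B"
    wins[leader] += 1
    print(f"{bench}: {leader} leads by +{abs(a - b):.1f}")

print(wins)  # {'Llama 2-13B': 10, 'LLaMA-13B': 1} -> "wins 10 of 11"
```

The output matches the summary at the top of the page: Llama 2-13B leads ten benchmarks, and LLaMA-13B leads only Winogrande.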
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Llama 2-13B · — · — · — · —
LLaMA-13B · — · — · — · —
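The projected column is just the per-1M-token prices scaled to the stated 10M tokens per month. Neither model lists a price on this page, so the prices and the even input/output split in this sketch are hypothetical placeholders, and the function name is invented for illustration:

```python
# Projected-monthly-cost arithmetic implied by the header above
# (per-1M-token prices, 10M tokens/month). Prices below are hypothetical
# placeholders; this page lists no prices for either model.

def projected_monthly_cost(input_price_per_1m: float,
                           output_price_per_1m: float,
                           input_tokens: float,
                           output_tokens: float) -> float:
    """Dollar cost for a month of usage at per-1M-token prices."""
    return (input_tokens / 1e6) * input_price_per_1m \
         + (output_tokens / 1e6) * output_price_per_1m

# Hypothetical example: $0.10 in / $0.20 out per 1M tokens,
# 10M tokens split evenly between input and output.
print(projected_monthly_cost(0.10, 0.20, 5e6, 5e6))  # -> 1.5, i.e. $1.50/mo
```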