
LLaMA-13B vs Llama 2-13B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Llama 2-13B wins 10 of 11 shared benchmarks. Leads in knowledge · reasoning · math.

Category leads
knowledge · Llama 2-13B
reasoning · Llama 2-13B
math · Llama 2-13B
Hype vs Reality
LLaMA-13B · #168 by perf · no signal · QUIET
Llama 2-13B · #126 by perf · no signal · QUIET
Best value
LLaMA-13B · no price
Llama 2-13B · no price
Vendor risk
Meta AI · $1.50T · Tier 1 · Low risk (vendor for both models)
Head to head
LLaMA-13B · Llama 2-13B
ARC AI2
Llama 2-13B leads by +10.2
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
LLaMA-13B 36.9 · Llama 2-13B 47.1
BBH
Llama 2-13B leads by +27.1
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
LLaMA-13B 17.2 · Llama 2-13B 44.3
GSM8K
Llama 2-13B leads by +16.3
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
LLaMA-13B 20.6 · Llama 2-13B 36.9
HellaSwag
Llama 2-13B leads by +2.0
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
LLaMA-13B 72.3 · Llama 2-13B 74.3
LAMBADA
Llama 2-13B leads by +1.3
LAMBADA · measures the ability to predict the final word of a passage, requiring broad contextual understanding across long text spans.
LLaMA-13B 75.2 · Llama 2-13B 76.5
MMLU
Llama 2-13B leads by +10.5
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
LLaMA-13B 30.3 · Llama 2-13B 40.8
OpenBookQA
Llama 2-13B leads by +0.8
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
LLaMA-13B 41.9 · Llama 2-13B 42.7
PIQA
Llama 2-13B leads by +1.4
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
LLaMA-13B 60.2 · Llama 2-13B 61.6
ScienceQA
Llama 2-13B leads by +16.6
ScienceQA · multimodal science questions spanning natural science, social science, and language science with diverse question formats and image context.
LLaMA-13B 24.4 · Llama 2-13B 41.0
TriviaQA
Llama 2-13B leads by +1.7
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
LLaMA-13B 77.9 · Llama 2-13B 79.6
Winogrande
LLaMA-13B leads by +0.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
LLaMA-13B 46.0 · Llama 2-13B 45.6
Full benchmark table
Benchmark     LLaMA-13B   Llama 2-13B
ARC AI2       36.9        47.1
BBH           17.2        44.3
GSM8K         20.6        36.9
HellaSwag     72.3        74.3
LAMBADA       75.2        76.5
MMLU          30.3        40.8
OpenBookQA    41.9        42.7
PIQA          60.2        61.6
ScienceQA     24.4        41.0
TriviaQA      77.9        79.6
Winogrande    46.0        45.6
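
To sanity-check the winner summary at the top, here is a minimal Python sketch that re-derives the tally from the table above. The dictionary simply transcribes the scores, and the sketch assumes higher is better on every benchmark listed (true of all eleven here):

```python
# Shared benchmark scores from the table above: (LLaMA-13B, Llama 2-13B).
scores = {
    "ARC AI2":    (36.9, 47.1),
    "BBH":        (17.2, 44.3),
    "GSM8K":      (20.6, 36.9),
    "HellaSwag":  (72.3, 74.3),
    "LAMBADA":    (75.2, 76.5),
    "MMLU":       (30.3, 40.8),
    "OpenBookQA": (41.9, 42.7),
    "PIQA":       (60.2, 61.6),
    "ScienceQA":  (24.4, 41.0),
    "TriviaQA":   (77.9, 79.6),
    "Winogrande": (46.0, 45.6),
}

# Count benchmarks where Llama 2-13B scores strictly higher.
llama2_wins = sum(1 for a, b in scores.values() if b > a)
print(f"Llama 2-13B wins {llama2_wins} of {len(scores)} shared benchmarks")

# Per-benchmark leader and margin, matching the head-to-head cards.
for name, (a, b) in scores.items():
    leader = "Llama 2-13B" if b > a else "LLaMA-13B"
    print(f"{name}: {leader} leads by {abs(b - a):+.1f}")
```

Run as-is, it reports Llama 2-13B winning 10 of 11 benchmarks, with Winogrande as the lone LLaMA-13B lead.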
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model         Input      Output     Context   Projected $/mo
LLaMA-13B     no price   no price   –         –
Llama 2-13B   no price   no price   –         –
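
Neither model lists per-token prices here, so the projected column is empty; the projection itself is simple arithmetic. A minimal sketch of how such a figure could be computed, where the $0.10/$0.20 rates and the 50/50 input/output split are purely hypothetical placeholders, not real pricing for either model:

```python
def projected_monthly_cost(input_price_per_1m: float,
                           output_price_per_1m: float,
                           monthly_tokens: int = 10_000_000,
                           input_share: float = 0.5) -> float:
    """Projected $/mo from per-1M-token rates at a given monthly volume.

    input_share (the fraction of monthly tokens that are input tokens)
    is an assumption for illustration; the page does not state a split.
    """
    input_tokens = monthly_tokens * input_share
    output_tokens = monthly_tokens - input_tokens
    return (input_tokens * input_price_per_1m
            + output_tokens * output_price_per_1m) / 1_000_000

# Hypothetical rates -- neither model lists a price on this page.
print(f"${projected_monthly_cost(0.10, 0.20):.2f}/mo")  # -> $1.50/mo
```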