
Falcon-180B vs Llama 2-13B vs LLaMA-13B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Falcon-180B wins 12 of the 18 benchmarks compared here. Leads in knowledge · math · language.

Category leads
knowledge · Falcon-180B
reasoning · Llama 2-13B
math · Falcon-180B
general · LLaMA-13B
language · Falcon-180B
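The winner summary and lead margins are plain tallies over the score table. As a rough illustration, here is a minimal Python sketch that recomputes the "wins 12 of 18" figure and each benchmark's lead margin from the scores in the full benchmark table below; the only rule assumed is that a benchmark goes to the model with the highest reported score.

```python
from collections import Counter

# Scores copied from the full benchmark table below; a model absent from a
# row has no reported score on that benchmark.
SCORES = {
    "ARC AI2":           {"Falcon-180B": 57.1, "Llama 2-13B": 47.1, "LLaMA-13B": 36.9},
    "BBH":               {"Falcon-180B": 16.1, "Llama 2-13B": 44.3, "LLaMA-13B": 17.2},
    "GSM8K":             {"Falcon-180B": 54.4, "Llama 2-13B": 36.9, "LLaMA-13B": 20.6},
    "HellaSwag":         {"Falcon-180B": 85.3, "Llama 2-13B": 74.3, "LLaMA-13B": 72.3},
    "LAMBADA":           {"Falcon-180B": 79.8, "Llama 2-13B": 76.5, "LLaMA-13B": 75.2},
    "MMLU":              {"Falcon-180B": 60.8, "Llama 2-13B": 40.8, "LLaMA-13B": 30.3},
    "OpenBookQA":        {"Falcon-180B": 52.3, "Llama 2-13B": 42.7, "LLaMA-13B": 41.9},
    "PIQA":              {"Falcon-180B": 69.8, "Llama 2-13B": 61.6, "LLaMA-13B": 60.2},
    "TriviaQA":          {"Falcon-180B": 79.9, "Llama 2-13B": 79.6, "LLaMA-13B": 77.9},
    "Winogrande":        {"Falcon-180B": 74.2, "Llama 2-13B": 45.6, "LLaMA-13B": 46.0},
    "CMMLU":             {"Falcon-180B": 41.5, "LLaMA-13B": 39.8},
    "BBH (HuggingFace)": {"Falcon-180B": 21.9, "LLaMA-13B": 25.3},
    "GPQA":              {"Falcon-180B": 2.8,  "LLaMA-13B": 3.5},
    "IFEval":            {"Falcon-180B": 32.6, "LLaMA-13B": 25.3},
    "MATH Level 5":      {"Falcon-180B": 2.8,  "LLaMA-13B": 3.1},
    "MMLU-PRO":          {"Falcon-180B": 15.4, "LLaMA-13B": 23.1},
    "MUSR":              {"Falcon-180B": 7.5,  "LLaMA-13B": 2.0},
    "ScienceQA":         {"Llama 2-13B": 41.0, "LLaMA-13B": 24.4},
}

wins = Counter()
for bench, row in SCORES.items():
    ranked = sorted(row.items(), key=lambda kv: kv[1], reverse=True)
    leader, top = ranked[0]
    margin = top - ranked[1][1]          # lead over the runner-up
    wins[leader] += 1
    print(f"{bench}: {leader} leads by +{margin:.1f}")

print(dict(wins))  # {'Falcon-180B': 12, 'Llama 2-13B': 2, 'LLaMA-13B': 4}
```

Margins recomputed from these rounded scores can differ from the page's figures by 0.1 (e.g. HellaSwag), presumably because the site rounds from higher-precision values.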
Hype vs Reality
Falcon-180B · #119 by perf · no signal · QUIET
Llama 2-13B · #128 by perf · no signal · QUIET
LLaMA-13B · #170 by perf · no signal · QUIET
Best value
Falcon-180B · no price
Llama 2-13B · no price
LLaMA-13B · no price
Vendor risk
Falcon-180B · TII · private · undisclosed · Unknown
Llama 2-13B · Meta AI · $1.50T · Tier 1 · Low risk
LLaMA-13B · Meta AI · $1.50T · Tier 1 · Low risk
Head to head
Falcon-180B · Llama 2-13B · LLaMA-13B
ARC AI2
Falcon-180B leads by +10.0
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Falcon-180B 57.1 · Llama 2-13B 47.1 · LLaMA-13B 36.9
BBH
Llama 2-13B leads by +27.1
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Falcon-180B 16.1 · Llama 2-13B 44.3 · LLaMA-13B 17.2
GSM8K
Falcon-180B leads by +17.5
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
Falcon-180B 54.4 · Llama 2-13B 36.9 · LLaMA-13B 20.6
HellaSwag
Falcon-180B leads by +11.1
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
Falcon-180B 85.3 · Llama 2-13B 74.3 · LLaMA-13B 72.3
LAMBADA
Falcon-180B leads by +3.3
LAMBADA · measures the ability to predict the final word of a passage, requiring broad contextual understanding across long text spans.
Falcon-180B 79.8 · Llama 2-13B 76.5 · LLaMA-13B 75.2
MMLU
Falcon-180B leads by +20.0
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Falcon-180B 60.8 · Llama 2-13B 40.8 · LLaMA-13B 30.3
OpenBookQA
Falcon-180B leads by +9.6
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Falcon-180B 52.3 · Llama 2-13B 42.7 · LLaMA-13B 41.9
PIQA
Falcon-180B leads by +8.2
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
Falcon-180B 69.8 · Llama 2-13B 61.6 · LLaMA-13B 60.2
TriviaQA
Falcon-180B leads by +0.3
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Falcon-180B 79.9 · Llama 2-13B 79.6 · LLaMA-13B 77.9
Winogrande
Falcon-180B leads by +28.2
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Falcon-180B 74.2 · Llama 2-13B 45.6 · LLaMA-13B 46.0
CMMLU
Falcon-180B leads by +1.7
Falcon-180B 41.5 · LLaMA-13B 39.8
BBH (HuggingFace)
LLaMA-13B leads by +3.3
Falcon-180B 21.9 · LLaMA-13B 25.3
GPQA
LLaMA-13B leads by +0.7
Falcon-180B 2.8 · LLaMA-13B 3.5
IFEval
Falcon-180B leads by +7.3
Falcon-180B 32.6 · LLaMA-13B 25.3
MATH Level 5
LLaMA-13B leads by +0.3
Falcon-180B 2.8 · LLaMA-13B 3.1
MMLU-PRO
LLaMA-13B leads by +7.6
Falcon-180B 15.4 · LLaMA-13B 23.1
MUSR
Falcon-180B leads by +5.6
Falcon-180B 7.5 · LLaMA-13B 2.0
ScienceQA
Llama 2-13B leads by +16.6
ScienceQA · multimodal science questions spanning natural science, social science, and language science with diverse question formats and image context.
Llama 2-13B 41.0 · LLaMA-13B 24.4
Full benchmark table
Benchmark · Falcon-180B · Llama 2-13B · LLaMA-13B
ARC AI2 · 57.1 · 47.1 · 36.9
BBH · 16.1 · 44.3 · 17.2
GSM8K · 54.4 · 36.9 · 20.6
HellaSwag · 85.3 · 74.3 · 72.3
LAMBADA · 79.8 · 76.5 · 75.2
MMLU · 60.8 · 40.8 · 30.3
OpenBookQA · 52.3 · 42.7 · 41.9
PIQA · 69.8 · 61.6 · 60.2
TriviaQA · 79.9 · 79.6 · 77.9
Winogrande · 74.2 · 45.6 · 46.0
CMMLU · 41.5 · n/a · 39.8
BBH (HuggingFace) · 21.9 · n/a · 25.3
GPQA · 2.8 · n/a · 3.5
IFEval · 32.6 · n/a · 25.3
MATH Level 5 · 2.8 · n/a · 3.1
MMLU-PRO · 15.4 · n/a · 23.1
MUSR · 7.5 · n/a · 2.0
ScienceQA · n/a · 41.0 · 24.4
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Falcon-180B · no price
Llama 2-13B · no price
LLaMA-13B · no price
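The projected column is plain arithmetic: per-1M-token rates scaled to a 10M-token month. None of the three models lists a price here, so the sketch below only illustrates the formula; the `projected_monthly_cost` helper, the example rates, and the 50/50 input/output split are assumptions, not figures from this page.

```python
# Projected $/mo at a 10M-token monthly volume, from per-1M-token rates.
# The rates and the 50/50 input/output split below are HYPOTHETICAL;
# none of the three models on this page lists pricing.

MONTHLY_TOKENS = 10_000_000
INPUT_SHARE = 0.5  # assumed fraction of the monthly volume that is input

def projected_monthly_cost(input_price_per_1m: float,
                           output_price_per_1m: float,
                           monthly_tokens: int = MONTHLY_TOKENS,
                           input_share: float = INPUT_SHARE) -> float:
    """Scale per-1M-token rates to a monthly token volume."""
    input_tokens = monthly_tokens * input_share
    output_tokens = monthly_tokens * (1 - input_share)
    return (input_tokens / 1e6) * input_price_per_1m \
        + (output_tokens / 1e6) * output_price_per_1m

# Made-up example rates: $0.20 per 1M input tokens, $0.60 per 1M output.
print(f"${projected_monthly_cost(0.20, 0.60):.2f}/mo")  # -> $4.00/mo
```

Once rates are published, dropping them into the two price arguments reproduces the table's projection directly.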