# Falcon-180B vs Llama 2-13B vs LLaMA-13B

Side by side: benchmarks, pricing, and signals you can act on.
## Winner summary

Falcon-180B wins 12 of the 18 benchmarks compared, leading in the knowledge, math, and language categories.
### Category leads

| Category | Leader |
|---|---|
| knowledge | Falcon-180B |
| reasoning | Llama 2-13B |
| math | Falcon-180B |
| general | LLaMA-13B |
| language | Falcon-180B |
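These figures follow mechanically from the score table further down: on each benchmark, the model with the highest score among those evaluated takes the win, and wins are tallied per model (the same rule within each category group yields the leaders above). A minimal sketch in Python over a four-benchmark subset; the `SCORES` layout and `winner` helper are illustrative names, not this page's actual code:

```python
# Minimal sketch: derive per-benchmark winners and the win tally from the
# score table. Four-benchmark subset; SCORES and winner() are illustrative
# names (assumptions), not code from this page.
from collections import Counter

SCORES: dict[str, dict[str, float]] = {
    # benchmark -> {model: score}; a missing model means no published result
    "ARC AI2": {"Falcon-180B": 57.1, "Llama 2-13B": 47.1, "LLaMA-13B": 36.9},
    "BBH":     {"Falcon-180B": 16.1, "Llama 2-13B": 44.3, "LLaMA-13B": 17.2},
    "GSM8K":   {"Falcon-180B": 54.4, "Llama 2-13B": 36.9, "LLaMA-13B": 20.6},
    "MMLU":    {"Falcon-180B": 60.8, "Llama 2-13B": 40.8, "LLaMA-13B": 30.3},
}

def winner(by_model: dict[str, float]) -> str:
    """Model with the highest score among those that were evaluated."""
    return max(by_model, key=by_model.get)

tally = Counter(winner(row) for row in SCORES.values())
print(tally)  # Counter({'Falcon-180B': 3, 'Llama 2-13B': 1}) on this subset
```

Applying the same rule across all 18 rows of the full benchmark table reproduces the 12-win count.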
## Hype vs Reality

Attention versus performance: all three models rank outside the top 100 by performance, and none currently shows an attention signal.

| Model | Rank by performance | Attention signal |
|---|---|---|
| Falcon-180B | #119 | no signal |
| Llama 2-13B | #128 | no signal |
| LLaMA-13B | #170 | no signal |
## Best value

No value comparison is possible: none of the three models has a published per-token price.
## Vendor risk

Who is behind each model:

| Model | Vendor | Backing |
|---|---|---|
| Falcon-180B | TII (Technology Innovation Institute) | private · undisclosed |
| Llama 2-13B | Meta AI | $1.50T · Tier 1 |
| LLaMA-13B | Meta AI | $1.50T · Tier 1 |
## Head to head

18 benchmarks · 3 models.
**ARC AI2** · Falcon-180B leads by +10.0
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Falcon-180B 57.1 · Llama 2-13B 47.1 · LLaMA-13B 36.9

**BBH** · Llama 2-13B leads by +27.1
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Falcon-180B 16.1 · Llama 2-13B 44.3 · LLaMA-13B 17.2

**GSM8K** · Falcon-180B leads by +17.5
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
Falcon-180B 54.4 · Llama 2-13B 36.9 · LLaMA-13B 20.6

**HellaSwag** · Falcon-180B leads by +11.1
Tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
Falcon-180B 85.3 · Llama 2-13B 74.3 · LLaMA-13B 72.3

**LAMBADA** · Falcon-180B leads by +3.3
Measures the ability to predict the final word of a passage, requiring broad contextual understanding across long text spans.
Falcon-180B 79.8 · Llama 2-13B 76.5 · LLaMA-13B 75.2

**MMLU** · Falcon-180B leads by +20.0
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Falcon-180B 60.8 · Llama 2-13B 40.8 · LLaMA-13B 30.3

**OpenBookQA** · Falcon-180B leads by +9.6
Science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Falcon-180B 52.3 · Llama 2-13B 42.7 · LLaMA-13B 41.9

**PIQA** · Falcon-180B leads by +8.2
Physical Interaction QA · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
Falcon-180B 69.8 · Llama 2-13B 61.6 · LLaMA-13B 60.2

**TriviaQA** · Falcon-180B leads by +0.3
Reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Falcon-180B 79.9 · Llama 2-13B 79.6 · LLaMA-13B 77.9

**Winogrande** · Falcon-180B leads by +28.2
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Falcon-180B 74.2 · Llama 2-13B 45.6 · LLaMA-13B 46.0

**CMMLU** · Falcon-180B leads by +1.7
Falcon-180B 41.5 · LLaMA-13B 39.8

**BBH (HuggingFace)** · LLaMA-13B leads by +3.3
Falcon-180B 21.9 · LLaMA-13B 25.3

**GPQA** · LLaMA-13B leads by +0.7
Falcon-180B 2.8 · LLaMA-13B 3.5

**IFEval** · Falcon-180B leads by +7.3
Falcon-180B 32.6 · LLaMA-13B 25.3

**MATH Level 5** · LLaMA-13B leads by +0.3
Falcon-180B 2.8 · LLaMA-13B 3.1

**MMLU-PRO** · LLaMA-13B leads by +7.6
Falcon-180B 15.4 · LLaMA-13B 23.1

**MUSR** · Falcon-180B leads by +5.6
Falcon-180B 7.5 · LLaMA-13B 2.0

**ScienceQA** · Llama 2-13B leads by +16.6
Multimodal science questions spanning natural science, social science, and language science with diverse question formats and image context.
Llama 2-13B 41.0 · LLaMA-13B 24.4
## Full benchmark table

| Benchmark | Falcon-180B | Llama 2-13B | LLaMA-13B |
|---|---|---|---|
| ARC AI2 | 57.1 | 47.1 | 36.9 |
| BBH | 16.1 | 44.3 | 17.2 |
| GSM8K | 54.4 | 36.9 | 20.6 |
| HellaSwag | 85.3 | 74.3 | 72.3 |
| LAMBADA | 79.8 | 76.5 | 75.2 |
| MMLU | 60.8 | 40.8 | 30.3 |
| OpenBookQA | 52.3 | 42.7 | 41.9 |
| PIQA | 69.8 | 61.6 | 60.2 |
| TriviaQA | 79.9 | 79.6 | 77.9 |
| Winogrande | 74.2 | 45.6 | 46.0 |
| CMMLU | 41.5 | — | 39.8 |
| BBH (HuggingFace) | 21.9 | — | 25.3 |
| GPQA | 2.8 | — | 3.5 |
| IFEval | 32.6 | — | 25.3 |
| MATH Level 5 | 2.8 | — | 3.1 |
| MMLU-PRO | 15.4 | — | 23.1 |
| MUSR | 7.5 | — | 2.0 |
| ScienceQA | — | 41.0 | 24.4 |
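Each "leads by +Δ" figure in the head-to-head section is the gap between the top score and the runner-up among the models with a result for that row. A short sketch of that rule; `lead_margin` is an illustrative helper, not this page's code. A few on-page margins differ from the displayed scores by 0.1 (e.g. MUSR), which suggests they are computed from unrounded values before display:

```python
# Sketch: "leads by +delta" = top score minus runner-up score, over the models
# that have a result for the benchmark. lead_margin() is an illustrative name.

def lead_margin(scores: dict[str, float]) -> tuple[str, float]:
    """Return (leader, margin over the runner-up) for one benchmark row."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    leader, top = ranked[0]
    runner_up = ranked[1][1]
    return leader, round(top - runner_up, 1)

# The Winogrande row above: Falcon-180B leads by +28.2 over LLaMA-13B (46.0),
# the runner-up, not over Llama 2-13B (45.6).
print(lead_margin({"Falcon-180B": 74.2, "Llama 2-13B": 45.6, "LLaMA-13B": 46.0}))
# -> ('Falcon-180B', 28.2)
```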
## Pricing · per 1M tokens · projected $/mo at 10M tokens

| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Falcon-180B | — | — | — | — |
| Llama 2-13B | — | — | — | — |
| LLaMA-13B | — | — | — | — |
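For reference, the last column presumably just scales the per-1M-token prices to the 10M-token monthly volume. A sketch under stated assumptions: the 50/50 input/output split and the `projected_monthly_cost` helper are illustrative, since the page does not state how it mixes input and output tokens, and none of these three models has a published price to plug in:

```python
# Sketch of the "Projected $/mo" column at 10M tokens/month. The 50/50
# input/output split is an assumption; the page does not state its mix.

def projected_monthly_cost(input_per_mtok: float, output_per_mtok: float,
                           monthly_mtok: float = 10.0,
                           input_share: float = 0.5) -> float:
    """Projected monthly spend in dollars for a given monthly token volume."""
    input_cost = input_per_mtok * monthly_mtok * input_share
    output_cost = output_per_mtok * monthly_mtok * (1.0 - input_share)
    return round(input_cost + output_cost, 2)

# Hypothetical prices only: none of the three models has a published price.
print(projected_monthly_cost(input_per_mtok=0.50, output_per_mtok=1.50))  # 10.0
```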