GPT-4 Turbo vs Llama 3.1 405B vs Falcon-180B
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Llama 3.1 405B wins 11 of 21 shared benchmarks, with leads in reasoning, knowledge, and math.
Category leads
reasoning · Llama 3.1 405B
knowledge · Llama 3.1 405B
math · Llama 3.1 405B
general · Falcon-180B
language · Falcon-180B
coding · Llama 3.1 405B
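For readers who want to reproduce the win count and category leads, the sketch below shows one way to tally per-benchmark leaders from a score table like the one further down. It is an illustration, not this site's actual scoring pipeline; the three benchmarks included are just an excerpt of the full table.

```python
# Illustrative sketch (not the site's actual pipeline): count per-benchmark wins
# from a table of scores. Scores are taken from the comparison table below;
# models with no reported score on a benchmark are simply absent from that row.

scores = {
    "BBH":       {"GPT-4 Turbo": 66.8, "Llama 3.1 405B": 77.2, "Falcon-180B": 16.1},
    "HellaSwag": {"GPT-4 Turbo": 93.7, "Llama 3.1 405B": 85.6, "Falcon-180B": 85.3},
    "MMLU":      {"GPT-4 Turbo": 76.5, "Llama 3.1 405B": 79.3, "Falcon-180B": 60.8},
}

wins = {}
for bench, results in scores.items():
    leader = max(results, key=results.get)   # model with the top score on this benchmark
    wins[leader] = wins.get(leader, 0) + 1

print(wins)  # {'Llama 3.1 405B': 2, 'GPT-4 Turbo': 1}
```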
Hype vs Reality
Attention vs performance
GPT-4 Turbo · #90 by performance · no signal
Llama 3.1 405B · #153 by performance · no signal
Falcon-180B · #119 by performance · no signal
Best value
GPT-4 Turbo · 2.5 pts/$ · $20.00/M
Llama 3.1 405B · no price
Falcon-180B · no price
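The page does not state how the pts/$ figure is derived. A plausible reading, sketched below, is an average benchmark score divided by a blended per-million-token price: the $20.00/M shown for GPT-4 Turbo matches the midpoint of its $10 input / $30 output rates, and an average score around 50 would then yield 2.5 pts/$. Treat both the formula and the example score as assumptions.

```python
# Hedged sketch of one way a "points per dollar" value score could be computed.
# Assumption: value = (average benchmark score) / (blended $ per 1M tokens).
# The page does not publish its exact formula; the example score is illustrative.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Midpoint of input and output prices per 1M tokens."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(avg_score: float, price_per_m: float) -> float:
    return avg_score / price_per_m

gpt4_turbo_price = blended_price(10.00, 30.00)    # 20.0, matches the $20.00/M shown
print(points_per_dollar(50.0, gpt4_turbo_price))  # 2.5 -> an average score of 50 gives 2.5 pts/$
```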
Vendor risk
Who is behind the model
OpenAI (GPT-4 Turbo) · $840.0B · Tier 1
Meta AI (Llama 3.1 405B) · $1.50T · Tier 1
TII (Falcon-180B) · private · undisclosed
Head to head
21 benchmarks · 3 models
GPT-4 Turbo · Llama 3.1 405B · Falcon-180B
BBH
Llama 3.1 405B leads by +10.4
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
GPT-4 Turbo 66.8 · Llama 3.1 405B 77.2 · Falcon-180B 16.1
HellaSwag
GPT-4 Turbo leads by +8.1
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
GPT-4 Turbo 93.7 · Llama 3.1 405B 85.6 · Falcon-180B 85.3
MMLU
Llama 3.1 405B leads by +2.8
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4 Turbo 76.5 · Llama 3.1 405B 79.3 · Falcon-180B 60.8
TriviaQA
GPT-4 Turbo leads by +2.1
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
GPT-4 Turbo 84.8 · Llama 3.1 405B 82.7 · Falcon-180B 79.9
Winogrande
Llama 3.1 405B leads by +3.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
GPT-4 Turbo 75.0 · Llama 3.1 405B 78.4 · Falcon-180B 74.2
ARC AI2
Llama 3.1 405B leads by +36.7
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Llama 3.1 405B 93.7 · Falcon-180B 57.1
CMMLU
GPT-4 Turbo leads by +29.5
GPT-4 Turbo 71.0 · Falcon-180B 41.5
GPQA diamond
Llama 3.1 405B leads by +27.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4 Turbo 7.5 · Llama 3.1 405B 34.5
GSM8K
GPT-4 Turbo leads by +35.6
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
GPT-4 Turbo 90.0 · Falcon-180B 54.4
BBH (HuggingFace)
Falcon-180B leads by +14.2
Llama 3.1 405B 7.8 · Falcon-180B 21.9
GPQA
Llama 3.1 405B leads by +3.1
Llama 3.1 405B 5.9 · Falcon-180B 2.8
IFEval
Falcon-180B leads by +14.5
Llama 3.1 405B 18.1 · Falcon-180B 32.6
MATH Level 5
Falcon-180B leads by +2.8
Llama 3.1 405B 0.0 · Falcon-180B 2.8
MMLU-PRO
Llama 3.1 405B leads by +10.2
Llama 3.1 405B 25.7 · Falcon-180B 15.4
MUSR
Falcon-180B leads by +5.3
Llama 3.1 405B 2.2 · Falcon-180B 7.5
MATH level 5
Llama 3.1 405B leads by +26.8
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4 Turbo 23.0 · Llama 3.1 405B 49.8
OpenBookQA
Falcon-180B leads by +20.0
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Llama 3.1 405B 32.3 · Falcon-180B 52.3
OTIS Mock AIME 2024-2025
Llama 3.1 405B leads by +8.6
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4 Turbo 1.0 · Llama 3.1 405B 9.6
PIQA
Llama 3.1 405B leads by +2.0
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
Llama 3.1 405B 71.8 · Falcon-180B 69.8
SimpleBench
GPT-4 Turbo leads by +2.5
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-4 Turbo 10.1 · Llama 3.1 405B 7.6
WeirdML
Llama 3.1 405B leads by +8.9
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4 Turbo 12.4 · Llama 3.1 405B 21.4
Full benchmark table
| Benchmark | GPT-4 Turbo | Llama 3.1 405B | Falcon-180B |
|---|---|---|---|
| BBH | 66.8 | 77.2 | 16.1 |
| HellaSwag | 93.7 | 85.6 | 85.3 |
| MMLU | 76.5 | 79.3 | 60.8 |
| TriviaQA | 84.8 | 82.7 | 79.9 |
| Winogrande | 75.0 | 78.4 | 74.2 |
| ARC AI2 | — | 93.7 | 57.1 |
| CMMLU | 71.0 | — | 41.5 |
| GPQA diamond | 7.5 | 34.5 | — |
| GSM8K | 90.0 | — | 54.4 |
| BBH (HuggingFace) | — | 7.8 | 21.9 |
| GPQA | — | 5.9 | 2.8 |
| IFEval | — | 18.1 | 32.6 |
| MATH Level 5 | — | 0.0 | 2.8 |
| MMLU-PRO | — | 25.7 | 15.4 |
| MUSR | — | 2.2 | 7.5 |
| MATH level 5 | 23.0 | 49.8 | — |
| OpenBookQA | — | 32.3 | 52.3 |
| OTIS Mock AIME 2024-2025 | 1.0 | 9.6 | — |
| PIQA | — | 71.8 | 69.8 |
| SimpleBench | 10.1 | 7.6 | — |
| WeirdML | 12.4 | 21.4 | — |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4 Turbo | $10.00 | $30.00 | 128K tokens (~64 books) | $150.00 |
| Llama 3.1 405B | — | — | — | — |
| Falcon-180B | — | — | — | — |
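The projected monthly cost depends on how the 10M tokens split between input and output, which the table does not state. The sketch below assumes a 75% input / 25% output split, which is one split that reproduces the $150.00 figure from GPT-4 Turbo's $10/$30 per-1M-token prices; the split is an assumption, not something the page confirms.

```python
# Minimal sketch of the monthly-cost projection, assuming a 75% input / 25% output
# token split. The split is not stated on the page; 75/25 happens to reproduce the
# $150.00 figure for GPT-4 Turbo's $10 input / $30 output per-1M-token prices.

def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           monthly_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    input_cost = monthly_tokens_m * input_share * input_per_m
    output_cost = monthly_tokens_m * (1 - input_share) * output_per_m
    return input_cost + output_cost

print(projected_monthly_cost(10.00, 30.00))  # 150.0 -> matches the table's $150.00/mo
```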