Llama 3.1 405B vs Claude 3 Opus vs GPT-4 (older v0314)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Llama 3.1 405B wins 4 of 8 shared benchmarks, leading in the knowledge and math categories.
Category leads

| Category | Leader |
|---|---|
| Knowledge | Llama 3.1 405B |
| Math | Llama 3.1 405B |
| Coding | Claude 3 Opus |
| Reasoning | Claude 3 Opus |
Hype vs Reality
Attention vs performance

| Model | Rank by performance | Attention signal |
|---|---|---|
| Llama 3.1 405B | #153 | no signal |
| Claude 3 Opus | #174 | no signal |
| GPT-4 (older v0314) | #72 | no signal |
Best value
GPT-4 (older v0314)

| Model | Value | Blended price |
|---|---|---|
| Llama 3.1 405B | — | no price |
| Claude 3 Opus | — | no price |
| GPT-4 (older v0314) | 1.2 pts/$ | $45.00/M |
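A minimal sketch of how figures like these can be derived: a 50/50 blend of input and output rates reproduces the $45.00/M above, while the pts/$ score divides some benchmark aggregate by that blended price. The mean-of-scores aggregation below is our assumption, not the page's published formula (it yields ~0.95 pts/$, not the 1.2 shown, so the site presumably weights benchmarks differently):

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    # 50/50 average of input and output $/1M-token rates -- reproduces the $45.00/M above.
    return (input_per_m + output_per_m) / 2

# 'pts/$' sketch: benchmark aggregate divided by blended price.
# ASSUMPTION: mean-of-scores aggregation; the page's actual formula is not shown,
# and this naive version yields ~0.95 pts/$ for GPT-4, not the reported 1.2.
gpt4_scores = [14.3, 81.9, 0.5, 75.0]  # GPQA diamond, MMLU, OTIS Mock AIME, Winogrande
value = (sum(gpt4_scores) / len(gpt4_scores)) / blended_price(30.00, 60.00)
print(f"${blended_price(30.00, 60.00):.2f}/M · {value:.2f} pts/$")  # $45.00/M · 0.95 pts/$
```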
Vendor risk
Who is behind each model

| Model | Vendor | Valuation | Risk tier |
|---|---|---|---|
| Llama 3.1 405B | Meta AI | $1.50T | Tier 1 |
| Claude 3 Opus | Anthropic | $380.0B | Tier 1 |
| GPT-4 (older v0314) | OpenAI | $840.0B | Tier 1 |
Head to head
8 benchmarks · 3 models
Llama 3.1 405B · Claude 3 Opus · GPT-4 (older v0314)
GPQA diamond
Llama 3.1 405B leads by +4.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Llama 3.1 405B 34.5 · Claude 3 Opus 29.6 · GPT-4 (older v0314) 14.3
MMLU
GPT-4 (older v0314) leads by +2.4
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Llama 3.1 405B 79.3 · Claude 3 Opus 79.5 · GPT-4 (older v0314) 81.9
OTIS Mock AIME 2024-2025
Llama 3.1 405B leads by +5.0
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Llama 3.1 405B 9.6 · Claude 3 Opus 4.6 · GPT-4 (older v0314) 0.5
Winogrande
Llama 3.1 405B leads by +1.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Llama 3.1 405B 78.4 · Claude 3 Opus 77.0 · GPT-4 (older v0314) 75.0
Cybench
Claude 3 Opus leads by +2.5
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Llama 3.1 405B 7.5 · Claude 3 Opus 10.0
MATH level 5
Llama 3.1 405B leads by +12.3
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Llama 3.1 405B 49.8 · Claude 3 Opus 37.5
SimpleBench
Claude 3 Opus leads by +0.6
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Llama 3.1 405B 7.6 · Claude 3 Opus 8.2
WeirdML
Claude 3 Opus leads by +1.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Llama 3.1 405B 21.4 · Claude 3 Opus 23.2
Full benchmark table
| Benchmark | Llama 3.1 405B | Claude 3 Opus | GPT-4 (older v0314) |
|---|---|---|---|
| GPQA diamond | 34.5 | 29.6 | 14.3 |
| MMLU | 79.3 | 79.5 | 81.9 |
| OTIS Mock AIME 2024-2025 | 9.6 | 4.6 | 0.5 |
| Winogrande | 78.4 | 77.0 | 75.0 |
| Cybench | 7.5 | 10.0 | — |
| MATH level 5 | 49.8 | 37.5 | — |
| SimpleBench | 7.6 | 8.2 | — |
| WeirdML | 21.4 | 23.2 | — |
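The "4 of 8 shared benchmarks" tally in the winner summary follows directly from this table. A minimal sketch (the `scores` dict simply mirrors the rows above, with unscored benchmarks omitted for GPT-4):

```python
from collections import Counter

# Scores mirror the benchmark table; benchmarks a model was not scored on are omitted.
scores = {
    "GPQA diamond":             {"Llama 3.1 405B": 34.5, "Claude 3 Opus": 29.6, "GPT-4 (older v0314)": 14.3},
    "MMLU":                     {"Llama 3.1 405B": 79.3, "Claude 3 Opus": 79.5, "GPT-4 (older v0314)": 81.9},
    "OTIS Mock AIME 2024-2025": {"Llama 3.1 405B": 9.6,  "Claude 3 Opus": 4.6,  "GPT-4 (older v0314)": 0.5},
    "Winogrande":               {"Llama 3.1 405B": 78.4, "Claude 3 Opus": 77.0, "GPT-4 (older v0314)": 75.0},
    "Cybench":                  {"Llama 3.1 405B": 7.5,  "Claude 3 Opus": 10.0},
    "MATH level 5":             {"Llama 3.1 405B": 49.8, "Claude 3 Opus": 37.5},
    "SimpleBench":              {"Llama 3.1 405B": 7.6,  "Claude 3 Opus": 8.2},
    "WeirdML":                  {"Llama 3.1 405B": 21.4, "Claude 3 Opus": 23.2},
}

# A model "wins" a benchmark by holding the top score among models scored on it.
wins = Counter(max(row, key=row.get) for row in scores.values())
print(wins)  # Llama 3.1 405B: 4, Claude 3 Opus: 3, GPT-4 (older v0314): 1
```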
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Llama 3.1 405B | — | — | — | — |
| Claude 3 Opus | — | — | — | — |
| GPT-4 (older v0314) | $30.00 | $60.00 | 8K tokens (~4 books) | $375.00 |
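The $375.00 projection is consistent with a 10M-token month billed at a 3:1 input:output split. That split is an inference (it reproduces the figure exactly) rather than something the page states. A quick check:

```python
def monthly_cost(total_m_tokens: float, input_per_m: float, output_per_m: float,
                 input_share: float = 0.75) -> float:
    # ASSUMPTION: 3:1 input:output token split -- inferred because it reproduces
    # the $375.00 projection; the actual split is not stated on the page.
    return (total_m_tokens * input_share * input_per_m
            + total_m_tokens * (1 - input_share) * output_per_m)

print(monthly_cost(10, 30.00, 60.00))  # 375.0 -- matches the projected $/mo column
```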