
Llama 3.1 405B vs Claude 3 Opus vs GPT-4 (older v0314)

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Llama 3.1 405B wins 4 of the 8 shared benchmarks, leading in the knowledge and math categories.

Category leads
Knowledge · Llama 3.1 405B
Math · Llama 3.1 405B
Coding · Claude 3 Opus
Reasoning · Claude 3 Opus
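The "4 of 8" tally above can be reproduced from the scores in the benchmark table on this page. A minimal sketch, assuming the winner of each benchmark is counted only among the models that have a published score for it:

```python
# Scores copied from the comparison table on this page; a missing entry
# means the model has no published score for that benchmark.
scores = {
    "GPQA diamond":             {"Llama 3.1 405B": 34.5, "Claude 3 Opus": 29.6, "GPT-4 (older v0314)": 14.3},
    "MMLU":                     {"Llama 3.1 405B": 79.3, "Claude 3 Opus": 79.5, "GPT-4 (older v0314)": 81.9},
    "OTIS Mock AIME 2024-2025": {"Llama 3.1 405B": 9.6,  "Claude 3 Opus": 4.6,  "GPT-4 (older v0314)": 0.5},
    "Winogrande":               {"Llama 3.1 405B": 78.4, "Claude 3 Opus": 77.0, "GPT-4 (older v0314)": 75.0},
    "Cybench":                  {"Llama 3.1 405B": 7.5,  "Claude 3 Opus": 10.0},
    "MATH level 5":             {"Llama 3.1 405B": 49.8, "Claude 3 Opus": 37.5},
    "SimpleBench":              {"Llama 3.1 405B": 7.6,  "Claude 3 Opus": 8.2},
    "WeirdML":                  {"Llama 3.1 405B": 21.4, "Claude 3 Opus": 23.2},
}

def win_counts(scores):
    """Count, per model, how many benchmarks it tops among the models scored there."""
    wins = {}
    for bench, by_model in scores.items():
        leader = max(by_model, key=by_model.get)
        wins[leader] = wins.get(leader, 0) + 1
    return wins

print(win_counts(scores))  # Llama 3.1 405B wins 4, Claude 3 Opus 3, GPT-4 1
```

Note that GPT-4 (older v0314) is only scored on 4 of the 8 benchmarks, so "shared" here means shared by at least two of the picked models.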
Hype vs Reality
Llama 3.1 405B · #153 by performance · no social signal · quiet
Claude 3 Opus · #174 by performance · no social signal · quiet
GPT-4 (older v0314) · #72 by performance · no social signal · quiet
Best value
Llama 3.1 405B · no price listed
Claude 3 Opus · no price listed
GPT-4 (older v0314) · 1.2 pts/$ · $45.00/M
Vendor risk
Meta AI · $1.50T · Tier 1 · Low risk
Anthropic · $380.0B · Tier 1 · Medium risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Llama 3.1 405B · Claude 3 Opus · GPT-4 (older v0314)
GPQA diamond
Llama 3.1 405B leads by +4.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Llama 3.1 405B
34.5
Claude 3 Opus
29.6
GPT-4 (older v0314)
14.3
MMLU
GPT-4 (older v0314) leads by +2.4
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Llama 3.1 405B
79.3
Claude 3 Opus
79.5
GPT-4 (older v0314)
81.9
OTIS Mock AIME 2024-2025
Llama 3.1 405B leads by +5.0
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Llama 3.1 405B
9.6
Claude 3 Opus
4.6
GPT-4 (older v0314)
0.5
Winogrande
Llama 3.1 405B leads by +1.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Llama 3.1 405B
78.4
Claude 3 Opus
77.0
GPT-4 (older v0314)
75.0
Cybench
Claude 3 Opus leads by +2.5
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Llama 3.1 405B
7.5
Claude 3 Opus
10.0
MATH level 5
Llama 3.1 405B leads by +12.3
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Llama 3.1 405B
49.8
Claude 3 Opus
37.5
SimpleBench
Claude 3 Opus leads by +0.6
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Llama 3.1 405B
7.6
Claude 3 Opus
8.2
WeirdML
Claude 3 Opus leads by +1.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Llama 3.1 405B
21.4
Claude 3 Opus
23.2
Full benchmark table
Benchmark · Llama 3.1 405B · Claude 3 Opus · GPT-4 (older v0314)
GPQA diamond · 34.5 · 29.6 · 14.3
MMLU · 79.3 · 79.5 · 81.9
OTIS Mock AIME 2024-2025 · 9.6 · 4.6 · 0.5
Winogrande · 78.4 · 77.0 · 75.0
Cybench · 7.5 · 10.0 · n/a
MATH level 5 · 49.8 · 37.5 · n/a
SimpleBench · 7.6 · 8.2 · n/a
WeirdML · 21.4 · 23.2 · n/a
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Llama 3.1 405B · no price listed · no price listed · n/a · n/a
Claude 3 Opus · no price listed · no price listed · n/a · n/a
GPT-4 (older v0314) · $30.00 · $60.00 · 8K tokens · $375.00
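The projected monthly figure implies an assumed input/output token split that the page does not state. A minimal sketch: at GPT-4 (older v0314)'s listed $30.00/$60.00 per 1M token rates, a 75/25 input/output split of 10M monthly tokens reproduces the $375.00 projection (the split itself is my assumption, not a documented methodology):

```python
def monthly_cost(input_per_m, output_per_m, total_tokens_m=10.0, input_share=0.75):
    """Projected monthly spend given per-1M-token rates and an assumed
    input/output split. The 75/25 default split is an assumption chosen
    to match the page's $375.00 projection; the page does not state it."""
    in_tokens = total_tokens_m * input_share          # millions of input tokens
    out_tokens = total_tokens_m * (1 - input_share)   # millions of output tokens
    return in_tokens * input_per_m + out_tokens * output_per_m

# GPT-4 (older v0314): $30.00 input / $60.00 output per 1M tokens
print(monthly_cost(30.0, 60.0))  # 375.0
```

A heavier output share raises the projection quickly, since output tokens cost twice as much here, so the split assumption matters when budgeting.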