
Llama 3.1 405B vs phi-3-medium 14B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Llama 3.1 405B wins 8 of 9 shared benchmarks. Leads in knowledge · reasoning · math.

Category leads
knowledge · Llama 3.1 405B
reasoning · Llama 3.1 405B
math · Llama 3.1 405B
Hype vs Reality
Llama 3.1 405B · #153 by perf · no signal · QUIET
phi-3-medium 14B · #48 by perf · no signal · QUIET
Best value
Llama 3.1 405B · no price
phi-3-medium 14B · no price
Vendor risk
Meta AI · $1.50T · Tier 1 · Low risk
Microsoft · $3.00T · Big Tech · Low risk
Head to head
ARC AI2
Llama 3.1 405B leads by +4.9
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Llama 3.1 405B: 93.7 · phi-3-medium 14B: 88.8
BBH
Llama 3.1 405B leads by +2.0
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Llama 3.1 405B: 77.2 · phi-3-medium 14B: 75.2
GPQA diamond
Llama 3.1 405B leads by +31.0
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Llama 3.1 405B: 34.5 · phi-3-medium 14B: 3.5
HellaSwag
Llama 3.1 405B leads by +9.1
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
Llama 3.1 405B: 85.6 · phi-3-medium 14B: 76.5
MATH level 5
Llama 3.1 405B leads by +32.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Llama 3.1 405B: 49.8 · phi-3-medium 14B: 17.6
MMLU
Llama 3.1 405B leads by +8.6
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Llama 3.1 405B: 79.3 · phi-3-medium 14B: 70.7
OpenBookQA
phi-3-medium 14B leads by +50.9
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Llama 3.1 405B: 32.3 · phi-3-medium 14B: 83.2
TriviaQA
Llama 3.1 405B leads by +8.8
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Llama 3.1 405B: 82.7 · phi-3-medium 14B: 73.9
Winogrande
Llama 3.1 405B leads by +15.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Llama 3.1 405B: 78.4 · phi-3-medium 14B: 63.0
Full benchmark table
Benchmark descriptions are given in the head-to-head section above.

Benchmark        Llama 3.1 405B   phi-3-medium 14B
ARC AI2          93.7             88.8
BBH              77.2             75.2
GPQA diamond     34.5             3.5
HellaSwag        85.6             76.5
MATH level 5     49.8             17.6
MMLU             79.3             70.7
OpenBookQA       32.3             83.2
TriviaQA         82.7             73.9
Winogrande       78.4             63.0
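As a sanity check, the per-benchmark leads and the 8-of-9 win count in the winner summary can be recomputed from the table above. A minimal sketch follows; it assumes the site's rule is simply "higher score wins each shared benchmark", which is an inference from the displayed data, not published aggregation logic.

```python
# Scores copied from the full benchmark table above:
# (Llama 3.1 405B, phi-3-medium 14B) per benchmark.
scores = {
    "ARC AI2": (93.7, 88.8),
    "BBH": (77.2, 75.2),
    "GPQA diamond": (34.5, 3.5),
    "HellaSwag": (85.6, 76.5),
    "MATH level 5": (49.8, 17.6),
    "MMLU": (79.3, 70.7),
    "OpenBookQA": (32.3, 83.2),
    "TriviaQA": (82.7, 73.9),
    "Winogrande": (78.4, 63.0),
}

MODELS = ("Llama 3.1 405B", "phi-3-medium 14B")
wins = [0, 0]

for bench, (a, b) in scores.items():
    winner = 0 if a > b else 1          # higher score wins (assumed rule)
    wins[winner] += 1
    print(f"{bench}: {MODELS[winner]} leads by +{abs(a - b):.1f}")

print(f"{MODELS[0]} wins {wins[0]} of {len(scores)} shared benchmarks.")
```

Run on the table above, this reproduces the head-to-head margins (e.g. +4.9 on ARC AI2, +50.9 for phi-3-medium on OpenBookQA) and the "8 of 9" summary.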
Pricing · per 1M tokens · projected $/mo at 10M tokens

Model              Input     Output    Context   Projected $/mo
Llama 3.1 405B     no price  no price  n/a       n/a
phi-3-medium 14B   no price  no price  n/a       n/a
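Neither model lists pricing here, but the column header implies the projection: monthly cost = per-1M-token price × monthly volume in millions, summed over input and output tokens. A minimal sketch below; the prices and the 50/50 input/output split are hypothetical placeholders, since the page states neither.

```python
def projected_monthly_cost(input_price: float, output_price: float,
                           total_tokens_m: float = 10.0,
                           input_share: float = 0.5) -> float:
    """Project monthly spend from per-1M-token prices.

    input_price / output_price: USD per 1M tokens.
    total_tokens_m: monthly volume in millions of tokens (10M per the header).
    input_share: assumed fraction of volume that is input tokens
                 (the page does not state its split; 50/50 is a placeholder).
    """
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1.0 - input_share)
    return input_price * input_m + output_price * output_m

# Hypothetical prices, since neither model lists pricing on this page:
print(f"${projected_monthly_cost(3.00, 5.00):.2f}/mo at 10M tokens")  # $40.00/mo
```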