Llama 3 8B Instruct vs phi-3-medium 14B
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
phi-3-medium 14B wins 7 of 8 shared benchmarks, leading in the knowledge and math categories (a tally sketch follows the category leads below).
Category leads
knowledge · phi-3-medium 14B
math · phi-3-medium 14B
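A minimal sketch of how this tally can be derived from the shared scores listed further down the page. It assumes higher is better on every benchmark shown, which holds for all eight; the knowledge/math category groupings are the page's own labels and are not recomputed here.

```python
# Tally head-to-head wins from the eight shared benchmark scores on this page.
# Score tuples are (Llama 3 8B Instruct, phi-3-medium 14B); higher is better.
scores = {
    "ANLI": (36.0, 33.7),
    "ARC AI2": (77.1, 88.8),
    "GPQA diamond": (1.4, 3.5),
    "MATH level 5": (6.1, 17.6),
    "MMLU": (58.4, 70.7),
    "OpenBookQA": (76.8, 83.2),
    "TriviaQA": (67.7, 73.9),
    "Winogrande": (51.4, 63.0),
}

phi_wins = sum(phi > llama for llama, phi in scores.values())
print(f"phi-3-medium 14B wins {phi_wins} of {len(scores)} shared benchmarks")  # 7 of 8

# Per-benchmark lead margins, matching the head-to-head cards below.
for name, (llama, phi) in scores.items():
    leader = "phi-3-medium 14B" if phi > llama else "Llama 3 8B Instruct"
    print(f"{name}: {leader} leads by +{abs(phi - llama):.1f}")
```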
Hype vs Reality
Attention vs performance
Llama 3 8B Instruct · #182 by perf · no signal
phi-3-medium 14B · #46 by perf · no signal
Best value
Llama 3 8B Instruct · 880.0 pts/$ · $0.04/M
phi-3-medium 14B · no price data
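The pts/$ figure presumably divides some aggregate benchmark score by a per-token price, but the page does not document its formula. A sketch of the general idea, assuming a plain mean of the eight shared scores and the listed $0.04/M price; these assumptions do not reproduce the page's 880.0 exactly, so the real aggregate or price blend evidently differs.

```python
# Illustrative value-for-money metric: aggregate score per dollar of tokens.
# Assumptions (not documented by the page): "pts" is the mean of the eight
# shared benchmark scores, and the price is the listed $0.04 per 1M tokens.
llama_scores = [36.0, 77.1, 1.4, 6.1, 58.4, 76.8, 67.7, 51.4]

def points_per_dollar(scores: list[float], price_per_m_tokens: float) -> float:
    """Mean benchmark score divided by the price per 1M tokens."""
    return (sum(scores) / len(scores)) / price_per_m_tokens

print(f"{points_per_dollar(llama_scores, 0.04):.1f} pts/$")  # ~1171.6, vs the page's 880.0
```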
Vendor risk
Who is behind the model
Meta AI · $1.50T · Tier 1
Microsoft · $3.00T · Big Tech
Head to head
8 benchmarks · 2 models
ANLI
Llama 3 8B Instruct leads by +2.3
ANLI (Adversarial NLI) · adversarially constructed natural language inference dataset where each round targets weaknesses found in previous model generations.
Llama 3 8B Instruct: 36.0 · phi-3-medium 14B: 33.7
ARC AI2
phi-3-medium 14B leads by +11.7
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Llama 3 8B Instruct: 77.1 · phi-3-medium 14B: 88.8
GPQA diamond
phi-3-medium 14B leads by +2.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Llama 3 8B Instruct: 1.4 · phi-3-medium 14B: 3.5
MATH level 5
phi-3-medium 14B leads by +11.5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Llama 3 8B Instruct: 6.1 · phi-3-medium 14B: 17.6
MMLU
phi-3-medium 14B leads by +12.3
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Llama 3 8B Instruct: 58.4 · phi-3-medium 14B: 70.7
OpenBookQA
phi-3-medium 14B leads by +6.4
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Llama 3 8B Instruct: 76.8 · phi-3-medium 14B: 83.2
TriviaQA
phi-3-medium 14B leads by +6.2
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Llama 3 8B Instruct: 67.7 · phi-3-medium 14B: 73.9
Winogrande
phi-3-medium 14B leads by +11.6
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Llama 3 8B Instruct: 51.4 · phi-3-medium 14B: 63.0
Full benchmark table
| Benchmark | Llama 3 8B Instruct | phi-3-medium 14B |
|---|---|---|
| ANLI | 36.0 | 33.7 |
| ARC AI2 | 77.1 | 88.8 |
| GPQA diamond | 1.4 | 3.5 |
| MATH level 5 | 6.1 | 17.6 |
| MMLU | 58.4 | 70.7 |
| OpenBookQA | 76.8 | 83.2 |
| TriviaQA | 67.7 | 73.9 |
| Winogrande | 51.4 | 63.0 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Llama 3 8B Instruct | $0.03 | $0.04 | 8K tokens (~4 books) | $0.33 |
| phi-3-medium 14B | — | — | — | — |
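The projected $/mo column follows from the per-1M-token prices and an assumed monthly traffic mix. A sketch under one plausible assumption, a 10M-token month split 3:1 between input and output, which lands on the page's $0.33 figure for Llama 3 8B Instruct; the page does not state its actual split.

```python
# Project monthly spend from per-1M-token prices and a token budget.
# Assumption (not stated by the page): 10M tokens/month, 75% input / 25% output.

def monthly_cost(input_price: float, output_price: float,
                 total_m_tokens: float = 10.0, input_share: float = 0.75) -> float:
    """Cost in dollars for total_m_tokens (millions) at the given per-1M prices."""
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1.0 - input_share)
    return input_m * input_price + output_m * output_price

print(f"${monthly_cost(0.03, 0.04):.3f}/mo")  # $0.325, which the page shows as $0.33
```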