phi-3-medium 14B vs phi-3-small 7.4B
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
phi-3-medium 14B wins 5 of 8 shared benchmarks, with 2 losses to phi-3-small 7.4B and 1 tie. Leads in knowledge · reasoning.
Category leads
knowledge · phi-3-medium 14B
reasoning · phi-3-medium 14B
Hype vs Reality
Attention vs performance
phi-3-medium 14B · #46 by perf · no signal
phi-3-small 7.4B · #23 by perf · no signal
Vendor risk
Who is behind the model
phi-3-medium 14B · Microsoft · $3.00T · Big Tech
phi-3-small 7.4B · Microsoft · $3.00T · Big Tech
Head to head
8 benchmarks · 2 models
phi-3-medium 14B · phi-3-small 7.4B
ANLI
phi-3-small 7.4B leads by +3.4
ANLI (Adversarial NLI) · adversarially constructed natural language inference dataset where each round targets weaknesses found in previous model generations.
phi-3-medium 14B
33.7
phi-3-small 7.4B
37.1
ARC AI2
phi-3-medium 14B leads by +1.2
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
phi-3-medium 14B
88.8
phi-3-small 7.4B
87.6
BBH
phi-3-medium 14B leads by +3.1
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
phi-3-medium 14B
75.2
phi-3-small 7.4B
72.1
HellaSwag
phi-3-medium 14B leads by +7.2
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
phi-3-medium 14B
76.5
phi-3-small 7.4B
69.3
MMLU
phi-3-medium 14B leads by +3.1
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
phi-3-medium 14B
70.7
phi-3-small 7.4B
67.6
OpenBookQA
phi-3-small 7.4B leads by +0.8
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
phi-3-medium 14B
83.2
phi-3-small 7.4B
84.0
TriviaQA
phi-3-medium 14B leads by +15.8
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
phi-3-medium 14B
73.9
phi-3-small 7.4B
58.1
Winogrande
Tied at 63.0
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
phi-3-medium 14B
63.0
phi-3-small 7.4B
63.0
Full benchmark table
| Benchmark | phi-3-medium 14B | phi-3-small 7.4B |
|---|---|---|
| ANLI | 33.7 | 37.1 |
| ARC AI2 | 88.8 | 87.6 |
| BBH | 75.2 | 72.1 |
| HellaSwag | 76.5 | 69.3 |
| MMLU | 70.7 | 67.6 |
| OpenBookQA | 83.2 | 84.0 |
| TriviaQA | 73.9 | 58.1 |
| Winogrande | 63.0 | 63.0 |
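The win tally in the summary can be recomputed directly from this table. Below is a minimal Python sketch with the scores copied verbatim from the table; the tie handling and the rounding to one decimal are assumptions about how the "leads by" deltas were produced.

```python
# Shared-benchmark scores from the table above (higher is better).
SCORES = {
    "ANLI":       (33.7, 37.1),
    "ARC AI2":    (88.8, 87.6),
    "BBH":        (75.2, 72.1),
    "HellaSwag":  (76.5, 69.3),
    "MMLU":       (70.7, 67.6),
    "OpenBookQA": (83.2, 84.0),
    "TriviaQA":   (73.9, 58.1),
    "Winogrande": (63.0, 63.0),
}
MODELS = ("phi-3-medium 14B", "phi-3-small 7.4B")

def head_to_head(scores):
    """Per-benchmark deltas plus a win/tie tally across shared benchmarks."""
    wins = {m: 0 for m in MODELS}
    ties = 0
    for bench, (medium, small) in scores.items():
        delta = round(medium - small, 1)  # one decimal, matching the cards
        if delta > 0:
            wins[MODELS[0]] += 1
            print(f"{bench}: {MODELS[0]} leads by +{delta}")
        elif delta < 0:
            wins[MODELS[1]] += 1
            print(f"{bench}: {MODELS[1]} leads by +{-delta}")
        else:
            ties += 1
            print(f"{bench}: tied at {medium}")
    return wins, ties

wins, ties = head_to_head(SCORES)
print(wins, f"ties: {ties}")
# -> phi-3-medium 14B: 5 wins, phi-3-small 7.4B: 2 wins, 1 tie
```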
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| phi-3-medium 14B | — | — | — | — |
| phi-3-small 7.4B | — | — | — | — |
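The projected $/mo figure is a blend of the input and output rates over a 10M-token month. No pricing is published for either model here, so the sketch below uses hypothetical placeholder rates and an assumed 50/50 input/output split; both are illustrative, not sourced from this page.

```python
def projected_monthly_cost(input_price, output_price,
                           monthly_tokens=10_000_000, output_share=0.5):
    """Projected $/mo from per-1M-token prices.

    output_share is an assumed input/output split; the page does not state one.
    """
    input_tokens = monthly_tokens * (1 - output_share)
    output_tokens = monthly_tokens * output_share
    return (input_tokens / 1e6) * input_price + (output_tokens / 1e6) * output_price

# Hypothetical rates of $0.50 in / $1.50 out per 1M tokens:
print(f"${projected_monthly_cost(0.50, 1.50):.2f}/mo")  # -> $10.00/mo
```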