
phi-3-medium 14B vs phi-3-small 7.4B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

phi-3-medium 14B wins 5 of 8 shared benchmarks (phi-3-small 7.4B takes 2, and Winogrande is a tie). It leads in both the knowledge and reasoning categories.

Category leads
knowledge · phi-3-medium 14B
reasoning · phi-3-medium 14B
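The tally above is plain arithmetic over the shared scores. A minimal Python sketch, with the scores hard-coded from the head-to-head section below; the win count and the per-benchmark "leads by" deltas both fall out of the same loop:

```python
# Recompute the winner summary and per-benchmark deltas from the
# scores shown on this page (higher is better on every benchmark).
scores = {
    # benchmark: (phi-3-medium 14B, phi-3-small 7.4B)
    "ANLI":       (33.7, 37.1),
    "ARC AI2":    (88.8, 87.6),
    "BBH":        (75.2, 72.1),
    "HellaSwag":  (76.5, 69.3),
    "MMLU":       (70.7, 67.6),
    "OpenBookQA": (83.2, 84.0),
    "TriviaQA":   (73.9, 58.1),
    "Winogrande": (63.0, 63.0),
}

wins = losses = ties = 0
for name, (medium, small) in scores.items():
    delta = round(medium - small, 1)  # round away float noise
    if delta > 0:
        wins += 1
        print(f"{name}: phi-3-medium 14B leads by +{delta}")
    elif delta < 0:
        losses += 1
        print(f"{name}: phi-3-small 7.4B leads by +{-delta}")
    else:
        ties += 1
        print(f"{name}: tie")

print(f"phi-3-medium 14B wins {wins} of {len(scores)} shared benchmarks "
      f"({losses} losses, {ties} ties)")
```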
Hype vs Reality
phi-3-medium 14B · #46 by perf · no signal · QUIET
phi-3-small 7.4B · #23 by perf · no signal · QUIET
Best value
phi-3-medium 14B · no price listed
phi-3-small 7.4B · no price listed
Vendor risk
Both models come from Microsoft ($3.00T market cap · Big Tech), so vendor risk is Low either way.
Head to head
ANLI
phi-3-small 7.4B leads by +3.4
ANLI (Adversarial NLI) · adversarially constructed natural language inference dataset where each round targets weaknesses found in previous model generations.
phi-3-medium 14B
33.7
phi-3-small 7.4B
37.1
ARC AI2
phi-3-medium 14B leads by +1.2
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
phi-3-medium 14B
88.8
phi-3-small 7.4B
87.6
BBH
phi-3-medium 14B leads by +3.1
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
phi-3-medium 14B
75.2
phi-3-small 7.4B
72.1
HellaSwag
phi-3-medium 14B leads by +7.2
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
phi-3-medium 14B
76.5
phi-3-small 7.4B
69.3
MMLU
phi-3-medium 14B leads by +3.1
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
phi-3-medium 14B
70.7
phi-3-small 7.4B
67.6
OpenBookQA
phi-3-small 7.4B leads by +0.8
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
phi-3-medium 14B
83.2
phi-3-small 7.4B
84.0
TriviaQA
phi-3-medium 14B leads by +15.8
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
phi-3-medium 14B
73.9
phi-3-small 7.4B
58.1
Winogrande
Tied at 63.0
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
phi-3-medium 14B
63.0
phi-3-small 7.4B
63.0
Full benchmark table
Benchmark      phi-3-medium 14B   phi-3-small 7.4B
ANLI           33.7               37.1
ARC AI2        88.8               87.6
BBH            75.2               72.1
HellaSwag      76.5               69.3
MMLU           70.7               67.6
OpenBookQA     83.2               84.0
TriviaQA       73.9               58.1
Winogrande     63.0               63.0
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model               Input   Output   Context   Projected $/mo
phi-3-medium 14B    n/a     n/a      n/a       n/a
phi-3-small 7.4B    n/a     n/a      n/a       n/a

No pricing is listed for either model, so no monthly projection can be computed.
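When prices do appear here, the projected monthly figure is straightforward arithmetic. A minimal sketch, assuming per-1M-token input and output prices and a 50/50 input/output split of the 10M-token monthly volume; both the prices and the split below are illustrative placeholders, since this page publishes neither:

```python
def projected_monthly_cost(input_price_per_m: float,
                           output_price_per_m: float,
                           monthly_tokens_m: float = 10.0,
                           input_share: float = 0.5) -> float:
    """Projected $/mo given per-1M-token prices.

    Assumes `input_share` of the monthly token volume is input and the
    rest is output; the page does not state which split it uses.
    """
    input_cost = input_price_per_m * monthly_tokens_m * input_share
    output_cost = output_price_per_m * monthly_tokens_m * (1 - input_share)
    return input_cost + output_cost

# Placeholder prices only; neither phi-3 model lists pricing here.
print(f"${projected_monthly_cost(0.50, 1.50):.2f}/mo")  # -> $10.00/mo
```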