
phi-3-medium 14B vs GPT-3.5 Turbo (older v0613)

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

phi-3-medium 14B wins 6 of 9 shared benchmarks. Leads in knowledge · reasoning · math.

Category leads
knowledge · phi-3-medium 14B
reasoning · phi-3-medium 14B
math · phi-3-medium 14B
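
The win tally is simple arithmetic over the shared scores. A minimal Python sketch that reproduces the 6-of-9 count (scores copied from the full benchmark table below; higher is better on all nine):

```python
# Head-to-head win tally across the 9 shared benchmarks on this page.
# Scores copied from the "Full benchmark table"; higher is better on all.
SCORES = {
    "ANLI":         (33.7, 37.1),
    "ARC AI2":      (88.8, 83.2),
    "BBH":          (75.2, 48.8),
    "GPQA diamond": (3.5,  2.9),
    "MATH level 5": (17.6, 11.6),
    "MMLU":         (70.7, 56.4),
    "OpenBookQA":   (83.2, 81.3),
    "TriviaQA":     (73.9, 85.8),
    "Winogrande":   (63.0, 63.2),
}

phi_wins = sum(phi > gpt for phi, gpt in SCORES.values())
print(f"phi-3-medium 14B wins {phi_wins} of {len(SCORES)}")  # -> wins 6 of 9
```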
Hype vs Reality
phi-3-medium 14B
#48 by perf · no signal
QUIET
GPT-3.5 Turbo (older v0613)
#111 by perf · no signal
QUIET
Best value
phi-3-medium 14B
no price
GPT-3.5 Turbo (older v0613)
30.5 pts/$
$1.50/M
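
The page doesn't define "pts/$". One plausible reading is average benchmark score divided by a blended per-million-token price; the sketch below assumes a 50/50 input/output blend and a plain 9-benchmark average, so it approximates rather than exactly reproduces the 30.5 figure.

```python
# Hypothetical points-per-dollar metric: average benchmark score divided
# by a blended $/M price. The exact formula behind "30.5 pts/$" is not
# stated on the page; the blend and the averaging here are assumptions.
def points_per_dollar(avg_score: float, input_price: float,
                      output_price: float, output_share: float = 0.5) -> float:
    blended = input_price * (1 - output_share) + output_price * output_share
    return avg_score / blended

# Plain average of GPT-3.5 Turbo (v0613)'s nine scores from this page.
gpt35_avg = (37.1 + 83.2 + 48.8 + 2.9 + 11.6 + 56.4 + 81.3 + 85.8 + 63.2) / 9
print(f"{points_per_dollar(gpt35_avg, 1.00, 2.00):.1f} pts/$")  # ~34.8 under these assumptions
```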
Vendor risk
Microsoft
$3.00T · Big Tech
Low risk
OpenAI
$840.0B · Tier 1
Medium risk
Head to head
phi-3-medium 14B vs GPT-3.5 Turbo (older v0613)
ANLI
GPT-3.5 Turbo (older v0613) leads by +3.4
ANLI (Adversarial NLI) · adversarially constructed natural language inference dataset where each round targets weaknesses found in previous model generations.
phi-3-medium 14B 33.7 · GPT-3.5 Turbo (older v0613) 37.1
ARC AI2
phi-3-medium 14B leads by +5.6
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
phi-3-medium 14B 88.8 · GPT-3.5 Turbo (older v0613) 83.2
BBH
phi-3-medium 14B leads by +26.4
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
phi-3-medium 14B 75.2 · GPT-3.5 Turbo (older v0613) 48.8
GPQA diamond
phi-3-medium 14B leads by +0.6
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
phi-3-medium 14B 3.5 · GPT-3.5 Turbo (older v0613) 2.9
MATH level 5
phi-3-medium 14B leads by +6.0
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
phi-3-medium 14B 17.6 · GPT-3.5 Turbo (older v0613) 11.6
MMLU
phi-3-medium 14B leads by +14.3
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
phi-3-medium 14B 70.7 · GPT-3.5 Turbo (older v0613) 56.4
OpenBookQA
phi-3-medium 14B leads by +1.9
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
phi-3-medium 14B 83.2 · GPT-3.5 Turbo (older v0613) 81.3
TriviaQA
GPT-3.5 Turbo (older v0613) leads by +11.9
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
phi-3-medium 14B 73.9 · GPT-3.5 Turbo (older v0613) 85.8
Winogrande
GPT-3.5 Turbo (older v0613) leads by +0.2
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
phi-3-medium 14B 63.0 · GPT-3.5 Turbo (older v0613) 63.2
Full benchmark table
Benchmark · phi-3-medium 14B · GPT-3.5 Turbo (older v0613)
ANLI · 33.7 · 37.1
ARC AI2 · 88.8 · 83.2
BBH · 75.2 · 48.8
GPQA diamond · 3.5 · 2.9
MATH level 5 · 17.6 · 11.6
MMLU · 70.7 · 56.4
OpenBookQA · 83.2 · 81.3
TriviaQA · 73.9 · 85.8
Winogrande · 63.0 · 63.2
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
phi-3-medium 14B · no price
GPT-3.5 Turbo (older v0613) · $1.00 · $2.00 · 4K tokens (~3,000 words) · $12.50
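
The projected $/mo column is plain token arithmetic. A sketch, assuming the 10M monthly tokens split 3:1 input:output (the split isn't stated on the page, but 3:1 reproduces the $12.50 shown):

```python
# Projected monthly cost from per-million-token prices.
# Assumed split: 7.5M input + 2.5M output of the 10M monthly tokens
# (the page does not state the split; 3:1 reproduces its $12.50 figure).
def monthly_cost(input_price: float, output_price: float,
                 total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m - input_m
    return input_m * input_price + output_m * output_price

print(f"${monthly_cost(1.00, 2.00):.2f}/mo")  # -> $12.50/mo for GPT-3.5 Turbo (v0613)
```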