
GPT-3.5 Turbo (older v0613) vs phi-3-small 7.4B vs Llama 3 8B Instruct

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-3.5 Turbo (older v0613) leads or ties on 5 of the 9 benchmarks listed. Leads in knowledge · math.

Category leads
knowledge · GPT-3.5 Turbo (older v0613)
reasoning · phi-3-small 7.4B
math · GPT-3.5 Turbo (older v0613)
Hype vs Reality
GPT-3.5 Turbo (older v0613) · #111 by perf · no signal · QUIET
phi-3-small 7.4B · #25 by perf · no signal · QUIET
Llama 3 8B Instruct · #184 by perf · no signal · QUIET
Best value
Llama 3 8B Instruct · 28.8x better value than GPT-3.5 Turbo (older v0613)
GPT-3.5 Turbo (older v0613) · 30.5 pts/$ · $1.50/M
phi-3-small 7.4B · no price
Llama 3 8B Instruct · 880.0 pts/$ · $0.04/M
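
The value ratio follows from the pts/$ figures listed above. A minimal Python sketch, assuming "pts/$" is the page's aggregate performance score divided by blended price per 1M tokens; the scoring formula itself is not given here, so the published pts/$ values are used as-is:

```python
# Points-per-dollar figures as published above; phi-3-small 7.4B has no
# public price, so it is omitted from the value comparison.
pts_per_dollar = {
    "GPT-3.5 Turbo (older v0613)": 30.5,  # at $1.50/M blended
    "Llama 3 8B Instruct": 880.0,         # at $0.04/M blended
}

baseline = "GPT-3.5 Turbo (older v0613)"
best = max(pts_per_dollar, key=pts_per_dollar.get)
ratio = pts_per_dollar[best] / pts_per_dollar[baseline]

# ~28.9x from these rounded inputs; the page shows 28.8x, presumably
# computed from unrounded figures.
print(f"{best}: {ratio:.1f}x better value than {baseline}")
```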
Vendor risk
OpenAI · $840.0B · Tier 1 · Medium risk
Microsoft · $3.00T · Big Tech · Low risk
Meta AI · $1.50T · Tier 1 · Low risk
Head to head
GPT-3.5 Turbo (older v0613) · phi-3-small 7.4B · Llama 3 8B Instruct
ANLI
ANLI (Adversarial NLI) · adversarially constructed natural language inference dataset where each round targets weaknesses found in previous model generations.
GPT-3.5 Turbo (older v0613) · 37.1
phi-3-small 7.4B · 37.1
Llama 3 8B Instruct · 36.0
ARC AI2
phi-3-small 7.4B leads by +4.4
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
GPT-3.5 Turbo (older v0613) · 83.2
phi-3-small 7.4B · 87.6
Llama 3 8B Instruct · 77.1
MMLU
phi-3-small 7.4B leads by +9.2
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-3.5 Turbo (older v0613) · 56.4
phi-3-small 7.4B · 67.6
Llama 3 8B Instruct · 58.4
OpenBookQA
phi-3-small 7.4B leads by +2.7
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
GPT-3.5 Turbo (older v0613) · 81.3
phi-3-small 7.4B · 84.0
Llama 3 8B Instruct · 76.8
TriviaQA
GPT-3.5 Turbo (older v0613) leads by +18.1
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
GPT-3.5 Turbo (older v0613) · 85.8
phi-3-small 7.4B · 58.1
Llama 3 8B Instruct · 67.7
Winogrande
GPT-3.5 Turbo (older v0613) leads by +0.2
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
GPT-3.5 Turbo (older v0613) · 63.2
phi-3-small 7.4B · 63.0
Llama 3 8B Instruct · 51.4
BBH
phi-3-small 7.4B leads by +23.3
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
GPT-3.5 Turbo (older v0613) · 48.8
phi-3-small 7.4B · 72.1
GPQA diamond
GPT-3.5 Turbo (older v0613) leads by +1.5
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-3.5 Turbo (older v0613) · 2.9
Llama 3 8B Instruct · 1.4
MATH level 5
GPT-3.5 Turbo (older v0613) leads by +5.5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-3.5 Turbo (older v0613) · 11.6
Llama 3 8B Instruct · 6.1
Full benchmark table
Benchmark        GPT-3.5 Turbo (older v0613)   phi-3-small 7.4B   Llama 3 8B Instruct
ANLI             37.1                          37.1               36.0
ARC AI2          83.2                          87.6               77.1
MMLU             56.4                          67.6               58.4
OpenBookQA       81.3                          84.0               76.8
TriviaQA         85.8                          58.1               67.7
Winogrande       63.2                          63.0               51.4
BBH              48.8                          72.1               —
GPQA diamond     2.9                           —                  1.4
MATH level 5     11.6                          —                  6.1
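
The win count in the summary and the per-benchmark "leads by" margins follow directly from this table. A minimal Python sketch, with scores hardcoded from the table and None marking a missing result (note that ANLI is a tie, so the "5 of 9" figure counts leads and ties together):

```python
# Scores copied from the full benchmark table above; None = not evaluated.
MODELS = ["GPT-3.5 Turbo (older v0613)", "phi-3-small 7.4B", "Llama 3 8B Instruct"]
SCORES = {
    "ANLI":         (37.1, 37.1, 36.0),
    "ARC AI2":      (83.2, 87.6, 77.1),
    "MMLU":         (56.4, 67.6, 58.4),
    "OpenBookQA":   (81.3, 84.0, 76.8),
    "TriviaQA":     (85.8, 58.1, 67.7),
    "Winogrande":   (63.2, 63.0, 51.4),
    "BBH":          (48.8, 72.1, None),
    "GPQA diamond": (2.9,  None, 1.4),
    "MATH level 5": (11.6, None, 6.1),
}

leads = {m: 0 for m in MODELS}
for bench, row in SCORES.items():
    present = sorted((s for s in row if s is not None), reverse=True)
    margin = present[0] - present[1]          # 0.0 signals a tie
    for model, score in zip(MODELS, row):
        if score == present[0]:
            leads[model] += 1
    print(f"{bench}: top score {present[0]} (margin +{margin:.1f})")

print(leads)  # GPT-3.5 Turbo: 5 (incl. the ANLI tie), phi-3-small: 5, Llama 3: 0
```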
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model                          Input    Output   Context      Projected $/mo
GPT-3.5 Turbo (older v0613)    $1.00    $2.00    4K tokens    $12.50
phi-3-small 7.4B               —        —        —            —
Llama 3 8B Instruct            $0.03    $0.04    8K tokens    $0.33
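
The projected $/mo column can be reconstructed from the per-token prices. A minimal Python sketch, assuming a 75% input / 25% output token split over 10M tokens, which is consistent with both published figures ($12.50 and $0.33); the page's actual split is not stated:

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Blended cost for tokens_m million tokens at the given input/output split."""
    return tokens_m * (input_share * input_per_m + (1 - input_share) * output_per_m)

print(projected_monthly_cost(1.00, 2.00))  # 12.5  -> GPT-3.5 Turbo (older v0613)
print(projected_monthly_cost(0.03, 0.04))  # ~0.33 -> Llama 3 8B Instruct
```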