
Llama 2-13B vs GPT-3.5 Turbo (older v0613)

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-3.5 Turbo (older v0613) wins 10 of 10 shared benchmarks. Leads in knowledge · reasoning · math.

Category leads
knowledge · GPT-3.5 Turbo (older v0613)
reasoning · GPT-3.5 Turbo (older v0613)
math · GPT-3.5 Turbo (older v0613)
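Both the 10-of-10 tally and the category leads can be reproduced from the head-to-head scores below. A minimal Python sketch; the benchmark-to-category grouping is an assumption, since the page does not show its exact mapping:

```python
# Reproduce the winner summary from the shared benchmark scores listed in the
# head-to-head section. Pairs are (Llama 2-13B, GPT-3.5 Turbo v0613).
scores = {
    "ARC AI2": (47.1, 83.2), "BBH": (44.3, 48.8), "CSQA2": (0.1, 14.0),
    "GPQA diamond": (1.8, 2.9), "GSM8K": (36.9, 57.8),
    "MATH level 5": (3.3, 11.6), "MMLU": (40.8, 56.4),
    "OpenBookQA": (42.7, 81.3), "TriviaQA": (79.6, 85.8),
    "Winogrande": (45.6, 63.2),
}
margins = {b: round(gpt - lla, 1) for b, (lla, gpt) in scores.items()}
wins = sum(m > 0 for m in margins.values())
print(f"GPT-3.5 Turbo wins {wins} of {len(scores)} shared benchmarks")  # 10 of 10

# Assumed grouping (hypothetical; not shown on the page):
categories = {
    "knowledge": ["MMLU", "TriviaQA", "OpenBookQA"],
    "reasoning": ["ARC AI2", "BBH", "CSQA2", "GPQA diamond", "Winogrande"],
    "math": ["GSM8K", "MATH level 5"],
}
for cat, benches in categories.items():
    lead = sum(margins[b] for b in benches)
    print(cat, "·", "GPT-3.5 Turbo" if lead > 0 else "Llama 2-13B")
```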
Hype vs Reality
Llama 2-13B · #126 by perf · no signal · QUIET
GPT-3.5 Turbo (older v0613) · #109 by perf · no signal · QUIET
Best value
Llama 2-13B · no price
GPT-3.5 Turbo (older v0613) · 30.5 pts/$ · $1.50/M
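The page does not state the formula behind pts/$. A hedged sketch of one plausible definition, mean shared-benchmark score divided by the blended $/M price; the result differs from the 30.5 shown above, so the page likely uses a different score set or weighting:

```python
# One plausible "points per dollar" definition (assumption: the page's exact
# formula is not shown): mean shared-benchmark score / blended $ per 1M tokens.
gpt35_scores = [83.2, 48.8, 14.0, 2.9, 57.8, 11.6, 56.4, 81.3, 85.8, 63.2]
blended_price = 1.50  # $/M tokens, as listed above

mean_score = sum(gpt35_scores) / len(gpt35_scores)  # 50.5
print(f"{mean_score / blended_price:.1f} pts/$")    # 33.7 under this definition
```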
Vendor risk
Meta AI · $1.50T · Tier 1 · Low risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
ARC AI2
GPT-3.5 Turbo (older v0613) leads by +36.1
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Llama 2-13B 47.1 · GPT-3.5 Turbo (older v0613) 83.2
BBH
GPT-3.5 Turbo (older v0613) leads by +4.5
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Llama 2-13B 44.3 · GPT-3.5 Turbo (older v0613) 48.8
CSQA2
GPT-3.5 Turbo (older v0613) leads by +13.9
CommonsenseQA 2.0 · yes/no commonsense questions gathered through gamified, adversarial model-in-the-loop data collection.
Llama 2-13B 0.1 · GPT-3.5 Turbo (older v0613) 14.0
GPQA diamond
GPT-3.5 Turbo (older v0613) leads by +1.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Llama 2-13B 1.8 · GPT-3.5 Turbo (older v0613) 2.9
GSM8K
GPT-3.5 Turbo (older v0613) leads by +20.9
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
Llama 2-13B 36.9 · GPT-3.5 Turbo (older v0613) 57.8
MATH level 5
GPT-3.5 Turbo (older v0613) leads by +8.3
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Llama 2-13B 3.3 · GPT-3.5 Turbo (older v0613) 11.6
MMLU
GPT-3.5 Turbo (older v0613) leads by +15.6
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Llama 2-13B 40.8 · GPT-3.5 Turbo (older v0613) 56.4
OpenBookQA
GPT-3.5 Turbo (older v0613) leads by +38.6
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Llama 2-13B 42.7 · GPT-3.5 Turbo (older v0613) 81.3
TriviaQA
GPT-3.5 Turbo (older v0613) leads by +6.2
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Llama 2-13B 79.6 · GPT-3.5 Turbo (older v0613) 85.8
Winogrande
GPT-3.5 Turbo (older v0613) leads by +17.6
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Llama 2-13B 45.6 · GPT-3.5 Turbo (older v0613) 63.2
Full benchmark table
Benchmark · Llama 2-13B · GPT-3.5 Turbo (older v0613)
ARC AI2 · 47.1 · 83.2
BBH · 44.3 · 48.8
CSQA2 · 0.1 · 14.0
GPQA diamond · 1.8 · 2.9
GSM8K · 36.9 · 57.8
MATH level 5 · 3.3 · 11.6
MMLU · 40.8 · 56.4
OpenBookQA · 42.7 · 81.3
TriviaQA · 79.6 · 85.8
Winogrande · 45.6 · 63.2
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Llama 2-13B · no price listed
GPT-3.5 Turbo (older v0613) · $1.00 · $2.00 · 4K tokens (~3,000 words) · $12.50
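The projected $/mo follows from the per-token prices once an input/output token split is assumed. A minimal sketch; the page does not state its split, but $12.50 at 10M tokens is consistent with 75% input / 25% output, while the $1.50/M blended figure above matches a 50/50 split:

```python
# Projected monthly cost from per-1M-token prices, given an assumed
# input/output token split (assumption: the page's split is not stated).
def monthly_cost(million_tokens: float, input_share: float,
                 in_price: float, out_price: float) -> float:
    """Total $ for `million_tokens` (in millions) at the given $/M prices."""
    out_share = 1.0 - input_share
    return million_tokens * (input_share * in_price + out_share * out_price)

print(monthly_cost(10, 0.75, 1.00, 2.00))  # 12.5 -> matches the $12.50 projection
print(monthly_cost(10, 0.50, 1.00, 2.00))  # 15.0 -> 50/50 split, blended $1.50/M
```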