Qwen2.5 72B Instruct vs GPT-4 Turbo
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Qwen2.5 72B Instruct wins 6 of 9 benchmarks
Leads in reasoning, knowledge, and math across the shared benchmark set.
Category leads
reasoning · Qwen2.5 72B Instruct
knowledge · Qwen2.5 72B Instruct
math · Qwen2.5 72B Instruct
Hype vs Reality
Attention vs performance
Qwen2.5 72B Instruct
#80 by performance · no attention signal
GPT-4 Turbo
#88 by performance · no attention signal
Best value
Qwen2.5 72B Instruct
81.8x better value than GPT-4 Turbo
Qwen2.5 72B Instruct · 208.6 pts/$ · $0.26/M (blended input/output price)
GPT-4 Turbo · 2.5 pts/$ · $20.00/M (blended input/output price)
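The pts/$ figure appears to divide an aggregate performance score by a blended price per 1M tokens. Below is a minimal sketch of that arithmetic, assuming a 1:1 input/output blend; the aggregate scores (roughly 53.2 and 51.0) are placeholders reverse-read from the displayed figures, since the page does not publish its scoring formula.

```python
# Sketch of a pts/$ value metric: aggregate score / blended $ per 1M tokens.
# The aggregate scores below are placeholders inferred from the displayed
# figures, not values published by the page.
models = {
    # name: (aggregate perf score, input $/M, output $/M)
    "Qwen2.5 72B Instruct": (53.2, 0.12, 0.39),
    "GPT-4 Turbo": (51.0, 10.00, 30.00),
}

for name, (score, price_in, price_out) in models.items():
    blended = (price_in + price_out) / 2  # 1:1 input/output blend, $/M
    print(f"{name}: {score / blended:.1f} pts/$ at ${blended:.2f}/M")
```

With these placeholders the ratio between the two pts/$ values comes out near 82x, consistent with the 81.8x headline above.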
Vendor risk
Who is behind each model
Alibaba (Qwen) · $293.0B · Tier 1
OpenAI · $840.0B · Tier 1
Head to head
9 benchmarks · 2 models
BBH
Qwen2.5 72B Instruct leads by +6.3
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Qwen2.5 72B Instruct
73.1
GPT-4 Turbo
66.8
CMMLU
Qwen2.5 72B Instruct leads by +14.7
Chinese Massive Multitask Language Understanding · the Chinese-language counterpart of MMLU, covering a broad range of subjects including China-specific knowledge.
Qwen2.5 72B Instruct
85.7
GPT-4 Turbo
71.0
GPQA diamond
Qwen2.5 72B Instruct leads by +24.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Qwen2.5 72B Instruct
32.2
GPT-4 Turbo
7.5
HellaSwag
GPT-4 Turbo leads by +14.0
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
Qwen2.5 72B Instruct
79.7
GPT-4 Turbo
93.7
MATH level 5
Qwen2.5 72B Instruct leads by +40.2
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Qwen2.5 72B Instruct
63.2
GPT-4 Turbo
23.0
MMLU
Qwen2.5 72B Instruct leads by +3.9
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Qwen2.5 72B Instruct
80.4
GPT-4 Turbo
76.5
OTIS Mock AIME 2024–2025
Qwen2.5 72B Instruct leads by +7.0
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Qwen2.5 72B Instruct
8.0
GPT-4 Turbo
1.0
TriviaQA
GPT-4 Turbo leads by +12.9
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Qwen2.5 72B Instruct
71.9
GPT-4 Turbo
84.8
WinoGrande
GPT-4 Turbo leads by +10.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Qwen2.5 72B Instruct
64.6
GPT-4 Turbo
75.0
Full benchmark table
| Benchmark | Qwen2.5 72B Instruct | GPT-4 Turbo |
|---|---|---|
| BBH | 73.1 | 66.8 |
| CMMLU | 85.7 | 71.0 |
| GPQA diamond | 32.2 | 7.5 |
| HellaSwag | 79.7 | 93.7 |
| MATH level 5 | 63.2 | 23.0 |
| MMLU | 80.4 | 76.5 |
| OTIS Mock AIME 2024–2025 | 8.0 | 1.0 |
| TriviaQA | 71.9 | 84.8 |
| WinoGrande | 64.6 | 75.0 |
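The per-benchmark leads and the 6/9 win count in the summary can be recomputed directly from this table; a minimal sketch in Python, with scores copied verbatim from above:

```python
# Recompute each benchmark's leader, the lead margin, and the win count
# from the table above.
scores = {
    # benchmark: (Qwen2.5 72B Instruct, GPT-4 Turbo)
    "BBH": (73.1, 66.8),
    "CMMLU": (85.7, 71.0),
    "GPQA diamond": (32.2, 7.5),
    "HellaSwag": (79.7, 93.7),
    "MATH level 5": (63.2, 23.0),
    "MMLU": (80.4, 76.5),
    "OTIS Mock AIME 2024-2025": (8.0, 1.0),
    "TriviaQA": (71.9, 84.8),
    "WinoGrande": (64.6, 75.0),
}

qwen_wins = 0
for bench, (qwen, gpt4) in scores.items():
    leader = "Qwen2.5 72B Instruct" if qwen > gpt4 else "GPT-4 Turbo"
    qwen_wins += qwen > gpt4
    print(f"{bench}: {leader} leads by {abs(qwen - gpt4):+.1f}")

print(f"Qwen2.5 72B Instruct wins {qwen_wins}/{len(scores)} benchmarks")
```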
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Qwen2.5 72B Instruct | $0.12 | $0.39 | 33K tokens (~16 books) | $1.88 |
| GPT-4 Turbo | $10.00 | $30.00 | 128K tokens (~64 books) | $150.00 |
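The projected $/mo column is consistent with the 10M monthly tokens being split 3:1 between input and output (7.5M in, 2.5M out); that split is inferred from the figures, not stated on the page. A minimal sketch:

```python
# Reproduce the "projected $/mo at 10M tokens" column.
# Assumption: monthly tokens split 3:1 input:output -- inferred from
# the table's figures, not stated on the page.

def monthly_cost(input_per_m: float, output_per_m: float,
                 tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Cost in $ for tokens_m million tokens at the given per-1M prices."""
    return tokens_m * (input_share * input_per_m
                       + (1 - input_share) * output_per_m)

print(f"Qwen2.5 72B Instruct: ${monthly_cost(0.12, 0.39):.2f}")   # $1.88
print(f"GPT-4 Turbo:          ${monthly_cost(10.00, 30.00):.2f}")  # $150.00
```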