
Qwen2-72B vs Qwen2.5 72B Instruct vs GPT-4 Turbo

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Qwen2.5 72B Instruct wins 10 of 17 shared benchmarks. Leads in math · coding · reasoning.

Category leads
knowledge · Qwen2-72B
math · Qwen2.5 72B Instruct
coding · Qwen2.5 72B Instruct
reasoning · Qwen2.5 72B Instruct
general · Qwen2.5 72B Instruct
language · Qwen2.5 72B Instruct
agentic · Qwen2.5 72B Instruct
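The "wins 10 of 17 shared benchmarks" figure only counts benchmarks that both models actually report. A minimal sketch of that kind of pairwise tally, using a hand-picked subset of scores for illustration (the exact pairing rule the page uses is not stated):

```python
# Sketch of a pairwise win count over shared benchmarks only.
# Scores are a small subset of the full table below; benchmarks missing a
# score for either model are excluded rather than counted as zero.
scores = {
    "Qwen2-72B":            {"CMMLU": 89.7, "MMLU": 76.5, "IFEval": 38.2},
    "Qwen2.5 72B Instruct": {"CMMLU": 85.7, "MMLU": 80.4, "IFEval": 86.4, "BBH": 73.1},
}

def shared_benchmark_wins(a: dict[str, float], b: dict[str, float]) -> tuple[int, int]:
    """Return (wins for model a, number of shared benchmarks)."""
    shared = a.keys() & b.keys()          # only benchmarks both models report
    wins = sum(a[name] > b[name] for name in shared)
    return wins, len(shared)

wins, total = shared_benchmark_wins(scores["Qwen2.5 72B Instruct"], scores["Qwen2-72B"])
print(f"Qwen2.5 72B Instruct wins {wins} of {total} shared benchmarks")
# BBH is ignored here because Qwen2-72B has no BBH score in this subset.
```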
Hype vs Reality
Qwen2-72B · #139 by perf · no signal · QUIET
Qwen2.5 72B Instruct · #82 by perf · no signal · QUIET
GPT-4 Turbo · #90 by perf · no signal · QUIET
Best value
Qwen2.5 72B Instruct offers 54.9x better value than GPT-4 Turbo.
Qwen2-72B · no price listed
Qwen2.5 72B Instruct · 140.0 pts/$ · $0.38/M
GPT-4 Turbo · 2.5 pts/$ · $20.00/M
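The value multiple is just the ratio of the two pts/$ figures. The sketch below assumes "pts" is an aggregate benchmark score and that the dollar figure is the blended price per 1M tokens, since the page defines neither; Qwen2-72B has no listed price, so it cannot be ranked this way.

```python
# Hedged sketch of the value comparison: ratio of pts/$ between two models.
# The pts/$ inputs are taken from the page itself, not recomputed.
pts_per_dollar = {
    "Qwen2.5 72B Instruct": 140.0,   # at $0.38 per 1M tokens (blended, assumed)
    "GPT-4 Turbo": 2.5,              # at $20.00 per 1M tokens (blended, assumed)
}

ratio = pts_per_dollar["Qwen2.5 72B Instruct"] / pts_per_dollar["GPT-4 Turbo"]
print(f"{ratio:.1f}x better value")
# Prints 56.0x from these rounded figures; the page's 54.9x presumably
# comes from unrounded pts/$ values.
```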
Vendor risk
Alibaba (Qwen), vendor of Qwen2-72B and Qwen2.5 72B Instruct · $293.0B · Tier 1 · Low risk
OpenAI, vendor of GPT-4 Turbo · $840.0B · Tier 1 · Medium risk
Head to head
Qwen2-72B · Qwen2.5 72B Instruct · GPT-4 Turbo
CMMLU
Qwen2-72B leads by +4.0
Qwen2-72B · 89.7
Qwen2.5 72B Instruct · 85.7
GPT-4 Turbo · 71.0
GPQA diamond
Qwen2.5 72B Instruct leads by +11.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Qwen2-72B · 21.0
Qwen2.5 72B Instruct · 32.2
GPT-4 Turbo · 7.5
MATH level 5
Qwen2.5 72B Instruct leads by +24.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Qwen2-72B · 39.1
Qwen2.5 72B Instruct · 63.2
GPT-4 Turbo · 23.0
MMLU
Qwen2.5 72B Instruct leads by +3.9
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Qwen2-72B · 76.5
Qwen2.5 72B Instruct · 80.4
GPT-4 Turbo · 76.5
Aider · Code Editing
Qwen2.5 72B Instruct leads by +9.8
Qwen2-72B · 55.6
Qwen2.5 72B Instruct · 65.4
BBH
Qwen2.5 72B Instruct leads by +6.3
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Qwen2.5 72B Instruct · 73.1
GPT-4 Turbo · 66.8
HellaSwag
GPT-4 Turbo leads by +14.0
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
Qwen2.5 72B Instruct · 79.7
GPT-4 Turbo · 93.7
BBH (HuggingFace)
Qwen2.5 72B Instruct leads by +10.0
Qwen2-72B · 51.9
Qwen2.5 72B Instruct · 61.9
GPQA
Qwen2-72B leads by +2.5
Qwen2-72B · 19.2
Qwen2.5 72B Instruct · 16.7
IFEval
Qwen2.5 72B Instruct leads by +48.2
Qwen2-72B · 38.2
Qwen2.5 72B Instruct · 86.4
MATH Level 5
Qwen2.5 72B Instruct leads by +28.7
Qwen2-72B · 31.1
Qwen2.5 72B Instruct · 59.8
MMLU-PRO
Qwen2-72B leads by +1.2
Qwen2-72B · 52.6
Qwen2.5 72B Instruct · 51.4
MUSR
Qwen2-72B leads by +8.0
Qwen2-72B · 19.7
Qwen2.5 72B Instruct · 11.7
OTIS Mock AIME 2024-2025
Qwen2.5 72B Instruct leads by +7.0
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Qwen2.5 72B Instruct · 8.0
GPT-4 Turbo · 1.0
The Agent Company
Qwen2.5 72B Instruct leads by +4.6
The Agent Company · tests AI agents on realistic corporate tasks like email management, code review, data analysis, and cross-tool workflows.
Qwen2-72B · 1.1
Qwen2.5 72B Instruct · 5.7
TriviaQA
GPT-4 Turbo leads by +12.9
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Qwen2.5 72B Instruct · 71.9
GPT-4 Turbo · 84.8
Winogrande
GPT-4 Turbo leads by +10.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Qwen2.5 72B Instruct · 64.6
GPT-4 Turbo · 75.0
Full benchmark table
Benchmark | Qwen2-72B | Qwen2.5 72B Instruct | GPT-4 Turbo
CMMLU | 89.7 | 85.7 | 71.0
GPQA diamond | 21.0 | 32.2 | 7.5
MATH level 5 | 39.1 | 63.2 | 23.0
MMLU | 76.5 | 80.4 | 76.5
Aider · Code Editing | 55.6 | 65.4 | n/a
BBH | n/a | 73.1 | 66.8
HellaSwag | n/a | 79.7 | 93.7
BBH (HuggingFace) | 51.9 | 61.9 | n/a
GPQA | 19.2 | 16.7 | n/a
IFEval | 38.2 | 86.4 | n/a
MATH Level 5 | 31.1 | 59.8 | n/a
MMLU-PRO | 52.6 | 51.4 | n/a
MUSR | 19.7 | 11.7 | n/a
OTIS Mock AIME 2024-2025 | n/a | 8.0 | 1.0
The Agent Company | 1.1 | 5.7 | n/a
TriviaQA | n/a | 71.9 | 84.8
Winogrande | n/a | 64.6 | 75.0
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Qwen2-72B | n/a | n/a | n/a | n/a
Qwen2.5 72B Instruct | $0.36 | $0.40 | 33K tokens (~16 books) | $3.70
GPT-4 Turbo | $10.00 | $30.00 | 128K tokens (~64 books) | $150.00
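The projected $/mo column follows directly from the per-token rates. The page does not state which input/output mix it assumes, but a 75% input / 25% output split over 10M tokens reproduces both listed figures, so the sketch below uses that split.

```python
# Projected monthly cost at 10M tokens, assuming a 75/25 input/output split
# (an assumption; the page does not state the mix it projects with).
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10.0, input_share: float = 0.75) -> float:
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1.0 - input_share)
    return input_m * input_per_m + output_m * output_per_m

print(f"Qwen2.5 72B Instruct: ${monthly_cost(0.36, 0.40):.2f}/mo")   # $3.70/mo
print(f"GPT-4 Turbo:          ${monthly_cost(10.00, 30.00):.2f}/mo") # $150.00/mo
```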