Qwen2-72B vs Qwen2.5 32B Instruct vs Qwen2.5 72B Instruct
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Qwen2.5 72B Instruct wins on 7/12 benchmarks
Qwen2.5 72B Instruct wins 7 of 12 shared benchmarks (tallied in the sketch below the category leads). Leads in general · language · coding · agentic.
Category leads
general · Qwen2.5 72B Instruct
knowledge · Qwen2-72B
language · Qwen2.5 72B Instruct
math · Qwen2.5 32B Instruct
reasoning · Qwen2-72B
coding · Qwen2.5 72B Instruct
agentic · Qwen2.5 72B Instruct
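To make the headline reproducible, here is a minimal Python sketch that tallies the per-benchmark winner from the scores in the full benchmark table further down. The model names and numbers are copied from that table; margins recomputed from the rounded scores can differ from the page's displayed deltas by about 0.1 (GPQA, for example).

```python
from collections import Counter

MODELS = ["Qwen2-72B", "Qwen2.5 32B Instruct", "Qwen2.5 72B Instruct"]

# Scores copied from the full benchmark table below; None = no reported score.
SCORES = {
    "BBH (HuggingFace)":    (51.9, 56.5, 61.9),
    "GPQA":                 (19.2, 11.7, 16.7),
    "IFEval":               (38.2, 83.5, 86.4),
    "MATH Level 5":         (31.1, 62.5, 59.8),
    "MMLU-PRO":             (52.6, 51.9, 51.4),
    "MUSR":                 (19.7, 13.5, 11.7),
    "Aider · Code Editing": (55.6, None, 65.4),
    "CMMLU":                (89.7, None, 85.7),
    "GPQA diamond":         (21.0, None, 32.2),
    "MATH level 5":         (39.1, None, 63.2),
    "MMLU":                 (76.5, None, 80.4),
    "The Agent Company":    (1.1,  None, 5.7),
}

wins = Counter()
for bench, row in SCORES.items():
    # Rank only the models that actually have a score for this benchmark.
    ranked = sorted(((s, m) for s, m in zip(row, MODELS) if s is not None),
                    reverse=True)
    (best_score, best_model), (second_score, _) = ranked[0], ranked[1]
    wins[best_model] += 1
    print(f"{bench}: {best_model} leads by +{best_score - second_score:.1f}")

print(dict(wins))  # {'Qwen2.5 72B Instruct': 7, 'Qwen2-72B': 4, 'Qwen2.5 32B Instruct': 1}
```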
Hype vs Reality
Attention vs performance
Qwen2-72B · #139 by perf · no attention signal
Qwen2.5 32B Instruct · #127 by perf · no attention signal
Qwen2.5 72B Instruct · #82 by perf · no attention signal
Best value
Qwen2.5 72B Instruct
Qwen2-72B · no price
Qwen2.5 32B Instruct · no price
Qwen2.5 72B Instruct · 140.0 pts/$ · $0.38/M (blended)
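The page does not define how the pts/$ figure is computed. A plausible reading, treated here purely as an assumption, is an aggregate quality score divided by a blended per-1M-token price; the 53.2 score below is back-solved from the displayed 140.0 pts/$ and $0.38/M, not taken from the page.

```python
# ASSUMPTIONS: "pts" is an aggregate quality score (undefined on the page) and
# the price is a blended per-1M-token rate. The 53.2 is back-solved, not sourced.
def points_per_dollar(quality_score: float, blended_price_per_1m: float) -> float:
    """Quality points bought per dollar of blended token price."""
    return quality_score / blended_price_per_1m

# Qwen2.5 72B Instruct: $0.38/M blended (see the pricing table further down).
print(points_per_dollar(53.2, 0.38))  # ≈ 140.0
```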
Vendor risk
Who is behind the model
Alibaba (Qwen) · $293.0B · Tier 1 (same vendor behind all three models)
Head to head
12 benchmarks · 3 models
Qwen2-72B · Qwen2.5 32B Instruct · Qwen2.5 72B Instruct
BBH (HuggingFace) · Qwen2.5 72B Instruct leads by +5.4
Qwen2-72B 51.9 · Qwen2.5 32B Instruct 56.5 · Qwen2.5 72B Instruct 61.9

GPQA · Qwen2-72B leads by +2.6
Qwen2-72B 19.2 · Qwen2.5 32B Instruct 11.7 · Qwen2.5 72B Instruct 16.7

IFEval · Qwen2.5 72B Instruct leads by +2.9
Qwen2-72B 38.2 · Qwen2.5 32B Instruct 83.5 · Qwen2.5 72B Instruct 86.4

MATH Level 5 · Qwen2.5 32B Instruct leads by +2.7
Qwen2-72B 31.1 · Qwen2.5 32B Instruct 62.5 · Qwen2.5 72B Instruct 59.8

MMLU-PRO · Qwen2-72B leads by +0.7
Qwen2-72B 52.6 · Qwen2.5 32B Instruct 51.9 · Qwen2.5 72B Instruct 51.4

MUSR · Qwen2-72B leads by +6.2
Qwen2-72B 19.7 · Qwen2.5 32B Instruct 13.5 · Qwen2.5 72B Instruct 11.7
Aider · Code Editing · Qwen2.5 72B Instruct leads by +9.8
Qwen2-72B 55.6 · Qwen2.5 72B Instruct 65.4

CMMLU · Qwen2-72B leads by +4.0
Qwen2-72B 89.7 · Qwen2.5 72B Instruct 85.7
GPQA diamond · Qwen2.5 72B Instruct leads by +11.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Qwen2-72B 21.0 · Qwen2.5 72B Instruct 32.2

MATH level 5 · Qwen2.5 72B Instruct leads by +24.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Qwen2-72B 39.1 · Qwen2.5 72B Instruct 63.2

MMLU · Qwen2.5 72B Instruct leads by +3.9
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Qwen2-72B 76.5 · Qwen2.5 72B Instruct 80.4

The Agent Company · Qwen2.5 72B Instruct leads by +4.6
Tests AI agents on realistic corporate tasks like email management, code review, data analysis, and cross-tool workflows.
Qwen2-72B 1.1 · Qwen2.5 72B Instruct 5.7
Full benchmark table
| Benchmark | Qwen2-72B | Qwen2.5 32B Instruct | Qwen2.5 72B Instruct |
|---|---|---|---|
| BBH (HuggingFace) | 51.9 | 56.5 | 61.9 |
| GPQA | 19.2 | 11.7 | 16.7 |
| IFEval | 38.2 | 83.5 | 86.4 |
| MATH Level 5 | 31.1 | 62.5 | 59.8 |
| MMLU-PRO | 52.6 | 51.9 | 51.4 |
| MUSR | 19.7 | 13.5 | 11.7 |
| Aider · Code Editing | 55.6 | — | 65.4 |
| CMMLU | 89.7 | — | 85.7 |
| GPQA diamond | 21.0 | — | 32.2 |
| MATH level 5 | 39.1 | — | 63.2 |
| MMLU | 76.5 | — | 80.4 |
| The Agent Company | 1.1 | — | 5.7 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Qwen2-72B | — | — | — | — |
| Qwen2.5 32B Instruct | — | — | — | — |
| Qwen2.5 72B Instruct | $0.36 | $0.40 | 33K tokens | $3.70 |
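A minimal sketch of how the "projected $/mo at 10M tokens" column can be reproduced, assuming a 3:1 input-to-output token split. That split is an assumption (the page does not state the one it uses); it happens to reproduce the $3.70 figure from the $0.36 input and $0.40 output rates.

```python
# ASSUMPTION: a 3:1 input-to-output token split; the page does not state the
# split it uses, but 3:1 reproduces the $3.70 figure for Qwen2.5 72B Instruct.
def projected_monthly_cost(input_per_1m: float, output_per_1m: float,
                           monthly_tokens: int = 10_000_000,
                           input_share: float = 0.75) -> float:
    """Monthly spend in dollars for a given token volume and input/output split."""
    input_tokens = monthly_tokens * input_share
    output_tokens = monthly_tokens * (1 - input_share)
    return input_tokens / 1e6 * input_per_1m + output_tokens / 1e6 * output_per_1m

print(projected_monthly_cost(0.36, 0.40))  # ≈ $3.70
```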