Qwen2.5 Coder 32B Instruct vs GPT-4 Turbo
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4 Turbo wins 3 of 4 shared benchmarks and leads in knowledge; Qwen2.5 Coder 32B Instruct leads in math.
Category leads
- Math: Qwen2.5 Coder 32B Instruct
- Knowledge: GPT-4 Turbo
Hype vs Reality
Attention vs performance
- Qwen2.5 Coder 32B Instruct · #83 by performance · no attention signal
- GPT-4 Turbo · #90 by performance · no attention signal
Best value
Qwen2.5 Coder 32B Instruct · 25.1x better value than GPT-4 Turbo

| Model | Value (pts/$) | Blended price |
|---|---|---|
| Qwen2.5 Coder 32B Instruct | 64.0 | $0.83/M |
| GPT-4 Turbo | 2.5 | $20.00/M |
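The page does not state how the blended $/M price is derived, but the figures are consistent with a simple 50/50 average of the input and output rates from the pricing table below. A minimal sketch, assuming that blend:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/M tokens as a simple 50/50 average of input and
    output rates (an assumption; the page does not state its formula)."""
    return (input_per_m + output_per_m) / 2

# Qwen2.5 Coder 32B Instruct: $0.66 in / $1.00 out
print(blended_price(0.66, 1.00))    # -> 0.83

# GPT-4 Turbo: $10.00 in / $30.00 out
print(blended_price(10.00, 30.00))  # -> 20.0
```

Both results match the $0.83/M and $20.00/M figures shown above; the pts/$ value metric is not reproducible from the published numbers alone, so its exact formula is left unstated here.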
Vendor risk
Who is behind the model
- Alibaba (Qwen) · $293.0B · Tier 1
- OpenAI · $840.0B · Tier 1
Head to head
4 benchmarks · 2 models
GSM8K · Qwen2.5 Coder 32B Instruct leads by +1.1 (91.1 vs 90.0)
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
HellaSwag · GPT-4 Turbo leads by +16.4 (93.7 vs 77.3)
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
MMLU · GPT-4 Turbo leads by +4.4 (76.5 vs 72.1)
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Winogrande · GPT-4 Turbo leads by +13.4 (75.0 vs 61.6)
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Full benchmark table
| Benchmark | Qwen2.5 Coder 32B Instruct | GPT-4 Turbo |
|---|---|---|
| GSM8K | 91.1 | 90.0 |
| HellaSwag | 77.3 | 93.7 |
| MMLU | 72.1 | 76.5 |
| Winogrande | 61.6 | 75.0 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Qwen2.5 Coder 32B Instruct | $0.66 | $1.00 | 33K tokens (~16 books) | $7.45 |
| GPT-4 Turbo | $10.00 | $30.00 | 128K tokens (~64 books) | $150.00 |
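The projected $/mo figures are consistent with a 75% input / 25% output token split over the 10M monthly tokens; that split is an assumption, not something the page states. A sketch of the arithmetic:

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Monthly cost for total_tokens_m million tokens, assuming a
    75/25 input/output split (the split is an assumption)."""
    input_cost = total_tokens_m * input_share * input_per_m
    output_cost = total_tokens_m * (1 - input_share) * output_per_m
    return input_cost + output_cost

# Qwen2.5 Coder 32B Instruct: 7.5M * $0.66 + 2.5M * $1.00
print(projected_monthly_cost(0.66, 1.00))    # -> 7.45

# GPT-4 Turbo: 7.5M * $10.00 + 2.5M * $30.00
print(projected_monthly_cost(10.00, 30.00))  # -> 150.0
```

Both values reproduce the table's $7.45 and $150.00 projections, which is why the 75/25 split is a plausible reading of the methodology.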