R1 0528 vs GLM 4.7
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GLM 4.7 wins all 11 shared benchmarks, leading in arena, knowledge, and math.
Category leads
GLM 4.7 leads in all six categories: arena, knowledge, math, language, coding, and reasoning.
Hype vs Reality
Attention vs performance
R1 0528 · #53 by performance · no attention signal
GLM 4.7 · #93 by performance · no attention signal
Best value
GLM 4.7 · 1.1x better value than R1 0528
R1 0528 · 43.7 pts/$ · $1.32/M
GLM 4.7 · 47.6 pts/$ · $1.06/M
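The value arithmetic above is easy to reproduce. A minimal sketch, assuming pts/$ is an aggregate benchmark score divided by the blended price per 1M tokens; the aggregate scores below are back-calculated placeholders (the card does not publish them), and the monthly figure follows the "projected $/mo at 10M tokens" convention from the Pricing section:

```python
# Minimal sketch of the "Best value" arithmetic. Assumption: pts/$ is an
# aggregate benchmark score divided by blended $/1M tokens. The agg_score
# values are back-calculated to match the card's pts/$ figures, not published.
models = {
    "R1 0528": {"agg_score": 57.7, "price_per_m": 1.32},  # -> ~43.7 pts/$
    "GLM 4.7": {"agg_score": 50.5, "price_per_m": 1.06},  # -> ~47.6 pts/$
}

def pts_per_dollar(m):
    return m["agg_score"] / m["price_per_m"]

for name, m in models.items():
    monthly = m["price_per_m"] * 10  # projected $/mo at 10M tokens
    print(f"{name}: {pts_per_dollar(m):.1f} pts/$ · ${monthly:.2f}/mo at 10M tokens")

# Ratio behind "1.1x better value": 47.6 / 43.7 ≈ 1.09, rounded up on the card.
print(f'{pts_per_dollar(models["GLM 4.7"]) / pts_per_dollar(models["R1 0528"]):.2f}x')
```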
Vendor risk
Mixed exposure
One or more vendors flagged
DeepSeek · $3.4B · Tier 1
z-ai · private · undisclosed
Head to head
11 benchmarks · 2 models
Chatbot Arena Elo · Overall
GLM 4.7 leads by +21.0 · R1 0528 1421.7 · GLM 4.7 1442.7
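For scale, a +21 Elo gap maps to only a slight pairwise preference. A quick check using the standard Elo expected-score formula (this conversion is standard Elo math, not a figure from the card):

```python
# Standard Elo expectation: E = 1 / (1 + 10**(-delta/400)).
delta = 1442.7 - 1421.7                    # GLM 4.7's Arena lead
expected = 1 / (1 + 10 ** (-delta / 400))  # ≈ 0.53
print(f"GLM 4.7 expected head-to-head win rate: {expected:.1%}")  # ~53.0%
```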
GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GLM 4.7 leads by +9.4 · R1 0528 68.4 · GLM 4.7 77.8
OpenCompass · AIME2025
GLM 4.7 leads by +6.4 · R1 0528 89.0 · GLM 4.7 95.4
OpenCompass · GPQA-Diamond
GLM 4.7 leads by +6.3 · R1 0528 80.6 · GLM 4.7 86.9
OpenCompass · HLE
GLM 4.7 leads by +11.0 · R1 0528 14.4 · GLM 4.7 25.4
OpenCompass · IFEval
GLM 4.7 leads by +10.2 · R1 0528 80.0 · GLM 4.7 90.2
OpenCompass · LiveCodeBenchV6
GLM 4.7 leads by +22.8 · R1 0528 61.0 · GLM 4.7 83.8
OpenCompass · MMLU-Pro
GLM 4.7 leads by +0.5 · R1 0528 83.5 · GLM 4.7 84.0
OTIS Mock AIME 2024-2025
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GLM 4.7 leads by +16.9 · R1 0528 66.4 · GLM 4.7 83.3
SimpleBench
Tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GLM 4.7 leads by +8.2 · R1 0528 29.0 · GLM 4.7 37.2
SimpleQA Verified
Short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
GLM 4.7 leads by +4.1 · R1 0528 27.4 · GLM 4.7 31.5
Full benchmark table
| Benchmark | R1 0528 | GLM 4.7 |
|---|---|---|
| Chatbot Arena Elo · Overall | 1421.7 | 1442.7 |
| GPQA diamond | 68.4 | 77.8 |
| OpenCompass · AIME2025 | 89.0 | 95.4 |
| OpenCompass · GPQA-Diamond | 80.6 | 86.9 |
| OpenCompass · HLE | 14.4 | 25.4 |
| OpenCompass · IFEval | 80.0 | 90.2 |
| OpenCompass · LiveCodeBenchV6 | 61.0 | 83.8 |
| OpenCompass · MMLU-Pro | 83.5 | 84.0 |
| OTIS Mock AIME 2024-2025 | 66.4 | 83.3 |
| SimpleBench | 29.0 | 37.2 |
| SimpleQA Verified | 27.4 | 31.5 |
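The winner summary follows mechanically from this table. A minimal sketch of the tally, assuming every benchmark is scored higher-is-better (the structure is illustrative, not the site's code):

```python
# Derive "wins 11 of 11" and per-benchmark leads from the table above.
# Assumption: higher is better on every shared benchmark.
shared = {
    "Chatbot Arena Elo · Overall": (1421.7, 1442.7),
    "GPQA diamond": (68.4, 77.8),
    "OpenCompass · AIME2025": (89.0, 95.4),
    "OpenCompass · GPQA-Diamond": (80.6, 86.9),
    "OpenCompass · HLE": (14.4, 25.4),
    "OpenCompass · IFEval": (80.0, 90.2),
    "OpenCompass · LiveCodeBenchV6": (61.0, 83.8),
    "OpenCompass · MMLU-Pro": (83.5, 84.0),
    "OTIS Mock AIME 2024-2025": (66.4, 83.3),
    "SimpleBench": (29.0, 37.2),
    "SimpleQA Verified": (27.4, 31.5),
}

glm_wins = sum(glm > r1 for r1, glm in shared.values())
print(f"GLM 4.7 wins {glm_wins} of {len(shared)} shared benchmarks")

for name, (r1, glm) in shared.items():
    leader, delta = ("GLM 4.7", glm - r1) if glm > r1 else ("R1 0528", r1 - glm)
    print(f"{name}: {leader} leads by +{delta:.1f}")
```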
Pricing · per 1M tokens · projected $/mo at 10M tokens