
GLM 4.7 vs R1 0528

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GLM 4.7 wins 11 of 11 shared benchmarks. Leads in arena · knowledge · math.

Category leads
arena · GLM 4.7
knowledge · GLM 4.7
math · GLM 4.7
language · GLM 4.7
coding · GLM 4.7
reasoning · GLM 4.7
Hype vs Reality

GLM 4.7 · #93 by perf · no signal · QUIET
R1 0528 · #53 by perf · no signal · QUIET
Best value

GLM 4.7 offers 1.1x better value than R1 0528.

GLM 4.7 · 47.6 pts/$ · $1.06/M
R1 0528 · 43.7 pts/$ · $1.32/M
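The value figures above can be sanity-checked in a few lines. This is a sketch, not the page's actual formula: it assumes the blended $/M price is the simple average of the input and output per-million prices from the pricing table, and since the definition of "pts" is not disclosed, it only checks the headline value ratio from the quoted pts/$ figures.

```python
# Assumption: blended $/M = average of input and output per-million prices.
glm_blended = (0.38 + 1.74) / 2   # ≈ $1.06/M, matching the page
r1_blended = (0.50 + 2.15) / 2    # ≈ $1.32/M after rounding

# The "pts" metric is undefined on the page, so check only the ratio.
value_ratio = 47.6 / 43.7         # ≈ 1.09, i.e. the page's "1.1x"

print(round(glm_blended, 2), round(r1_blended, 2), round(value_ratio, 1))
```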
Vendor risk

One or more vendors flagged.

z-ai · private · undisclosed valuation · risk: Unknown
DeepSeek · $3.4B valuation · Tier 1 · risk: Higher
Head to head
Chatbot Arena Elo · Overall
GLM 4.7 leads by +21.0 (GLM 4.7: 1442.7 · R1 0528: 1421.7)

GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GLM 4.7 leads by +9.4 (GLM 4.7: 77.8 · R1 0528: 68.4)

OpenCompass · AIME2025
GLM 4.7 leads by +6.4 (GLM 4.7: 95.4 · R1 0528: 89.0)

OpenCompass · GPQA-Diamond
GLM 4.7 leads by +6.3 (GLM 4.7: 86.9 · R1 0528: 80.6)

OpenCompass · HLE
GLM 4.7 leads by +11.0 (GLM 4.7: 25.4 · R1 0528: 14.4)

OpenCompass · IFEval
GLM 4.7 leads by +10.2 (GLM 4.7: 90.2 · R1 0528: 80.0)

OpenCompass · LiveCodeBenchV6
GLM 4.7 leads by +22.8 (GLM 4.7: 83.8 · R1 0528: 61.0)

OpenCompass · MMLU-Pro
GLM 4.7 leads by +0.5 (GLM 4.7: 84.0 · R1 0528: 83.5)

OTIS Mock AIME 2024-2025
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GLM 4.7 leads by +16.9 (GLM 4.7: 83.3 · R1 0528: 66.4)

SimpleBench
Tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GLM 4.7 leads by +8.2 (GLM 4.7: 37.2 · R1 0528: 29.0)

SimpleQA Verified
Short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
GLM 4.7 leads by +4.1 (GLM 4.7: 31.5 · R1 0528: 27.4)
Full benchmark table

Benchmark                     | GLM 4.7 | R1 0528
Chatbot Arena Elo · Overall   | 1442.7  | 1421.7
GPQA diamond                  | 77.8    | 68.4
OpenCompass · AIME2025        | 95.4    | 89.0
OpenCompass · GPQA-Diamond    | 86.9    | 80.6
OpenCompass · HLE             | 25.4    | 14.4
OpenCompass · IFEval          | 90.2    | 80.0
OpenCompass · LiveCodeBenchV6 | 83.8    | 61.0
OpenCompass · MMLU-Pro        | 84.0    | 83.5
OTIS Mock AIME 2024-2025      | 83.3    | 66.4
SimpleBench                   | 37.2    | 29.0
SimpleQA Verified             | 31.5    | 27.4
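The "wins 11 of 11 shared benchmarks" headline can be re-derived directly from the benchmark scores. This minimal sketch tallies, from the scores quoted on this page (GLM 4.7 first, R1 0528 second), how many benchmarks the first model wins.

```python
# Scores as quoted on this page: (GLM 4.7, R1 0528) per benchmark.
scores = {
    "Chatbot Arena Elo · Overall": (1442.7, 1421.7),
    "GPQA diamond": (77.8, 68.4),
    "OpenCompass · AIME2025": (95.4, 89.0),
    "OpenCompass · GPQA-Diamond": (86.9, 80.6),
    "OpenCompass · HLE": (25.4, 14.4),
    "OpenCompass · IFEval": (90.2, 80.0),
    "OpenCompass · LiveCodeBenchV6": (83.8, 61.0),
    "OpenCompass · MMLU-Pro": (84.0, 83.5),
    "OTIS Mock AIME 2024-2025": (83.3, 66.4),
    "SimpleBench": (37.2, 29.0),
    "SimpleQA Verified": (31.5, 27.4),
}

# Count benchmarks where GLM 4.7's score is strictly higher.
glm_wins = sum(glm > r1 for glm, r1 in scores.values())
print(f"GLM 4.7 wins {glm_wins} of {len(scores)}")  # GLM 4.7 wins 11 of 11
```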
Pricing · per 1M tokens · projected $/mo at 10M tokens

Model   | Input | Output | Context                  | Projected $/mo
GLM 4.7 | $0.38 | $1.74  | 203K tokens (~101 books) | $7.20
R1 0528 | $0.50 | $2.15  | 164K tokens (~82 books)  | $9.13
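The projected monthly figures follow from the per-token prices once a token mix is fixed. The page does not state the input/output split behind its projection; a 75% input / 25% output split (an assumption, not documented on the page) reproduces both quoted figures.

```python
def monthly_cost(input_price, output_price, total_m_tokens=10, input_share=0.75):
    """Dollar cost for total_m_tokens million tokens at the given input/output mix.

    input_share=0.75 is an assumed 75% input / 25% output token split.
    """
    return total_m_tokens * (input_share * input_price
                             + (1 - input_share) * output_price)

glm_monthly = monthly_cost(0.38, 1.74)  # ≈ 7.20, matching the table's $7.20
r1_monthly = monthly_cost(0.50, 2.15)   # 9.125, i.e. the table's $9.13 after rounding
```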