
R1 0528 vs GLM 4.7

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GLM 4.7 wins 11 of 11 shared benchmarks. Leads in arena · knowledge · math.

Category leads
arena · GLM 4.7
knowledge · GLM 4.7
math · GLM 4.7
language · GLM 4.7
coding · GLM 4.7
reasoning · GLM 4.7
Hype vs Reality
R1 0528 · #53 by perf · no signal · QUIET
GLM 4.7 · #93 by perf · no signal · QUIET
Best value
GLM 4.7 · 1.1x better value than R1 0528
R1 0528 · 43.7 pts/$ · $1.32/M
GLM 4.7 · 47.6 pts/$ · $1.06/M
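The 1.1x figure follows directly from the two pts/$ numbers above. A quick sketch of the arithmetic — note the blended $/M figures appear consistent with an even input:output price mix, which is an assumption on our part, not something the page states:

```python
# Value-card arithmetic, using the figures shown above.
r1_value, glm_value = 43.7, 47.6            # pts/$ for each model

# Assumption: blended $/M looks like a simple average of input and output price.
r1_blended = (0.50 + 2.15) / 2              # ~1.33, shown as $1.32/M above
glm_blended = (0.38 + 1.74) / 2             # 1.06, matches $1.06/M above

ratio = glm_value / r1_value
print(round(ratio, 1))                      # 1.1 — "1.1x better value"
```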
Vendor risk
One or more vendors flagged
DeepSeek · $3.4B · Tier 1 · Higher risk
z-ai · private · undisclosed · Unknown
Head to head
Chatbot Arena Elo · Overall
GLM 4.7 leads by +21.0
R1 0528: 1421.7 · GLM 4.7: 1442.7
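An Elo-style gap translates into an expected head-to-head win rate via the standard Elo expectation formula. Arena rankings are Elo/Bradley-Terry style, so this is an approximation of what a 21-point lead means in practice:

```python
# Expected win probability implied by an Elo rating gap (standard Elo formula).
def elo_win_prob(delta: float) -> float:
    """Probability the higher-rated model wins, given a rating gap `delta`."""
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

# 1442.7 - 1421.7 = 21.0 points -> roughly a 53% expected win rate.
print(round(elo_win_prob(1442.7 - 1421.7), 3))
```

In other words, a 21-point Elo lead is real but modest: GLM 4.7 would be expected to win only about 53 of 100 head-to-head matchups.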
GPQA diamond
GLM 4.7 leads by +9.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
R1 0528: 68.4 · GLM 4.7: 77.8
OpenCompass · AIME2025
GLM 4.7 leads by +6.4
R1 0528: 89.0 · GLM 4.7: 95.4
OpenCompass · GPQA-Diamond
GLM 4.7 leads by +6.3
R1 0528: 80.6 · GLM 4.7: 86.9
OpenCompass · HLE
GLM 4.7 leads by +11.0
R1 0528: 14.4 · GLM 4.7: 25.4
OpenCompass · IFEval
GLM 4.7 leads by +10.2
R1 0528: 80.0 · GLM 4.7: 90.2
OpenCompass · LiveCodeBenchV6
GLM 4.7 leads by +22.8
R1 0528: 61.0 · GLM 4.7: 83.8
OpenCompass · MMLU-Pro
GLM 4.7 leads by +0.5
R1 0528: 83.5 · GLM 4.7: 84.0
OTIS Mock AIME 2024-2025
GLM 4.7 leads by +16.9
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
R1 0528: 66.4 · GLM 4.7: 83.3
SimpleBench
GLM 4.7 leads by +8.2
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
R1 0528: 29.0 · GLM 4.7: 37.2
SimpleQA Verified
GLM 4.7 leads by +4.1
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
R1 0528: 27.4 · GLM 4.7: 31.5
Full benchmark table
Benchmark | R1 0528 | GLM 4.7
Chatbot Arena Elo · Overall | 1421.7 | 1442.7
GPQA diamond | 68.4 | 77.8
OpenCompass · AIME2025 | 89.0 | 95.4
OpenCompass · GPQA-Diamond | 80.6 | 86.9
OpenCompass · HLE | 14.4 | 25.4
OpenCompass · IFEval | 80.0 | 90.2
OpenCompass · LiveCodeBenchV6 | 61.0 | 83.8
OpenCompass · MMLU-Pro | 83.5 | 84.0
OTIS Mock AIME 2024-2025 | 66.4 | 83.3
SimpleBench | 29.0 | 37.2
SimpleQA Verified | 27.4 | 31.5
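The "11 of 11 shared benchmarks" claim in the winner summary can be re-checked directly from this table. A quick tally, with the scores as listed (R1 0528 first, GLM 4.7 second in each pair):

```python
# Re-tallying the win count from the full benchmark table above.
scores = {
    "Chatbot Arena Elo · Overall": (1421.7, 1442.7),
    "GPQA diamond": (68.4, 77.8),
    "OpenCompass · AIME2025": (89.0, 95.4),
    "OpenCompass · GPQA-Diamond": (80.6, 86.9),
    "OpenCompass · HLE": (14.4, 25.4),
    "OpenCompass · IFEval": (80.0, 90.2),
    "OpenCompass · LiveCodeBenchV6": (61.0, 83.8),
    "OpenCompass · MMLU-Pro": (83.5, 84.0),
    "OTIS Mock AIME 2024-2025": (66.4, 83.3),
    "SimpleBench": (29.0, 37.2),
    "SimpleQA Verified": (27.4, 31.5),
}
glm_wins = sum(glm > r1 for r1, glm in scores.values())
print(f"GLM 4.7 wins {glm_wins} of {len(scores)} shared benchmarks")
```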
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
R1 0528 (DeepSeek) | $0.50 | $2.15 | 164K tokens (~82 books) | $9.13
GLM 4.7 (z-ai) | $0.38 | $1.74 | 203K tokens (~101 books) | $7.20
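The projected $/mo figures are consistent with an assumed 3:1 input:output token mix over the 10M monthly tokens — that mix is our inference from the numbers, not something the page states. A sketch of the math:

```python
# Projected monthly cost from per-1M-token prices.
# Assumption (inferred, not stated): 75% of tokens are input, 25% are output.
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    blended = input_share * input_per_m + (1 - input_share) * output_per_m
    return blended * total_m

r1_cost = monthly_cost(0.50, 2.15)   # 9.125 -> shown as $9.13 above
glm_cost = monthly_cost(0.38, 1.74)  # 7.20  -> matches $7.20 above
print(r1_cost, glm_cost)
```

Under this mix, GLM 4.7's lower prices on both sides translate to roughly $1.93/mo saved at the 10M-token volume.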