
GLM 4.7 vs DeepSeek V3.2 Speciale

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

DeepSeek V3.2 Speciale wins 4 of the 6 shared benchmarks, with leads in math, knowledge, and language.

Category leads
Math: DeepSeek V3.2 Speciale
Knowledge: DeepSeek V3.2 Speciale
Language: DeepSeek V3.2 Speciale
Coding: GLM 4.7
Hype vs Reality
GLM 4.7: #91 by performance · no signal (QUIET)
DeepSeek V3.2 Speciale: #4 by performance · #5 by attention (DESERVED)
Best value
DeepSeek V3.2 Speciale delivers 2.1x better value than GLM 4.7.
GLM 4.7: 47.2 pts/$ · $1.07/M
DeepSeek V3.2 Speciale: 97.8 pts/$ · $0.80/M
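The "2.1x better value" headline is simply the ratio of the two pts/$ figures above. The composite score behind pts/$ is not disclosed on this page, so the two pts/$ values are taken as given; this sketch only recomputes the ratio:

```python
# Value ratio from the pts/$ figures shown above.
# The pts/$ numbers come from the page; only the headline ratio is recomputed here.
value = {
    "GLM 4.7": 47.2,                 # points per dollar
    "DeepSeek V3.2 Speciale": 97.8,  # points per dollar
}

ratio = value["DeepSeek V3.2 Speciale"] / value["GLM 4.7"]
print(f"{ratio:.1f}x")  # rounds to the 2.1x shown above
```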
Vendor risk
One or more vendors flagged.
z-ai (private · undisclosed): Unknown
DeepSeek ($3.4B · Tier 1): Higher risk
Head to head

OpenCompass · AIME2025: DeepSeek V3.2 Speciale leads by +0.6 (GLM 4.7 95.4 · DeepSeek V3.2 Speciale 96.0)
OpenCompass · GPQA-Diamond: GLM 4.7 leads by +0.2 (GLM 4.7 86.9 · DeepSeek V3.2 Speciale 86.7)
OpenCompass · HLE: DeepSeek V3.2 Speciale leads by +3.2 (GLM 4.7 25.4 · DeepSeek V3.2 Speciale 28.6)
OpenCompass · IFEval: DeepSeek V3.2 Speciale leads by +1.5 (GLM 4.7 90.2 · DeepSeek V3.2 Speciale 91.7)
OpenCompass · LiveCodeBenchV6: GLM 4.7 leads by +2.9 (GLM 4.7 83.8 · DeepSeek V3.2 Speciale 80.9)
OpenCompass · MMLU-Pro: DeepSeek V3.2 Speciale leads by +1.5 (GLM 4.7 84.0 · DeepSeek V3.2 Speciale 85.5)
Full benchmark table
Benchmark                        GLM 4.7    DeepSeek V3.2 Speciale
OpenCompass · AIME2025           95.4       96.0
OpenCompass · GPQA-Diamond       86.9       86.7
OpenCompass · HLE                25.4       28.6
OpenCompass · IFEval             90.2       91.7
OpenCompass · LiveCodeBenchV6    83.8       80.9
OpenCompass · MMLU-Pro           84.0       85.5
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model                     Input    Output   Context                    Projected $/mo
GLM 4.7                   $0.39    $1.75    203K tokens (~101 books)   $7.30
DeepSeek V3.2 Speciale    $0.40    $1.20    164K tokens (~82 books)    $6.00
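The "Projected $/mo" column is consistent with the 10M monthly tokens being split 75% input / 25% output; that split is inferred from the numbers (it reproduces both rows exactly), not stated on the page. A sketch of the projection under that assumption:

```python
# Projected monthly cost at 10M tokens, assuming a 75/25 input/output split.
# The split is an inference from the table, not a documented parameter.
MONTHLY_TOKENS = 10_000_000
INPUT_SHARE = 0.75

def projected_monthly_cost(input_per_m: float, output_per_m: float) -> float:
    """Blend per-million input/output prices over the assumed token mix."""
    input_tokens = MONTHLY_TOKENS * INPUT_SHARE
    output_tokens = MONTHLY_TOKENS * (1 - INPUT_SHARE)
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

print(projected_monthly_cost(0.39, 1.75))  # GLM 4.7
print(projected_monthly_cost(0.40, 1.20))  # DeepSeek V3.2 Speciale
```

With these prices the function returns the table's $7.30 and $6.00 figures, which is the basis for the inferred split.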