
GLM 4.5

by z-ai · Released Jul 2025

Open Source
72.3
avg score
Rank #35
Better than 85% of all models
Context
131K tokens (~66 books)
Input $/1M
$0.60
Output $/1M
$2.20
Type
text
License
Open Source
Benchmarks
7 tested
Data updated today
About

GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly...

Tested on 7 benchmarks with a 69.2% average. Top scores: Chatbot Arena Elo — Overall (1410.9), OpenCompass — AIME2025 (85.8%), OpenCompass — IFEval (85.4%).

Looking for similar performance at lower cost?
MiniMax M2 scores 72.4 (100% as good) at $0.26/1M input · 57% cheaper
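The "57% cheaper" figure follows directly from the two listed input prices; a minimal sketch of the arithmetic (prices taken from the comparison above):

```python
# Compare per-token input pricing between the two models named above.
GLM_45_INPUT_PER_M = 0.60      # $ per 1M input tokens (GLM 4.5)
MINIMAX_M2_INPUT_PER_M = 0.26  # $ per 1M input tokens (MiniMax M2)

# Fraction saved on input traffic by using the cheaper model.
savings = 1 - MINIMAX_M2_INPUT_PER_M / GLM_45_INPUT_PER_M
print(f"{savings:.0%} cheaper")  # → 57% cheaper
```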
Capabilities
coding
65.0
#29 globally
math
85.8
#11 globally
knowledge
59.7
#49 globally
language
85.4
#32 globally
Benchmark Scores
Compare All
Tested on 7 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models)
OpenCompass — LiveCodeBenchV6

OpenCompass Live Code Bench v6. Fresh competitive programming problems to evaluate code generation without memorization.

65.0
OpenCompass — AIME2025

OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.

85.8
OpenCompass — MMLU-Pro

OpenCompass MMLU-Pro evaluation. Harder knowledge test with more answer choices.

82.7
OpenCompass — GPQA-Diamond

OpenCompass evaluation of GPQA Diamond. PhD-level science questions from the hardest subset.

79.5
OpenCompass — HLE

OpenCompass evaluation of Humanity's Last Exam. Expert-level cross-discipline knowledge test.

16.9
Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Links
Documentation
Community
BenchGecko API
glm-4-5
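The `glm-4-5` identifier above is the model's API slug. As a sketch of how a client might use it, here is a hypothetical URL builder; the base endpoint is an assumption for illustration, only the slug itself comes from this page:

```python
# Build a model-lookup URL from the page's API slug. The base URL is a
# placeholder assumption; only the "glm-4-5" slug appears on this page.
BASE = "https://api.benchgecko.example/v1/models"

def model_url(slug: str) -> str:
    """Return the lookup URL for a given model slug."""
    return f"{BASE}/{slug}"

print(model_url("glm-4-5"))
```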
Specifications
  • Type: text
  • Context: 131K tokens (~66 books)
  • Released: Jul 2025
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.003
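The ~$0.003 per-message figure is consistent with the listed per-token prices under a message of roughly 1,000 input and 1,000 output tokens; the token counts are our assumption, not stated on the page:

```python
# Estimate cost per message from per-million-token prices (from the spec list).
INPUT_PER_M = 0.60   # $ per 1M input tokens
OUTPUT_PER_M = 2.20  # $ per 1M output tokens

# Assumed typical message size: ~1,000 tokens in, ~1,000 out (our assumption).
input_tokens, output_tokens = 1_000, 1_000

cost = (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000
print(f"~${cost:.3f} per message")  # → ~$0.003 per message
```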
Available On
z-ai · $0.60/1M input
Share & Export
GLM 4.5 is an open-source text AI model by z-ai, released in July 2025. It has an average benchmark score of 72.3. Context window: 131K tokens.