
GLM 4.7

by z-ai · Released Dec 2025

Open Source
56.6
avg score
Rank #84
Better than 64% of all models
Context
203K tokens (~101 books)
Input $/1M
$0.38
Output $/1M
$1.74
Type
text
License
Open Source
Benchmarks
26 tested
About

GLM-4.7 is Z.ai’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates significant improvements in executing complex agent tasks while...

Tested on 26 benchmarks with a 50.5% average. Top scores: Chatbot Arena Elo — Overall (1442.7), Chatbot Arena Elo — Coding (1439.2), OpenCompass — AIME2025 (95.4%).

Capabilities
coding
58.0
#45 globally
reasoning
50.7
#46 globally
math
51.5
#72 globally
knowledge
47.1
#114 globally
agentic
3.1
#36 globally
language
63.7
#82 globally
Benchmark Scores
Tested on 26 benchmarks · Ranked across 7 categories
Score Distribution (all 233 models)
OpenCompass — LiveCodeBenchV6

OpenCompass Live Code Bench v6. Fresh competitive programming problems to evaluate code generation without memorization.

83.8
LiveBench — Coding

Regularly refreshed coding problems that avoid data contamination. New problems added monthly to prevent memorization.

73.1
LiveBench — Agentic Coding

LiveBench coding tasks that require multi-step reasoning and tool use. Tests planning and execution of complex coding workflows.

41.7
LiveBench — Reasoning

Regularly refreshed reasoning problems testing logical deduction, spatial reasoning, and analytical thinking.

59.7
LiveBench — Data Analysis

Fresh data analysis tasks testing ability to interpret tables, charts, and statistical data.

55.2
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

37.2
OpenCompass — AIME2025

OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.

95.4
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

83.3
LiveBench — Mathematics

Regularly updated math problems that test numerical reasoning, algebra, calculus, and combinatorics.

76.0
Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Links
Documentation
Community
BenchGecko API
glm-4-7
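
Assuming an OpenAI-compatible chat-completions schema (an assumption; the actual endpoint and payload contract should be confirmed against the provider's API documentation), a request for the `glm-4-7` model id might be built like this minimal sketch:

```python
import json

# Build a chat-completions style request body for model id "glm-4-7".
# The payload schema here is an assumption (OpenAI-compatible style);
# only the model id itself comes from the page above.
def build_request(prompt: str, max_tokens: int = 512) -> dict:
    return {
        "model": "glm-4-7",  # model id listed under BenchGecko API
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Summarize GLM 4.7's strengths in one sentence.")
print(json.dumps(payload, indent=2))
```

The body would then be POSTed with an API key to whatever chat endpoint the provider exposes.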
Specifications
  • Type: text
  • Context: 203K tokens (~101 books)
  • Released: Dec 2025
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.003
Available On
z-ai · $0.38 / 1M input tokens
Share & Export
GLM 4.7 is an open-source text AI model by z-ai, released in December 2025. It has an average benchmark score of 56.6. Context window: 203K tokens.