GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GLM-4.5 delivers significantly...
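Most hosts expose GLM-4.5 through an OpenAI-compatible chat-completions interface. A minimal sketch using the OpenAI Python SDK; the base URL and the `glm-4.5` model identifier are assumptions here, so check them against your provider's documentation:

```python
from openai import OpenAI

# Assumed endpoint and model id -- verify both against the provider's docs.
client = OpenAI(
    base_url="https://api.z.ai/api/paas/v4/",  # assumed OpenAI-compatible base URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="glm-4.5",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Summarize this repository's build steps."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```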
Tested on 7 benchmarks with a 69.2% average. Top scores: Chatbot Arena Elo — Overall (1410.9), OpenCompass — AIME 2025 (85.8%), OpenCompass — IFEval (85.4%).
For comparison: MiniMax M2 scores 72.4 (matching or exceeding GLM-4.5's average) at $0.26/1M input · 57% cheaper
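The "57% cheaper" figure implies an input price for GLM-4.5 of roughly $0.60 per 1M tokens; that figure is inferred from the comparison above, not quoted pricing. A quick check:

```python
# Back out GLM-4.5's implied input price from the comparison above.
# MiniMax M2 costs $0.26/1M input and is quoted as 57% cheaper.
m2_price = 0.26
discount = 0.57
implied_glm_price = m2_price / (1 - discount)  # price that M2 undercuts by 57%
print(f"Implied GLM-4.5 input price: ${implied_glm_price:.2f}/1M tokens")
# -> Implied GLM-4.5 input price: $0.60/1M tokens
```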
OpenCompass LiveCodeBench v6. Fresh competitive-programming problems that evaluate code generation without memorization.
OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.
OpenCompass MMLU-Pro evaluation. A harder knowledge test with ten answer choices per question instead of MMLU's four.
OpenCompass evaluation of GPQA Diamond. PhD-level science questions from the hardest subset.
OpenCompass evaluation of Humanity's Last Exam. Expert-level cross-discipline knowledge test.
- Type: text
- Context: 131K tokens (~98K words, roughly one novel)
- Released: Jul 2025
- License: Open Source
- Status: Active
- Cost / Message: ~$0.003
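The ~$0.003 per-message estimate follows from per-token pricing once you fix an average message size. A sketch with hypothetical token counts, the implied ~$0.60/1M input price derived above, and an assumed output price (none of these numbers are quoted in the listing):

```python
# Rough cost-per-message estimate from per-token prices.
# All numbers below are assumptions for illustration, not quoted pricing.
input_price_per_1m = 0.60   # implied earlier; $ per 1M input tokens
output_price_per_1m = 2.20  # assumed output price, $ per 1M tokens
avg_input_tokens = 1_000    # hypothetical average prompt size
avg_output_tokens = 1_000   # hypothetical average completion size

cost = (avg_input_tokens * input_price_per_1m
        + avg_output_tokens * output_price_per_1m) / 1_000_000
print(f"~${cost:.4f} per message")  # -> ~$0.0028, i.e. roughly $0.003
```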