GLM 5
Open source from z-ai · Released 2026-02-11
57.6
Average score
$0.65/1M
Input price
$2.08/1M
Output price
203K tokens (~101 books)
Context window
text
Type
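The input and output prices above imply a simple per-request cost formula. A minimal sketch using the listed rates ($0.65/1M input, $2.08/1M output); the request sizes in the example are hypothetical:

```python
# Cost estimate from the listed per-million-token prices.
INPUT_PRICE_PER_M = 0.65   # USD per 1M input tokens (from the card above)
OUTPUT_PRICE_PER_M = 2.08  # USD per 1M output tokens (from the card above)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10K-token prompt with a 2K-token reply (hypothetical sizes).
cost = request_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0107
```

At these rates, output tokens cost roughly 3.2× as much as input tokens, so long generations dominate the bill.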
Tested on 28 benchmarks with a 57.6% average. Top scores: Chatbot Arena Elo — Overall (1455.6), Chatbot Arena Elo — Coding (1441.0), OpenCompass — AIME2025 (95.8%).
Benchmark Scores
| Benchmark | Category | Score |
|---|---|---|
| Chatbot Arena Elo — Overall | arena | 1455.6 |
| Chatbot Arena Elo — Coding | arena | 1441.0 |
| OpenCompass — AIME2025 | math | 95.8 |
| OpenCompass — IFEval | language | 93.2 |
| OpenCompass — LiveCodeBenchV6 | coding | 86.2 |
| OpenCompass — GPQA-Diamond | knowledge | 85.3 |
| OpenCompass — MMLU-Pro | knowledge | 85.2 |
| GPQA diamond | knowledge | 83.8 |
| LiveBench — Mathematics | math | 83.5 |
| OTIS Mock AIME 2024-2025 | math | 80.0 |
| LiveBench — Language | language | 77.5 |
| LiveBench — Coding | coding | 73.6 |
| SWE-Bench verified | coding | 72.1 |
| LiveBench — Reasoning | reasoning | 69.1 |
| LiveBench — Overall | knowledge | 68.8 |
| LiveBench — Data Analysis | reasoning | 67.9 |
| LiveBench — IF | language | 55.3 |
| LiveBench — Agentic Coding | coding | 55.0 |
| Terminal Bench | coding | 52.4 |
| WeirdML | coding | 48.2 |
| ARC-AGI | reasoning | 44.7 |
| SimpleBench | reasoning | 43.8 |
| OpenCompass — HLE | knowledge | 28.1 |
| FrontierMath-2025-02-28-Private | math | 16.4 |
| PostTrainBench | knowledge | 13.9 |
| Chess Puzzles | knowledge | 10.0 |
| ARC-AGI-2 | reasoning | 4.9 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 2.1 |
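The 57.6 headline average appears to be the mean of the percentage-scaled scores in the table with the two Chatbot Arena Elo rows excluded (Elo is on a different scale, not 0–100). A sketch under that assumption:

```python
# Percentage-scaled scores from the table above (the two Elo rows excluded,
# since they are not on a 0-100 scale).
scores = [
    95.8, 93.2, 86.2, 85.3, 85.2, 83.8, 83.5, 80.0, 77.5, 73.6,
    72.1, 69.1, 68.8, 67.9, 55.3, 55.0, 52.4, 48.2, 44.7, 43.8,
    28.1, 16.4, 13.9, 10.0, 4.9, 2.1,
]

average = sum(scores) / len(scores)
print(f"{average:.1f}")  # → 57.6
```

The result matches the card's headline figure, which supports the assumption that the Elo rows are excluded from the average.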