GLM 4.7
Open source · from z-ai · Released 2025-12-22
50.5
Average score
$0.38/1M
Input price
$1.74/1M
Output price
203K tokens (~101 books)
Context window
text
Type
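As a quick worked example of the per-token pricing listed above (a sketch; the request sizes are made up for illustration):

```python
# Listed rates: $0.38 per 1M input tokens, $1.74 per 1M output tokens.
INPUT_PRICE = 0.38 / 1_000_000   # USD per input token
OUTPUT_PRICE = 1.74 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# A hypothetical request: 10,000 input tokens, 2,000 output tokens.
print(f"${request_cost(10_000, 2_000):.5f}")  # → $0.00728
```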
Tested on 26 benchmarks with a 50.5% average. Top scores: Chatbot Arena Elo — Overall (1442.7), Chatbot Arena Elo — Coding (1439.2), OpenCompass — AIME2025 (95.4%).
Benchmark scores
| Benchmark | Category | Score |
|---|---|---|
| Chatbot Arena Elo — Overall | arena | 1442.7 |
| Chatbot Arena Elo — Coding | arena | 1439.2 |
| OpenCompass — AIME2025 | math | 95.4 |
| OpenCompass — IFEval | language | 90.2 |
| OpenCompass — GPQA-Diamond | knowledge | 86.9 |
| OpenCompass — MMLU-Pro | knowledge | 84.0 |
| OpenCompass — LiveCodeBenchV6 | coding | 83.8 |
| OTIS Mock AIME 2024-2025 | math | 83.3 |
| GPQA diamond | knowledge | 77.8 |
| LiveBench — Mathematics | math | 76.0 |
| LiveBench — Coding | coding | 73.1 |
| LiveBench — Language | language | 65.2 |
| LiveBench — Reasoning | reasoning | 59.7 |
| LiveBench — Overall | knowledge | 58.1 |
| LiveBench — Data Analysis | reasoning | 55.2 |
| LiveBench — Agentic Coding | coding | 41.7 |
| SimpleBench | reasoning | 37.2 |
| LiveBench — If | language | 35.7 |
| Terminal Bench | coding | 33.4 |
| SimpleQA Verified | knowledge | 31.5 |
| OpenCompass — HLE | knowledge | 25.4 |
| PostTrainBench | knowledge | 7.5 |
| Chess Puzzles | knowledge | 6.0 |
| APEX-Agents | agentic | 3.1 |
| FrontierMath-2025-02-28-Private | math | 2.4 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 0.1 |
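The 50.5% headline average appears to be the mean of the 24 percentage-scale benchmarks above; the two Chatbot Arena Elo rows, which are on a different scale, are presumably excluded from the averaging. A minimal sketch reproducing that figure:

```python
# Percentage-scale benchmark scores from the table above
# (the two Chatbot Arena Elo rows are excluded: Elo is not a percentage).
scores = [
    95.4, 90.2, 86.9, 84.0, 83.8, 83.3, 77.8, 76.0,
    73.1, 65.2, 59.7, 58.1, 55.2, 41.7, 37.2, 35.7,
    33.4, 31.5, 25.4, 7.5, 6.0, 3.1, 2.4, 0.1,
]

average = sum(scores) / len(scores)
print(f"{average:.1f}")  # → 50.5
```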