
GLM 4.7 vs Kimi K2 Thinking

Side by side. Every metric. Every benchmark.

| Provider | Average score | Benchmarks won |
|---|---|---|
| z-ai (GLM 4.7) | 50.5 | 9/23 |
| moonshotai (Kimi K2 Thinking) | 53.3 | 13/23 |
| Metric | GLM 4.7 | Kimi K2 Thinking |
|---|---|---|
| Provider | z-ai | moonshotai |
| Average score | 50.5 | 53.3 |
| Input price | $0.39 | $0.60 |
| Output price | $1.75 | $2.50 |
| Context window | 203K tokens (~101 books) | 262K tokens (~131 books) |
| Released | 2025-12-22 | 2025-11-06 |
| License | Open Source | Open Source |

23 benchmarks · GLM 4.7 wins 9, Kimi K2 Thinking wins 13 (1 tie)

| Benchmark | Category | GLM 4.7 | Kimi K2 Thinking |
|---|---|---|---|
| APEX-Agents | agentic | 3.1 | 4.0 |
| Chess Puzzles | knowledge | 6.0 | 20.0 |
| FrontierMath-2025-02-28-Private | math | 2.4 | 21.4 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 0.1 | 0.1 |
| GPQA diamond | knowledge | 77.8 | 79.0 |
| LiveBench — Agentic Coding | coding | 41.7 | 38.3 |
| LiveBench — Coding | coding | 73.1 | 67.4 |
| LiveBench — Data Analysis | reasoning | 55.2 | 52.3 |
| LiveBench — If | language | 35.7 | 62.0 |
| LiveBench — Language | language | 65.2 | 66.5 |
| LiveBench — Mathematics | math | 76.0 | 81.1 |
| LiveBench — Overall | knowledge | 58.1 | 61.6 |
| LiveBench — Reasoning | reasoning | 59.7 | 63.5 |
| OpenCompass — AIME2025 | math | 95.4 | 94.1 |
| OpenCompass — GPQA-Diamond | knowledge | 86.9 | 82.7 |
| OpenCompass — HLE | knowledge | 25.4 | 21.3 |
| OpenCompass — IFEval | language | 90.2 | 92.4 |
| OpenCompass — LiveCodeBenchV6 | coding | 83.8 | 77.1 |
| OpenCompass — MMLU-Pro | knowledge | 84.0 | 84.3 |
| OTIS Mock AIME 2024-2025 | math | 83.3 | 83.0 |
| PostTrainBench | knowledge | 7.5 | 7.3 |
| SimpleQA Verified | knowledge | 31.5 | 31.6 |
| Terminal Bench | coding | 33.4 | 35.7 |
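The "benchmarks won" tallies follow directly from the per-benchmark scores above. A minimal sketch, using the scores transcribed from the table (note one benchmark, FrontierMath-Tier-4, is a tie, so the wins sum to 22 of 23):

```python
# Scores transcribed from the comparison table:
# (benchmark, GLM 4.7, Kimi K2 Thinking)
scores = [
    ("APEX-Agents", 3.1, 4.0),
    ("Chess Puzzles", 6.0, 20.0),
    ("FrontierMath-2025-02-28-Private", 2.4, 21.4),
    ("FrontierMath-Tier-4-2025-07-01-Private", 0.1, 0.1),
    ("GPQA diamond", 77.8, 79.0),
    ("LiveBench — Agentic Coding", 41.7, 38.3),
    ("LiveBench — Coding", 73.1, 67.4),
    ("LiveBench — Data Analysis", 55.2, 52.3),
    ("LiveBench — If", 35.7, 62.0),
    ("LiveBench — Language", 65.2, 66.5),
    ("LiveBench — Mathematics", 76.0, 81.1),
    ("LiveBench — Overall", 58.1, 61.6),
    ("LiveBench — Reasoning", 59.7, 63.5),
    ("OpenCompass — AIME2025", 95.4, 94.1),
    ("OpenCompass — GPQA-Diamond", 86.9, 82.7),
    ("OpenCompass — HLE", 25.4, 21.3),
    ("OpenCompass — IFEval", 90.2, 92.4),
    ("OpenCompass — LiveCodeBenchV6", 83.8, 77.1),
    ("OpenCompass — MMLU-Pro", 84.0, 84.3),
    ("OTIS Mock AIME 2024-2025", 83.3, 83.0),
    ("PostTrainBench", 7.5, 7.3),
    ("SimpleQA Verified", 31.5, 31.6),
    ("Terminal Bench", 33.4, 35.7),
]

# Count which model has the higher score on each benchmark.
glm_wins = sum(1 for _, glm, kimi in scores if glm > kimi)
kimi_wins = sum(1 for _, glm, kimi in scores if kimi > glm)
ties = sum(1 for _, glm, kimi in scores if glm == kimi)

print(glm_wins, kimi_wins, ties)  # → 9 13 1
```

The headline averages (50.5 vs 53.3) are not the plain mean of these 23 numbers, so the site presumably weights or normalizes scores in a way the page does not document.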