Qwen3.5 397B A17B vs Kimi K2.5 vs GLM 4.7
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Kimi K2.5 wins 13 of 21 benchmarks
Kimi K2.5 leads on 13 of the 21 benchmarks compared below and tops the math, knowledge, and language categories (a tally sketch follows the category leads).
Category leads
math · Kimi K2.5
knowledge · Kimi K2.5
language · Kimi K2.5
coding · GLM 4.7
speed · Kimi K2.5
agentic · Kimi K2.5
arena · GLM 4.7
reasoning · GLM 4.7
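A minimal Python sketch of how the headline tally can be reproduced from the scores in the full benchmark table below. This is not the comparison site's own scoring code, and the category groupings (math, coding, arena, etc.) are the page's own and are not recomputed here; the sketch simply counts, per benchmark, which model has the highest reported score.

```python
# Minimal sketch (not the site's scoring code): count, per benchmark, which model
# has the highest reported score. None = not reported for that model.
from collections import Counter

Q, K, G = "Qwen3.5 397B A17B", "Kimi K2.5", "GLM 4.7"

scores = {  # copied from the full benchmark table below
    "OpenCompass · AIME2025":        {Q: 92.3,   K: 91.9, G: 95.4},
    "OpenCompass · GPQA-Diamond":    {Q: 88.4,   K: 88.1, G: 86.9},
    "OpenCompass · HLE":             {Q: 27.5,   K: 28.6, G: 25.4},
    "OpenCompass · IFEval":          {Q: 91.5,   K: 93.9, G: 90.2},
    "OpenCompass · LiveCodeBenchV6": {Q: 83.0,   K: 80.6, G: 83.8},
    "OpenCompass · MMLU-Pro":        {Q: 87.6,   K: 86.2, G: 84.0},
    "AA Agentic Index":              {Q: 55.8,   K: 58.9, G: None},
    "AA Coding Index":               {Q: 41.3,   K: 39.5, G: None},
    "AA Quality Index":              {Q: 45.0,   K: 46.8, G: None},
    "APEX-Agents":                   {Q: None,   K: 14.4, G: 3.1},
    "Arena Elo · Coding":            {Q: 1386.1, K: None, G: 1439.2},
    "Arena Elo · Overall":           {Q: 1447.7, K: None, G: 1442.7},
    "Chess Puzzles":                 {Q: None,   K: 12.0, G: 6.0},
    "FrontierMath 2025-02":          {Q: None,   K: 27.9, G: 2.4},
    "FrontierMath Tier 4":           {Q: None,   K: 4.2,  G: 0.1},
    "GPQA diamond":                  {Q: None,   K: 83.5, G: 77.8},
    "OTIS Mock AIME 2024-2025":      {Q: None,   K: 92.2, G: 83.3},
    "PostTrainBench":                {Q: None,   K: 10.3, G: 7.5},
    "SimpleBench":                   {Q: None,   K: 36.2, G: 37.2},
    "SimpleQA Verified":             {Q: None,   K: 33.9, G: 31.5},
    "Terminal Bench":                {Q: None,   K: 43.2, G: 33.4},
}

wins = Counter()
for bench, row in scores.items():
    reported = {model: s for model, s in row.items() if s is not None}
    wins[max(reported, key=reported.get)] += 1

print(wins.most_common())  # Kimi K2.5: 13 wins, Qwen3.5 and GLM 4.7: 4 each
```

Benchmarks a model was not evaluated on (the — cells in the table) are excluded from that comparison rather than counted as losses, which is presumably how the page arrives at 13 of 21.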
Hype vs Reality
Attention vs performance
Qwen3.5 397B A17B
#5 by perf · no signal
Kimi K2.5
#87 by perf · no signal
GLM 4.7
#93 by perf · no signal
Best value
Qwen3.5 397B A17B
1.2x better value than GLM 4.7 (sketch below the per-model figures)
Qwen3.5 397B A17B
57.4 pts/$
$1.36/M
Kimi K2.5
42.6 pts/$
$1.22/M
GLM 4.7
47.6 pts/$
$1.06/M
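The value figures above can be sanity-checked with a short sketch. The page does not publish its formula, so two assumptions are made here: the blended $/M shown appears to be the plain average of the input and output prices from the pricing table at the bottom, and pts/$ is treated as some composite score divided by that blended price. The 1.2x headline is just the ratio of the pts/$ figures.

```python
# Hedged sketch of the "Best value" numbers; the formula is assumed, not documented.
value = {  # model: (pts per $, blended $ per 1M tokens), as listed above
    "Qwen3.5 397B A17B": (57.4, 1.36),
    "Kimi K2.5":         (42.6, 1.22),
    "GLM 4.7":           (47.6, 1.06),
}

for model, (pts_per_dollar, blended_price) in value.items():
    # Assumption: pts/$ = composite score / blended $/M, so the implied score is:
    implied_score = pts_per_dollar * blended_price
    print(f"{model}: implied composite score ~ {implied_score:.1f}")

# The "1.2x better value than GLM 4.7" headline is the pts/$ ratio:
print(round(value["Qwen3.5 397B A17B"][0] / value["GLM 4.7"][0], 2))  # 1.21, i.e. ~1.2x
```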
Vendor risk
Who is behind the model
Alibaba (Qwen)
$293.0B · Tier 1
Moonshot AI (Kimi)
private · undisclosed
Z.ai (GLM)
private · undisclosed
Head to head
21 benchmarks · 3 models
Qwen3.5 397B A17B · Kimi K2.5 · GLM 4.7
OpenCompass · AIME2025
GLM 4.7 leads by +3.1
Qwen3.5 397B A17B
92.3
Kimi K2.5
91.9
GLM 4.7
95.4
OpenCompass · GPQA-Diamond
Qwen3.5 397B A17B leads by +0.3
Qwen3.5 397B A17B
88.4
Kimi K2.5
88.1
GLM 4.7
86.9
OpenCompass · HLE
Kimi K2.5 leads by +1.1
Qwen3.5 397B A17B
27.5
Kimi K2.5
28.6
GLM 4.7
25.4
OpenCompass · IFEval
Kimi K2.5 leads by +2.4
Qwen3.5 397B A17B
91.5
Kimi K2.5
93.9
GLM 4.7
90.2
OpenCompass · LiveCodeBenchV6
GLM 4.7 leads by +0.8
Qwen3.5 397B A17B
83.0
Kimi K2.5
80.6
GLM 4.7
83.8
OpenCompass · MMLU-Pro
Qwen3.5 397B A17B leads by +1.4
Qwen3.5 397B A17B
87.6
Kimi K2.5
86.2
GLM 4.7
84.0
Artificial Analysis · Agentic Index
Kimi K2.5 leads by +3.1
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Qwen3.5 397B A17B
55.8
Kimi K2.5
58.9
Artificial Analysis · Coding Index
Qwen3.5 397B A17B leads by +1.8
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Qwen3.5 397B A17B
41.3
Kimi K2.5
39.5
Artificial Analysis · Quality Index
Kimi K2.5 leads by +1.8
Qwen3.5 397B A17B
45.0
Kimi K2.5
46.8
APEX-Agents
Kimi K2.5 leads by +11.3
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Kimi K2.5
14.4
GLM 4.7
3.1
Chatbot Arena Elo · Coding
GLM 4.7 leads by +53.1
Qwen3.5 397B A17B
1386.1
GLM 4.7
1439.2
Chatbot Arena Elo · Overall
Qwen3.5 397B A17B leads by +5.0
Qwen3.5 397B A17B
1447.7
GLM 4.7
1442.7
Chess Puzzles
Kimi K2.5 leads by +6.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Kimi K2.5
12.0
GLM 4.7
6.0
FrontierMath-2025-02-28-Private
Kimi K2.5 leads by +25.5
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Kimi K2.5
27.9
GLM 4.7
2.4
FrontierMath-Tier-4-2025-07-01-Private
Kimi K2.5 leads by +4.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Kimi K2.5
4.2
GLM 4.7
0.1
GPQA diamond
Kimi K2.5 leads by +5.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Kimi K2.5
83.5
GLM 4.7
77.8
OTIS Mock AIME 2024-2025
Kimi K2.5 leads by +8.9
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Kimi K2.5
92.2
GLM 4.7
83.3
PostTrainBench
Kimi K2.5 leads by +2.8
Kimi K2.5
10.3
GLM 4.7
7.5
SimpleBench
GLM 4.7 leads by +1.0
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Kimi K2.5
36.2
GLM 4.7
37.2
SimpleQA Verified
Kimi K2.5 leads by +2.4
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Kimi K2.5
33.9
GLM 4.7
31.5
Terminal Bench
Kimi K2.5 leads by +9.8
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Kimi K2.5
43.2
GLM 4.7
33.4
Full benchmark table
| Benchmark | Qwen3.5 397B A17B | Kimi K2.5 | GLM 4.7 |
|---|---|---|---|
| OpenCompass · AIME2025 | 92.3 | 91.9 | 95.4 |
| OpenCompass · GPQA-Diamond | 88.4 | 88.1 | 86.9 |
| OpenCompass · HLE | 27.5 | 28.6 | 25.4 |
| OpenCompass · IFEval | 91.5 | 93.9 | 90.2 |
| OpenCompass · LiveCodeBenchV6 | 83.0 | 80.6 | 83.8 |
| OpenCompass · MMLU-Pro | 87.6 | 86.2 | 84.0 |
| Artificial Analysis · Agentic Index | 55.8 | 58.9 | — |
| Artificial Analysis · Coding Index | 41.3 | 39.5 | — |
| Artificial Analysis · Quality Index | 45.0 | 46.8 | — |
| APEX-Agents | — | 14.4 | 3.1 |
| Chatbot Arena Elo · Coding | 1386.1 | — | 1439.2 |
| Chatbot Arena Elo · Overall | 1447.7 | — | 1442.7 |
| Chess Puzzles | — | 12.0 | 6.0 |
| FrontierMath-2025-02-28-Private | — | 27.9 | 2.4 |
| FrontierMath-Tier-4-2025-07-01-Private | — | 4.2 | 0.1 |
| GPQA diamond | — | 83.5 | 77.8 |
| OTIS Mock AIME 2024-2025 | — | 92.2 | 83.3 |
| PostTrainBench | — | 10.3 | 7.5 |
| SimpleBench | — | 36.2 | 37.2 |
| SimpleQA Verified | — | 33.9 | 31.5 |
| Terminal Bench | — | 43.2 | 33.4 |
Pricing · per 1M tokens · projected $/mo at 10M tokens · derivation sketched below the table
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Qwen3.5 397B A17B | $0.39 | $2.34 | 262K tokens | $8.78 |
| Kimi K2.5 | $0.44 | $2.00 | 262K tokens | $8.30 |
| GLM 4.7 | $0.38 | $1.74 | 203K tokens | $7.20 |
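How the blended figures appear to be derived, as a hedged sketch: the projected $/mo column matches 10M tokens at a 3:1 input:output split, and the $/M figures quoted in the Best value section match a simple 1:1 average of input and output prices. Neither weighting is documented on the page; both are inferred from the numbers.

```python
# Hedged sketch: reproduce the blended prices above under inferred weightings.
prices = {  # model: (input $/M, output $/M), from the pricing table
    "Qwen3.5 397B A17B": (0.39, 2.34),
    "Kimi K2.5":         (0.44, 2.00),
    "GLM 4.7":           (0.38, 1.74),
}

MONTHLY_TOKENS_M = 10  # millions of tokens per month, per the table header

for model, (inp, out) in prices.items():
    blended_3to1 = 0.75 * inp + 0.25 * out  # assumed 3:1 input:output mix
    blended_1to1 = (inp + out) / 2          # assumed 1:1 average ($/M in Best value)
    monthly = blended_3to1 * MONTHLY_TOKENS_M
    print(f"{model}: ~${blended_1to1:.2f}/M blended, ~${monthly:.2f}/mo at 10M tokens")
# Matches the figures above within rounding: $8.78, $8.30, $7.20 per month,
# and the $1.36, $1.22, $1.06 per-million prices from the Best value section.
```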