Qwen3.5 397B A17B vs Kimi K2.5 vs DeepSeek V3.2
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Kimi K2.5 wins 12 of 20 shared benchmarks · leads in speed, math, and knowledge.
Category leads
speed · Kimi K2.5
math · Kimi K2.5
knowledge · Kimi K2.5
language · Kimi K2.5
coding · Qwen3.5 397B A17B
reasoning · Kimi K2.5
arena · Qwen3.5 397B A17B
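How the win tally above is computed is worth spelling out: a model wins a benchmark by posting the highest score among the models that report one, and benchmarks with missing entries (the dashes in the full table below) still count toward the 20. A minimal sketch of that tally in Python, using a few rows from the benchmark table (the abbreviated model keys are shorthand, not official names):

```python
# Minimal sketch of the win tally, assuming a benchmark's winner is
# simply the highest-scoring model among those that report a score
# (missing scores are skipped, not treated as zero).
from collections import Counter

# Three illustrative rows from the full benchmark table below;
# None marks a score the page does not report.
scores = {
    "Agentic Index": {"Qwen3.5": 55.8, "Kimi K2.5": 58.9, "DeepSeek V3.2": 52.9},
    "Coding Index":  {"Qwen3.5": 41.3, "Kimi K2.5": 39.5, "DeepSeek V3.2": 36.7},
    "ARC-AGI":       {"Qwen3.5": None, "Kimi K2.5": 65.3, "DeepSeek V3.2": 57.0},
}

wins = Counter()
for bench, row in scores.items():
    reported = {model: s for model, s in row.items() if s is not None}
    wins[max(reported, key=reported.get)] += 1

print(wins)  # Counter({'Kimi K2.5': 2, 'Qwen3.5': 1})
```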
Hype vs Reality
Attention vs performance
| Model | Performance rank | Attention signal |
|---|---|---|
| Qwen3.5 397B A17B | #5 | no signal |
| Kimi K2.5 | #87 | no signal |
| DeepSeek V3.2 | #84 | no signal |
Best value
DeepSeek V3.2
2.9x better value than Qwen3.5 397B A17B
| Model | Value (pts/$) | Blended price |
|---|---|---|
| Qwen3.5 397B A17B | 57.4 | $1.36/M |
| Kimi K2.5 | 42.6 | $1.22/M |
| DeepSeek V3.2 | 168.3 | $0.32/M |
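The blended $/M figures above are consistent with a 1:1 average of the input and output rates in the pricing table at the bottom of the page, e.g. ($0.39 + $2.34) / 2 ≈ $1.36 for Qwen3.5 397B A17B. The page does not say which score feeds the pts/$ numerator (dividing the Quality Index by these prices does not reproduce the published figures, so the underlying score is something else); `score` below is therefore a stand-in. A minimal sketch of the math:

```python
# Sketch of the pricing math behind the value card. The blended $/M
# matches a 1:1 average of input and output rates; "pts" is not
# disclosed on this page, so `score` is a stand-in for whatever
# benchmark aggregate the site divides by price.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens, assuming a 1:1 input:output mix."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(score: float, blended: float) -> float:
    """Generic value metric: benchmark points per blended dollar."""
    return score / blended

print(blended_price(0.39, 2.34))  # ≈ 1.365 -> shown above as $1.36/M
print(blended_price(0.44, 2.00))  # ≈ 1.22  -> shown above as $1.22/M
print(blended_price(0.25, 0.38))  # ≈ 0.315 -> shown above as $0.32/M
```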
Vendor risk
Mixed exposure
One or more vendors flagged
| Vendor | Valuation | Risk tier |
|---|---|---|
| Alibaba (Qwen) | $293.0B | Tier 1 |
| Moonshot AI (Kimi) | private · undisclosed | — |
| DeepSeek | $3.4B | Tier 1 |
Head to head
20 benchmarks · 3 models
Qwen3.5 397B A17B · Kimi K2.5 · DeepSeek V3.2
Artificial Analysis · Agentic Index
Kimi K2.5 leads by +3.1
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Qwen3.5 397B A17B
55.8
Kimi K2.5
58.9
DeepSeek V3.2
52.9
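The index description above mentions aggregating several agentic benchmarks into one number, but the exact components and weights are not given here. A generic sketch of how such a composite is typically built, as an equally weighted mean of min-max-normalized component scores; the component names, scores, and bounds below are hypothetical, not the actual Artificial Analysis recipe:

```python
# Generic composite-index sketch: an equally weighted mean of
# component benchmark scores after min-max normalization to 0-100.
# Component names, scores, and bounds are illustrative assumptions,
# not the actual recipe behind the Artificial Analysis indices.

def normalize(score: float, lo: float, hi: float) -> float:
    return 100 * (score - lo) / (hi - lo)

components = {
    # hypothetical (score, scale_min, scale_max) triples
    "swe_bench_like": (42.0, 0.0, 100.0),
    "tool_use_like":  (71.0, 0.0, 100.0),
    "planning_like":  (55.5, 0.0, 100.0),
}

index = sum(normalize(s, lo, hi) for s, lo, hi in components.values()) / len(components)
print(round(index, 1))  # ≈ 56.2
```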
Artificial Analysis · Coding Index
Qwen3.5 397B A17B leads by +1.8
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Qwen3.5 397B A17B
41.3
Kimi K2.5
39.5
DeepSeek V3.2
36.7
Artificial Analysis · Quality Index
Kimi K2.5 leads by +1.8
Qwen3.5 397B A17B
45.0
Kimi K2.5
46.8
DeepSeek V3.2
41.7
OpenCompass · AIME2025
DeepSeek V3.2 leads by +0.7
Qwen3.5 397B A17B
92.3
Kimi K2.5
91.9
DeepSeek V3.2
93.0
OpenCompass · GPQA-Diamond
Qwen3.5 397B A17B leads by +0.3
Qwen3.5 397B A17B
88.4
Kimi K2.5
88.1
DeepSeek V3.2
84.6
OpenCompass · HLE
Kimi K2.5 leads by +1.1
Qwen3.5 397B A17B
27.5
Kimi K2.5
28.6
DeepSeek V3.2
23.2
OpenCompass · IFEval
Kimi K2.5 leads by +2.4
Qwen3.5 397B A17B
91.5
Kimi K2.5
93.9
DeepSeek V3.2
89.7
OpenCompass · LiveCodeBenchV6
Qwen3.5 397B A17B leads by +2.4
Qwen3.5 397B A17B
83.0
Kimi K2.5
80.6
DeepSeek V3.2
75.4
OpenCompass · MMLU-Pro
Qwen3.5 397B A17B leads by +1.4
Qwen3.5 397B A17B
87.6
Kimi K2.5
86.2
DeepSeek V3.2
85.8
ARC-AGI
Kimi K2.5 leads by +8.3
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Kimi K2.5
65.3
DeepSeek V3.2
57.0
ARC-AGI-2
Kimi K2.5 leads by +7.8
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Kimi K2.5
11.8
DeepSeek V3.2
4.0
Chatbot Arena Elo · Coding
Qwen3.5 397B A17B leads by +59.2
Qwen3.5 397B A17B
1386.1
DeepSeek V3.2
1326.9
Chatbot Arena Elo · Overall
Qwen3.5 397B A17B leads by +23.3
Qwen3.5 397B A17B
1447.7
DeepSeek V3.2
1424.4
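For a sense of scale on Elo gaps: under the standard Elo formula, a rating difference Δ corresponds to an expected head-to-head win rate of 1 / (1 + 10^(−Δ/400)), so the +59.2 coding gap implies roughly a 58% win rate and the +23.3 overall gap roughly 53%. A quick sketch (Chatbot Arena's actual Bradley-Terry fitting differs in detail, but displayed ratings are read on the same scale):

```python
# Convert an Elo rating gap into an expected head-to-head win rate,
# using the standard Elo formula P(win) = 1 / (1 + 10**(-delta/400)).

def elo_win_prob(delta: float) -> float:
    return 1 / (1 + 10 ** (-delta / 400))

print(f"{elo_win_prob(59.2):.1%}")  # ~58.4% -- the coding gap above
print(f"{elo_win_prob(23.3):.1%}")  # ~53.3% -- the overall gap above
```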
Chess Puzzles
DeepSeek V3.2 leads by +2.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Kimi K2.5
12.0
DeepSeek V3.2
14.0
FrontierMath-2025-02-28-Private
Kimi K2.5 leads by +5.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Kimi K2.5
27.9
DeepSeek V3.2
22.1
FrontierMath-Tier-4-2025-07-01-Private
Kimi K2.5 leads by +2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Kimi K2.5
4.2
DeepSeek V3.2
2.1
GPQA Diamond
Kimi K2.5 leads by +5.6
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Kimi K2.5
83.5
DeepSeek V3.2
77.9
OTIS Mock AIME 2024-2025
Kimi K2.5 leads by +4.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Kimi K2.5
92.2
DeepSeek V3.2
87.8
SimpleQA Verified
Kimi K2.5 leads by +6.4
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Kimi K2.5
33.9
DeepSeek V3.2
27.5
Terminal Bench
Kimi K2.5 leads by +3.6
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Kimi K2.5
43.2
DeepSeek V3.2
39.6
Full benchmark table
| Benchmark | Qwen3.5 397B A17B | Kimi K2.5 | DeepSeek V3.2 |
|---|---|---|---|
| Artificial Analysis · Agentic Index | 55.8 | 58.9 | 52.9 |
| Artificial Analysis · Coding Index | 41.3 | 39.5 | 36.7 |
| Artificial Analysis · Quality Index | 45.0 | 46.8 | 41.7 |
| OpenCompass · AIME2025 | 92.3 | 91.9 | 93.0 |
| OpenCompass · GPQA-Diamond | 88.4 | 88.1 | 84.6 |
| OpenCompass · HLE | 27.5 | 28.6 | 23.2 |
| OpenCompass · IFEval | 91.5 | 93.9 | 89.7 |
| OpenCompass · LiveCodeBenchV6 | 83.0 | 80.6 | 75.4 |
| OpenCompass · MMLU-Pro | 87.6 | 86.2 | 85.8 |
| ARC-AGI | — | 65.3 | 57.0 |
| ARC-AGI-2 | — | 11.8 | 4.0 |
| Chatbot Arena Elo · Coding | 1386.1 | — | 1326.9 |
| Chatbot Arena Elo · Overall | 1447.7 | — | 1424.4 |
| Chess Puzzles | — | 12.0 | 14.0 |
| FrontierMath-2025-02-28-Private | — | 27.9 | 22.1 |
| FrontierMath-Tier-4-2025-07-01-Private | — | 4.2 | 2.1 |
| GPQA Diamond | — | 83.5 | 77.9 |
| OTIS Mock AIME 2024-2025 | — | 92.2 | 87.8 |
| SimpleQA Verified | — | 33.9 | 27.5 |
| Terminal Bench | — | 43.2 | 39.6 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Qwen3.5 397B A17B | $0.39 | $2.34 | 262K tokens (~131 books) | $8.78 |
| Kimi K2.5 | $0.44 | $2.00 | 262K tokens (~131 books) | $8.30 |
| DeepSeek V3.2 | $0.25 | $0.38 | 131K tokens (~66 books) | $2.83 |
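The projected $/mo column is consistent with a 10M-token month split 3:1 between input and output tokens, e.g. 7.5M × $0.39/M + 2.5M × $2.34/M ≈ $8.78 for Qwen3.5 397B A17B. A minimal sketch of that projection; the 3:1 split is inferred from the published figures (it reproduces all three rows), not stated by the page:

```python
# Reproduce the "projected $/mo at 10M tokens" column. The 3:1
# input:output split is an inference from the published numbers,
# not something the page states explicitly.

def monthly_cost(input_per_m: float, output_per_m: float,
                 tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    in_tokens = tokens_m * input_share          # millions of input tokens
    out_tokens = tokens_m * (1 - input_share)   # millions of output tokens
    return in_tokens * input_per_m + out_tokens * output_per_m

for name, inp, out in [("Qwen3.5 397B A17B", 0.39, 2.34),
                       ("Kimi K2.5", 0.44, 2.00),
                       ("DeepSeek V3.2", 0.25, 0.38)]:
    print(name, monthly_cost(inp, out))
# Qwen3.5 397B A17B  ≈ 8.775 -> $8.78/mo in the table
# Kimi K2.5          ≈ 8.30  -> $8.30/mo
# DeepSeek V3.2      ≈ 2.825 -> $2.83/mo
```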