Kimi K2.5 vs DeepSeek V3.2
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Kimi K2.5 wins 16/18 benchmarks
It leads 16 of the 18 shared benchmarks, including the speed, reasoning, and knowledge categories.
Category leads
Kimi K2.5 leads all six categories: speed, reasoning, knowledge, math, language, and coding.
Hype vs Reality
Attention vs performance
Kimi K2.5 · #85 by performance · no signal
DeepSeek V3.2 · #82 by performance · no signal
Best value
DeepSeek V3.2
3.3x better value than Kimi K2.5
Kimi K2.5 · 49.5 pts/$ · $1.05/M
DeepSeek V3.2 · 165.6 pts/$ · $0.32/M
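The value math above can be reproduced from the listed prices. A minimal sketch, assuming the $/M figure is the simple average of input and output price per 1M tokens, and that pts/$ divides a composite benchmark score by that blended price (the exact composite the site uses is not disclosed):

```python
# Blended price and points-per-dollar, as implied by the figures above.
# Assumption: $/M is the simple average of input and output price; the
# composite score behind pts/$ is not disclosed, so treat it as opaque.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Average of input and output price per 1M tokens."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(score: float, price_per_m: float) -> float:
    """Benchmark points bought per dollar of blended price."""
    return score / price_per_m

print(blended_price(0.38, 1.72))  # 1.05 -> matches Kimi K2.5's $1.05/M
print(blended_price(0.26, 0.38))  # 0.32 -> matches DeepSeek V3.2's $0.32/M
print(165.6 / 49.5)               # ~3.35 -> the quoted 3.3x value gap
```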
Vendor risk
Mixed exposure
One or more vendors flagged
moonshotai · private · undisclosed
DeepSeek · $3.4B · Tier 1
Head to head
18 benchmarks · 2 models
Artificial Analysis · Agentic Index
Kimi K2.5 leads by +6.0
Kimi K2.5 58.9 · DeepSeek V3.2 52.9
Artificial Analysis · Coding Index
Kimi K2.5 leads by +2.8
Kimi K2.5 39.5 · DeepSeek V3.2 36.7
Artificial Analysis · Quality Index
Kimi K2.5 leads by +5.1
Kimi K2.5 46.8 · DeepSeek V3.2 41.7
ARC-AGI
Kimi K2.5 leads by +8.3
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Kimi K2.5 65.3 · DeepSeek V3.2 57.0
ARC-AGI-2
Kimi K2.5 leads by +7.8
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Kimi K2.5 11.8 · DeepSeek V3.2 4.0
Chess Puzzles
DeepSeek V3.2 leads by +2.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Kimi K2.5 12.0 · DeepSeek V3.2 14.0
FrontierMath-2025-02-28-Private
Kimi K2.5 leads by +5.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Kimi K2.5 27.9 · DeepSeek V3.2 22.1
FrontierMath-Tier-4-2025-07-01-Private
Kimi K2.5 leads by +2.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Kimi K2.5 4.2 · DeepSeek V3.2 2.1
GPQA diamond
Kimi K2.5 leads by +5.6
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Kimi K2.5 83.5 · DeepSeek V3.2 77.9
OpenCompass · AIME2025
DeepSeek V3.2 leads by +1.1
Kimi K2.5 91.9 · DeepSeek V3.2 93.0
OpenCompass · GPQA-Diamond
Kimi K2.5 leads by +3.5
Kimi K2.5 88.1 · DeepSeek V3.2 84.6
OpenCompass · HLE
Kimi K2.5 leads by +5.4
Kimi K2.5 28.6 · DeepSeek V3.2 23.2
OpenCompass · IFEval
Kimi K2.5 leads by +4.2
Kimi K2.5 93.9 · DeepSeek V3.2 89.7
OpenCompass · LiveCodeBenchV6
Kimi K2.5 leads by +5.2
Kimi K2.5 80.6 · DeepSeek V3.2 75.4
OpenCompass · MMLU-Pro
Kimi K2.5 leads by +0.4
Kimi K2.5 86.2 · DeepSeek V3.2 85.8
OTIS Mock AIME 2024-2025
Kimi K2.5 leads by +4.4
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Kimi K2.5 92.2 · DeepSeek V3.2 87.8
SimpleQA Verified
Kimi K2.5 leads by +6.4
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Kimi K2.5 33.9 · DeepSeek V3.2 27.5
Terminal Bench
Kimi K2.5 leads by +3.6
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Kimi K2.5 43.2 · DeepSeek V3.2 39.6
Full benchmark table
| Benchmark | Kimi K2.5 | DeepSeek V3.2 |
|---|---|---|
| Artificial Analysis · Agentic Index | 58.9 | 52.9 |
| Artificial Analysis · Coding Index | 39.5 | 36.7 |
| Artificial Analysis · Quality Index | 46.8 | 41.7 |
| ARC-AGI | 65.3 | 57.0 |
| ARC-AGI-2 | 11.8 | 4.0 |
| Chess Puzzles | 12.0 | 14.0 |
| FrontierMath-2025-02-28-Private | 27.9 | 22.1 |
| FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 2.1 |
| GPQA diamond | 83.5 | 77.9 |
| OpenCompass · AIME2025 | 91.9 | 93.0 |
| OpenCompass · GPQA-Diamond | 88.1 | 84.6 |
| OpenCompass · HLE | 28.6 | 23.2 |
| OpenCompass · IFEval | 93.9 | 89.7 |
| OpenCompass · LiveCodeBenchV6 | 80.6 | 75.4 |
| OpenCompass · MMLU-Pro | 86.2 | 85.8 |
| OTIS Mock AIME 2024-2025 | 92.2 | 87.8 |
| SimpleQA Verified | 33.9 | 27.5 |
| Terminal Bench | 43.2 | 39.6 |
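The 16-of-18 headline can be re-derived from this table. A quick tally, treating higher as better on every row, which matches the per-benchmark "leads by" callouts above:

```python
# Scores copied from the table above: benchmark -> (Kimi K2.5, DeepSeek V3.2).
scores = {
    "AA Agentic Index": (58.9, 52.9), "AA Coding Index": (39.5, 36.7),
    "AA Quality Index": (46.8, 41.7), "ARC-AGI": (65.3, 57.0),
    "ARC-AGI-2": (11.8, 4.0), "Chess Puzzles": (12.0, 14.0),
    "FrontierMath Feb 2025": (27.9, 22.1), "FrontierMath Tier 4": (4.2, 2.1),
    "GPQA diamond": (83.5, 77.9), "OC AIME2025": (91.9, 93.0),
    "OC GPQA-Diamond": (88.1, 84.6), "OC HLE": (28.6, 23.2),
    "OC IFEval": (93.9, 89.7), "OC LiveCodeBenchV6": (80.6, 75.4),
    "OC MMLU-Pro": (86.2, 85.8), "OTIS Mock AIME 2024-2025": (92.2, 87.8),
    "SimpleQA Verified": (33.9, 27.5), "Terminal Bench": (43.2, 39.6),
}

kimi_wins = sum(k > d for k, d in scores.values())
print(f"Kimi K2.5 wins {kimi_wins}/{len(scores)}")  # Kimi K2.5 wins 16/18
```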
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Kimi K2.5 | $0.38 | $1.72 | 262K tokens (~200K words) | $7.17 |
| DeepSeek V3.2 | $0.26 | $0.38 | 164K tokens (~120K words) | $2.90 |
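The projected $/mo column follows from the per-token prices once a usage mix is fixed. A sketch assuming 10M tokens per month split 3:1 input:output (7.5M in, 2.5M out); that split reproduces DeepSeek V3.2's $2.90 exactly and lands within rounding of Kimi K2.5's quoted $7.17:

```python
# Projected monthly cost at 10M tokens, assuming a 3:1 input:output split.
# The split is inferred, not stated by the source; DeepSeek's figure matches
# exactly, Kimi's to within ~$0.02 (likely more precise underlying prices).

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    """Dollars per month for total_m million tokens at the given input share."""
    input_tokens = total_m * input_share
    output_tokens = total_m * (1 - input_share)
    return input_per_m * input_tokens + output_per_m * output_tokens

print(monthly_cost(0.38, 1.72))  # 7.15  (quoted: $7.17)
print(monthly_cost(0.26, 0.38))  # 2.90  (quoted: $2.90)
```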