DeepSeek V3.2 Speciale vs Kimi K2.5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Kimi K2.5 wins 6 of 9 shared benchmarks (with one tie), leading in speed, knowledge, and language.
Category leads
- Speed: Kimi K2.5
- Math: DeepSeek V3.2 Speciale
- Knowledge: Kimi K2.5
- Language: Kimi K2.5
- Coding: DeepSeek V3.2 Speciale
Hype vs Reality
Attention vs performance
- DeepSeek V3.2 Speciale: #4 by performance · #5 by attention
- Kimi K2.5: #85 by performance · no attention signal
Best value
DeepSeek V3.2 Speciale offers 2.0x better value than Kimi K2.5.
- DeepSeek V3.2 Speciale: 97.8 pts/$ · $0.80/M
- Kimi K2.5: 49.5 pts/$ · $1.05/M
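The "2.0x better value" figure appears to be the ratio of the two points-per-dollar scores above; a minimal sketch of that arithmetic:

```python
# Sketch: reproduce the "2.0x better value" claim from the two
# points-per-dollar figures listed above (97.8 vs 49.5 pts/$).
def value_ratio(pts_per_dollar_a: float, pts_per_dollar_b: float) -> float:
    """Ratio of value scores: how many times more points per dollar A gives over B."""
    return pts_per_dollar_a / pts_per_dollar_b

ratio = value_ratio(97.8, 49.5)  # DeepSeek V3.2 Speciale vs Kimi K2.5
print(f"{ratio:.1f}x")  # → 2.0x
```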
Vendor risk
Mixed exposure · one or more vendors flagged
- DeepSeek: $3.4B · Tier 1
- moonshotai: private · undisclosed
Head to head
9 benchmarks · 2 models
| Benchmark | DeepSeek V3.2 Speciale | Kimi K2.5 | Lead |
|---|---|---|---|
| Artificial Analysis · Agentic Index | 0.0 | 58.9 | Kimi K2.5 +58.9 |
| Artificial Analysis · Coding Index | 37.9 | 39.5 | Kimi K2.5 +1.6 |
| Artificial Analysis · Quality Index | 29.4 | 46.8 | Kimi K2.5 +17.4 |
| OpenCompass · AIME2025 | 96.0 | 91.9 | DeepSeek V3.2 Speciale +4.1 |
| OpenCompass · GPQA-Diamond | 86.7 | 88.1 | Kimi K2.5 +1.4 |
| OpenCompass · HLE | 28.6 | 28.6 | Tie |
| OpenCompass · IFEval | 91.7 | 93.9 | Kimi K2.5 +2.2 |
| OpenCompass · LiveCodeBenchV6 | 80.9 | 80.6 | DeepSeek V3.2 Speciale +0.3 |
| OpenCompass · MMLU-Pro | 85.5 | 86.2 | Kimi K2.5 +0.7 |
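The 6-of-9 winner summary can be re-tallied directly from the shared benchmark scores above; a minimal sketch:

```python
# Sketch: tally wins per model from the nine shared benchmark scores
# listed above, as (DeepSeek V3.2 Speciale, Kimi K2.5) pairs.
scores = {
    "Artificial Analysis · Agentic Index": (0.0, 58.9),
    "Artificial Analysis · Coding Index": (37.9, 39.5),
    "Artificial Analysis · Quality Index": (29.4, 46.8),
    "OpenCompass · AIME2025": (96.0, 91.9),
    "OpenCompass · GPQA-Diamond": (86.7, 88.1),
    "OpenCompass · HLE": (28.6, 28.6),
    "OpenCompass · IFEval": (91.7, 93.9),
    "OpenCompass · LiveCodeBenchV6": (80.9, 80.6),
    "OpenCompass · MMLU-Pro": (85.5, 86.2),
}
deepseek_wins = sum(d > k for d, k in scores.values())
kimi_wins = sum(k > d for d, k in scores.values())
ties = sum(d == k for d, k in scores.values())
print(deepseek_wins, kimi_wins, ties)  # → 2 6 1
```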
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens (~82 books) | $6.00 |
| Kimi K2.5 | $0.38 | $1.72 | 262K tokens (~131 books) | $7.17 |
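The projected monthly figures can be approximately reproduced from the per-token prices; a minimal sketch, assuming (the page does not state this) a 3:1 input-to-output token split across the 10M tokens:

```python
# Sketch: blended monthly cost at 10M tokens, assuming a 3:1
# input:output split (an assumption, not stated on the page).
def monthly_cost(input_price: float, output_price: float,
                 total_tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    """Dollars per month for total_tokens_m million tokens at the given $/M prices."""
    input_m = total_tokens_m * input_share
    output_m = total_tokens_m * (1 - input_share)
    return input_m * input_price + output_m * output_price

deepseek = monthly_cost(0.40, 1.20)  # DeepSeek V3.2 Speciale → 6.00
kimi = monthly_cost(0.38, 1.72)      # Kimi K2.5 → 7.15
print(f"${deepseek:.2f}/mo, ${kimi:.2f}/mo")
```

Under this split the DeepSeek figure matches the listed $6.00 exactly, and the Kimi figure ($7.15) lands close to the listed $7.17, suggesting the page uses a similar but not identical input/output weighting.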