Qwen3 235B A22B Thinking 2507 vs Kimi K2.5
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Kimi K2.5 wins 12 of 14 benchmarks
Kimi K2.5 wins 12 of the 14 shared benchmarks (one tie, one Qwen3 win) and leads in knowledge · math · language.
Category leads
knowledge · Kimi K2.5
math · Kimi K2.5
language · Kimi K2.5
coding · Kimi K2.5
Hype vs Reality
Attention vs performance
Qwen3 235B A22B Thinking 2507
#66 by performance · no attention signal
Kimi K2.5
#87 by performance · no attention signal
Best value
Qwen3 235B A22B Thinking 2507
1.6x better value than Kimi K2.5
Qwen3 235B A22B Thinking 2507
68.0 pts/$
$0.82/M
Kimi K2.5
42.6 pts/$
$1.22/M
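The "Best value" figures can be reproduced from the numbers above. A minimal sketch, assuming the blended $/M is a simple 1:1 average of the input and output prices listed in the pricing table below (this weighting matches the listed $0.82 and $1.22); the 1.6x value ratio then follows from the listed pts/$ figures:

```python
# Sketch: reproducing the "Best value" figures.
# Assumption: blended $/M is a 1:1 average of input and output price.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended cost per 1M tokens, weighting input and output equally."""
    return (input_per_m + output_per_m) / 2

qwen_blended = blended_price(0.15, 1.50)  # 0.825 → listed as $0.82/M
kimi_blended = blended_price(0.44, 2.00)  # 1.22  → listed as $1.22/M

# Value ratio from the listed pts/$ figures (68.0 vs 42.6):
value_ratio = 68.0 / 42.6                 # ≈ 1.6x in Qwen's favor
```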
Vendor risk
Who is behind the model
Alibaba (Qwen)
$293.0B · Tier 1
Moonshot AI
private · undisclosed
Head to head
14 benchmarks · 2 models
Qwen3 235B A22B Thinking 2507 · Kimi K2.5
Chess Puzzles
Tied at 12.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Qwen3 235B A22B Thinking 2507
12.0
Kimi K2.5
12.0
Fiction.LiveBench
Kimi K2.5 leads by +11.1
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Qwen3 235B A22B Thinking 2507
75.0
Kimi K2.5
86.1
FrontierMath-2025-02-28-Private
Kimi K2.5 leads by +19.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Qwen3 235B A22B Thinking 2507
8.5
Kimi K2.5
27.9
FrontierMath-Tier-4-2025-07-01-Private
Kimi K2.5 leads by +4.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Qwen3 235B A22B Thinking 2507
0.1
Kimi K2.5
4.2
GPQA diamond
Kimi K2.5 leads by +10.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Qwen3 235B A22B Thinking 2507
73.4
Kimi K2.5
83.5
OpenCompass · AIME2025
Kimi K2.5 leads by +1.0
Qwen3 235B A22B Thinking 2507
90.9
Kimi K2.5
91.9
OpenCompass · GPQA-Diamond
Kimi K2.5 leads by +8.3
Qwen3 235B A22B Thinking 2507
79.8
Kimi K2.5
88.1
OpenCompass · HLE
Kimi K2.5 leads by +10.1
Qwen3 235B A22B Thinking 2507
18.5
Kimi K2.5
28.6
OpenCompass · IFEval
Kimi K2.5 leads by +6.1
Qwen3 235B A22B Thinking 2507
87.8
Kimi K2.5
93.9
OpenCompass · LiveCodeBenchV6
Kimi K2.5 leads by +10.0
Qwen3 235B A22B Thinking 2507
70.6
Kimi K2.5
80.6
OpenCompass · MMLU-Pro
Kimi K2.5 leads by +2.7
Qwen3 235B A22B Thinking 2507
83.5
Kimi K2.5
86.2
OTIS Mock AIME 2024-2025
Kimi K2.5 leads by +5.5
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Qwen3 235B A22B Thinking 2507
86.7
Kimi K2.5
92.2
SimpleQA Verified
Qwen3 235B A22B Thinking 2507 leads by +16.2
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Qwen3 235B A22B Thinking 2507
50.1
Kimi K2.5
33.9
WeirdML
Kimi K2.5 leads by +4.6
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Qwen3 235B A22B Thinking 2507
41.0
Kimi K2.5
45.6
Full benchmark table
| Benchmark | Qwen3 235B A22B Thinking 2507 | Kimi K2.5 |
|---|---|---|
| Chess Puzzles | 12.0 | 12.0 |
| Fiction.LiveBench | 75.0 | 86.1 |
| FrontierMath-2025-02-28-Private | 8.5 | 27.9 |
| FrontierMath-Tier-4-2025-07-01-Private | 0.1 | 4.2 |
| GPQA diamond | 73.4 | 83.5 |
| OpenCompass · AIME2025 | 90.9 | 91.9 |
| OpenCompass · GPQA-Diamond | 79.8 | 88.1 |
| OpenCompass · HLE | 18.5 | 28.6 |
| OpenCompass · IFEval | 87.8 | 93.9 |
| OpenCompass · LiveCodeBenchV6 | 70.6 | 80.6 |
| OpenCompass · MMLU-Pro | 83.5 | 86.2 |
| OTIS Mock AIME 2024-2025 | 86.7 | 92.2 |
| SimpleQA Verified | 50.1 | 33.9 |
| WeirdML | 41.0 | 45.6 |
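The 12-of-14 headline can be checked mechanically against the table above. A small sketch that tallies wins, with each pair ordered (Qwen3 235B A22B Thinking 2507, Kimi K2.5):

```python
# Tally head-to-head results from the full benchmark table.
# Scores are (Qwen3 235B A22B Thinking 2507, Kimi K2.5).
scores = {
    "Chess Puzzles": (12.0, 12.0),
    "Fiction.LiveBench": (75.0, 86.1),
    "FrontierMath-2025-02-28-Private": (8.5, 27.9),
    "FrontierMath-Tier-4-2025-07-01-Private": (0.1, 4.2),
    "GPQA diamond": (73.4, 83.5),
    "OpenCompass AIME2025": (90.9, 91.9),
    "OpenCompass GPQA-Diamond": (79.8, 88.1),
    "OpenCompass HLE": (18.5, 28.6),
    "OpenCompass IFEval": (87.8, 93.9),
    "OpenCompass LiveCodeBenchV6": (70.6, 80.6),
    "OpenCompass MMLU-Pro": (83.5, 86.2),
    "OTIS Mock AIME 2024-2025": (86.7, 92.2),
    "SimpleQA Verified": (50.1, 33.9),
    "WeirdML": (41.0, 45.6),
}

kimi_wins = sum(k > q for q, k in scores.values())
qwen_wins = sum(q > k for q, k in scores.values())
ties = sum(q == k for q, k in scores.values())
print(kimi_wins, qwen_wins, ties)  # 12 1 1
```

Kimi K2.5's twelve wins come with one tie (Chess Puzzles) and one Qwen3 win (SimpleQA Verified).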
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Qwen3 235B A22B Thinking 2507 | $0.15 | $1.50 | 131K tokens (~66 books) | $4.86 |
| Kimi K2.5 | $0.44 | $2.00 | 262K tokens (~131 books) | $8.30 |
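The projected $/mo column is consistent with a 10M-token month split roughly 75% input / 25% output. This split is an assumption inferred from the listed figures; it reproduces the $8.30 projection exactly and lands within about two cents of the $4.86 one:

```python
# Hypothetical reconstruction of the "Projected $/mo" column.
# Assumption: 10M tokens per month, ~75% input / ~25% output.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    """Projected monthly spend for total_m million tokens."""
    in_m = total_m * input_share          # millions of input tokens
    out_m = total_m * (1.0 - input_share) # millions of output tokens
    return in_m * input_per_m + out_m * output_per_m

qwen_monthly = monthly_cost(0.15, 1.50)  # 4.875 → listed as $4.86
kimi_monthly = monthly_cost(0.44, 2.00)  # 8.30  → listed as $8.30
```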